Thursday, December 31, 2009

Continuous Integration of Python Code with Unit Tests and Maven

My main development language is Java, but I also do some work in Python for deployment and related tools. Being a big fan of unit testing, I write unit tests in Python using PyUnit. Being a big fan of Maven and continuous integration, I really want the Python unit tests to run as part of the build. I wanted a solution that met the following criteria:
  • Use commonly available plugins.
  • Keep the Maven structure of test and source files in the appropriate directories.
  • Run the tests in the test phase and fail the build when the tests fail.

The simplest approach I came up with was to use the Exec Maven Plugin, adding the following configuration to your (Python) project's POM.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>python-test</id>
      <phase>test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>python</executable>
        <workingDirectory>src/test/python</workingDirectory>
        <arguments>
          <argument>unitTests.py</argument>
        </arguments>
        <environmentVariables>
          <PYTHONPATH>../../main/python:$PYTHONPATH</PYTHONPATH>
        </environmentVariables>
      </configuration>
    </execution>
  </executions>
</plugin>

This works well enough. Setting the PYTHONPATH environment variable allows your PyUnit tests to find the modules you are building in the project. What's less than ideal is that, unlike with other Maven plugins, the person running the build needs to have Python installed and configured correctly. You can allow for some variation between environments, and if a developer on your project doesn't use Python (and doesn't want to), there is a property you can set on the exec plugin to skip the tests. In the end, only those who use Python, and the continuous integration server, need to have things installed correctly.
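In case it's useful, here is a minimal sketch of one way to write unitTests.py, assuming a reasonably recent Python whose unittest module supports test discovery (in older Pythons you would build the suite by importing the test modules explicitly). The important part is that the script exits with a non-zero status when tests fail, which is what causes the exec plugin to fail the build in the test phase.

# unitTests.py - a minimal test runner sketch (adjust paths and naming to your project)
import sys
import unittest

if __name__ == '__main__':
    # Discover test_*.py modules in this directory (src/test/python).
    suite = unittest.defaultTestLoader.discover('.')
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # Exit non-zero on failure so that Maven marks the build as failed.
    sys.exit(0 if result.wasSuccessful() else 1)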

This may be obvious to some, if not many, but in case anyone is looking for a way to run Python unit tests as part of a Maven build, I hope that this is helpful.

Thursday, December 17, 2009

Versions in Maven and Source

I'm a big fan of Maven, a build (and project) management tool. When you are working with Maven, each artifact that you develop (a jar or war file, for example) has a version that's distinct from the version in your SCM system. The Maven Book has a good discussion of how versions are managed, but there are often questions on projects about how to use Maven versions when you also have the SCM version. "SCM version 6453" gives more information than "artifact version 1.1," for example, yet we have two version numbers to manage. Here's one approach that works well for simple projects.

I'm assuming that you know the basics of how to specify dependencies in Maven. If you need a quick intro to Maven, see the Maven Book.

If you are developing a project with artifacts that external clients will incorporate with Maven, you need to change the artifact version with each release, as you specify the artifact version in the dependency element of your POM. If you are just using Maven to develop components that will never show up in an external repository, adding the artifact version dimension on top of the SCM version can appear to add little value at the expense of some overhead, unless you have a clear model of how the two interact.

Many teams using Maven use it to manage external dependencies, but their own artifacts are not published to external Maven repositories. A simple webapp, for example, may have one war file and some external dependencies, and for support and validation purposes you can use the build number or the version number. There are a couple of ways to figure out which source version you built your webapp from.

If you use a continuous integration tool like Bamboo, you can pass the build number in on the build command line by adding -DbuildNumber=${bamboo.buildNumber} to make the property buildNumber available to your project. You can make this available to the application, so that you can see the build number on a login screen or an about page. From the build number you can infer the SCM version. Or you can often ask the CI system for the SCM version number directly. For example, in Bamboo, you can specify -DscmRevision=${custom.svn.revision.number} if you are using Subversion as your SCM.

You can also use the Build Number Maven Plugin to embed the version in your war manifest or make it available to your Flex application, for example.

Regardless of how you get it, the revision number gives you enough information to identify what code you are working with. The Maven convention to keep in mind is that SNAPSHOT artifacts are for active development and non-SNAPSHOT versions are for released artifacts, so while you are working on a new project you'd be building a 1.0-SNAPSHOT artifact, and when you are ready to create a release branch you would change it to 1.0.

So when should you change the project version in Maven? Here is a simple approach: one Maven major/minor version per codeline. Incremental versions stay on the same codeline.

So if you are using an active development line pattern (see Software Configuration Management Patterns), this means that each Release Line has a Maven version, and your Mainline is used to create the SNAPSHOT version of the next release artifact.

For example:
  • Start mainline development with a 1.0-SNAPSHOT version.
  • When you have a release:
    • Create the release branch and change the artifact version to 1.0.
    • Change the mainline version to 1.1-SNAPSHOT (or whatever the next active version is).
To support the 1.0 release you have three choices:
  1. Just work with the 1.1 artifact version and use build numbers to distinguish between the current release and the support release.
  2. Branch off of the 1.1 codeline, change the version to 1.1.1-SNAPSHOT, and merge back when the 1.1.1 release goes out.
  3. Change the version on the support branch to 1.1.1-SNAPSHOT and don't branch. Change the version to 1.1.1 when you are done, and create a tag for the release.
I prefer the third option as the simplest and best, assuming that there is only one stream of development work on the maintenance branch. It balances the need for identification with the desire to minimize overhead.


To summarize:
  • New development for M.N-SNAPSHOT versions happens on the trunk. Use SCM versions and build numbers to identify components internally.
  • M.N versions on their own branches.
  • Use M.N.n versions for maintenance releases: SNAPSHOTs during development, removing the SNAPSHOT when done.
I'm interested in hearing what others may have done to reconcile SCM and Maven artifact versions for internal components.

Sunday, December 13, 2009

Uncertainty and Agile Requirements

The key value of agile methods is that they help you manage uncertainty. By being incremental and iterative, you manage risk by not investing a lot of effort in specifying things that may be "wrong." At the start of each iteration you can look at what you have and decide that it's the right thing, in which case you can build on it, or the wrong thing, in which case you can try something else. Since you've only invested a small amount of effort relative to the specification you would do in a waterfall process, you've wasted less effort and, in the end, less money if you are wrong.

This approach of small stories with only some details works really well in many cases. An agile team runs into trouble when the project team confuses "uncertainty" with "vagueness." To be successful, an agile team needs to work off of a backlog that has stories that are precise enough that the team can iterate effectively with the stakeholders at the end of each iteration, and can develop with a high velocity. It's important to add precision even if you have uncertainties. While it's important to be as accurate as possible, don't use your lack of certainty about a requirement as an excuse to accept a lack of precision. When you have a good target to aim for (and you hit it) you can iterate quickly and judge if you are hitting the right targets.

How do you tell that you have enough precision?  This varies from team to team. For a team that has been together for a time and has a clear shared vision, a very brief statement of goals might well be enough. For a project where the vision is less clear,  a longer conversation may be necessary. Three concrete tests are:
  • Can the team estimate a story? (See Agile Estimating and Planning for more about estimating.) If the answer is "there is not enough information to estimate," then the story is too vague, and the team and the product owner need to meet to make sure that they understand the options. If the team estimates a story that the Product Owner thought was simple at 3 weeks, you have raised a flag that you need more conversation to understand what the PO really wants.
  • Can you provide three options for how to implement the story, or three variations of what the user experience will be? If you find yourself developing many more that seem plausible, the story is too vague, and if you can only develop one or two, then there is not enough information for you to think through the story.
  • Can you test the story? If you can't come up with a reasonable high-level test plan, then the story is too vague. (Mike Cohn has written an excellent article about the value of planning with the larger team.)
While being able to do all three for a story is nice, being able to estimate with confidence is the best single indicator that a story is well developed. If you can't estimate based on what you know about the story, the good news is that the very act of trying to come up with an estimate, options, or a test plan will help you refine it.

One might say that this is too much planning for an agile process, and that this level of detail sounds kind of like a "waterfall."  And at a high level it seems related to the Cone Of Uncertainty, which is a model for waterfall development. The difference is that we still don't need or want to have fully defined specifications at the start of the project; as we approach a development iteration, we want enough detail to have a development target that a stakeholder can evaluate.

At the end of an iteration when something isn't quite right, you want your stakeholders to say "that's not what I want" rather than argue over what they meant. The latter will still happen, and it's OK when it does. By being precise about what you think you want to build, you will identify the high risk areas of a project early on, so that you can take full advantage of the risk management benefits of agile.

Saturday, November 28, 2009

Silver Bullets and Simple Solutions

Recently I was reading the Boston Globe and saw a letter to the editor about problems with H1N1 vaccine distribution, lamenting that "We’re eight months into a pandemic, and it seems that all the government can do is tell us to wash our hands!" While I understand the writer's frustration about the availability of vaccines, and the technology used to produce them, I was struck by the writer's attitude that a simple solution couldn't possibly be effective. I'm not a medical professional, but from what I've read, hand washing, while mundane sounding, is effective in preventing the spread of disease. Since I am a software development professional, I also recognized that the attitude that the more exotic solution is always better is common in software development as well.

When people ask me about improving their release management process to be more agile,  they are disappointed when I don't focus on the latest SCM tool, but rather talk about approaches like unit testing, small and frequent commits and updates, continuous integration, and the like. All of these things are dismissed as simple and even mundane. Yet, if they are simple, why not do them? The truth is that the best way to make your release management process more agile is to have fewer codelines and keep the codeline you want to release working. There is no real magic to that, other than the magic that happens organically when a team is disciplined and committed to keeping the code high quality.

Ever since Fred Brooks wrote No Silver Bullet (which appears in The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition)), software developers have been looking for technological ways to make projects go more smoothly. Technology helps, to be sure. A good diff tool can help you merge code when you really have to, and an SCM tool that supports staged integration well can help you get to the point where you can trust your team to keep the codeline working. But the best approach is to make small, incremental changes and to build small, comprehensible modules.

In the same vein as what the Pragmatic Programmers tell us in The Pragmatic Programmer: From Journeyman to Master, and as Bob Martin provides further guidance about in Clean Code: A Handbook of Agile Software Craftsmanship, there is a lot you can do to make your team more agile by making small changes, keeping things working along the way, and iterating. Think about tools as a way to support how people work, and not the other way around. Tools help you work effectively, but a good tool can't replace good technique.

Sunday, November 22, 2009

Planning Time and Sprint Duration

While we were having lunch, a friend of mine mentioned that his team had frequently changing priorities and had tried short (one-week) sprints to be able to adapt to business changes. The team felt that the overhead of planning for a one-week sprint was too high, so they decided to abandon the sprint model. This conversation reminded me that this kind of question comes up a lot, especially with teams transitioning to agile.

It makes a lot of sense to tune your sprint length to the rate at which requirements change and the rate at which the team can deliver functionality. Adding work as you go makes it difficult to make commitments and to measure progress, and new "high-priority" work can disrupt flow. If your sprints are 4 weeks long, then there is a greater temptation to add work mid-stream. If a sprint is 1 week long, then it's easier for a Product Owner to be comfortable slotting work into the next sprint.

A sprint isn't just the time spent coding. The planning and review are also important. So, what's a good ratio of planning to "coding" time in a short sprint? In a canonical 4-week sprint, such as the one described in Agile Project Management with Scrum, the team spends 1 day on planning and about 1 day on review and retrospective. This adds up to 2 days out of 20, or 10%. For a one-week sprint, the same ratio gives us half a day for review and planning.

Given the overhead of getting people together, and the dynamics of meetings, the calculation probably isn't linear. But I have worked on teams where we could do a reasonable job planning and reviewing in 1/2 day. This seems like reasonable overhead if:
  • The backlog is well defined by the product owners in advance of the planning meeting so that we can quickly estimate.  
  • Daily scrums start on time, stay focused, and fit within the time box that the team expects (typically 15 minutes)
  • The number of features is small enough that it is possible to have a focused review meeting in an hour or so, with 30 minutes allocated to "retrospective" discussions.
  • There is adequate interaction with a product owner during the sprint so that small issues can be resolved quickly and outside of the review.
This is my experience with planning for 1 week sprints. What are your experiences? How long do you spend in planning and reviewing? Is it enough? What are the prerequisites for an effective 1-week sprint? Please comment!

Tuesday, November 10, 2009

Doing Less to Get Things Done

Have you ever been in a situation where someone walks into the room and announces that they just got off the phone with a customer, and you need to add some functionality, described in very specific terms? As described, the feature could take a lot of work, so you bounce around some ideas about how to do what the customer asked for. Along the way you realize that maybe, perhaps, there is another way to add a similar feature that meets the needs at a much lower cost. But no one asked the customer what problem they wanted to solve. So what do you do now?

Some options are:
  1. Say, sorry, we don't really know what the requirement is, so come back when you have more to say.
  2. Spend the next couple of hours discussing how to implement all of the options you can think of, and plan in detail how to get them done.
  3. List some options for what the customer might really mean, then delegate someone to find out more, using your options as a basis for conversation.
Option 1 sounds appealing, but it doesn't actually help you solve the problem of efficiently building (eventually) what the customer wants. While option 2 has you thinking about the problem and solutions, at some point you're making the solution more expensive than the customer probably wants it to be. This is an easy scenario to fall into, since people, and engineers in particular, want to solve problems. But a long conversation without data doesn't solve this problem, and it keeps you away from making progress on other problems that you know enough to solve.

The third option is a good compromise. Spend some time discussing what problems the customer might want to solve, focusing on the problem, not the solution (implementation). Then spend a few minutes figuring out how you might implement each proposed option so that you can attach a cost to each. Then delegate someone to have a follow-up conversation with the customer using your options as a starting point. Three options is a good rule of thumb.

It's very easy to get caught up in solving problems without asking if you're solving the right problem. Whenever you're asked to build something very specific, ask yourself if you really understand the problem. By taking a step back you can save time, and in the end have happier customers.

(For more on figuring out what the problem really is see the appropriately named book: Are Your Lights On?: How to Figure Out What the Problem Really Is)

Sunday, November 8, 2009

Fail, To Succeed

I was listening to a commentary on NPR about a pre-school graduation, which mentioned a comment from education expert Leon Botstein that "we should be rewarding: Curiosity. Creativity. Taking risks. Taking the subjects that you're afraid you might fail. Working hard in those subjects, even if you do fail. We should reward children when they show joy in learning."

This led me to thinking about a reason that some teams struggle with being agile. Agile teams are good at making corrections based on feedback. For this to work you need to be willing to honestly evaluate your progress against a plan, and be willing to revise the plan (and how you work) based on this feedback. This is a hard thing to do if you're used to the idea that any feedback other than "you're doing OK" is bad. (I have more to say about this in a contribution to the 97 Things Every Programmer Should Know project.)

Agile methods help you create an environment where it's safer to try things by providing for feedback at such an interval that things can't go that wrong. By making small steps and evaluating the results, you can take small risks that you believe will be for the better. And if it didn't work out, you haven't lost that much.

This process is visible at all scales in an agile project.

Sprint reviews give the team a chance to evaluate features every sprint. It happens that, based on review feedback, features are removed as well as added or enhanced. And that's fine, because the team only spent a week or two. In other environments such decisions might not be made until it was far too late to change course and either implement something new or avoid embarrassment.

Integration builds give the team rapid feedback when a code change causes an integration problem (even when the developer thought that it was adequately tested).

Unit Tests give you a chance to understand the impact of a refactor before you commit a change. You can see a problem before anyone else is affected. And you can decide to abort a change with unintended consequences.

Frequent commits to a version management system allow you to recover from changes that become more involved than you thought.

Being willing to fail allows you to improve as long as the failures are small and easily identified. Being agile means being willing to take small risks.

Sunday, October 25, 2009

The 2009 Software Test and Performance Conference

Last week I gave a class on SCM for Agile Teams at the 2009 Software Test and Performance Conference in Cambridge, MA. The conference had a focus on Agile software development. Good SCM is essential to agile (and any) software development, though it's an oft ignored topic, so I applaud the organizers for considering the topic worth a session. And I thank them for inviting me to give the class.

At the risk of making sweeping generalizations, I like testers, and I tend to find that the way good testers think is very well aligned with the way that I think. Maybe this is part of the reason that I enjoy agile software development so much: testing and automation are very closely tied to the development process. One of the messages in my talk on SCM for Agile Teams is that testing is an essential part of the configuration management process. If you have good automated testing, you have to worry less about branching for isolation, and you save the overhead that (unnecessary) branching adds to your process.

If you have testing established as part of everyone's (including the developers') job, the tester's job becomes far more interesting. Rather than executing scripts and reporting "simple" bugs, a tester can explore the product and find interesting edge cases, driving software quality and starting conversations about what functionality the product needs. And testers and developers can collaborate on automation.

My class was during the last session of the last day, and I was happy to have the small number of people in attendance who were there. I hope that they learned something useful, and I hope that I addressed the concerns of the testers and test managers at the session. I know that I learned a lot from the sessions that I went to, led by Michael Bolton and Scott Barber, among others.

If you're interested in more about the kinds of topics that were covered at the conference, look at the STP Collaborative. (And if you want to know more about SCM for agile teams than is covered in the talk, you can always read my book, Software Configuration Management Patterns: Effective Teamwork, Practical Integration. :) )

Saturday, October 24, 2009

Information, Interaction, and Scrum

A Scrum team that has dependencies on another group often struggles with how to integrate the other group's deliverables into their sprints. Since the other group hasn't signed up for the sprint commitment, and also has commitments to other teams, the Scrum team has the problem of how to commit to a delivery when they depend on someone outside of the team.

One way to manage dependencies is to list them as roadblocks. However, while that's a good start, it's not the whole answer. A few years ago I was working in the IT department of a large (not-a-software) company. My team was building a tool to archive a class of instant messages. We used Scrum, and consistently followed "the three questions": what I did yesterday, what I plan to do today, and roadblocks. A couple of items relating to the operations group were on the roadblock board for quite a while. While we had a designated contact in that group, he had many other commitments, and our requirements kept getting bumped.

After a while I asked the project manager if we could have our operations contact come to our daily Scrum. We promised that it would only be 15 minutes, and that this would be the only time that we asked him about the issues. After attending the Scrum reluctantly for a few days, our ops contact started to be more enthusiastic, our dependencies got resolved quickly, and the project moved smoothly from then on.

While I don't really know what our operations contact was thinking, it seemed to me that his presence in the Scrum helped him to understand how important his contribution to the project was. We were people he was working for, not just another issue in his work queue. Perhaps being in the Scrum helped him see himself as part of the team; people can commit to a team more readily than to a random issue.

Sometimes people suggest that, in the interest of efficiency, the best way to interact with external organizations is to generate a list of detailed requirements. The challenge with this approach is that by delivering a list of things you need, rather than having a conversation about the problems that you want to solve, you may end up with the wrong answer. This, combined with the teamwork aspects of having someone in the Scrum, leads me to believe that having someone join the Scrum as soon as roadblock items pile up is something to try sooner rather than later.

Agile works well because it focuses on individuals and interactions as a path to finding the information you need. While you need to be mindful of having too many people in every Scrum, if you find yourself wanting to pass a spec around rather than inviting someone you need to a Scrum, remember that sometimes interaction is more valuable than information.

Monday, October 19, 2009

Continuous Integration as Metaphor for Agile Methods

Build and SCM practices are essential for agile teams to deliver working software on a periodic basis. Continuous integration is an essential practice for agile teams, as described in the XP book: Extreme Programming Explained: Embrace Change, and Paul Duvall's excellent book: Continuous Integration: Improving Software Quality and Reducing Risk.

For those not familiar with continuous integration (CI), it's a practice where a team has an automated build that monitors the source code repository for changes and builds and tests the software periodically. CI allows the team to monitor the state of the project and detect mistakes and integration errors quickly, minimizing the risk of someone on the team being stuck because they checked out broken code, and thus enabling the team to deliver quickly.

Continuous integration is also a good metaphor for agile software development not just because it's about providing feedback and enabling change, but because of the way that CI tools work.

While the concept is called "continuous" integration, most, if not all, CI tools work in "sprints." CI tools can be configured to check the repository using two parameters:
  • A polling interval: How often to check the repository
  • A quiet period: How much time should elapse between changes before a build starts.
There are a couple of reasons for not building after every commit. The first is that people can sometimes be undisciplined about committing change sets. You may forget a file and check it in minutes later. In this case the quiet period saves you from failed builds that are "false negatives."

The other reason for the quiet period is that, even if all the developers were perfectly disciplined about committing atomic change sets, the build system might not be able to keep up with the work of the team if it built after every change. Building after a couple of closely related changes gives you the information you need to keep the codeline healthy. Building more frequently could put you at risk of getting more feedback in a less timely manner.
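To make the two parameters concrete, here is a rough sketch of the scheduling loop a CI tool might run. This isn't any particular tool's implementation, and the repo and builder objects are made-up interfaces for illustration.

import time

POLL_INTERVAL = 60   # seconds between checks of the repository
QUIET_PERIOD = 120   # seconds with no new commits before a build may start

def scheduler_loop(repo, builder):
    # Sketch of a CI scheduler: poll for changes, then wait for a quiet
    # period, so that closely spaced commits end up in a single build.
    last_built = repo.head_revision()
    while True:
        time.sleep(POLL_INTERVAL)
        if repo.head_revision() == last_built:
            continue  # nothing new to build
        # A change appeared; hold off until the repository has been quiet
        # long enough that a multi-file commit is likely to be complete.
        while time.time() - repo.last_commit_time() < QUIET_PERIOD:
            time.sleep(POLL_INTERVAL)
        last_built = repo.head_revision()
        builder.build(last_built)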

While the "keeping up with the developers" problem can be fixed with technology, the idea of working efficiently by not building more often than necessary also describes how agile teams work. Teams periodically check to see what work there is, and once requirements are "stable enough" they plan and execute. This approach is valuable because chaotic change can lead to churn and reduced productivity. For example, a Sprint starts with a fixed backlog that the team plans and quickly executes. If the requirements change too much during the Sprint, the Sprint is supposed to be terminated and re-planned. In practice many teams allow for some changes to requirements, changing direction too often can waste energy.

Agile is about embracing change, but being agile is not the same as being chaotic. We configure our tools to work effectively to give the feedback we need; we can do the same with our teams.

Thursday, October 1, 2009

Enabling Change

I'm starting to gather my thoughts for the October issue of CM Crossroads, which has a theme of "Overcoming Resistance to Change." Some of my favorite books on the topic of enabling change are:
While I can't possibly cover all of the ground that these books do, I can share some observations I've made while trying to help teams to do things differently, such as adopting Scrum or developing a more agile release management approach.

A common reason people resist change is that they follow "rules" that they don't realize no longer apply. When these "rules" are embedded in the culture of an organization, the challenge of changing is greater, since it's often more pardonable to fail when you've followed the established practices than to fail when trying something new. This is a big challenge to overcome, and there is no easy answer to it, other than to be aware that this behavior may be happening. But there are a couple of things that help enable change:
  • Lead by example. Often others just don't understand that other ways are possible.
  • Gather (visible) data. Often others ignore uncomfortable facts.
As an example of the first point, I've been on teams where unit testing was dismissed as simply too hard to do, or a waste of time. A small group can adopt the practice in small cases (refactoring the more difficult code to enable unit testing) and demonstrate how the presence of the tests helps improve quality and speed of delivery. In this way a small group of enthusiasts can lead the way for the more timid. (Really Dumb Tests discusses a similar point.)

In other cases, the team isn't aware of a problem. A colleague of mine recently said "you can't change what you can't measure," and gathering data is essential to making a team aware of a need to change. Once you have the data, you then have the ability to make decisions, and then measure whether those decisions have the desired effect.

To make this concrete, imagine a team that is estimating tasks based on a 7-hour ideal day (and an 8-hour work day). At the end of each two-week sprint the team either isn't meeting its goals or is feeling overworked, yet none of the tasks seemed more complicated than expected. One possibility is that the team's days aren't really 7-hour days. To measure this you could keep a chart on the wall tracking team and organization meetings, adding marks to a bar chart after each activity that seems unrelated to coding. If at the end of a two-week sprint this number is much higher than 10 hours (10 days x (8 hours at work - 7 hours coding)), then you have some options to consider:
  • Run meetings more effectively
  • Decide if all of the meetings add value
  • Decide to estimate based on a shorter ideal day
  • Decide that the work day needs to be longer than 8 hours.
  • (etc)
It's important that:
  • The data collection be lightweight and that everyone understand that the data need not be entirely scientific to be useful. Too much effort to gather data can derail a change effort because of perceived cost.
  • The data be visible and incremental. A hand drawn chart on a wall can be more effective than spreadsheet data that lives on someone's laptop. (But electronic data can also be made visible in the right context)
  • The team evaluates the data with a goal of improving, not blaming. Maybe the extra time spent in meetings was well-intentioned, or even necessary.
  • The team consider a number of options to change the situation. (See Finding and Evaluating Options for more on evaluating options.)
What's useful about data is that it avoids arguments about who has the most accurate memory. Collecting data may not solve the problem; the data may leave you with more questions than answers. But without data you'll have no good way to decide what to try changing, or whether the change had the desired effect.

Change is often hard because people don't understand the need for change, or the possible changes. By demonstrating the alternatives and their value, and by gathering data to evaluate current practices, you can start the process.

Saturday, September 26, 2009

IDEs and Build Scripts (September Better Software)

I wrote an article for the September/October issue of Better Software Magazine, IDEs and Build Scripts in Harmony, where I discuss how to use build tools and IDEs together while minimizing duplicate work and getting the benefits of both.

As usual, the editors did a great job selecting metaphorically appropriate art work, and there is lots of other good content in the issue, including an interesting commentary by Lee Copeland on testing, Scrum, and "testing sprints."

I won't repeat anything I say in the article (since I already said it), but I want to add some philosophical comments. The question of working in environments that use IDEs for development and integration builds that are driven by other tools is one that I care about because it involves some of the issues that often cause a lot of angst and discussion on development teams:
  • Standardization and the effects of tools on individual and team productivity, and
  • Repetition: having the same information in two places.
Many of these problems can be solved by tools that separate information appropriately: for example, keeping dependency information in build scripts and formatting information in IDE settings, so that you don't need to have duplicate configurations checked in. Or even having some sort of canonical way to describe formatting rules which you keep with the build scripts in a form that IDEs can leverage. This frees developers to use the tool that they are most accustomed to (and productive in), while maintaining the amount of standardization that helps teams be more productive.

I hope that you find the article interesting and thought-provoking.



Note: There was a production error that caused an old bio to show up in the print edition. I work at Humedica, not where the bio says.

Saturday, September 5, 2009

97 Things Every Programmer Should Know

This past week 97 Things Every Programmer Should Know was made available to the public. This project was driven by Kevlin Henney, who I know from the early Pattern Languages of Programming conferences. The list of contributors contains many familiar names, and I'm honored to be among them.

My contributions are about agility, deployment, and their intersection. They include:
This project has contributions from many really interesting and insightful people. Maybe you already know everything on the lists, but you might get a fresh perspective by reading the contributions. You are very likely to start thinking.

On a related note, this past week there was a conversation in my office between one of my colleagues, who is the parent of a toddler, and one who isn't, about why parents work so hard to sound excited when talking to children about tasks.

You might say in an excited tone:
"Let's brush our teeth! It will be fun!"
instead of
"You need to brush your teeth because it's good for you..."

Being the proud (some who know me IRL might say too proud) parent of a 2-year, 9-month-old boy, I got to thinking about why parents do this, and why it works so well. I suspect that the reasons are two-fold:
  • Toddlers don't (yet) get the idea of consequences, but they get the idea of "fun" so framing something as fun and exciting is just the path of least resistance.
  • Toddlers are wired to explore and learn, and every new thing is fun!
I'm realizing that, while as professionals we need to do things for purely practical reasons, learning how to work better can (and perhaps should) be fun too. Seeing how toddlers learn makes me wonder how much of that joy of discovery we give up as we grow, and whether we need to lose it.

So as you read through the contributions in 97 Things, try to find something new and learn about it not just because it's something you should know, but because you enjoy programming, and because learning new things is fun!

Sunday, August 16, 2009

Streamed Lines at 11

As I was starting to prepare for a class I'm giving at the Software Test and Performance conference in Cambridge MA this October I looked over a paper Brad Appleton and I wrote in 1998 on branching patterns: Streamed Lines, and I started to think about the path from this paper to the SCM Patterns book. Streamed Lines describes a number of branching patterns and when it's appropriate to use each one. From time to time people still tell me or Brad that this is one of the better overviews of branching strategies they have seen.

The paper grew out of material we gathered in a workshop at ChiliPLoP 1998 in Wickenburg, AZ. We organized a set of index cards, categorized by color-coded stickers, into the paper we prepared for workshopping at PLoP 98 (the PLoP conference I was program chair for). Many steps later, with encouragement from John Vlissides, we submitted a book proposal and started working on Software Configuration Management Patterns. The SCM Patterns book says a lot less about branching than Streamed Lines, and more about how SCM and version management practices fit into a pragmatic and agile development environment. Because the book morphed away from the original branching patterns, there is still a place for the information in Streamed Lines.

Streamed Lines may be due for an update to take into account how things like distributed version management systems like git and Mercurial affect the cost of branching. Regardless of tools, branching can have costs relative to not-branching; just because something is easy to do, does not mean that it's the right thing to do, but newer tooling is worth considering as you develop an SCM strategy.

Have you read Streamed Lines? Does it need an update? What would you change about it? How much should tools affect the principles it describes?

Sunday, August 9, 2009

Sprint Review as Agile Enabler

An agile process such as Scrum is built on a number of both project management and engineering practices. The engineering practices support the project management practices and the project practices guide engineering decisions. While it takes more than the presence (or absence) of any one practice to cause your agile project to succeed or fail, some practices can drive your process in a powerful way. Sprint reviews are one practice that, when done with the right attitude, can help teams develop and maintain a good project rhythm.

An iterative process has mechanisms in place to help steer teams. Of all of the practices that support iteration, regular sprint reviews help teams and product owners get the feedback that they need to improve their performance. Reviews are also one practice that I've noticed teams slack off on, especially when they are first starting out.

A common rationale for not having an iteration review is that there isn't enough to show. But when the team is not making progress is precisely the most important time to review progress with the product owner and evaluate how the project is doing and what needs to change. While the first review may be difficult, if everyone on the team is committed to a successful project, a review of a less-than-successful sprint can have many positive consequences:
  • The product owner may be more impressed by the progress than the team thought, freeing the team to take more ownership of their work.
  • The product owner may be better able to understand that the backlog might be the wrong size as the team compares the results of the sprint to the backlog, providing the team with the data it needs to have a more attainable backlog.
  • The team may recognize things it can do to work more effectively.
It's important that everyone understand the reviews for what they are: brief, lightweight mechanisms for evaluating progress as a group and understanding how to improve. This means that it's OK for things to go wrong in a review. And the team should not spend a lot of time preparing or setting up environments. (With good engineering practices you should be able to build and deploy a "review version" quickly. If you can't, consider prioritizing deploying early.)

As you identify issues it's also important to review progress on these issues at subsequent reviews. If you let them drop then the people at the review will come to understand that the review feedback is part of an empty exercise. It's OK to acknowledge that a review item wasn't acted upon, or is no longer necessary. But you should discuss each action from a review at the next one.

While this sounds easy, there are some challenges, including establishing trust between the product owner and the team, and developing an understanding that they share the same goals. You need to be able to talk about what's not working, and figure out how to make it work, not just assign blame. If the team and the product owner are new to Scrum, and the project is new, you may have to start with a premise of trust. This is hard, but with the regular feedback of a review each sprint, the stakeholders will be able to readily evaluate everyone's dedication to the project.

It's very important to discuss what went well, preferably before covering what needs to be improved. (I tend to favor the attitude I learned from patterns writers workshops.)

Some suggestions for those struggling with agile:
  • Have reviews at the end of each Sprint, using your backlog as an agenda.
  • Show the work done in the simplest way possible.
  • Collect feedback both on the work and the process, and identify things that went well and things to improve.
  • Save the list of things to improve for the next review, and be sure to discuss them.
Agile methods are based on periodic feedback, and a review is a lightweight process to give feedback essential to a team. If your team is struggling with agile, have reviews each sprint, and try to understand what's working and what's not. The review will guide you to the most critical issues (technical and organizational) that you need to address.

Sunday, August 2, 2009

Releasing, Branches, and Agile Teams

When Brad Appleton and I wrote the SCM Patterns book we discussed two branching patterns related to releasing software: Release Line and Release Prep Codeline. The first is a pattern that allows teams to easily support released code. The second is a pattern that helps you "stabilize" a codeline before a release while allowing parallel work for the next release to proceed. Too often teams start Release Prep Codelines at the wrong time or for the wrong reasons, without understanding the costs. Given the rate at which changes happen on agile codelines, the costs of creating a branch can be large if you do it for the wrong reasons.

If an ideal agile team has "shippable" software at the end of every iteration, a Release Prep Codeline isn't necessary. You just label your code at the end of the final sprint, and if you need to provide a fix that can't be done from the mainline code, you start a release branch from that label.

For the many teams that can't ship at the end of every sprint, creating a Release Prep Codeline (branch) is useful: it avoids some poor alternatives, like having a code freeze. The branch can receive small changes to fix problems, and the mainline can add new features, refactor, and integrate the fixes from the release-prep branch.

As the time between when the branch is created and the project is released grows, the cost of merging changes between the branch and the Mainline increases because the source code diverges. This decreases the velocity of the team and can make the time to release grow more.
A long interval between branching and release often happens for reasons like:
  • Quality issues. There are a lot of problems, so going from "feature complete" to "shippable" takes longer than expected.
  • "Code Freeze" happens before "Feature Freeze." Not explicitly, but after the branch is created you identify new "must-have" features. This gets worse as the time between branch and ship increases.
So what is a team to do? Here are some suggestions:
  • Be agile and prioritize: If the release is the most important task, do that work on the mainline, and have everyone work on it. Don't branch until you are ready to ship.
  • Add automated tests early. Try to be "ready to ship at the end of the sprint," so you can avoid the costs of branching.
  • Don't branch until you really are feature complete, and use the Release-Prep Branch only for a constrained set of fixes.
If you really need to start future work before the current release is ready to ship, consider either:
  • Doing all work on the main line and isolate "future work" by architectural techniques. (Make new features plugins, for example).
  • Keeping the work that is highest priority on the main line, and creating a Task Branch for the future activity. Those on the task branch are responsible for merging main line code into their branch, and when the release is done, the task branch gets copied to the main line.
My tendency is to want to keep the highest priority work on the main line as long as possible. Usually the code you are about to release meets this criterion.

These approaches are technically simple to implement. But like all SCM issues, there are many non-technical issues to address. Product owners need to understand the costs of "working in parallel," and an agile team is responsible for making sure that the product owners know these costs so that they can make the correct decisions about when to start a parallel work effort.

How has your team addressed pre-release and release branches? If you read this and have an opinion, please comment!

Monday, July 27, 2009

Yours, Mine, and Ours- Ownership on Agile Teams

During a panel discussion on people issues on agile teams that I participated in at a meeting of the New England Agile Bazaar, the question arose of how to address the problem of people taking credit for work someone else did. (The short answer is that you can solve the problem through a combination of agile tracking methods to make progress visible, and some people management techniques.) This entry isn't about that question.

The issue of credit in agile teams reminded me of a recurring puzzlement: when someone on a team says something like "we used Jim's component...." what does that mean? Some of the choices are:
  • I want to give Jim credit for doing something useful.
  • I want to make it clear who's responsible if it doesn't work.
  • Jim is the only one who knows how it works, and the only one who can make changes.
There may be other choices, but this type of dynamic comes up a lot and I wonder what it has to do with the tension between the collective code ownership value of agile methods, and the innate tendency for people to want to get and give credit.

I tend to think that on an agile project, everyone should be able to work on any part of the code. Some people may have more appropriate skills for a task initially, but if the most "skilled" person is busy, anyone should be able to jump in and make a change. And if I'm doing something that touches Jim's code and something seems wrong in Jim's code, I should be free to fix it.

This form of Collective Code Ownership has advantages:
  • The Truck Number of the project increases as knowledge is shared.
  • The code gets "reviewed" and improved by everyone who touches it. You're less likely to have code that follows idioms only one person understands, without the expense of formal reviews.
  • There is more peer pressure to do things well, as others will be looking at the code.
Agile practices enable this kind of collective code ownership by encouraging unit tests, accepted coding standards, and team collaboration to decide how the backlog gets completed.

While giving and getting praise and credit is good, and while anyone working on a feature should feel responsibility to do the best that they can, on an agile project, teams should be wary of a "you touched it, you own it" dynamic. That makes for fragile code, interrupt driven work, and a lower velocity as "the experts" become bottlenecks.

Silos of knowledge don't benefit the team in the medium or long term. Encourage people to work on code that someone else has "authored." On an agile project, the team commits to delivering functionality for a sprint and your code is everyone's code.

Monday, July 20, 2009

Finding and Evaluating Options

Of all the rules, techniques, and heuristics I've tried for making design (and other) decisions, the "Rule of Three" keeps surfacing as one of the simplest and most effective to use.

There are two variations of the Rule of Three. The first, from Secrets of Consulting: A Guide to Giving and Getting Advice Successfully by Jerry Weinberg is about evaluating options:

If you can't think of three things that might go wrong with your plans, then there's something wrong with your thinking.
This sounds rather negative, but if you think for a second, it makes sense, and it's really about being constructive. Design decisions are about evaluating trade-offs. You want to pick the decision that has the problems that you can live with. When we've been struggling with a problem and come up with a plausible option, we tend to want it to work. If you don't think about what might go wrong you're making it harder for a good idea to succeed. The worst that can happen is that you find that your "great" idea doesn't solve a likely problem. This is good. The best is that you discover that your idea addresses issues you didn't consider. This is very good.

Is thinking up three possible problems with a decision foolproof? No. But this approach strikes a good balance between Big Design Up Front and being too impulsive.

Weinberg's original Rule of Three is great when you have an option to evaluate, but how do you develop options? In Behind Closed Doors: Secrets of Great Management (Pragmatic Programmers), Johanna Rothman and Esther Derby describe a related Rule of Three: always come up with (at least) three alternatives when trying to solve a problem. While three can seem like an arbitrary number, Johanna and Esther explain that:
  • One alternative is a trap.
  • Two alternatives is a dilemma.
  • Three provides a real choice (and gets people in a frame of mind to come up with more).
I've used the combination of these approaches for design decisions with great results. (I describe one case in my first blog post.)

You may not always need three choices when looking at a situation. And when you have a solution in mind, you may not always need to think of three things that can go wrong with it. On the other hand, the cost of following this process is small, and the benefits are great. Make the Rules of Three a habit and you may find yourself making better decisions.

Tuesday, July 14, 2009

July 23 Agile Bazaar: Agile Teamwork: The People Issues

On July 23, I'll be on a panel with Johanna Rothman, Ellen Gottesdiener, and Mike Dwyer: Agile Teamwork: The People Issues. The Agile Bazaar is a great group to know about if you are involved in Agile Software Development in the Boston area. This should be a very interesting meeting.

Sunday, July 12, 2009

Begin with the End in Mind: Deploy Early

In many projects I've worked on, the task of developing a deployment process is deferred until close to the release date. This is a bad idea. Deferring deployment-related decisions adds risk to a project, as deployment to a target environment exposes development-time assumptions and operational requirements that may need to be addressed by the engineering team before an application is released.

I suggest that a project develop (and use) a process to deploy the software to the target platform in the first iteration. Starting on deployment early gets you closer to the agile goal of shippable code at the end of each iteration.

Deployment is important, and something for which there should be a story in your process early on because:
  • Deployment is the first thing a customer sees, but it's only an enabler; it doesn't directly deliver value, so you'd like the process to be efficient. Incremental improvement of the process is a great way to make it work well.
  • Design decisions can have an impact on the deployment and configuration process, and deployment models may suggest a different design.
  • Thinking about deployment early means that you'll have an identifiable, repeatable process. This will make your testing process more valuable.
By a deployment process I mean the mechanism for delivering your software to, and making it usable by, a customer (or initially a QA team), including any required configuration steps. This process can be as manual or as automated as you like, ranging from a zip file and a README to a fully automated installer. You can start simple and automate more as you get feedback.
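As a sketch of what "starting simple" might look like, here is a hypothetical packaging script; the file names and layout are assumptions, not a prescription. It bundles the build output, a configuration template, and a README of manual steps into a zip that a tester or customer could unpack.

# package.py - a hypothetical "simplest thing" deployment package builder.
import os
import zipfile

VERSION = os.environ.get("BUILD_NUMBER", "dev")  # e.g., passed in by the CI server

# (source file, name inside the zip) -- placeholder paths for illustration
ARTIFACTS = [
    ("target/myapp.war", "myapp.war"),                               # the build output
    ("src/main/config/app.properties.template", "app.properties"),   # edited at the customer site
    ("README.deploy.txt", "README.txt"),                             # manual installation steps
]

def build_package(out_dir="target"):
    # Create myapp-<version>.zip containing the artifact, config template, and README.
    package = os.path.join(out_dir, "myapp-%s.zip" % VERSION)
    with zipfile.ZipFile(package, "w", zipfile.ZIP_DEFLATED) as zf:
        for source, name_in_zip in ARTIFACTS:
            zf.write(source, name_in_zip)
    return package

if __name__ == "__main__":
    print("Built deployment package: %s" % build_package())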

Developing a deployment process early can look like Big Design Up Front if you start out with too detailed a process that involves guesses about things you just don't know at the beginning of a project. But remember: you can change the process as you learn. Start with a simple process and change and refactor it as you go to make it more suitable to the target user's needs. You may discover that a manual process works well enough, or you may find that adding some automation makes the process simpler and lower risk with little added cost. If you have a test team that is using the deployed artifact, you may find that you can leverage automation developed to support test-related deployments and use it in your customer process.

Deploying early and often can improve your architecture too, as deployment exposes issues about the system architecture that are easy to overlook in a development environment. How the application is designed affects how it can (and should) be packaged (and the reverse is also true). While another group may be responsible for the operational aspects of deployment, the engineering team is responsible for making the process work in a real environment. Consider configuration as just one example. Using a build tool like Maven or Ant, it's a simple matter to keep a number of configuration files in sync by using filtering and passing a property in to the build. Once you deploy a package that needs to be configured at a customer site, the fact that there are now many configuration files that need to be edited in the same way becomes an obvious source of wasted effort and possible error.

Even seemingly trivial issues, such as how to configure logging in production to debug a problem, can influence the choice of logging framework. The sooner you see the problem, the easier it is to fix with less risk. Michael Nygard's book Release It! covers some of these issues and is an excellent resource for information about how to build software that can be deployed into production effectively.

Like all things agile, you can start small and take small steps towards the end result. The key things to consider are:
  • Packaging: have the build generate a deployable package. This can be used for integration testing, demo environments, and the like.
  • Configuration: understand what parameters need to be configured and how to configure them. The first pass could be build properties that are filtered into configuration files. Move towards understanding what needs to be configured at deploy time at a customer site.
  • Requirements of the target platform and operations teams: how automated does the process need to be?
Even if you don't know all of the details of the production deployment, frequent early deployments help you to understand which issues you need to decide now, and which you can defer. And if you really do understand what you need to do, you may as well give it a try sooner rather than later. At best you'll be validated. At worst, you'll have lots of time to address issues.

Friday, July 10, 2009

STPCon 2009 in Boston

I'll be giving a class about software configuration management for agile teams at the Software Test and Performance Conference in Cambridge, MA on Friday Oct 23. The conference is Oct 19-23, 2009.

The abstract for my talk is:


Version management, build, and release practices are essential elements of any effective development environment, especially for agile teams, which rely on feedback and maintaining high quality during rapid change. Many agile teams are puzzled about how to apply good software configuration management (SCM) practices in an agile environment. This session will provide an overview of SCM concepts and explain the patterns and practices that teams need to maintain an agile SCM environment. You’ll learn how agile testing practices and continuous integration change how teams use SCM, and how to set up the essentials of an agile SCM environment.

Sunday, July 5, 2009

Testing Cost Benefit Analysis

I'm probably one of the first people to advocate writing tests, even for seemingly obvious cases. (See, for example, Really Dumb Tests.) Still, there are cases where I suggest that testing might best be skipped: cases where tests not only add little value but also add unnecessary cost to change. It's important to honor the principles of agile development and not let the "rule" of a test for everything get in the way of the "goal" of effective testing for higher productivity and quality.

Writing a unit test can increase the cost of a change (since you're writing both the code and the test), but the cost is relatively low because of good frameworks, and the benefits outweigh the costs:
  • The unit test documents how to use the code that you're writing,
  • The test provides a quicker feedback cycle while developing functionality than, say, running the application, and
  • The test ensures that changes that break the functionality will be found quickly during development so that they can be addressed while everyone has the proper context.
Automated integration tests, especially those involving GUIs, are harder to write and often cover code that was already tested with unit tests, so it's easy to stumble onto cases where a test adds so little value, at enough cost, that it's worth reconsidering whether an automated test is needed in that particular case.

On one project I worked on, the team was extremely disciplined about doing Test Driven Development. Along with unit tests, there were integration tests that tested the display aspects of a web application. For example, a requirement that a color be changed would start with a test that checked a CSS attribute, and a requirement that two columns in a grid be swapped would result in a test that made assertions about the rendered HTML.

The test coverage sounded like a good idea, but from time to time a low-cost (5-minute), low-risk change would take much longer (an hour) as tests needed to be updated and run, and unrelated tests would break. And in many cases the tests weren't comprehensive measures of the quality of the application: I remember one time when a colleague asserted that it wasn't necessary to run the application after a change, since we had good test coverage, only to have the client inquire about some buttons that had gone missing from the interface. Also, integration-level GUI tests can be fragile, especially if they are based on textual diffs: a change to one component can cause an unrelated test to fail. (Which is why isolated unit tests are so valuable.)

I suspect the reasons for the high cost/value ratio for these UI-oriented tests had a lot to do with the tools available. It's still a lot easier to visually verify display attributes than to automate testing for them. I'm confident that tools will improve. But it's still important to consider cost in addition to benefit when writing tests.

Some thoughts:
  • Integration (especially GUI) tests tend to be high cost relative to value.
  • When in doubt, try to write an automated test. If you find that maintaining the tests, or their execution time, adds a cost out of proportion to the value of a functionality change, consider another approach.
  • GUI tests can be high cost relative to value, so focus on writing code where the view layer is as simple as possible.
  • If you find yourself skipping GUI testing for these reasons, be especially diligent about writing unit tests at the business level; doing this may drive you to cleaner, more testable interfaces (see the sketch after this list).
  • Focus automated integration test effort on key end-to-end business functionality rather than visual aspects of an application.
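To illustrate what testing at the business level, independent of the view layer, might look like, here is a minimal JUnit 4 sketch. The PriceCalculator class and its discount rule are invented for the example, not taken from a real project:

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.Test;

// Hypothetical example: pricing logic kept out of the view layer can be
// tested directly, with no assertions about HTML or CSS.
public class PriceCalculatorTest {

    // A small business-logic class with no view-layer dependencies.
    static class PriceCalculator {
        BigDecimal totalFor(int quantity, BigDecimal unitPrice) {
            BigDecimal total = unitPrice.multiply(BigDecimal.valueOf(quantity));
            if (quantity >= 100) {
                // Assumed rule: orders of 100 or more get a 10% discount.
                total = total.multiply(new BigDecimal("0.90"));
            }
            return total.setScale(2, RoundingMode.HALF_UP);
        }
    }

    @Test
    public void bulkOrdersGetTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        BigDecimal total = calculator.totalFor(100, new BigDecimal("2.00"));
        assertEquals(new BigDecimal("180.00"), total);
    }
}

A test like this runs in milliseconds and doesn't break when someone changes a stylesheet, which is exactly the property the GUI-level tests described above lacked.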
Applications need to be tested at all levels, and automated testing is valuable. It's vital to have some sort of end-to-end automated smoke test. Sometimes there is no practical alternative to simply running and looking at the application. Like all agile principles, testing needs to be applied with a pragmatic perspective, and a goal of adding value, not just following rules blindly.

Sunday, June 28, 2009

Review of Adrenaline Junkies and Template Zombies

I reviewed the book Adrenaline Junkies and Template Zombies, by the Atlantic Systems Guild consultants, on StickyMinds.com. Like many of the other books by this group, it's an inspiring read. For more, see the review.

Saturday, June 20, 2009

3 Books every engineer should read

There are many excellent books to read if you write software (Software Configuration Management Patterns: Effective Teamwork, Practical Integration among them :) ). Given that each of us has limited time, it's difficult to read everything one might want to, so we tend to focus on reading material that addresses our immediate problems. Having said that, there are three books by Jerry Weinberg with incredibly valuable advice that I use frequently, and I'd like to recommend them to anyone who's serious about software engineering work:
These books are more about working effectively with people to solve technical problems than about the technology itself. This is an area many people neglect; tools and technologies come and go, and you need to maintain your skill level, but knowing how to approach problem solving is a timeless skill.

I first heard about Are Your Lights On? from Linda Rising, who recommended it on the patterns-discussion list. This is an easy-to-read, entertaining book that illustrates quite clearly what some people never really learn: you need to know what the problem is before looking for a solution. Sounds obvious, but if you've ever spent a lot of time implementing a complicated solution to what turned out to be a non-problem, the techniques in this book will be useful.

Becoming a Technical Leader: An Organic Problem-Solving Approach has lots of good information for technical people at all levels. This book also has some good work-style and problem solving ideas for non-technical people. If you have a role with the word "lead" in your title, the value of this book seems obvious. What's surprising is how the book explains how to be a better technical contributor on a team, regardless of your title. It has many problem solving techniques and guidelines expressed in a clear style. I've read this book a few times and found new insights each time.

If you are in a role where you need to give or receive advice Secrets of Consulting: A Guide to Giving and Getting Advice Successfully has a number of excellent, easy to remember, "rules" to keep in mind. Regardless of whether you are a full time employee, contract employee, or someone who is considering consulting, the techniques in this book are valuable and timeless.

It's rare that a week goes by when I'm not in situations where I apply the advice in one or more of these books. There are other, more recent books that cover similar ground, in particular a few by Johanna Rothman and Esther Derby (among others). Jerry Weinberg is one of the masters of combining the technical and human sides of software development, and it's worth your while to read what he's written.

Sunday, June 14, 2009

Feedback, Writers Workshop Style

I was fortunate enough to participate in the first few Pattern Languages of Programs conferences, where I learned quite a bit about technology, writing, problem solving, and giving and receiving feedback. What made the biggest impression on me was the process the patterns workshops used.

Before I started to write patterns, I was used to getting feedback in a way that focused on what was wrong or missing in my presentation. The patterns community approach is to evaluate papers through a shepherding process that ends in a writers workshop.

A patterns writers workshop has the following parts:
  • The author of a paper reads a selection. This gives the other participants a chance to understand what the author thinks is most important about the paper. After this reading the author becomes a passive participant in the session, speaking only if someone makes a comment that isn't clear. The author does not defend his writing.
  • Someone in the workshop summarizes the paper. This allows the author to understand if the main points of the paper made it through.
  • The group discusses what they liked about the paper.
  • The group discusses things that could be improved about the paper. Notice that this phase is not "things that are wrong with the paper," as the goal is to help the author.
  • Finally, the author can speak again, asking clarifying questions.
The important things about this process are that the feedback starts with positives, and focuses on improvement. Reinforcement of what's working helps the author be more receptive to suggestions for improvement later on and provides guidance for what not to change. Since the "negatives" are cast in terms of actionable items, the author has direction.

I've found the following approach, which is a variant of the workshop format, useful in other circumstances outside of reviewing writing:
  • Start with what went well.
  • Discuss things to improve, not simply things that are broken.
These guidelines are part of retrospective formats and are a component of some management techniques. Sad to say, much day-to-day feedback seems to focus on negatives because, the argument goes, there isn't a need to discuss what you're doing well. This isn't a great argument. Without feedback on the good things, someone won't have the confidence to make improvements, or guidance on what strengths to leverage as they make changes to their work.

Sure, the first tendency is to focus on what's broken, since it probably has your attention, and thinking of things to do to improve takes more work than simply registering observations, but if you are being trusted to provide feedback, shouldn't you put in the effort?

Next time someone asks you to give feedback on a work item:
  • Start off by finding at least one thing you liked, and mention it.
  • When you discuss things that need fixing, try to use a sentence that has the form: "this would be better if ..."
See if this approach helps your feedback be more well received and more effective.

For more on the writers workshop process and applying it in technical contexts, have a look at the book Writers' Workshops & the Work of Making Things: Patterns, Poetry.... To learn more about patterns visit the Hillside Group. Esther Derby wrote a great article to guide managers in giving and receiving feedback.

Sunday, June 7, 2009

The Value of Agile Infrastructure

Engineering practices such as Continuous Integration, Refactoring, and Unit Testing are key to the success of agile projects, and you need to develop and maintain some infrastructure to support these practices. Some teams struggle with how to fit work that supports agile infrastructure into an agile planning process.

One approach is to create "infrastructure" or "technical" stories for tasks that are not directly tied to product features. In these stories the stakeholder may be a developer. For example, a user story for setting up a continuous integration server can go something like:

As a developer I can ensure that all the code integrates without errors so that I can make steady progress.

While not all work on a project leads directly to a user-visible feature, thinking of infrastructure stories differently than other stories has a number of risks, and can compromise the relationship between the development team and other stakeholders. Agile methods are about delivering value as measured by the product owner (though the team has input in helping the owner make the right cost/benefit decisions). Thinking about tools and infrastructure as having value that is independent of end-user value subjects you to the same risks that Big Design Up Front does: wasting effort and not delivering value.

I'm a fan of being able to trace everything an agile team does to something that creates value for the product owner. This may be a bit idealistic, but having this ideal as a starting point can be useful; even if you just try to focus on delivering value, you will be more likely to make the right decisions for your team and your project.

One way to make the relationship between "infrastructure" work and product value clear is to recast infrastructure stories with a focus on value to the product owner. For example, consider the story above for setting up a CI system:

As a project manager, I want to identify when changes break the codeline so that mistakes can be fixed quickly, and the code stays healthy.

This may not be the perfect example, but when I think of infrastructure items this way I have a better understanding of why I'm considering a change, and it prepares me to advocate for the work in case there is push-back. You can apply this approach to other "technical" items such as:
  • Investigating a new testing framework.
  • Refactoring (in the context of implementing a story).
  • Upgrading your SCM system.
  • Setting up a wiki.
  • Adding reporting to your maven build.
The benefits of considering the value of technical tasks to all stakeholders, and not just developers, include:
  • A better relationship between engineers and the other stakeholders on a project.
  • A better allocation of resources: If you can't explain the value of something there may be a better solution, or at least a less costly one.
  • A better understanding of how to use engineering techniques to deliver value.
I admit that this approach has some challenges. The value of technical tasks can be difficult to explain, and the benefits are often long term while the costs are short term. Even if your project dynamics require you to address infrastructure in some other, more indirect way, you can benefit by starting to think in terms of how what you want to do adds value. Software engineering is a means towards the end of solving business problems, and as engineers we should be able to explain what we're doing to any informed stakeholder.

Tuesday, June 2, 2009

Pitching Agile and the Elevator Test

Some time ago I read Crossing the Chasm by Geoffrey Moore, which is about marketing high tech products to mainstream customers. The part of the book that stuck with me, and the one that has the most pages marked in my copy was the section on positioning a product, in particular the section on the elevator test, which I've found to be a useful framework for understanding the value of many things, not just high tech products and companies.

A successful two-sentence statement of the value of a product should pass the elevator test. While the book is focused on making a claim that gets the interest of investors, being able to cast any idea you are trying to sell into this format is a great exercise, as it forces you to be focused and to really understand the value of the idea to your audience. Once you've got someone interested you can then go into the nuances.

In an elevator pitch you should be able to describe:
  • Your target audience
  • The market alternative
  • The new category that your idea/product fills
  • What problem solving capability this new thing provides
  • An alternative that is in the target audience's mind, and
  • The key product features
I have seen the benefits of agile software development, and I really understand them. The problem is that my intuitive understanding often gets in the way of my explaining the issues that others are concerned with -- what's obvious to me isn't always obvious to others! The really short tag lines ("Embrace Change," for example) sometimes lead people to think of agile as just rapid change, and don't get across the discipline that a successful agile project needs. And lots of people have been jaded, having experienced projects billed as "agile" that were mostly chaos with a couple of agile technical practices thrown in. I wanted to try to imagine what a pitch for agile would be. (And I welcome feedback.)

This is my attempt at a pitch for agile that can pass the elevator test:
For organizations that are dissatisfied with the overhead and lack of flexibility of conventional software development methods, Agile Software Development is a software development process. Unlike traditional chaotic or document-heavy approaches, Agile Software Development is a lightweight, yet highly disciplined, approach that delivers end-to-end value in frequent iterations, where stakeholders can re-evaluate priorities at the end of each iteration based on the state of the application and current market needs.
This seems close to what I was aiming for. What's missing, I think, is a good characterization of the alternative to agile. People who are investigating agile might be looking to change from anything from a traditional waterfall process, to (more typically) a chaotic approach. I'm not sure that I captured that entirely.

Even if this isn't the best pitch, thinking about how to express the value of agile clearly and quickly is a good exercise. Much like agile planning, where "The plan is nothing, planning is everything," the exercise of developing a pitch that can pass the elevator test is perhaps more important than the actual pitch.

Friday, May 29, 2009

Really Dumb Tests

When I mention the idea of automated unit testing (or developer testing, as J.B. Rainsberger referred to it in his book JUnit Recipes), people either love it or, more likely, are put off because all the tests that they want to write are too hard, and the tests that they can write seem too simple to be of much value. With the exception of tests that "test the framework" that you are using, I don't think that one should worry prematurely about tests that are too simple to be useful, as I've seen people (myself included) spin their wheels over a coding error that could have been caught by one of these "simple" tests.

A common example is the Java programmer who accidentally overrides hashCode() and equals() in such a way that two items which are equal do not have the same hash code. This causes mysterious behavior when you add items to a collection and try to find them later (and you don't get a "HashCodeNotImplementedCorrectly" exception). True, you can generate hashCode and equals with your IDE, but it's trivial to test for this, even without a framework that does the hard work for you.
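As a sketch of what such a test might look like (the Account class here is invented for the example; a real test would use one of your own value classes):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example: a simple value class and a test of the
// hashCode/equals contract. If equals() is overridden but hashCode() is
// not (or vice versa), this test fails long before the object misbehaves
// in a HashMap or HashSet.
public class AccountHashCodeTest {

    static class Account {
        private final String id;

        Account(String id) {
            this.id = id;
        }

        @Override
        public boolean equals(Object other) {
            return other instanceof Account && id.equals(((Account) other).id);
        }

        @Override
        public int hashCode() {
            return id.hashCode();
        }
    }

    @Test
    public void equalObjectsHaveEqualHashCodes() {
        Account first = new Account("12345");
        Account second = new Account("12345");
        assertEquals(first, second);
        assertEquals("equal objects must have equal hash codes",
                first.hashCode(), second.hashCode());
    }
}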

So when I start working with a system that needs tests, I try not to worry too much about how "trivial" the tests seem initially. Once I was working on an Interactive Voice Response (IVR) application, which we would test by dialing in to a special phone number. We'd go through the prompts, and about half-way into our test we'd hear "syntax error." While I really wanted to write unit tests around the voice logic, we didn't have the right tools at the time, so I tried to see if I could make our manual testing more effective.

This IVR system was basically a web application with a phone instead of a web browser as a client, and VXML instead of HTML as the "page description" language. A voice node processed the touch-tone and voice input and sent a request to a J2EE server that sent back Voice XML which told the voice node what to do next. During our testing the app server was generating the VXML dynamically and we'd sometimes generate invalid XML. This is a really simple problem, and one that was both easy to detect, and costly when we didn't find it until integration testing.

I wrote a series of tests that made requests with various request parameter combinations and checked whether the generated voice XML validated against the DTD for VXML. Basically:

// appServer and XMLUtils are helpers from our project's test harness.
String xmlResponse = appServer.login("username", "password");
try {
    // Validate the generated VXML against the DTD; any parse or
    // validation error fails the test.
    XMLUtils.validate(xmlResponse);
} catch (Exception e) {
    fail("Response was not valid VXML: " + e);
}

Initially people on the team dismissed this test as useless: they believed that they could write code that generated valid XML. Once the test started to pick up errors as it ran during the Integration Build, the team realized the value of the test in increasing the effectiveness of the manual testing process.

Even though people are careful when they write code, it's easy for a mistake to happen in a complex system. If you can write a test that catches an error before an integration test starts, you'll save everyone a lot of time.

I've since found lots of value in writing tests that simply parse a configuration file and fail if there is an error. Without a test like this, the problem manifests in some seemingly unrelated way at runtime. With a test, you know the problem is in the parsing, and you also know what you just changed. Also, once a broken configuration is committed to your version control system, you slow down the whole team.

If your application relies on a configuration resource that changes, take the time to write some sanity tests to ensure that the app will load correctly. These are in essence Really Dumb smoke tests. Having written these dumb tests, you'll feel smart when you catch an error at build time instead of when someone else is trying to run the code.
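Here is a minimal sketch of such a sanity test; it assumes the configuration is a Java properties file named application.properties on the classpath (both the file name and the format are assumptions made for the example):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;

import java.io.InputStream;
import java.util.Properties;

import org.junit.Test;

// Hypothetical sanity test: it only checks that the configuration resource
// is present and parseable, so a broken file fails the build instead of
// causing a mysterious failure at runtime.
public class ConfigurationSanityTest {

    @Test
    public void applicationPropertiesLoadCleanly() throws Exception {
        InputStream in = getClass().getResourceAsStream("/application.properties");
        assertNotNull("application.properties is missing from the classpath", in);

        Properties props = new Properties();
        props.load(in); // throws if the file is malformed

        assertFalse("application.properties is empty", props.isEmpty());
    }
}

The same idea applies to XML or any other configuration format your application reads: load it the way the application would and let the parser do the work.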
