Monday, July 27, 2009

Yours, Mine, and Ours- Ownership on Agile Teams

During a panel discussion on people issues on agile teams, at a meeting of the New England Agile Bazaar, the question arose of how to address the problem of people taking credit for work someone else did. (The short answer is that you can solve the problem through a combination of agile tracking methods that make progress visible and some people-management techniques.) This entry isn't about that question.

The issue of credit in agile teams reminded me of a recurring puzzlement: when someone on a team says something like "we used Jim's component..." what does that mean? Some of the choices are:
  • I want to give Jim credit for doing something useful.
  • I want to make it clear who's responsible if it doesn't work.
  • Jim is the only one who knows how it works, and the only one who can make changes.
There may be other choices, but this type of dynamic comes up a lot and I wonder what it has to do with the tension between the collective code ownership value of agile methods, and the innate tendency for people to want to get and give credit.

I tend to think that on an agile project, everyone should be able to work on any part of the code. Some people may have more appropriate skills for a task initially, but if the most "skilled" person is busy, anyone should be able to jump in and make a change. And if I'm doing something that touches Jim's code and something seems wrong in Jim's code, I should be free to fix it.

This form of Collective Code Ownership has advantages:
  • The Truck Number of the project increases as knowledge is shared.
  • The code gets "reviewed" and improved by everyone who touches it. You're less likely to have code that follows idioms only one person understands, and you get that benefit without the expense of formal reviews.
  • There is more peer pressure to do things well, since others will be looking at the code.
Agile practices enable this kind of collective code ownership by encouraging unit tests, accepted coding standards, and team collaboration to decide how the backlog gets completed.

While giving and getting praise and credit is good, and while anyone working on a feature should feel responsible for doing the best that they can, on an agile project teams should be wary of a "you touched it, you own it" dynamic. That makes for fragile code, interrupt-driven work, and lower velocity as "the experts" become bottlenecks.

Silos of knowledge don't benefit the team in the medium or long term. Encourage people to work on code that someone else has "authored." On an agile project, the team commits to delivering functionality for a sprint and your code is everyone's code.

Monday, July 20, 2009

Finding and Evaluating Options

Of all the rules, techniques, and heuristics I've tried for making design (and other) decisions, the "Rule of Three" keeps surfacing as one of the simplest and most effective to use.

There are two variations of the Rule of Three. The first, from Secrets of Consulting: A Guide to Giving and Getting Advice Successfully by Jerry Weinberg, is about evaluating options:

If you can't think of three things that might go wrong with your plans, then there's something wrong with your thinking.
This sounds rather negative, but if you think for a second, it makes sense, and it's really about being constructive. Design decisions are about evaluating trade-offs. You want to pick the option whose problems you can live with. When we've been struggling with a problem and come up with a plausible option, we tend to want it to work. If you don't think about what might go wrong, you're making it harder for a good idea to succeed. The worst that can happen is that you find that your "great" idea doesn't solve a likely problem. This is good. The best is that you discover that your idea addresses issues you didn't consider. This is very good.

Is thinking up three possible problems with a decision foolproof? No. But this approach strikes a good balance between Big Design Up Front and being too impulsive.

Weinberg's original Rule of Three is great when you have an option to evaluate, but how do you develop options? In Behind Closed Doors: Secrets of Great Management (Pragmatic Programmers), Johanna Rothman and Esther Derby describe a related Rule of Three: always come up with (at least) three alternatives when trying to solve a problem. While three is a somewhat arbitrary number, Johanna and Esther explain that:
  • One alternative is a trap.
  • Two alternatives is a dilemma.
  • Three provides a real choice (and gets people in a frame of mind to come up with more).
I've used the combination of these approaches for design decisions with great results. (I describe one case in my first blog post.)

You may not always need three choices when looking at a situation. And when you have a solution in mind, you may not always need to think of three things that can go wrong with it. On the other hand, the cost of following this process is small, and the benefits are great. Make the Rules of Three a habit and you may find yourself making better decisions.

Tuesday, July 14, 2009

July 23 Agile Bazaar: Agile Teamwork: The People Issues

On July 23, I'll be on a panel with Johanna Rothman, Ellen Gottesdiener, and Mike Dwyer: Agile Teamwork: The People Issues. The Agile Bazaar is a great group to know about if you are involved in Agile Software Development in the Boston area. This should be a very interesting meeting.

Sunday, July 12, 2009

Begin with the End in Mind: Deploy Early

In many projects I've worked on, the task of developing a deployment process is deferred until close to the release date. This is a bad idea. Deferring deployment-related decisions adds risk to a project, because deployment to a target environment exposes development-time assumptions and operational requirements that may need to be addressed by the engineering team before an application is released.

I suggest that a project develop (and use) a process to deploy the software to the target platform in the first iteration. Starting on deployment early gets you closer to the agile goal of shippable code at the end of each iteration.

Deployment is important, and something for which there should be a story early in your project, because:
  • Deployment is the first thing a customer sees, but it's only an enabler: it doesn't directly deliver value, so you want the process to be efficient. Incremental improvement of the process is a great way to make it work well.
  • Design decisions can have an impact on the deployment and configuration process, and deployment models may suggest a different design.
  • Thinking about deployment early means that you'll have an identifiable, repeatable process. This will make your testing process more valuable.
By a deployment process I mean the mechanism for delivering your software to, and making it usable by, a customer (or initially a QA team), including any required configuration steps. This process can be as manual or as automated as you like, ranging from a zip file and a README to a fully automated installer. You can start simple and automate more as you get feedback.
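
As an illustration, here is a minimal sketch of the simplest end of that spectrum: a single repeatable step that bundles a build output directory into a zip. The paths are hypothetical, and a real project would more likely do this with its build tool (Ant or Maven), but the shape of the process is the same.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Minimal packaging sketch: one repeatable step that produces the deliverable.
// The build/output path and release.zip name are hypothetical.
public class PackageRelease {
    public static void main(String[] args) throws IOException {
        Path buildDir = Paths.get(args.length > 0 ? args[0] : "build/output");
        Path zipFile = Paths.get("release.zip");

        try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(zipFile));
             Stream<Path> files = Files.walk(buildDir)) {
            files.filter(Files::isRegularFile)
                 .forEach(file -> addEntry(zip, buildDir, file));
        }
        System.out.println("Created " + zipFile);
    }

    private static void addEntry(ZipOutputStream zip, Path root, Path file) {
        try {
            // Store each file under its path relative to the build directory.
            zip.putNextEntry(new ZipEntry(root.relativize(file).toString()));
            Files.copy(file, zip);
            zip.closeEntry();
        } catch (IOException e) {
            throw new RuntimeException("Failed to package " + file, e);
        }
    }
}
```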

Developing a deployment process early can look like Big Design Up Front if you start out with a process that is too detailed and involves guesses about things you just don't know at the beginning of a project. But remember: you can change the process as you learn. Start with a simple process, then change and refactor it as you go to make it more suitable to the target user's needs. You may discover that a manual process works well enough, or you may find that adding some automation makes the process simpler and lowers risk with little added cost. If you have a test team that is using the deployed artifact, you may find that you can leverage automation developed to support test-related deployments and use it in your customer process.

Deploying early and often can also improve your architecture, as deployment exposes issues about the system architecture that are easy to overlook in a development environment. How the application is designed affects how it can (and should) be packaged (and the reverse is also true). While another group may be responsible for the operational aspects of deployment, the engineering team is responsible for making the process work in a real environment. Consider configuration as just one example. Using a build tool like Maven or Ant, it's a simple matter to keep a number of configuration files in sync by using filtering and passing a property into the build. Once you deploy a package that needs to be configured at a customer site, the fact that there are now many configuration files that need to be edited in the same way becomes an obvious source of wasted effort and possible error.
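
To make the configuration example concrete: one way to avoid having many files that must be edited in the same way is to give the application a single configuration entry point that is filled in at deploy time. This is only a sketch; the AppConfig class and property file location are hypothetical, but the idea is that a deployment then has exactly one file to edit.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical single entry point for deploy-time settings, so that a
// deployment edits one properties file rather than several files that
// must be kept in sync by hand.
public final class AppConfig {
    private final Properties props = new Properties();

    public AppConfig(String path) throws IOException {
        try (InputStream in = new FileInputStream(path)) {
            props.load(in);
        }
    }

    // Returns a required setting, failing fast if it was never configured.
    public String require(String key) {
        String value = props.getProperty(key);
        if (value == null) {
            throw new IllegalStateException("Missing required setting: " + key);
        }
        return value;
    }
}
```

Used at startup to read, say, a database host setting, a missing value fails fast at deployment time rather than deep inside a request, which is exactly the kind of issue early deployments surface.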

Even seemingly trivial issues, such as how to configure logging in production to debug a problem, can influence the choice of logging framework. The sooner you see the problem, the easier it is to fix, with less risk. Michael Nygard's book Release It! covers some of these issues and is an excellent resource for information about how to build software that can be deployed into production effectively.

Like all things agile, you can start small and take small steps toward the end result. The key things to consider are:
  • Packaging: have the build generate a deployable package. This can be used for integration testing, demo environments, and the like.
  • Configuration: understand what parameters need to be configured and how to configure them. The first pass could be build properties that are filtered into configuration files. Move toward understanding what needs to be configured at deploy time at a customer site.
  • Requirements of the target platform and operations teams: How automated does the process need to be?
Even if you don't know all of the details of the production deployment, frequent early deployments help you understand which issues you need to decide now and which you can defer. And if you really do understand what you need to do, you may as well give it a try sooner rather than later. At best you'll be validated. At worst, you'll have lots of time to address issues.
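
One cheap way to "give it a try sooner" is a smoke check that runs against every deployed build. The sketch below assumes the deployed application exposes an HTTP endpoint; the URL is hypothetical, so substitute whatever "the application came up" means for your system.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal post-deployment smoke check against a hypothetical health URL.
public class SmokeCheck {
    public static void main(String[] args) throws IOException {
        URL url = new URL(args.length > 0 ? args[0] : "http://localhost:8080/health");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        int status = conn.getResponseCode();
        if (status != 200) {
            throw new IllegalStateException("Deployment check failed: HTTP " + status);
        }
        System.out.println("Deployment looks healthy: HTTP " + status);
    }
}
```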

Friday, July 10, 2009

STPCon 2009 in Boston

I'll be giving a class about software configuration management for agile teams at the Software Test and Performance Conference in Cambridge, MA on Friday Oct 23. The conference is Oct 19-23, 2009.

The abstract for my talk is:


Version management, build, and release practices are essential elements of any effective development environment, especially for agile teams, which rely on feedback and maintaining high quality during rapid change. Many agile teams are puzzled about how to apply good software configuration management (SCM) practices in an agile environment. This session will provide an overview of SCM concepts and explain the patterns and practices that teams need to maintain an agile SCM environment. You’ll learn how agile testing practices and continuous integration change how teams use SCM, and how to set up the essentials of an agile SCM environment.

Sunday, July 5, 2009

Testing Cost Benefit Analysis

I'm probably one of the first people to advocate writing tests, even for seemingly obvious cases. (See, for example, Really Dumb Tests.) But there are some cases where I suggest that testing might best be skipped: cases where tests not only have little value, but also add unnecessary cost to change. It's important to honor the principles of agile development and not let the "rule" of a test for everything get in the way of the "goal" of effective testing for higher productivity and quality.

Writing a unit test can increase the cost of a change (since you're writing both the code and the test), but the cost is relatively low because of good frameworks, and the benefits outweigh the costs (a minimal example follows this list):
  • The unit test documents how to use the code that you're writing,
  • The test provides a quicker feedback cycle while developing functionality than, say, running the application, and
  • The test ensures that changes that break the functionality will be found quickly during development so that they can be addressed while everyone has the proper context.
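
For example, here is a minimal test of the kind described above, written with JUnit 4. The PriceCalculator class and its behavior are invented for illustration; the point is that the test runs in milliseconds, documents how the API is called, and fails fast if the behavior changes.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test.
class PriceCalculator {
    double discountedPrice(double price, double discountRate) {
        return price * (1.0 - discountRate);
    }
}

public class PriceCalculatorTest {
    @Test
    public void appliesPercentageDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        // A 10% discount on 200.00 should yield 180.00. The test doubles
        // as documentation for how discountedPrice is meant to be called.
        assertEquals(180.00, calculator.discountedPrice(200.00, 0.10), 0.001);
    }
}
```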
Automated integration tests, especially those involving GUIs, are harder to write and tend to cover code that was likely already tested with unit tests, so it's easy to stumble onto cases where a test adds so little value, at so much cost, that it's worth reconsidering whether an automated test is needed in that particular case.

On one project I worked on, the team was extremely disciplined about doing Test Driven Development. Along with unit tests, there were integration tests that tested the display aspects of a web application. For example, a requirement that a color be changed would start with a test that checked a CSS attribute, and a requirement that two columns in a grid be swapped would result in a test that made assertions about the rendered HTML.

The test coverage sounded like a good idea, but from time to time a low-cost (five-minute), low-risk change would take much longer (an hour) because tests needed to be updated and run, and unrelated tests would break. And in many cases the tests weren't comprehensive measures of the quality of the application: I remember one time when a colleague asserted that it wasn't necessary to run the application after a change, since we had good test coverage, only to have the client inquire about some buttons that had gone missing from the interface. Also, integration-level GUI tests can be fragile, especially if they are based on textual diffs: a change to one component can cause an unrelated test to fail. (Which is why isolated unit tests are so valuable.)

I suspect the reasons for the high cost/value ratio for these UI-oriented tests had a lot to do with the tools available. It's still a lot easier to visually verify display attributes than to automate testing for them. I'm confident that tools will improve. But it's still important to consider cost in addition to benefit when writing tests.

Some thoughts:
  • Integration tests (especially GUI tests) tend to be high cost relative to their value.
  • When in doubt, write an automated test. If you find that maintaining the test, or its execution time, adds cost out of proportion to the value of a functionality change, consider another approach.
  • Because GUI tests can be high cost relative to value, structure your code so that the view layer is as simple as possible.
  • If you find yourself skipping GUI tests for these reasons, be especially diligent about writing unit tests at the business level; doing this may drive you to cleaner, more testable interfaces (see the sketch after this list).
  • Focus automated integration test effort on key end-to-end business functionality rather than the visual aspects of an application.
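
To sketch what "unit tests at the business level" can look like for display logic: pull formatting decisions out of the view into a plain object, then test that object directly. The OrderSummaryPresenter here is hypothetical; the pattern, not the names, is what matters.

```java
import static org.junit.Assert.assertEquals;

import java.util.Locale;

import org.junit.Test;

// Hypothetical presenter: turns domain data into display-ready strings,
// so a fast, isolated unit test can verify the behavior instead of a
// fragile comparison against rendered HTML.
class OrderSummaryPresenter {
    String formatTotal(double total) {
        return String.format(Locale.US, "Total: $%.2f", total);
    }
}

public class OrderSummaryPresenterTest {
    @Test
    public void formatsTotalAsCurrency() {
        OrderSummaryPresenter presenter = new OrderSummaryPresenter();
        assertEquals("Total: $19.50", presenter.formatTotal(19.5));
    }
}
```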
Applications need to be tested at all levels, and automated testing is valuable. It's vital to have some sort of end-to-end automated smoke test. Sometimes there is no practical alternative to simply running and looking at the application. Like all agile principles, testing needs to be applied with a pragmatic perspective, and a goal of adding value, not just following rules blindly.
