Showing posts with label planning. Show all posts

Monday, October 21, 2019

Review: Indistractable

Indistractable is an exploration of the reasons why we get distracted and strategies we can use to avoid distraction. Based on research on motivation and compulsion, the information in the book is similar to what I learned in Drive and The Power of Habit, but from a different angle: being mindful of habits and motivators that take our focus away from important things. Or, put another way, the same inner mechanisms that motivate us and encourage productive behavior can also have the opposite effect. The key, Eyal tells us, is to focus on the triggers.

I like that the book starts out by saying that technology is not the problem, even as we often place blame on the accessibility and draw of electronic devices. This resonates with me, as I can recall times when, as a child, the newspaper or radio was a way for a parent to ignore me, and now, as a parent, times when my child was able to ignore those around him while doing "good" tasks like reading. The historical record bears this out as well, as the book quotes an article bemoaning the impact of the Gramophone on the ability of children to focus. There will always be distractions, and while strict rules about avoiding them can help, it's more sustainable to address the core reasons why you are distracted.

As is appropriate for a book on the subject, the chapters are short, making it easier to feel like you are making progress as you learn to manage your distractions. The book is both inspiring and actionable. After working through the framework, there are sections dedicated to managing distractions in your various life domains, including work, relationships, and children. The section on raising indistractable children is particularly worthwhile (though it will be more valuable if you read the earlier sections).

If you are an employee, manager, parent, or any combination, you are likely to find value in this book.

Saturday, July 28, 2012

Problem Solving: Deadlines and Context

One of the more difficult challenges people and teams face under deadline pressure is taking time to consider how to approach a problem rather than just diving in with the first approach they know will work.

In college, when I was taking a devices and circuits class, I found myself stuck on a problem on the first problem set of the semester. I asked an upperclassman for help, and we determined that the problem could be solved by setting up a system of something like 12 equations and grinding through the process of solving it. His solution was correct, of course, but I wondered if this approach might be more complex than it needed to be, given that it was one of a number of problems in the first problem set of the course, and given that the theme of the first few classes was more conceptual than computational.

When I later saw the solution to the problem, it turned out that by making some simplifying assumptions, the 12 equations reduced to 2, and the solution was quite simple and quick. The lesson the problem was meant to teach was not how to solve linear equations, but how to use a simple model to understand a complex circuit. The direct, labor-intensive approach ignored what the problem was meant to teach.

I don't recall much about the specifics of the problem, but one thing I learned from that experience is that while working harder will often (though not always!) give you a correct solution, it's always good to think about more than one approach before diving in and solving a problem.


Sometimes just working harder is the right thing to do. But when you need to get past a deadline, before diving in to solve a problem it's worth considering whether you're doing more work than the situation merits, and whether, perhaps, you are overlooking a simpler, quicker solution.







Wednesday, October 19, 2011

Agile or Not: How to Get Things Done

Agile software development always felt intuitive to me. Developing software incrementally, in close collaboration with the customer, is the obvious way to deal with the uncertainty inherent in both software requirements and implementation. The technical practices of automating necessary but time-consuming tests, and deploying early and often, are the obvious ways to give a team the ability to evaluate the functionality it has and to decide if the software works as expected. It's also important to decide if what you built still makes sense given the current environment. Agile isn't the only way that people build software, and it may not be a perfect approach, but it's one of the best ways of dealing with a system that has unknowns.

Agile software development acknowledges uncertainty. The ability of agile methods to make it very obvious very quickly when a project, or even a process, is failing makes people uncomfortable. The visibility of the failure leads to a desire to return to more "traditional" processes. For example, a common complaint is that agile organizations don't create "good enough" specifications. Someone might blame an unforeseen problem on the lack of a spec that mentioned it. This is possible, but it's only true if the person writing the spec could have foreseen the problem. 

The desire to point back to a lack of specification also points to a lack of buy-in to a fundamental premise of agile: it's a collaboration between the business and the engineering team. Some other possible causes of the problem could be:
  • The development team didn't test thoroughly enough. 
  • The code was not agile enough, so the bad assumptions were embedded into the code so deeply they were difficult to address.
  • Communication was bad.
More complete specifications that address all of the issues a system might encounter are one way to build software. But the best that people can do is to write specifications that address what they know. Quite often no one has complete advance knowledge of what a system needs to do.

There are projects where things are known with enough certainty that a waterfall process can work. People can write good specs, the team can implement to those specs, and the end result is exactly what everyone wanted. I've worked on projects where this was true, and in those cases, the specifications were reviewed and tested as much as code might be.

Even projects that seem suited to waterfall fail. For any method to be successful:
  •  Everyone involved needs to be committed to the approach and 
  • There needs to be a feedback loop to correct errors in time. 
The second point is more important than the first, since people will make mistakes. But being committed to the process is what lets people accept and offer constructive feedback. The reason waterfall projects have such a bad reputation is that many are implemented in a way that lets problems surface late.

Agile methods, when done well, have the advantage of built-in feedback loops, so that customers and teams have ways of identifying problems early. When agile projects fail, it's often because people ignore the feedback and let things degrade for longer than necessary. (Failing fast can be considered success!)

So, Agile or not, your process will only work for you if everyone works within the framework to the best of their abilities, and if you have mechanisms in place to help people do the right things. Otherwise you can blame your process, but odds are, that's not where (most of) the fault lies.

Sunday, October 16, 2011

More on Being Done

Continuing the conversation from last week, Andy Singleton followed up on my post on being done with this post. That's good, because this is one of those questions that sounds simple in theory, but in practice contains some subtlety.

While I was advocating a good definition of "Done" to enable you to measure progress along a path, Andy's point seems to be that many teams don't establish enough of a path. He says:
In my opinion, most agile teams aren't doing "Test Driven Development", and they aren't doing scrum iterations where they plan everything in advance. Instead, they are doing "Release Driven Development." They focus on assembling releases, and they do a lot of planning inside the release cycle.
This is probably true in more cases than not, though one could argue whether, if you are not doing iterations or TDD, you are in fact doing agile. But even if I concede (and I'm not sure I do) that you can do agile without planning and acceptance criteria of some sort, what Andy describes above still sounds like it has an element of plan, execute, adjust, though perhaps in a more chaotic way than a "textbook" scrum process, and perhaps at a smaller scale. So having a clean definition of what you want to do is still important. In The Indivisible Task I discussed how teams get stuck with planning because it's difficult to think of tasks that are small, discrete, and which produce useful work. It is possible to do so, but it is hard.

Perhaps the disagreement here is more of a misunderstanding. I consider being able to measure progress in terms of completed work items an essential part of gathering the data you need to improve your process and understand how to be more agile. While doing it at a macro (sprint) level is good, doing it on a day-to-day or hour-to-hour basis is essential. If you do this at a fine-grained enough level, and have a good testing infrastructure, you can release software more frequently. So, at the limit, perhaps Andy and I are talking about the same thing.

Since I only summarized, I encourage you to read what Andy has to say for yourself. And I look forward to hearing comments both from Andy and other readers.

Wednesday, October 12, 2011

Being Done

Agile New England (which used to be called the New England Agile Bazaar, and which was started by Ken Schwaber) has a wonderful activity before the main event each month: it hosts Agile 101 sessions, where people who know something about agile lead a short (30 minutes), small (about 10 people) class on agile basics for those who want to learn more about some aspect of agile. From time to time I lead a session on Agile Execution, where the goal is to help people understand how to address the following questions:
  • How can software and other project elements be designed and delivered incrementally? What set of management and technical practices would enable this?
  • How do you know whether your Agile project will complete on schedule?
When I lead the sessions, I tend to focus on tracking, defining stories in terms of vertical slices, and the importance of continuous integration and testing to making your estimates trackable. Since the classes are so small, and since the attendees have diverse experiences, the classes are sometimes more of a conversation than a lecture, and I find that I learn a lot, and sometimes find myself rethinking what I know (or at least exploring things that I thought I understood well enough).

During the October 2011 meeting I found myself reconsidering the value of defining "done" when writing User Stories. I have always thought that defining done is essential to tracking progress. But what done means is still a difficult question. Andy Singleton of Assembla suggested that
The only useful definition of done is that you approved it to release, in whatever form
While the goal of agile methods is releasing software, I find that this definition, while appealing in its simplicity, misses some things:

  • Agile methods have a goal of continuously shippable code. Of course, "shippable" might not mean "ready to release" and can mean something closer to "runnable," but you can get there by doing no work since the end of the last release. That isn't the goal of agile.
  • With that large scale definition of "done" you have no way of tracking progress within a sprint. 
  • Without an agreement on what you're going to do, it's hard to know when you are changing direction. And acknowledging change is an important part of being agile.
The last point about acknowledging change isn't just about "blame" for things not working out. It's about evaluating how well you understand both the business and technical aspects of your project, and it forms the basis for improvement.

True, having incremental definitions of done that you can use to track progress does help manage budgets. But that really is the least important aspect of having a good definition of done. Even if I were on a project with an infinite time and money budget, I'd want to have a sense of what our goals are. 

Having an agreement among all of the stakeholders on what being "done" means lets me:
  • Improve communication among team members and between team members and business stakeholders.
  • Evaluate my understanding of the problem and help me identify what I can improve.
  • Set expectations so that it's easier to develop trust between stakeholders and the engineering team that the team can, and will, deliver.
"Ready for Release" is a key component of "done" and an essential part of being agile. But it's not enough.



See Andy's response, and read more in Part 2 of this conversation.

Tuesday, August 3, 2010

Are You Done Yet?

Johanna Rothman recently wrote, commenting on Joshua Kerievsky's proposed definition of done. Both posts are worth a read, if for no other reason than to better understand why we have such a difficult time defining what "done" is, and why defining "done" is one of the major challenges for teams trying to adopt agile practices.

Thinking about both Joshua's and Johanna's points, I wonder if the difference isn't similar to a discussion of whether principles or practices are more important to success when adopting agile methods. On the one hand, following practices diligently allows you to develop good habits and even to get some good results early on. The challenge comes when it's time to reflect on and improve your practices. Without a good understanding of the principles, it's hard to optimize.

Similarly, defining done earlier in the process can cause problems if you are thinking about the meaning of "done" the wrong way. If "done" means washing your hands of issues ("we met the spec..."), then evaluating done as late as possible makes sense, since enforcing the idea that you are not done until the customer is happy is a useful driver.

If, on the other hand, you understand (and believe) that your goal as a developer is to deliver useful, quality software, and if the customer understands that they may not have understood the problem until they had a working system in hand, defining done for earlier steps means that you have more tools with which to evaluate your effectiveness, status, and progress. Done closer to the developer means that you have more, rather than fewer, chances to evaluate, learn, and improve. By embracing the principle that delivering a useful end product is the goal, you can benefit from having some local completion criteria.



Having the definition of done closer to the team (as Johanna recommends) allows you to measure progress and identify risk. You also need to be able to acknowledge that completing all the stories may still leave work to do. Then you have to inspect, adjust, and adapt. Which is to say: be agile.

Monday, April 5, 2010

Planning is a Gerund

One of the things teams adopting agile struggle with is deciding how much to define a plan before you start executing. Have a plan that's too well developed and you risk that your team may not be responsive enough to change. Have too little of a plan and you may end up changing course excessively, with no way to measure your progress toward any sort of deliverable.

At the core of this confusion over how much to plan is the reality that plans change, and spending too much time and energy creating a plan that ends up being wrong seems wasteful. But the lack of a plan means that you have nothing to measure progress against. One way to reconcile this is to keep in mind the following quote, attributed to Dwight Eisenhower (and a variant attributed to Churchill):
Plans are nothing; Planning is everything.

If we assume that things will change as a project progresses, we can still benefit from talking through the options as a team. Capturing the things that we don't know, but would like to, is useful information, and gives the team a good measure of risk.

The time you spend planning is an important consideration. Constrain the amount of planning time based on the duration of your sprint. If you can't come to an understanding of what the problem is or how to approach it, you have a clue that you're trying to do too much. But rather than throw up your hands, you can aim to have some sort of plan. It might be wrong, but even building the wrong thing can increase your understanding.

For the planning activity to be useful, it is important that it not be top-down but that it involve the implementation team, as they are the ones who can speak to the implementation risks and can propose creative solutions, given the ability to probe about real goals.

One thing that may concern people with the approach of involving the team and stakeholders at the same time is that a planning session that raises more questions than it answers can make some people uncomfortable. Senior managers may be uncomfortable acknowledging ignorance. Team members may be put off by seeing that there are legitimate disagreements among the product ownership team about some issues. And some people are just uncomfortable when you can't simply tell them what to do.

This is a cultural issue that may not be easy to overcome, but agile projects work well because the team can pull together to solve problems when given all of the information, and structure their code and their work to mitigate risk. And if uncertainty exists, it's better to identify it up front.

Regardless of the level of uncertainty about goals and dependencies, it is important to exit a planning session with a common vision for the goals and a target for when you will re-evaluate them. A well-run planning activity helps to focus the team on a common goal.



Sunday, March 7, 2010

Agile Portfolio Management

I've heard people criticize agile methods as being too reactive, focusing too much on the little picture and ignoring larger goals. This is a misunderstanding of a basic idea of agile. Agile methods aren't about thinking small. Agile methods are about making small steps towards a goal, applying programming and management discipline along the way. (For more, have a look at an elevator pitch for agile I wrote last year.)

The basic approach of all agile methods is to
  • Define a goal
  • Break the goal into incremental bits so that you can iterate towards the goal
  • Periodically (at the end of each iteration) pause to evaluate both your progress towards the goal, and whether the goal makes sense. 
Teams often skip the second part of this evaluation, but it's the constant evaluation of the long-term goals that makes it possible to reconcile the concepts of being agile and planning.

If you have doubts about whether long-range planning in an agile environment is even possible, read Johanna Rothman's book Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, of which I recently received a review copy.

A project portfolio is "an organization of projects, by date, and by value, that the organization commits to or is planning to commit to." This sounds like a scaled up version of a product backlog that you might use to organize your work in an agile project, but with a longer time scale. So it's certainly aligned with agility.

In this book, Johanna motivates the importance of the project portfolio to enabling agile development, and also demonstrates how the technical and project management techniques of agile teams make it easier to define and iterate on a project portfolio.

Johanna is an expert on merging the human and technical sides of projects. I learned quite a bit about managing people from Behind Closed Doors: Secrets of Great Management, which she co-authored with Esther Derby. In Manage It!: Your Guide to Modern, Pragmatic Project Management, Johanna discussed how to manage projects. One of the more challenging parts of managing a project portfolio is overcoming the resistance some people have to defining a goal for a project, a portfolio for a product line, and a mission for an organization. In Manage Your Project Portfolio she shows how to address common obstacles to defining a project portfolio, evolving it, and using it as a tool to allow everyone to understand where the organization is aiming.

And the benefits of a project portfolio don't just help with "fuzzy" concepts like vision; they can also help reduce and address items such as technical debt. In addition to an overview of concepts and concrete guidance on how to address problems, the book interleaves stories that establish that this work is based on real-world experience and help you to relate to the issues the book addresses.

I recommend this book to anyone who has a role in defining projects.

Sunday, February 28, 2010

Estimation Poker

Estimation is a necessary part of software development. Product owners want to know how much work can get done by a deadline, project managers need to make commitments, and developers want to know if they committed to a reasonable amount of work. While estimates are often inaccurate, they provide landmarks along the way of a project to gauge progress. So estimates are an inevitable, and useful, part of the software development process. Many complain that the process of getting to those estimates takes too long, so planning sessions are cut short and teams don't have enough time to discuss issues that have uncertainty. By appropriate use of planning poker, you can balance the need for good estimates while minimizing time spent estimating.

Sometimes when a team is asked to estimate a backlog item, one or more people with expertise in an area are asked to estimate the item, but this is not the best way for an agile team to get a good estimate.

There are benefits to involving a larger part of the team in the estimation process. The challenge is that people feel that involving the whole team is wasteful if the estimation process takes too much time. On the other hand, inaccurate estimates have their own costs for the team and the other stakeholders.

Planning Poker, an estimating method popular with agile teams, can address some of these issues. Briefly, planning poker involves getting the developers on a team together to estimate stories using a deck of cards with numbers that represent units of work. The numbers are often spaced in a Fibonacci sequence, the theory being that the larger the estimate, the lower the precision. Planning poker can be a really useful tool to both improve estimation and discover uncertainty in requirements.

People resist planning poker for reasons like:
  • It seems inaccurate if the person doing the estimating does not have the "appropriate" expertise. A UI developer may not feel qualified to estimate a story that seems to be mostly backend processing, for example.
  • It seems like a waste of time because people believe that one person can estimate for everyone.
  • It seems inaccurate since the person who's been assigned the work should estimate it based on their skills.
Even if you find yourself throwing a wild guess at a planning poker session, the fact that you don't understand the scope of the issue is useful information. The benefit of having the entire cross-functional team understand and estimate stories is that you can identify challenges across the application. What might be easiest to do in the back end can add work to the application tier or UI, and also make testing harder. Having one person estimate can make it hard to identify misunderstandings and issues, because we tend to want to agree with "the expert," and there is no forum for identifying misunderstandings. It's not always clear at the start of a project who the best person for a task will be, both for the reasons I just mentioned and because assigning the work up front can lead to inefficiencies if work takes more or less time than estimated.

If you find that your estimates are inaccurate, or your estimation process takes too long, consider the following approach:
  • Gather team members who are working on all aspects of the application. You need not have the whole team, but be sure to represent each "architectural layer". If your team is fewer than 7 people or so, include everyone.
  • Look at the description of each story or problem report in priority order. Ask the team to pick cards based on what they read.
  • See how close the estimates are. 
    • If they are close, ask someone to explain what they envisioned doing to implement the issue. If someone has a vastly different idea, they should speak up. 
    • If they are different, ask someone with one of the extreme estimates to explain their reasoning. This will start a conversation about what the requirement means, and what implementation strategy makes sense.
This process helps you focus discussion time on the hardest, highest-priority issues. You will want to be sure to allocate an appropriate amount of time to planning and estimating relative to your sprint length. You may still run out of time, but even if you do, you'll have discussed and estimated the highest-priority items as accurately as you could have, knowing what you knew.
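The flow described above, picking cards and flagging divergent estimates for discussion, can be sketched in a few lines of code. This is only an illustration of the idea, not a tool the post describes; the deck, the divergence threshold, and all the names are my own assumptions:

```python
# Hypothetical sketch of planning poker triage; deck, threshold,
# and names are illustrative assumptions, not a prescribed tool.

FIB_DECK = [1, 2, 3, 5, 8, 13, 21]

def nearest_card(value):
    """Snap a raw guess to the nearest card in the deck."""
    return min(FIB_DECK, key=lambda card: abs(card - value))

def needs_discussion(estimates, ratio=2.0):
    """Flag a story when the highest estimate is more than `ratio`
    times the lowest -- i.e. the team disagrees about the scope."""
    return max(estimates) > ratio * min(estimates)

def triage(backlog):
    """Split stories into those with rough consensus and those where
    someone with an extreme estimate should explain their reasoning."""
    agreed, discuss = [], []
    for story, estimates in backlog:
        cards = [nearest_card(e) for e in estimates]
        (discuss if needs_discussion(cards) else agreed).append(story)
    return agreed, discuss

backlog = [
    ("login page", [2, 3, 3]),      # estimates close: move on
    ("report export", [3, 13, 5]),  # divergent: talk it through
]
agreed, discuss = triage(backlog)
```

The point of the sketch is the structure of the conversation, not the numbers: close estimates let the team move on quickly, and an outlier is a prompt for discussion rather than an error.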

The biggest challenges to having accurate estimates are not having consensus on the "what" and not understanding the details of the "how." The process above is one way to focus discussion on the high-risk items in your backlog, while keeping the time spent on estimating reasonably low.  

Saturday, February 20, 2010

The Indivisible Task

One of the things that makes agile work well is a daily sense of progress that can be reflected in, for example, a burn-down chart. For burn-down charts to be meaningful, the estimates of the amount of work remaining in a sprint need to be accurate. Re-estimating work remaining in a task is helpful, but the best measure of progress is the binary "done/not done" state of the items in your backlog.

Assuming that you have a clear definition of "done" for a task, it's easiest to measure progress when you have tasks that are small enough that you can mark them complete on a daily (or more frequent) basis. Breaking work down into a reasonable number of reasonably sized tasks is something many find challenging. (Note: I'm talking here about development tasks as part of a sprint backlog, rather than splitting User Stories in a product backlog, though there are some parallels.)

I've worked on teams where people refused to break down large tasks into 1-day or smaller parts. The common excuse for not breaking down work was that the person who signed up for the work understood what the work was and the estimate was accurate. Of course, we had no way of knowing that the estimate was wrong until the work was not done at the end of the week or so.

What was interesting to me is that those most resistant to decomposition weren't less experienced programmers, but rather the people the team acknowledged as "experts" and "good designers," who were good at decomposition as it applied to designs. So the theory of attacking complexity by looking at smaller pieces was something they were comfortable with. Not only that, they actually worked in a way that led to discrete units of work being completed throughout the project, whether in terms of frequent commits or even simply being able to finish a work day with a sense of accomplishment, even if the larger task was still incomplete.

Breaking down work isn't as hard as some make it sound. From a development-centric perspective, here are some things you already do that can guide you in task breakdown:
  • Thinking about when you might commit code. It's good practice to commit code frequently; consider the Task-Level-Commit pattern from Software Configuration Management Patterns
  • Considering the tests you write as (or before) you code.
  • Deciding what you want to accomplish before leaving work each day so that you end the day with a sense of accomplishment.  
What these items have in common is that they define natural boundaries in the development process.

The main difference between doing this kind of planning and good programming practice is making your plan visible to others. This takes discipline, and a certain amount of risk, since if your plan goes awry it's visible.  Part of being a successful agile team is understanding that plans can be wrong, and using that experience to figure out how to do better in the future.

You may discover partway through that the task breakdown you did no longer makes sense in light of something you learned. That's OK; at least you have a good sense of what work was done, and you can figure out what tasks are left (and estimate them!)

By breaking down work into smaller parts you have the ability to:
  • Evaluate your progress in a definitive way, as it is often easier to define "done" for a smaller task.
  • Get feedback from your colleagues before you dive in to a problem. 
  • Share effort if any of the work can be done in parallel.
  • Simplify updates and merges, as the changes to the codeline will all be small at any point in time.
It is possible to come up with too many sub-tasks, such that the overhead of tracking them on a backlog negates their value as a tracking tool. In that case, there is nothing to prevent you from taking note of the very small things you do each day and combining some of them into day or half-day items that do appear on the backlog. And if you really only want to have large tasks on the sprint backlog, consider doing your own fine-grained breakdown that you can use to help you give a better estimate of time remaining. I tend to favor backlogs with tasks of a half to one day, and then making personal notes about the smaller steps to complete those tasks on my own.

The value of working in small steps isn't a new idea. In 1999 Johanna Rothman wrote about the value and mechanics of managing your work in terms of what she calls inch-pebbles (as opposed to "milestones"), and Fred Brooks advised "allow no small slips" in The Mythical Man-Month. Being able to identify these slips is key to having effective sprints.

Agile teams work because they have mechanisms to give frequent feedback on status. Accurate estimates of work remaining are an essential tool for evaluating progress, and small tasks help you estimate accurately. Decomposing work is not easy, and takes discipline, but the benefits are great.

Sunday, February 7, 2010

Tracking what Matters

I'm a big fan of burn-down charts for tracking sprint and release progress. The basic idea of a burn-down chart is that the team starts with estimates for all of the tasks in the sprint, and then on a daily (or more frequent) basis re-estimates the amount of work remaining.

With a burn-down chart, you are tracking the new estimate based on the work that you have done. As you work on the sprint backlog you get a better understanding of the tasks, and thus you can revise estimates for tasks that span more than one day. This is reasonable since the original estimate is, well, an estimate.

Sometimes, if you spend 4 hours on an 8-hour task, you'll have 4 hours of work left. Most of the time, though, the time left will not be the original estimate less the time spent. At the end of 4 hours, the remaining work estimate for the same 8-hour task could be 2 hours, or it could be 10 hours if you discovered a snag. This is important information for everyone involved in the project and allows the team to identify a problem at the daily scrum. Re-estimating is harder than just doing subtraction, but it's valuable.
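The difference between subtracting time spent and genuinely re-estimating can be made concrete with a small sketch. The task names and hours here are made up for illustration; nothing in this snippet comes from any particular tracking tool:

```python
# Sketch of burn-down tracking by re-estimation rather than subtraction.
# Task names and hour values are hypothetical.

def naive_remaining(original_estimate, time_spent):
    """What a tool computes when you 'log work': subtraction, floored
    at zero. Accurate only if the original estimate was perfect."""
    return max(original_estimate - time_spent, 0)

def burn_down(daily_remaining):
    """Total re-estimated work remaining per day, summed over tasks.
    Each entry is a dict of {task: hours the team now thinks are left}."""
    return [sum(day.values()) for day in daily_remaining]

# An 8-hour task after 4 hours of work: subtraction says 4 hours left,
# but a fresh estimate might say 2 (it went well) or 10 (a snag).
sprint = [
    {"parser": 8, "ui": 5},   # day 1: original estimates
    {"parser": 10, "ui": 2},  # day 2: parser hit a snag, ui went well
    {"parser": 3, "ui": 0},   # day 3: ui done, parser recovering
]
chart = burn_down(sprint)     # the y-values a burn-down chart would plot
```

Note that the chart can go up between days; that upward bump is exactly the early warning the daily scrum should catch, and it is invisible if the tool only ever subtracts time spent.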

One thing that happens when teams use an issue tracking tool (like Jira and Greenhopper) to manage their backlog is that re-estimating and effort tracking are combined. The only way to re-estimate is to "log work." You're required to enter the amount of time spent, and the tool will kindly offer to change your estimate to the difference between the original estimate and the time spent. There are two problems with this:

  • It's important to think about the time left for the task based on the assumption that your original estimate had a margin of error. For all but trivial cases, the "calculated new estimate" will be wrong.
  • The "time spent" value isn't really useful to stakeholders. In the best case you are only doing one thing during the time in question, so your time spent entry is accurate, but it doesn't answer the question: "when will it be done?" In the worst case, you're not tracking your time accurately, so the time spent number is inaccurate and provides no real information.

Like all things agile, when looking at your project tracking approach you need to be clear about what you want to track and why. The main concern for stakeholders on an agile project is whether they will get the functionality they want at the end of the sprint. So the time-remaining number is important.

There are some good reasons for tracking time spent, including evaluating estimation accuracy and billing. But in both of these cases you need to weigh the overhead of tracking time against its value. Tracking the team's total effort for the sprint against the estimated work done may be more useful than per-task effort-versus-estimate tracking, and analyzing the results in a retrospective may yield more useful information than per-task numbers.

When doing sprint tracking:

  • Make sure that everyone understands the goals of the tracking process so that you get uniformly valuable results. You definitely want to track how close you are to "done"; be explicit about whether, and why, tracking effort spent matters.
  • Make sure that, whatever the goals, the data are updated daily. If the burn-down chart doesn't change for 2 days, is it because people didn't update their estimates, or because the project is at a stand-still?
  • Remind everyone that estimates are just that: "estimates." An informed guess that turns out to be wrong is better than no estimate at all. (And the inaccuracy of an estimate helps to identify unknown complexity.)

Burn-down charts can be a simple, valuable tool for identifying problems during a sprint, as long as your team breaks the habit of tracking "effort spent" as opposed to "effort remaining."

Monday, January 11, 2010

Estimates and Priorities: Which Comes First

When developing a release plan, product owners often want to factor cost (or estimates) into where items go in the backlog. For example, you might hear "Do that soon, unless it's really hard." If this happens once in a while, it's a useful way to have a conversation about a feature. But if this approach is the norm, nothing gets prioritized until the team estimates it. I'd argue that the default order of operations is that features should be prioritized before anyone spends any effort on estimation. My reasons for this are both philosophical and practical.

The philosophical reason is that the Product Owner should be the one to prioritize work. By asking for the estimate first, the product owner is deferring their authority to the engineering team. This creates a risk that the team may not end up working effectively.

The practical reasons for prioritizing before estimating are:
  • Estimation takes time, and if you don't start from a prioritized list, you spend a lot of time estimating items that may never be worked on. (And yes, you may need to re-estimate items when they hit the top of the list, as estimates can change based on experience, staffing, and architecture.)
  • If you estimate as work appears, you lose some of the benefits of fixed, time-boxed sprints, and you increase the overhead cost of planning.
  • By letting the team estimate first, and pushing an item off the list because it is too expensive, you miss an opportunity for a conversation about how best to meet the business need.
Often the first version of a story seems large because it includes more functionality than needed. If the team knows that there is a critical feature to implement in a sprint, but that there isn't time to complete it, there may be a simpler, less costly version of the feature that meets most of the business need. If the product owner simply lets a large estimate defer the item, that conversation will never happen and the business need may go unmet, which is bad for everyone. Likewise, if the expensive feature is lower on the list, you need not have the conversation until later.

This balancing act between estimates and priorities underscores a key principle of agile planning: user stories are an invitation to a conversation. By prioritizing first, you understand where to focus energy on analysis and design. You also keep the agile team focused on delivering business value, by placing priority first and having the engineering team and the product owner communicate actively.

Saturday, January 2, 2010

Estimation: Precision and Accuracy (and Economics)

I was listening to an episode of the NPR show Planet Money about how economic forecasts often have precision, but are often inaccurate. Consider this exchange between two of the show's hosts:

Alex Blumberg: On average, the leading forecasters, like Prakken's Macroeconomic Advisors, have around a one percentage point margin of error, which might sound pretty good. But let's say you forecast a two percent growth rate for the year. That means you're saying the actual rate will be somewhere between one percent and three percent.

Adam Davidson: Now those are two totally different economies. One percent, that doesn't even keep up with population growth. That's a bad year. Unemployment will go up. It'll feel bad in the U.S. Three percent is the opposite, a pretty good year.

In one of the interviews, Simon Johnson (economist, MIT and the Peterson Institute for International Economics) said this about economic estimates:
I would call it a necessary evil. There's so much imprecision. There's so much, you know, lagging in terms of our updating, that in some sense, we'd be better off without forecasts. We'd be better off, you know, making up our minds afresh every day. But the problem is the businesses, the institutions, all involve thinking about the future and planning for the future. And you can't do that without taking a view of the future.
Which sounds quite analogous to estimates on software projects. One of the reasons that Planning Poker estimation uses a non-linear scale (for example, a Fibonacci sequence) is that, in general, as tasks get larger, estimates get less accurate. While it might be meaningful to distinguish a 1-hour task from a 2-hour task, it's less likely to be useful to estimate 6 hours as opposed to 5 hours for a task. (A Planning Poker deck might have 1, 2, 3, 5, 8, 13, etc., hours as possible values.)

Planning Poker is intuitive, and often matches experience, yet many people insist on estimating a task that takes more than 3 hours as 4 rather than 5. This is probably because everyone forgets that estimates are just that: "estimates." Estimation gets better as a team works together, but the estimates are only one part of the planning process. If the goal of the team is to make a commitment to deliver functionality, then the non-linear jump in estimates is a measure of risk. The best way to reduce risk is to work in smaller pieces. When faced with a task that is bigger than an 8, and 13 seems too big, see if you can decompose the task into smaller tasks, improving your ability to estimate accurately.
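The "5, not 4" rule above can be sketched in a few lines. This is purely illustrative: the deck values and the `to_card` helper are my own, not part of any Planning Poker standard.

```python
# A common Fibonacci-like Planning Poker deck (illustrative values).
CARDS = [1, 2, 3, 5, 8, 13, 20, 40]

def to_card(raw_estimate):
    """Round a raw estimate UP to the nearest card.

    The widening gaps between cards absorb the growing margin of error
    on bigger tasks: "more than 3" becomes 5, never 4.
    """
    for card in CARDS:
        if raw_estimate <= card:
            return card
    # Bigger than the largest card: the estimate is a signal, not a number.
    raise ValueError("too big to estimate usefully; decompose the task first")

print(to_card(4))   # rounds up to 5
print(to_card(10))  # rounds up to 13
```

A task that blows past the deck raises an error rather than returning a number, which mirrors the advice in the text: if 13 seems too big, decompose the task instead of estimating it.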

Estimation is difficult, not just for software teams, but we need something to help us plan, so be mindful to give estimates the appropriate precision. And re-estimate the work remaining as you go.
