If you didn't already know that the key to reliably deploying quality software is to take a cross-functional, full-lifecycle approach, Jez Humble and David Farley's book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation will help you understand why. Much as Jim Coplien describes in Lean Architecture: for Agile Software Development, the "secret" to successful lean projects is to work with "Everyone, All Together, from Early On."
While the authors have experience in, and a predisposition for, agile techniques, the principles described in this book apply to any organization, whatever its process. If you take the approach to heart, you will find yourself becoming more agile, which is to say, more responsive to customer needs.
The book covers the full lifecycle from requirements, design, and coding to acceptance testing, deployment, and operations. There are even discussions of test design, data migration, performance optimization, and capacity planning. All of this underscores the point that the goal of building software is delivering it to users, a point which some other books understate. Continuous Delivery develops the concept of a Deployment Pipeline, showing you the impact development and testing practices have on deployment and operations, and emphasizing the value of involving the operations team early in the development process.
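To make the Deployment Pipeline idea concrete, here is a minimal sketch (the stage names and checks are invented for illustration, not taken from the book): every change moves through progressively more production-like stages, and a failure at any stage stops the change from being considered releasable.

```python
def run_pipeline(stages):
    """Run each (name, check) stage in order, stopping at the first failure.

    Returns the names of the stages that passed, so you can see how far
    a change got through the pipeline.
    """
    passed = []
    for name, check in stages:
        if not check():
            # A failing stage stops the pipeline; later stages never run.
            return passed
        passed.append(name)
    return passed


# Hypothetical stages, ordered from fastest feedback to most production-like.
stages = [
    ("commit", lambda: True),      # compile and unit tests
    ("acceptance", lambda: True),  # automated acceptance tests
    ("capacity", lambda: False),   # capacity/performance tests fail here
    ("production", lambda: True),  # never reached for this change
]

print(run_pipeline(stages))  # ['commit', 'acceptance']
```

The point is less the code than the shape: each stage gives faster feedback than the one after it, and only a change that survives every stage is a release candidate.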
There is a lot of material here, but there are also pointers to other resources. In the SCM and release management area, the authors build on works including Software Configuration Management Patterns: Effective Teamwork, Practical Integration; Release It!: Design and Deploy Production-Ready Software; and Configuration Management Best Practices: Practical Methods that Work in the Real World.
Not only do you walk away from this book with an understanding of how to start implementing a continuous deployment pipeline, you may also find yourself writing down a list of things to try, and tools to use. While not a tool-centric book, the authors provide many examples of tools to help you implement each phase of the process.
Having written about the version and release management process before, and how it relates to architecture and organization, I was excited to get a review copy and see the authors give a thorough discussion of how SCM and release management practices relate to the rest of the software development ecosystem.
While the book covers a lot of material, it is well organized and well written. It's not a quick read, and you'll want to have a notebook or post-its handy to capture the ideas it helps you generate, but if you are interested in improving your deployment process you will find this book very valuable, whether you are a developer, tester, release engineer, or someone who manages people in those roles. When you finish this book you will not only have knowledge about how to implement a deployment pipeline, but also the encouragement of knowing that it is possible, no matter how complex your project might seem.
Thoughts about agile software development, software configuration management, and the intersection between them.
Thursday, December 23, 2010
Sunday, November 28, 2010
What Agile QA Really Does: Testing Requirements
Teams transitioning to agile struggle with understanding the role of QA. Much of what you read about agile focuses on developer testing. Every project I've worked on that had good QA people had a much higher velocity and higher quality. And there are limits to what developer unit testing can test.
In an "ideal" agile environment a feature goes from User Story, to Use Case, to implementation. With tests along the way, you should be fairly confident that, by the time you mark a story done, it meets the technical requirements you were working from. If someone is testing your application after this point, and the developer tests are good, it seems like they should be testing something other than the code, which has already been tested.
Even though it's important that your QA team does not become the place where "testing is done" (that will sidetrack an agile project very quickly), it is possible that the QA team is testing (ideally in collaboration with developers) things that the team decided developers could not test completely. Also, in practice, problems sometimes slip through developer tests, and the QA team is then testing the developer tests, giving feedback on the kinds of things the developer tests need to be better at.
Aside from "catching the errors that slipped through," a QA team doing exploratory testing is also exploring novel paths through the application, and may discover problems along some of those paths. In this case the team and stakeholders need to have a conversation about whether the error is something that users will see and care about, or whether it isn't worth fixing.
In effect, exploratory testing is testing the requirements for completeness.
If you have a QA team, and they are doing exploratory testing, they are really testing:
- Requirements: Finding interactions between features and components that were not defined or understood when coding and developer testing began.
- Developer Tests: Identifying where developer tests were not as good as they could have been (and how they could be better). This is a good use of automated "integration" tests.
- System tests that might be hard to run in a developer context. This might be the one set of tests that are the primary domain of QA.
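As a sketch of the second bullet, here is what it can look like when a problem found in exploratory testing is captured as an automated test. The shopping-cart domain and all of its names are invented for illustration: say QA found that a discount applied after items were removed produced a negative total, and the team keeps a regression test so developer tests cover that interaction from then on.

```python
import unittest


class Cart:
    """Invented example: a cart whose total interacts with discounts."""

    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def remove_all(self):
        self.prices = []

    def total(self, discount=0):
        # The fix: a discount can never push the total below zero.
        return max(sum(self.prices) - discount, 0)


class CartDiscountInteractionTest(unittest.TestCase):
    """Regression test written after exploratory testing found the bug."""

    def test_discount_after_emptying_cart_is_not_negative(self):
        cart = Cart()
        cart.add(10)
        cart.remove_all()
        self.assertEqual(cart.total(discount=5), 0)

    def test_discount_reduces_total_normally(self):
        cart = Cart()
        cart.add(10)
        self.assertEqual(cart.total(discount=5), 5)


if __name__ == "__main__":
    unittest.main()
```

The test names record what QA discovered, so the knowledge feeds back into the developer test suite rather than living only in a bug report.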
Of course, this is an idealization. But if you are disciplined about developer testing, you can still get a lot of value from exploratory testing, whether the people doing it are dedicated QA people or people who are filling the role.
Monday, November 22, 2010
Risks of Manual Integration Testing in the Context of Rapid Change
You have probably come across a situation like this: It's close to a release deadline. The QA team is testing. Developers are testing and fixing problems, and everyone is focused on getting the best product they can out the door on time. During this time, you may notice that someone on the QA team, while working late, has found an interesting problem. And they clearly spent a lot of time investigating the problem, identifying the expected results, and the details of why what's happening is wrong. If this is a data-intensive application, there may well be SQL queries included to allow you to pinpoint the issue quickly. In short, an ideal problem report.
Except for one thing. Your team found and fixed the problem hours before. And the effort to find and document the problem could have been spent on something else.
It's hard to avoid this kind of overlap when you don't have a complete end-to-end automated testing process. And it's probably impossible (for now, anyway) to create an automated test process that will completely replace exploratory testing. Any manual testing process has to balance timeliness with the "flow" of the testers. If you update a deployment hourly, you'll reduce the risk of redundant bug reports, but the testers will experience too many interruptions. If you adopt a paradigm where you code, deploy, and then stop coding until you see the next set of issues, you waste time and probably reduce quality. The answer is better communication between the testers and developers about the state of the application. Under ordinary circumstances, you might have this sort of exchange in your daily scrum.
If you're still early in adopting agile, you'll likely have a week or two at the end of a project where more effort is put into manual testing, and that manual testing will find errors. So the simple thing to do is to ask around the team before spending too much time documenting an issue.
Some of the options are:
- Searching the issue tracking system. This sounds like a good idea, but sometimes it's hard to find the right query. And it's possible that something got fixed without an issue being created.
- Asking. Yelling out a question, sending an email, or posting a message in a team chat room gives you the benefit of being able to be a bit vague in your query.
Situations like the one that I described can happen. But it's worth figuring out how often they happen, and if there are simple ways to reduce their incidence and impact. Good communication channels are important, and sometimes the lower-tech ones work better.
Sunday, November 14, 2010
To Scrum, Prepare
Agile methods have some sort of daily all-team checkpoint meeting as part of the process. The idea behind the Daily Scrum (Scrum) or Daily Standup (XP) is good: replace status meetings (or someone walking around asking about status) with one short daily meeting where everyone has a chance to communicate about what they are doing and what they need help with. This ensures that there is at least one chance each day for everyone to understand the big picture of the project, and to discover unexpected dependencies.
But just having everyone in the room doesn't make for an effective, focused scrum. You need to be prepared. Once I was on a team where the scrums started going off track. They took longer. People's updates were often "I don't remember what I did yesterday," or they became long, unfocused rambles that didn't convey much information. I suggested that we all take a few minutes before Scrum to organize our thoughts. This got a lot of resistance. "It feels like a pre-meeting meeting, and with Scrum we're supposed to spend less time in meetings."
While Daily Scrums are meant to be lightweight, it's respectful of everyone else's time to think about what's worth sharing with the team. Most days you might just be working on one thing, in which case a quick glance at the Scrum board might be enough. But if you want to do what's best for your team, why not take two minutes before Scrum (either in the morning, or even the day before) to jot down what you want to share with the team, addressing the questions:
- What did I do yesterday?
- What do I plan to do today?
- What were my roadblocks?
Starting each day with a clear picture in your head of the answers to those questions is probably not a bad thing from a professional development perspective anyway.
Sure, everyone will have off days where they don't get around to this, but if your Scrums are losing focus frequently, consider:
- Posting a sign that reminds people of the agenda of the scrum (as Tabaka suggests in Collaboration Explained: Facilitation Skills for Software Project Leaders)
- Whether there are people participating who aren't needed.
- Whether your work isn't being structured in a way that moves the team towards the sprint goal.
The Daily Scrum (or standup) is a useful tool for being agile and responsive, but just being in the room does not mean that you are having a Scrum.
Sunday, October 10, 2010
The Checklist as Empowerment tool
In an earlier post I talked about how many of the ideas in The Checklist Manifesto: How to Get Things Right support agile values. One of the observations in the book that caught me by surprise was that checklists help people function as a team by making it easier to distribute decision making and empower individual team members, which in turn helps teams make better decisions. A team of empowered, cross-functional people, working together to decide how to get work done, sounds a lot like the model of an agile team.
Checklists can help by institutionalizing a process where someone other than "the expert" is the center of decisions. In a discussion of a surgical checklist, Gawande observes that nurses are the best people to own the checklist process, but they need to be able to stop a surgeon who skips a step without risking disciplinary action from a surgeon who feels free to avoid the process. Making the checklist part of the endorsed process clears a path toward a more empowered team.
An interesting case study from the book involves construction management, since a building project, like a software project, involves a number of disciplines and teams. As a software pattern enthusiast, I've learned much from the study of Christopher Alexander's patterns. Much like agile and lean software development techniques have shown us the advantages of moving away from a command-and-control process, the construction industry has learned similar lessons. According to Gawande:
But by the middle of the twentieth century the Master Builders were dead and gone. The variety and sophistication of advancements in every stage of the construction process had overwhelmed the abilities of any individual to master them. In the first division of labor, architectural and engineering design split off from construction. Then, piece by piece, each component became further specialized and split off, until there were architects on one side, often with their own areas of sub-specialty, and engineers on another, with their various kinds of expertise; the builders, too, fragmented into their own multiple divisions, ranging from tower crane contractors to finish carpenters. The field looked, in other words, a lot like medicine, with all its specialists and superspecialists.

Or, for that matter, a software project, with people with expertise in various technologies like database design, user experience, etc. He further describes how a construction process uses checklists to remind people to think of the other aspects of a project:
Pinned to the left-hand wall opposite the construction schedule was ... a “submittal schedule.” It was also a checklist, but it didn’t specify construction tasks; it specified communication tasks. ... The experts could make their individual judgments, but they had to do so as part of a team that took one another’s concerns into account, discussed unplanned developments, and agreed on the way forward. While no one could anticipate all the problems, they could foresee where and when they might occur. The checklist therefore detailed who had to talk to whom, by which date, and about what aspect of construction—who had to share (or “submit”) particular kinds of information before the next steps could proceed.

The line from the book that seemed most like it could have been equally at home in a book on agile software development was:
They had made the reliable management of complexity a routine. That routine requires balancing a number of virtues: freedom and discipline, craft and protocol, specialized ability and group collaboration.

By focusing on the shared responsibility of everyone to get the job done, we can strive for quality:
“That’s not my problem” is possibly the worst thing people can think.

He sums up the role of checklists:
Just ticking boxes is not the ultimate goal here. Embracing a culture of teamwork and discipline is.

By embracing the idea of a team-based approach to solving problems, we may need to give up some of our beliefs about what makes a person a valuable contributor:
It somehow feels beneath us to use a checklist, an embarrassment. It runs counter to deeply held beliefs about how the truly great among us—those we aspire to be—handle situations of high stakes and complexity. The truly great are daring. They improvise. They do not have protocols and checklists. Maybe our idea of heroism needs updating.

So, how can you figure out how to use checklists to make teams more effective? Start with the basics and write down what you should be doing: the flow of a standup, minimally good coding practices, etc. Use iteration reviews and release retrospectives to identify what other issues can be avoided by adding a line to a checklist. Also review your current checklists and revise and improve them, especially if they get too long, or are mocked or ignored. Agile software adoption is as much about cultural change as it is about specific skills, practices, or tasks.
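One low-ceremony way to start (a sketch, with invented item names and structure) is to keep a checklist as plain versioned data, so a retrospective can add, prune, or reword a line like any other change:

```python
# Invented example: a team checklist kept as data under version control,
# so retrospectives can revise it like any other project artifact.
STANDUP_CHECKLIST = [
    "What did I do yesterday?",
    "What do I plan to do today?",
    "What are my roadblocks?",
]


def unanswered(checklist, answers):
    """Return the checklist items that have not been addressed yet."""
    return [item for item in checklist if not answers.get(item)]


answers = {"What did I do yesterday?": "Finished the report page."}
print(unanswered(STANDUP_CHECKLIST, answers))
# ['What do I plan to do today?', 'What are my roadblocks?']
```

The representation matters less than the habit: the checklist is visible, owned by the team, and cheap to change when it stops earning its keep.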
Monday, September 6, 2010
Lean Architecture
When I was a new programmer, the career path that appealed to me was to be a software architect. The architect was the person who had the vision of how the system worked, and the work of the architect (if done correctly) set the stage for all good things in a project, coordinating the development, requirements, and anything else you need to build a system. One thing that troubled me was that many architects I knew didn't code, considering coding a distraction. Having worked on a project or two early in my career with a non-coding architect who was reluctant to spend time helping the team address how difficult his vision was to execute with the languages and frameworks we were using, I thought that something was amiss with the idea of a non-coding architect.
One of the offshoots of specification-heavy projects (no doubt staffed by many analysts and non-coding architects) was the agile software development movement, which has as one of its principles minimizing Big Up Front Design. In some cases people took that (incorrectly) to mean always letting the architecture evolve organically.
As I had hoped when I received my review copy, Jim Coplien's recent book Lean Architecture: for Agile Software Development explains how agile principles and architecture are complementary, and how, with everyone working collaboratively, a good, lightweight architectural framework can enable agility rather than being a barrier to it. With his usual iconoclastic style, Coplien dispels the myth that agile doesn't need architecture.
As a C++ programmer in the early 90's, I used Coplien's Advanced C++ Programming Styles and Idioms as a source of interview material when looking for programmers. It's a good bet that this book will fill the same role for those looking to see whether candidates for architect roles understand what it means to be an architect in a lean or agile organization. This book dispels the myth that agile and architecture don't go together and explains the balance between agile architecture and too much Big Up Front Design.
This book emphasizes the importance of frequent collaboration between stakeholders in defining a good architecture and helps you understand the importance of architecture to the success of agile projects. With code examples throughout, it demonstrates that architecture and coding must go together. After describing some general principles of how architecture can add value to an agile project, the authors explain the Data, Context, and Interaction (DCI) architecture, which provides a framework for building lean architectures. My one complaint is that the transition between the general discussion of lean architecture and the focused discussion of DCI was a bit abrupt. This could almost have been two books: one on lean architecture principles, and a second (short) book demonstrating how DCI is a useful framework for applying the principles from book one. But this was a minor distraction from an enjoyable and informative read.
Rich with citations and historical context, this book will be useful for anyone who is struggling with how to build systems that need to support complicated user interactions.
Tuesday, August 3, 2010
Are You Done Yet?
Johanna Rothman recently wrote, commenting on Joshua Kerievsky's proposed definition of done. Both posts are worth a read, if for no other reason than to better understand why we have such a difficult time defining what "done" is, and why defining "done" is one of the major challenges for teams trying to adopt agile practices.
Thinking about both Joshua's and Johanna's points, I wonder if the difference isn't similar to a discussion of whether principles or practices are more important to success when adopting agile methods. On the one hand, following practices diligently allows you to develop good habits and even get some good results early on. The challenge comes when it's time to reflect on and improve your practices. Without a good understanding of the principles behind the practices, it's hard to optimize.
Similarly, defining done earlier in the process can cause problems if you are thinking about the meaning of "done" the wrong way. If "done" means washing your hands of issues ("we met the spec..."), evaluating done as late as possible makes sense; enforcing the idea that you are not done until the customer is happy is a useful driver.
If, on the other hand, you understand (and believe) that your goal as a developer is to deliver useful, quality software, and if the customer understands that they may not have understood the problem until they had a working system in hand, defining done for earlier steps means that you have more tools with which to evaluate your effectiveness, status, and progress. Done closer to the developer means that you have more, rather than fewer, chances to evaluate, learn, and improve. By embracing the principle that delivering a useful end product is the goal, you can benefit from having some local completion criteria.
Having the definition of done closer to the team (as Johanna recommends) allows you to measure progress and identify risk. You also need to acknowledge that completing all the stories may still leave work to do. Then you have to inspect, adjust, and adapt. Which is to say: be agile.
Monday, June 21, 2010
A Review of Drive by Dan Pink
This is one of those books that describes something extremely obvious and intuitive that at the same time goes against what you were taught was "common sense." This would be a good book just for the survey of the (long) history of the study of the theory of motivation. It also concludes with a number of things you can do to create an environment that encourages mastery (as opposed to simply meeting goals) in your work and school.
If you're an agile software developer you'll have a few aha! moments when you understand how agile practices really encourage flow and create environments where teams and individuals can be highly productive. If you're a manager, this book will encourage you to think about how teams work and how some common practices are counter-productive.
If you're trying to understand why self organizing teams work, but with a perspective outside of software development, this is a quick read that will get you thinking and learning.
Some other books on related topics:
- Behind Closed Doors: Secrets of Great Management (Pragmatic Programmers)
- Pragmatic Thinking and Learning: Refactor Your Wetware (Pragmatic Programmers)
- Flow: The Psychology of Optimal Experience (P.S.)
Wednesday, May 19, 2010
Motivation, Visibility, and Unit Testing
I've always been interested in organizational patterns (such as those in Organizational Patterns of Agile Software Development). I've recently found myself thinking a lot about motivation. I'm now reading Drive: The Surprising Truth About What Motivates Us and just finished Rob Austin's book on performance measurement. Being the parent of a three-year-old, I'm finding more and more that "because I said so, and I'm right" isn't too effective at home. My interest in motivation is closely related to my interest in writing software effectively. Writing software is partially a technical problem about frameworks, coding, and the like, but the harder (and perhaps more interesting) problem is how to get a group of people working together toward a common goal. Agile practices, both technical and organizational, build a framework that makes having the right amount of collaboration and feedback possible. But there's a bootstrapping problem: how do you get people to start doing the practices, especially technical ones such as unit testing?
In an ideal world, everyone will know how to write unit tests, understand their value, and want to write them. In an organization transitioning to agile, having all three parts in place is not a given. Most people understand why unit tests are useful, in principle. The problem is the execution. In my experience, there are two reasons people who claim that they want to write unit tests give for not writing them:
- They are too hard. The overhead of the test can make the cost of developing a feature excessive.
- They are too easy. Some functionality appears to be trivial, so why would we want to test it?
Both of these reasons can have merit at times. Testing getters and setters, trivial calls to a system library, and other simple coding constructs really doesn't add value. And writing a complicated test using a hard-to-use framework to verify something non-business-critical that could be quickly validated by visual inspection ("make the background color of the home screen orange") may well not be worth the effort. The problem is that most people have bad intuitions about where the lines lie until they start practicing the skill of unit testing. Most teams need experience with testing to get a true feel for what is a trivial test, and what is a seemingly trivial test that can unmask a major problem.
Rather than frame the testing challenge with the default being the old way of not testing:
Write a test when it makes sense.
Change your perspective to the default being to test:
Write a test unless you can explain why you did not.
The key to this approach is to encourage people to think through their rationale for not testing. There are a few ways to do this, but one approach is to tie the explanation mechanism into something every developer works with every day: your source code repository. Have the team agree that, in addition to the source files, every commit will include a change to a test or a rationale for why you didn't write a test. For example, if I do a commit without a test I could write:
ISSUE-23: Fixed the spelling of the company name. NO TEST: it was a typo.
or
ISSUE-26: Fixed the rendering mechanism. NO TEST: we don't have a good framework for testing this sort of thing.
or even
ISSUE-28: Fixed a serious logic issue. NO TEST: I didn't feel like writing one.
By developing a team agreement to add tests or explain why not, you are starting with a small change of behavior that paves the way for a greater change based on understanding. Even if a message like the last one is acceptable, many people will be uncomfortable admitting to laziness, and will think harder for a reason. By reviewing the commit messages later on, you can get a sense of the impediments to testing (technological, organizational, or attitudinal), and use that data in a retrospective to decide how to improve.
By being creative you can help people on your team understand the value of process changes, and start a conversation about how to evolve practices to suit your team.
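The team agreement can even be checked mechanically. Here is a sketch, in Python, of the check a Git commit-msg hook might run; the exact matching rules are an assumption for illustration, not something the team agreement dictates:

```python
# Sketch of a commit-message check: accept a message that references a test,
# or one that carries an explicit "NO TEST:" rationale. The matching rules
# here are illustrative; a real team would tune them to its own conventions.
import re


def message_ok(message: str) -> bool:
    """Return True if the commit message satisfies the team agreement."""
    if "NO TEST" in message.upper():
        # An explicit opt-out must come with a non-empty rationale.
        return re.search(r"NO TEST:\s*\S", message, re.IGNORECASE) is not None
    # Otherwise the message should mention the test that was added or changed.
    return "test" in message.lower()


# In a real Git commit-msg hook, the path of the message file arrives as
# sys.argv[1]; the hook would read it, call message_ok, and exit nonzero
# (rejecting the commit) when the check fails.
```

Even a crude check like this keeps the conversation alive: it forces a deliberate "no" rather than a silent omission, and the rationales accumulate in the history for the retrospective.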
Wednesday, April 28, 2010
Measuring Performance for Teams
If you've ever been in an organization that had performance reviews, you may have found yourself wondering whether the measurement process used to evaluate your (and your team's) performance made sense, and if there was a better way. Even if you have goals that can be evaluated quantitatively, rather than some metric that feels arbitrary, you may feel that your personal goals run counter to the success of your team. For example, you may wonder whether it makes sense to help your colleague on her high-priority project and risk missing a deadline for your lower-priority one. Sometimes the problems with measurement systems arise because people just don't measure well. In other cases, it's because it's impossible to measure all of the things that matter.
Rob Austin's book Measuring and Managing Performance in Organizations gives you a model to understand why measurement systems become dysfunctional, and an approach to avoid dysfunction when you are measuring.
Austin addresses some core issues that agile (and other) teams face:
- How to evaluate and reward individual knowledge workers doing complicated things in teams
- How to motivate the individuals on the team to do what helps the team meet their goals, and
- What performance measurements are helpful, and which just add noise.
Much of what I learned from reading this book seemed obvious in retrospect, but Austin explains the problem with clarity and precision, making observations that only seemed obvious once I read them. Early in the book, for example he points out:
Employees' true output (such as value to the organization) is intangible and difficult to measure; in its place organizations choose to measure inputs (such as the amount of effort devoted to a task...)
This seems to be such an obvious problem with many evaluation systems that you wonder why so many organizations still do it.
Reading this book won't give you a cookbook for designing a motivation and performance evaluation system. This is a difficult problem, especially for those working in an industry where there is a strong desire to quantify and measure. But this book will help you to understand the problem and enable you to evaluate and improve your current practices.
While Austin's book will help you understand the model behind effective performance measurement, there are also more day-to-day practices you need to help your team be successful. For these consider reading Johanna Rothman and Esther Derby's book Behind Closed Doors: Secrets of Great Management, which is an excellent guide to the day to day process of people management.
Managing people can be intuitive, but it is also more difficult than many people realize. A desire to measure is useful, but it can be counterproductive when you measure without understanding. Software development is a collaborative, human activity, and as such we need to understand that management and measurement are difficult, and doing either without an understanding of the challenges can lead to unexpected results.
Sunday, April 11, 2010
Things about Release Management Every Programmer Should Know
As I mentioned earlier, I was privileged to contribute to the book 97 Things Every Programmer Should Know: Collective Wisdom from the Experts. In addition to the contributions about coding and design, I was pleasantly surprised to see the number of items that relate to release management. While I've long been interested in how to build architectures and processes that make deploying and releasing software easy, I sometimes get the impression that these tasks are often thought of as necessary evils that can be done at the end, often by someone who isn't doing "more valuable work." Much as awareness of agile software development made it obvious that testing and quality assurance activities work best when they are integrated throughout the development lifecycle, agile has also made it more obvious why build and release engineering is something to work on as you go. This makes a lot of sense, as ease of release is closely tied to the physical architecture of the system, and your build process defines the physical architecture.
Some of the posts of interest are the following, though I could have added others that relate to testing as well.
- Deploy Early and Often by Steve Berczuk
- Install Me by Marcus Baker
- Keep the Build Clean by Johannes Brodwall
- One Binary by Steve Freeman
- Own (and Refactor) the Build by Steve Berczuk
- Put Everything Under Version Control by Diomidis Spinellis
- Step Back and Automate, Automate, Automate by Cay Horstmann
Branching is a useful tool when used in the right context, but more often than not, branching is used as a way to avoid issues rather than to address them. Rather than branching because there is a true divergence in the code, we branch to avoid breaking existing code. The problem is that doing so simply defers a cost. Sometimes deferring the cost makes sense. Often it's better to invest in the techniques you need to enable incremental change while keeping the codeline working.
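One such technique is a simple feature toggle: the new code lives dormant on the main codeline instead of on a branch, so the codeline keeps working while the change is incomplete. A minimal sketch in Python (the function and flag names are purely illustrative):

```python
# A minimal feature-toggle sketch: the new rendering path is developed on the
# main codeline but stays dormant until the toggle is flipped, so there is no
# long-lived branch to merge later. All names here are illustrative.

FEATURES = {"new_renderer": False}  # flip to True once the new path is ready


def render_legacy(text: str) -> str:
    return text.upper()  # the existing, known-good behavior


def render_new(text: str) -> str:
    return f"<p>{text}</p>"  # the in-progress replacement


def render(text: str) -> str:
    if FEATURES["new_renderer"]:
        return render_new(text)
    return render_legacy(text)
```

Switching behavior becomes a one-line configuration change rather than a merge, and both paths stay continuously integrated and testable.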
97 Things Every Programmer Should Know is about much more than just coding, or just release management. And that's the point: programming is a multi-faceted skill and to write good code, you need to know about more than just writing code.
Monday, April 5, 2010
Planning is a Gerund
One of the things teams adopting agile struggle with is deciding how much to define a plan before you start executing. Develop the plan too thoroughly and you risk that your team may not be responsive enough to change. Plan too little and you may end up changing course excessively, with no way to measure your progress toward any sort of deliverable.
At the core of this confusion over how much to plan is the reality that plans change, and spending too much time and energy creating a plan that ends up being wrong seems wasteful. But the lack of a plan means that you have nothing to measure progress against. One way to reconcile this is to keep in mind the following quote, attributed to Dwight Eisenhower (and a variant attributed to Churchill):
Plans are nothing; Planning is everything.
If we assume that as a project progresses things will change, we can still benefit from talking through the options as a team. Capturing the things that we don't know, but would like to, is useful information, and gives the team a good measure of risk.
The time you spend planning is an important consideration. Constrain the amount of planning time based on the duration of your sprint. If you can't come to an understanding of what the problem is or how to approach it, you have a clue that you're trying to do too much. But rather than throw up your hands, you can aim to have some sort of plan. It might be wrong, but even building the wrong thing can increase your understanding.
For the planning activity to be useful it is important that it not be top-down, but that it involve the implementation team, as they are the ones who can speak to the implementation risks, and who can propose creative solutions given the ability to probe about real goals.
One concern with involving the team and stakeholders at the same time is that a planning session that raises more questions than it answers can make some people uncomfortable. Senior managers may be uncomfortable acknowledging ignorance. Team members may be put off by seeing that there are legitimate disagreements among the product ownership team about some issues. And some people are just uncomfortable when you can't simply tell them what to do.
This is a cultural issue that may not be easy to overcome, but agile projects work well because the team can pull together to solve problems when given all of the information, and structure their code and their work to mitigate risk. And if the uncertainty exists it's better to identify it up front.
Regardless of the level of uncertainty about goals and dependencies, it is important to exit a planning session with a common vision for the goals and a target for when you will re-evaluate them. A well-run planning activity helps focus the team toward a common goal.
Saturday, April 3, 2010
Book Review: Modular Java
I recently read Craig Walls' book Modular Java: Creating Flexible Applications with Osgi and Spring (Pragmatic Programmers). This book is a very detailed tutorial that walks you through setting up an application using OSGI and Spring with the help of Maven as a build tool. If you aren't familiar with any of these technologies, this book will get you started, and quickly have you feeling like you have a basic grasp of the concepts and technologies.
You'll finish the book with a desire to learn more about the technologies, and an understanding of the power of modular applications. As a tutorial, this is an excellent book. It is not a general guide to designing with OSGi. There is some background on the frameworks, some explanation of the technologies, and pointers to sources for more information, but this book is all about learning by building. If you pick up the book hoping to learn the details of the how and why of OSGi and Spring, two useful technologies, you might be disappointed. But if you like learning by doing, and working with running code, you'll enjoy and value this book.
Thursday, March 25, 2010
Are You Agile Enough?
I recently read an article in SD Times about how organizations tailor agile processes to fit their environment, rather than feeling a need to be dogmatic. Adapting a process to work in your environment is very important to being successful. It's also important, however, to understand how the variations you are making help you move toward your goals.
Being agile is a means to an end; your goal is to develop better software more effectively, not to be able to wear a "We are Agile" badge. If you're considering adopting agile, you are probably doing so because your current approach isn't getting you where you need to be, so it's worth giving the 'by the book' technique a shot before you try to adapt an agile method to your circumstances. This especially applies when you consider omitting practices. Like many approaches, there is a synergy between the core agile practices; any one can help you be better, but the big wins come when you do them all.
The essential parts of an agile process in my experience are:
- Feedback, with a goal of continuous improvement.
- Honest evaluation of what you're doing and why.
- A (periodically) stable codebase so that you can deliver functionality to customers quickly.
- A set of goals that lead to customer value.
- A belief that the team doing the execution can find the best solution and that management needs to step back to encourage innovation.
When I read something like this finding reported in the article:
Of agile’s core tenets—daily standup, iteration planning and unit testing—VersionOne found in its fourth annual “State of Agile Development” survey that 69% of the 2,570 participants adhered to these three things.
or
Even a variation of Scrum exists. Known as “ScrumBut,” it was dubbed for shops that don’t fully comply with the methodology. It is for those who say, “We are doing Scrum, but…” While some people may view this as non-agile, others argue it’s simply a customization of an agile process to work better for that particular company and its structures.
I wonder what these teams are doing instead of the core "standard" practices to achieve the goals of agile. The argument about whether those practicing "Scrum-But" are "agile" misses the point of the label, which is to highlight the areas where teams can work to improve their productivity.
My former colleague Bruce Eckfeldt was quoted in the article as saying:
“I see the methodologies as a continuum, and at the end of the day it’s all agile with the same principles and practices,” ... “There’s nothing set in stone on how to do something. You’re always looking to improve."
If you're not improving as fast as you'd like, consider what steps in your process you are skipping. It's really hard to grasp the concepts behind agile until you are disciplined and try to apply the practices. If you have another approach that gets you to the end goal, by all means use it, but at the core, agile methods don't define a lot of detail. The basics are mechanically easy. The hard part is getting comfortable with the transparency agile methods require.
"Agile" methods define a set of goals and techniques to attain these goals. If you are trying to adopt agile, but don't want to adopt all of the practices of a method, it's fair to say that your organization isn't ready for a given practice, but be honest about it. Without the honesty and transparency, you're missing the key difference of agile methods.
As to the relationship between textbook and practical agile when adopting agile methods, Bruce is quoted as saying:
“things may be reasonably pure,” but then people start to see what works or not and begin “to look beyond the textbook versions of agile.”
The trick is not to customize too soon. Until you give the process a fair trial, there is a great tendency to fall back on comfortable ways.
Organizations look to agile methods because their current approaches don't allow them to deliver as quickly as they would like. And in the end, organizations and people need to change, whether they are adopting agile or any new process.
Before adapting, you need to understand the core values of agile, or you risk throwing the benefits away. You need to understand why you want to make the adaptation and honestly express the reason. And most important of all, you need to be clear and consistent about your goals. For example, if you want to encourage unit testing, you need to be clear that unit testing is encouraged, and accept that initially slightly slower progress is possible and OK while people are learning to write tests. Nothing can kill a change endeavor more quickly than inconsistent messages.
Being dogmatic for the sake of method purity isn't useful. Following the process until you understand the benefits of the practices can be very useful as a way to facilitate change. I'll end with a brief story.
When I first heard of XP, at an OOPSLA conference, I came back to work and suggested that our definition of "done" include unit tests. The other person on my team grumbled each time I asked "did you test that?" The grumbling continued until a couple of days later when, presented with a problem report from our QA team, he was able to diagnose the source of the problem in minutes with the help of unit tests. "Cool," I heard him mumble as he found and fixed the problem.
Had we not given the discipline of testing a good shot because we believed that unit tests didn't help, we would not have understood why the practice mattered. If you think that a method has value, try it, learn, then adapt. The opposite, adapting before you try, is less useful.
Sunday, March 14, 2010
The Checklist Manifesto as Agile Primer
I recently read Atul Gawande's book The Checklist Manifesto: How to Get Things Right and found a number of useful lessons in the book for agile developers. Agile software development methods often have very few explicit processes, but those processes are essential, and require discipline to execute well. We're often tempted to skip steps, either because:
- We think that the step doesn't apply in a particular situation or
- We forgot
It's when we skip steps that processes break down. Think about the times you've had a hard time tracking down a problem. How often was this problem in code that you wrote a unit test for? While it's often tempting to say that a change is "too trivial" to break something, how likely would you make that same decision if you had to go through a process (either on paper or enforced by tools) that asks "did you write a unit test?" to which you had to explicitly say "no"?
As another example, consider the various meetings that are part of your agile process, such as your daily scrum. Jean Tabaka's book Collaboration Explained: Facilitation Skills for Software Project Leaders discusses the value of posting and following agendas for the meetings that agile teams have. In some sense these agendas are just checklists that we follow to set up a context that allows us to meet the goal of the meeting in an efficient manner. In my experience, Daily Scrums and XP stand-ups become less valuable when they stray from the agenda: people lose focus and start thinking of the meetings as less valuable. And the posted agenda (checklist) empowers those who find a side conversation distracting to move the meeting along.
Discipline is essential to an effective agile software development process. But discipline, Gawande points out, is hard:
Discipline is hard—harder than trustworthiness and skill and perhaps even than selflessness. We are by nature flawed and inconstant creatures. We can’t even keep from snacking between meals. We are not built for discipline. We are built for novelty and excitement, not for careful attention to detail. Discipline is something we have to work at.
So, if checklists enforce discipline, do they do so at the expense of judgement and creativity? Gawande says no:
...the question of when to follow one’s judgment and when to follow protocol is central to doing the job well—or to doing anything else that is hard. You want people to make sure to get the stupid stuff right. Yet you also want to leave room for craft and judgment and the ability to respond to unexpected difficulties that arise along the way. The value of checklists for simple problems seems self-evident.

Using examples from aviation, structural engineering, and medicine, Gawande demonstrates that well-made checklists allow you to focus on the activities that require creativity, by providing a way to get the basics right.
The checklist gets the dumb stuff out of the way, the routines your brain shouldn’t have to occupy itself with.

As much as I find checklists useful, if used the wrong way they can do bad things to productivity and creativity. As Gawande says:
Bad checklists are vague and imprecise. They are too long; they are hard to use; they are impractical. ... They treat the people using the tools as dumb and try to spell out every single step. They turn people’s brains off rather than turn them on. ...Good checklists, on the other hand, are precise. ... They do not try to spell out everything... Good checklists are, above all, practical.

Perhaps a surgeon, using examples from aviation, medicine, and structural engineering, can teach agile developers something valuable. The main lesson that appealed to me, as someone who values (lightweight) process, is that process can enable you to be more effective and move quickly by liberating you from thinking about the well-known issues, and allowing you to focus on the hard problems.
If your team is struggling with process and not getting enough done, think about whether there are some simple things you are forgetting, and write them down. And, most importantly, iterate on the checklists.
Note: This was also the first non-fiction book that I read on a Kindle, so I'm discovering how useful the Kindle is for capturing notes as I read. I'll have more to say later about other lessons from The Checklist Manifesto about teamwork and collaboration.
Sunday, March 7, 2010
Agile Portfolio Management
I've heard people criticize agile methods as being too reactive, focusing too much on the little picture and ignoring larger goals. This is a misunderstanding of a basic idea of agile. Agile methods aren't about thinking small. Agile methods are about making small steps towards a goal, applying programming and management discipline along the way. (For more, have a look at an elevator pitch for agile I wrote last year.)
The basic approach of all agile methods is to:
- Define a goal
- Break the goal into incremental bits so that you can iterate towards the goal
- Periodically (at the end of each iteration) pause to evaluate both your progress towards the goal, and whether the goal makes sense.
If you have doubts about whether long-range planning in an agile environment is even possible, read Johanna Rothman's book Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, which I recently received a review copy of.
A project portfolio is "an organization of projects, by date, and by value, that the organization commits to or is planning to commit to." This sounds like a scaled up version of a product backlog that you might use to organize your work in an agile project, but with a longer time scale. So it's certainly aligned with agility.
In this book, Johanna motivates the importance of the project portfolio to enabling agile development, and also demonstrates how the technical and project management techniques of agile teams make it easier to define and iterate on a project portfolio.
Johanna is an expert on merging the human and technical sides of projects. I learned quite a bit about managing people from Behind Closed Doors: Secrets of Great Management, which she co-authored with Esther Derby. In Manage It!: Your Guide to Modern, Pragmatic Project Management Johanna discussed how to manage projects. And one of the more challenging parts of managing a project portfolio is overcoming the resistance some people have to defining a goal for a project, a portfolio for a product line, and a mission for an organization. In Manage Your Project Portfolio she shows how to address common obstacles to defining a project portfolio, evolving it, and using it as a tool to allow everyone to understand where the organization is aiming.
And the benefits of a project portfolio aren't limited to "fuzzy" concepts like vision; a portfolio can also help you reduce and address items such as technical debt. In addition to an overview of concepts and concrete guidance on how to address problems, the book interleaves stories that establish that this work is based on real-world experience, and that help you relate to the issues the book addresses.
I recommend this book to anyone who has a role in defining projects.
Sunday, February 28, 2010
Estimation Poker
Estimation is a necessary part of software development. Product owners want to know how much work can get done by a deadline, project managers need to make commitments, and developers want to know if they have committed to a reasonable amount of work. While estimates are often inaccurate, they provide landmarks along the way of a project to gauge progress. So estimates are an inevitable, and useful, part of the software development process. Many complain that the process of getting to those estimates takes too long, so planning sessions are cut short and teams don't have enough time to discuss the issues that have uncertainty. With appropriate use of planning poker, you can balance the need for good estimates against the time spent estimating.
Sometimes when a team is asked to estimate a backlog item, one or more people with expertise in an area are asked to estimate the item, but this is not the best way for an agile team to get a good estimate.
There are benefits to involving a larger part of the team in the estimation process. The challenge is that people feel that involving the whole team is wasteful if the estimation process takes too much time. On the other hand, inaccurate estimates have their own costs for the team and the other stakeholders.
Planning Poker, an estimating method popular with agile teams, can address some of these issues. Briefly, planning poker involves getting the developers on a team together to estimate stories using a deck of cards with numbers that represent units of work. The numbers are often spaced in a Fibonacci sequence, the theory being that the larger the estimate, the lower the precision. Planning poker can be a really useful tool both to improve estimation and to discover uncertainty in requirements.
People resist planning poker for reasons like:
- It seems inaccurate if the person doing the estimating does not have the "appropriate" expertise. A UI developer may not feel qualified to estimate a story that seems to be mostly backend processing, for example.
- It seems like a waste of time because people believe that one person can estimate for everyone.
- It seems inaccurate since the person who's been assigned the work should estimate it based on their skills.
If you find that your estimates are inaccurate, or your estimation process takes too long, consider the following approach:
- Gather team members who are working on all aspects of the application. You need not have the whole team, but be sure to represent each "architectural layer". If your team is less than 7 people or so, include everyone.
- Look at the description of each story or problem report in priority order. Ask the team to pick cards based on what they read.
- See how close the estimates are.
- If they are close, ask someone to explain what they envisioned doing to implement the issue. If someone has a vastly different idea, they should speak up.
- If they are different, ask someone with one of the extreme estimates to explain their reasoning. This will start a conversation about what the requirement means, and what implementation strategy makes sense.
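As a toy sketch of the convergence check in the steps above (the deck values and the "adjacent cards count as close" rule are my own illustration, not a rule from any planning poker reference):

```python
# Toy planning-poker round: everyone reveals a card at once, and the
# spread of the cards decides whether to accept the estimate or talk.
DECK = [1, 2, 3, 5, 8, 13, 21]  # Fibonacci spacing: bigger = less precise

def round_result(votes):
    """Return ("consensus", estimate) if the cards are close, else
    ("discuss", (low, high)) so the extreme voters explain first."""
    lo, hi = min(votes), max(votes)
    if DECK.index(hi) - DECK.index(lo) <= 1:
        return ("consensus", hi)  # close enough: take the larger card
    return ("discuss", (lo, hi))  # wide spread: start the conversation

print(round_result([3, 5, 5]))   # adjacent cards: accept 5
print(round_result([2, 13, 5]))  # wide spread: discuss before re-voting
```

The point of the sketch is only that the mechanics are cheap; the value is in the conversation a wide spread triggers.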
This process helps you focus discussion time on the hardest, highest-priority issues. You will want to be sure to allocate an appropriate amount of time to planning and estimating relative to your sprint length. You may still run out of time, but even if you do, you'll have discussed and estimated the highest-priority items as accurately as you could have, knowing what you knew.
The biggest challenges to having accurate estimates are not having consensus on the "what" and not understanding the details of the "how." The process above is one way to focus discussion on the high-risk items in your backlog, while keeping the time spent on estimating reasonably low.
Saturday, February 20, 2010
The Indivisible Task
One of the things that makes agile work well is a daily sense of progress that can be reflected in, for example, a burn-down chart. For burn-down charts to be meaningful, the estimates of the amount of work remaining in a sprint need to be accurate. Re-estimating work remaining in a task is helpful, but the best measure of progress is the binary "done/not done" state of the items in your backlog.
Assuming that you have a clear definition of "done" for a task, it's easiest to measure progress when you have tasks that are small enough that you can mark them complete on a daily (or more frequent) basis. Breaking work down into a reasonable number of reasonably sized tasks is something many find challenging. (Note: I'm talking here about development tasks as part of a sprint backlog, rather than splitting User Stories in a product backlog, though there are some parallels.)
I've worked on teams where people refused to break down large tasks into one-day or smaller parts. The common excuse for not breaking down work is that the person who signed up for the work understood what the work was and the estimate was accurate. Of course, we had no way of knowing that the estimate was wrong until the work was not done at the end of the week or so.
What was interesting to me is that those most resistant to decomposition weren't less experienced programmers, but rather the people the team acknowledged as "experts" and "good designers," who were good at decomposition as it applied to designs. So the theory of attacking complexity by looking at smaller pieces was something they were comfortable with. Not only that, they actually worked in a way that led to discrete units of work being completed throughout the project, whether in terms of frequent commits or even simply being able to finish a work day with a sense of accomplishment, even if the larger task was still incomplete.
Breaking down work isn't as hard as some make it sound. From a development-centric perspective, here are some of the things you already do that can guide you in task breakdown:
- Thinking about when you might commit code. It's good practice to commit code frequently; consider the Task-Level-Commit pattern from Software Configuration Management Patterns.
- Considering the tests you write as (or before) you code.
- Deciding what you want to accomplish before leaving work each day so that you end the day with a sense of accomplishment.
The main difference between doing this kind of planning and good programming practice is making your plan visible to others. This takes discipline, and a certain amount of risk, since if your plan goes awry it's visible. Part of being a successful agile team is understanding that plans can be wrong, and using that experience to figure out how to do better in the future.
You may discover part way through your planning that the task breakdown you did no longer makes sense in light of something you discovered. That's OK, but at least you have a good sense of what work was done, and you can figure out what tasks are left (and estimate them!)
By breaking down work into smaller parts you have the ability to:
- Evaluate your progress in a definitive way, as it is often easier to define "done" for a smaller task.
- Get feedback from your colleagues before you dive in to a problem.
- Share effort if any of the work can be done in parallel.
- Simplify updates and merges, as the changes to the codeline will all be small at any point in time.
The value of working in small steps isn't a new idea. In 1999 Johanna Rothman wrote about the value and mechanics of managing your work in terms of what she calls inch-pebbles (as opposed to "milestones"), and Fred Brooks advised "allow no small slips" in The Mythical Man-Month; being able to identify these slips is key to having effective sprints.
Agile teams work because they have mechanisms to give frequent feedback on status. Accurate estimates of work remaining are an essential tool for evaluating progress, and small tasks help you estimate accurately. Decomposing work is not easy, and takes discipline, but the benefits are great.
Tuesday, February 16, 2010
97 Things Every Programmer Should Know is Done
The book 97 Things Every Programmer Should Know: Collective Wisdom from the Experts is finally available, and the title is on the mark.
Kevlin Henney, who I first met at the 1998 PLoP conference, asked me to participate in this project in September of 2008. I am honored to be a part of the list of contributors, which includes Kevlin, Bob Martin, Michael Feathers, Giovanni Asproni, and many others who have important things to say about how to build great software. Kevlin did an amazing job coordinating and editing, and the book represents an excellent cross-section of the many contributions that formed the basis for the final version.
Reading this book gives you a chance to learn from the experiences of people who've worked hard not just at writing good code, but at creating good software systems. Some of the advice may be things you already know. Some items may be surprising. Read this book to learn, be challenged, and to understand why programming isn't just about languages and syntax.
For more info, you can look at the associated wiki site. And feel free to share with me any thoughts you have about my contributions: Deploy Early and Often and Own (and Refactor) the Build.
Sunday, February 7, 2010
Tracking what Matters
I'm a big fan of burn-down charts for tracking sprint and release progress. The basic idea of a burn-down chart is that the team starts with estimates for all of the tasks in the sprint, and then on a daily (or more frequent) basis re-estimates the amount of work remaining.
With a burn-down chart, you are tracking the new estimate based on the work that you have done. As you work on the sprint backlog you get a better understanding of the tasks, and thus you can revise estimates for tasks that span more than one day. This is reasonable since the original estimate is, well, an estimate.
Sometimes if you spend 4 hours on an 8-hour task, you'll have 4 hours of work left. Most of the time, though, the work remaining will not be the original estimate less the time spent. At the end of 4 hours, the remaining-work estimate for that same 8-hour task could be 2 hours, or it could be 10 hours if you discovered a snag. This is important information for everyone involved in the project and allows the team to identify a problem at the daily scrum. Re-estimating is harder than just doing subtraction, but it's valuable.
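To make the distinction concrete, here is a minimal sketch (with invented task names and numbers) of why a burn-down chart should plot daily re-estimates rather than "original estimate minus hours logged":

```python
# Each task carries a per-day re-estimate of hours remaining.
# Note the day-2 uptick on "api": a snag was found, so the estimate
# grew even though work was done. Subtracting logged hours would hide it.
daily_reestimates = {
    "api": [8, 10, 6, 2, 0],
    "ui":  [5,  4, 4, 1, 0],
}

# The burn-down series is the sum of remaining work across tasks per day.
burn_down = [sum(day) for day in zip(*daily_reestimates.values())]
print(burn_down)  # [13, 14, 10, 3, 0]: the rise on day 2 flags the snag
```

A chart built from logged hours can only ever go down, which is exactly why it can't tell you about the snag.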
One thing that happens when teams use an issue tracking tool (like Jira and Greenhopper) to manage their backlog is that re-estimating and effort tracking are combined. The only way to re-estimate is to "log work." You're required to enter the amount of time spent, and the tool will kindly offer to change your estimate to the difference between the original estimate and the time spent. There are two problems with this:
- It's important to think about the time left for the task based on the assumption that your original estimate had a margin of error. For all but trivial cases, the "calculated new estimate" is always wrong.
- The "time spent" value isn't really useful to stakeholders. In the best case you are only doing one thing during the time in question, so your time-spent entry is accurate, but it doesn't answer the question "when will it be done?" In the worst case, you're not tracking your time accurately, so the time-spent number is inaccurate and provides no real information.
Like all things agile, when looking at your project tracking approach you need to be clear about what you want to track and why. The main concern for stakeholders on an agile project is whether they will get the functionality they want at the end of the sprint. So the time-remaining number is important.
There are some good reasons for tracking time spent, including evaluating estimation accuracy and billing.
But in both of these cases you need to weigh the overhead of tracking time against its value. Tracking the sprint's total effort relative to estimated work done may be more useful than tracking per-task effort against estimates, and analyzing the results in a retrospective may yield more useful information than per-task tracking.
When doing sprint tracking:
- Make sure that everyone understands the goals of the tracking process so that you get uniformly valuable results. You definitely want to track how close you are to "done," but explain how important tracking effort is.
- Make sure that, whatever the goals, the data are updated daily. If the burn-down chart doesn't change for two days, is it because people didn't update their estimates, or because the project is at a standstill?
- Remind everyone that the estimates are just that: estimates. An informed guess that turns out to be wrong is better than no estimate at all. (And the inaccuracy of an estimate helps to identify unknown complexity.)
Burn-down charts can be a simple, valuable tool for identifying problems during a sprint, as long as your team breaks the habit of tracking "effort" as opposed to "effort remaining."
Monday, January 18, 2010
Banishing Blame with Agile Values
I recently got pointed to an article about research done at Stanford and USC on the dynamics of blaming in organizations. People Like to Play the Blame Game says that it's quite easy to create a culture of blame.
Merely observing someone publicly blame an individual in an organization for a problem - even when the target is innocent - greatly increases the odds that the practice of blaming others will spread ...

The ways that you avoid a blame culture seem to fit well into an agile culture that values retrospectives and continuous improvement. The article advises leading by example:
A manager can keep a lid on the behavior by rewarding employees who learn from their mistakes and by making a point to publicly acknowledge his or her own mistakes, Fast said. Managers may also want to assign blame, when necessary, in private and offer praise in public to create a positive attitude in the workplace.

This advice isn't just for managers; everyone on the team needs to acknowledge and learn from their own mistakes. And when you have a problem and find yourself trying to figure out whose fault it is, take the advice from XP books such as Extreme Programming Installed
and decide that It's Chet's Fault, which is to say, stop trying to place blame and think about what the problem is.
In an agile organization you need to be careful to balance the need to figure out the root causes of why a project didn't go as well as it could have against the tendency to place blame. Blame undermines collaboration, making it more difficult to improve. Accountability and responsibility help you figure out how to do better. Create an environment where it is OK to accept responsibility. One way to do that is to acknowledge when you make a mistake yourself. Another is to hold retrospectives periodically, and not just when things go badly.
Another quote from the article led me to think about another benefit of agile.
Another experiment found that self-affirmation inoculated participants from blame. The tendency for blame to spread was completely eliminated in a group of participants who had the opportunity to affirm their self-worth.

Agile projects, with a common vision, self-organizing teams, and good infrastructure to help you make forward progress and detect problems quickly, are a perfect environment for feeling like you are contributing.
While much of what's in the article sounds intuitive, it's good to know that there is data to support that blame is both contagious, and simple to avoid.
Monday, January 11, 2010
Estimates and Priorities: Which Comes First
When developing a release plan, product owners often want to factor cost (or estimates) into where items go in the backlog. For example, you might hear "Do that soon, unless it's really hard." If this happens once in a while, it's a useful way to have a conversation about a feature. If this approach is the norm, though, nothing gets prioritized until the team estimates it. I'd argue that the default order of operations is that features should get prioritized before anyone spends any effort on estimation. My reasons for this are both philosophical and practical.
- Estimation takes time, and if you don't start with a prioritized list, you spend a lot of time estimating items that may never be worked on. (And yes, you may need to re-estimate items when they reach the top of the list, as estimates may change based on experience, staffing, and architecture.)
- If you estimate as work appears, you lose some of the benefits of fixed, time-boxed sprints, and you increase the overhead cost of planning.
- By allowing the team to estimate first, and pushing an item off the list because it is too expensive, you are missing an opportunity for a conversation about how best to meet the business need.
Often the first version of a story seems large because it includes more functionality than needed. If the team knows that there is a critical feature to implement in a sprint, but that there isn't time to complete it, there may be a simpler, less costly version of the feature that meets most of the business need. If the product owner simply lets a large estimate defer the item, the conversation will never happen and the business needs may not be met, which would be bad for everyone. Likewise, if the expensive feature is lower on the list, you need not have the conversation until later.
This balancing act between estimates and priorities underscores a key principle of agile planning: user stories are an invitation to a conversation. By prioritizing first, you understand where to focus energy on analysis and design. You also keep the team focused on delivering business value, with the engineering team and the product owner communicating actively.
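To make the order of operations concrete, here is a small illustrative sketch (the names, numbers, and helper are hypothetical, not from any real backlog) of estimating lazily against an already-prioritized list, so lower-priority items never consume estimation effort:

```python
# Illustrative sketch: prioritize first, then spend estimation effort
# only on the items near the top of the backlog.

def plan_sprint(backlog, capacity, estimate):
    """backlog: items already ordered by business priority.
    estimate: a (possibly expensive) function item -> story points.
    Returns the items that fit within this sprint's capacity."""
    planned, used = [], 0
    for item in backlog:             # walk in priority order
        points = estimate(item)      # estimate lazily, top-down
        if used + points > capacity:
            break                    # lower items are never estimated
        planned.append(item)
        used += points
    return planned

# Hypothetical backlog, already in priority order.
backlog = ["login", "search", "reports", "theming", "export"]
sizes = {"login": 3, "search": 5, "reports": 8, "theming": 2, "export": 5}
print(plan_sprint(backlog, capacity=10, estimate=sizes.get))  # ['login', 'search']
```

Of course, in practice a large estimate at the top of the list is a prompt for a conversation about a simpler version of the story, not just a reason to stop planning.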
Thursday, January 7, 2010
Review of Adapting Configuration Management for Agile Teams: Balancing Sustainability and Speed
Adapting Configuration Management for Agile Teams: Balancing Sustainability and Speed is a book that bridges the gap between traditional SCM and release engineering on one side and agile teams on the other. Mario asked me to be a reviewer of the draft manuscript, and I knew that Mario had great experience establishing SCM processes in larger organizations, and that he was also a strong advocate of agile methods. I was pleased to discover a book that, while able to help those from a traditional release management background adapt their processes to support agile, also addresses the needs of teams transitioning to agile who can benefit from using appropriate SCM processes.
This book will help you to understand how SCM can be an enabler for agile, and will also help you to understand how to fulfill the SCM definition of Identification, Control, Status Accounting, and Audit and Review while still being agile.
This is what I said about the book on my books page:
This book is a good guide to both CM and agile principles, and it demonstrates how to use software configuration management to enable your team to be more agile. This book can guide you to understanding how to manage releases in an agile environment, and how to apply basic CM concepts like build and branching successfully. While not a replacement for a book on your agile method, this book is a primer on agile for those with a traditional release management background, and a primer on CM for those who understand agile. After reading it you will have enough background to be productive, and a good sense of what you need to learn more about. In addition, this book covers topics such as how to leverage cloud service providers for infrastructure, how to leverage SCM to make off-shore development less painful, and how to evolve your SCM process in an agile (incremental) fashion. With a good structure that allows you to navigate the book quickly, and a good use of metaphor to describe concepts, this book will help release managers, project managers, developers, and architects use the SCM process to get the most out of their agile teams.

If you are transitioning to agile and want to know how your release management team can help rather than hinder you, give this book a look.
Monday, January 4, 2010
Sprint Boundaries and Working Weekends
A core principle of agile methods is sustainable pace. While the precise definition is debatable, the basic idea is that your workload should allow for a life outside of work, which in turn means not planning on overtime. The reality for many projects is that there will be times when the team needs to work outside of "normal" hours to meet a goal, and this is consistent with the idea of a sustainable pace if it happens only occasionally and the team decides that the added hours are needed to meet the goal.
Another core principle of agile methods is transparency. In order to improve, you need to be honest about acknowledging changes in plans, mistakes which cause more work, and misunderstandings that cause you to get things done significantly ahead of schedule.
A decision that teams make independently of the number of hours that they need to plan for is what the boundaries of a sprint should be, particularly with a 1 week sprint. This question is especially relevant if you think that weekend work will be needed. If you are doing 1 week sprints, there are basically 3 choices:
- Have sprints run Monday through Friday, with a review first thing Monday.
- Have sprints run Friday through Thursday.
- Have sprints run Tuesday through Monday.
The second option is better; the weekend is inside the sprint boundary, and if the team agrees to work over the weekend you can measure that. With this option the weekend falls early in the sprint, when you may still be optimistic and not expect extra work to be needed; and if you front-load the riskier work, you might find it more useful to plan that work during the week, when you can count on everyone being around (assuming that weekend work is done on people's own schedules).
The third option puts the weekend at the end of the sprint, so by then you'll know whether there is a need for extra work. Typically the work at the end of a sprint involves more defined tasks; you may be wrestling with getting something to work, but you have probably had the design discussions already and tackled all of the tricky issues at the start. At this point, going off into a corner over the weekend to work is less likely to adversely affect someone else's work.
While each team should define what "sustainable pace" means, and how best to meet their goals, to get the full value of an agile process, keep any days you expect to work inside the boundaries of the sprint. And when you do your release retrospective, be sure to discuss how well the team is managing its workload.
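As a small illustrative sketch (treating each sprint as a simple seven-day window starting on a fixed weekday, which glosses over the Monday-review detail of the first option), the three boundary choices can be computed like this:

```python
from datetime import date, timedelta

def sprint_window(any_day, start_weekday):
    """Return the (start, end) dates of the one-week sprint containing
    any_day, where start_weekday is 0=Monday .. 6=Sunday."""
    offset = (any_day.weekday() - start_weekday) % 7
    start = any_day - timedelta(days=offset)
    return start, start + timedelta(days=6)

d = date(2010, 1, 6)        # a Wednesday
print(sprint_window(d, 0))  # option 1: Mon..Sun window (working days Mon-Fri)
print(sprint_window(d, 4))  # option 2: Fri..Thu, the weekend falls early
print(sprint_window(d, 1))  # option 3: Tue..Mon, the weekend falls at the end
```

Seeing where the weekend lands relative to the sprint boundary makes it easier to discuss, in the retrospective, whether any weekend work was planned or was a surprise.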