
Sunday, January 29, 2017

Excitingly Well Run Meetings

The group of people huddled together in a room “until the job is done” is often used as a demonstration of virtues like “diligence” and “commitment.” And these scenes are often the stuff of dramatic moments in popular culture. We rarely, if ever, see any effusive praise for the well-run meeting that ends on time with a useful outcome. The former is more compelling. The latter is often more valuable and much harder to do. And while there are times that the former is the right thing to do, more often than not, it’s an indication of a problem. Well run meetings are useful meetings.
While it seems a bit pedantic to talk about meetings and schedules, and teams often present a ‘meeting-free culture’ as an ideal, the reality is that software development is a collaborative activity and we need to interact (or “meet”) with people. When you need to collaborate with more than a couple of people — perhaps from different teams — scheduling a meeting is inevitable. People have other commitments and time constraints, and respecting these commitments is good for the organization and the individuals.

Respect

There are many reasons for keeping a meeting on schedule. Starting late or ending late wastes people’s time, which has a financial cost. But the primary reason for keeping to a schedule is respect: not honoring a schedule shows a lack of respect. By starting or arriving late you are being disrespectful to the attendees who arrived on time; by not ending on schedule you are being disrespectful to anyone an attendee is supposed to meet after the meeting, and to anyone who needs the room (or other resources) you are using. Of course, it is OK to extend a discussion by mutual agreement, but you also need to agree on the basic ground rules.

Meeting Principles and Protocols

There is much written about good meeting structure and facilitation, and if you are in a role where you need to collaborate with people (as most of us are) it’s worth the effort to learn a bit about the subject. An effective meeting has, at minimum, two things: a reason, and a schedule.

A Reason

In general, a meeting worth having should have:
  • A reason, which attendees understand.
  • A goal, which you can decide if you have met or not.
  • An agenda, to help everyone understand if they are on track.
  • Attendees who understand why they are there.
  • A start and end time that is sufficient to have the discussion you need, and which allows for people to meet their next commitment.
All of these items can have more or less definition depending on the culture of the team, the nature of the problem, and how close you are to having a common baseline of understanding. A Sprint Planning Meeting might have a more well-defined reason, goal, and agenda than an “explore design options” meeting. But it is important to be able to express these things.

Start Times and End Times

It’s easy to think of meetings as being (say) 1 hour long, starting and ending at hour boundaries. This makes no sense once you consider that attendees may need to be somewhere else right after your meeting. It’s helpful to have a protocol that allows for a buffer at the start and end of the meeting for transitions. There are many options. For a 1:00 - 2:00 meeting you could:
  • Start at 1:05 and end at 1:55
  • Start at 1:00 and end at 1:50
  • Start at 1:10 and end at 2:00
or any variant. Pick what works for your organization. In my experience the first option seemed to work best, as people tend to be more keyed to “round” boundaries. But as long as everyone is consistent, people will have time to regroup and move between meetings.
Be sure to plan the agenda to include time before the end of the meeting to review what you all decided, what followup is necessary, and to decide who will do that followup.

When “Until we are done” makes sense

The opposite side of watching the clock is the idea that you don’t want to cut short a productive exchange of ideas. Ideally you would have allocated enough time to allow for opportunistic conversation. If you do that, you will need to take care that a non-productive meeting doesn’t expand to fill the allocated time. In some cases you can’t do that, and “in the room until the problem is solved, regardless of schedule” is the right thing to do. But that only makes sense when certain things are true:
  • The issue under discussion needs to be resolved soon.
  • The issue is the top priority for everyone in the room who is asked to be there, and everyone understands this.
  • If you are using a shared resource, such as a conference room, the issue at hand is the top priority for the organization, and you have priority for using the resource.
All of these things are rarely true at the same time. Most people have other commitments, and in many organizations, meeting rooms are a scarce resource. It’s worth taking the time to solve the hard problem of how to get useful work done on a schedule (and to identify what problem fits in that schedule).

Next Steps

While “meetings” may always have a subtext of “something that distracts from work,” if you make the effort to be respectful of people’s time you’re likely to get more done. While having a company culture or guidelines that supports this approach is good, you can establish these guidelines as part of your working agreements at any level of scale.
Regardless of what guidelines you use, the most important thing is to be mindful of the goals of the meeting, and of the reality that people may have other commitments.

Monday, January 9, 2017

Books to Make Discussion Easier

For a variety of reasons, I’ve recently found myself in quite a few conversations about social and political issues, both in person, and on Facebook and other social media. Even when I was engaging with someone with a different view than I had, I learned a lot, both about my views and contrary ones. Other conversations were more frustrating. The difference between the enjoyable ones and the frustrating ones seemed to be that the arguments I heard didn’t always seem to be either relevant or logical. Rather than (always) walk away, I took these challenging conversations as an opportunity to practice focusing on understanding, rather than (only) an opportunity to win. (Though sometimes walking away is the best thing...)
I found these books, which I’ve recently read, to be a useful part of a toolkit for more productive arguments about controversial subjects:

Don’t Think of an Elephant by George Lakoff is the most partisan of the books. It is upfront about being a “guide for Progressives,” though it also explains the concept of “framing,” and the role it plays in how people interpret information.
As an engineer I tend to think that the best way to argue a point is with facts and data. This doesn’t always work, especially in political discussions, even when the data are clear, quantifiable, and not disputed by reasonable people, because the words are sometimes framed in a way that reinforces another viewpoint. Don’t Think of an Elephant explains how framing works from the perspective of linguistics and cognitive science, and the importance of framing in discussion and debate. Lakoff emphasizes that this book is more action-oriented than academic, and he points to more scholarly works on the topic for those who are interested.
While the book is about political advocacy, and geared at Progressives, it can be useful for a number of audiences. Progressives can better understand how to frame their arguments when trying to influence others. The book also discusses the Conservative mindset, and awareness of the perspective of someone who thinks differently than you can help bridge gaps. Conservatives who are interested in having better conversations with their more progressive friends and associates might also get some insights from the book.
The book heavily emphasizes political discourse, but the concept of frames and framing is something you can apply to communicating your perspective in various contexts.
Partisan as the book is, it can also be useful for bridging gaps. It might provide a guide for Progressives to make their points in a more effective, less reactive way, and to have more productive conversations with their Conservative friends.

Mastering Logical Fallacies by Michael Withey is a bit less political, and more generic, but still relevant to political conversations. I got a copy of this book for my 10-year-old so that he’d be exposed to the idea of good arguments early. Then I decided that I needed to read it myself, in part as a reaction to having been in far more conversations around recent political events where some of the arguments made no sense to me. The book helped me understand how to recognize and address those kinds of arguments. It also has helped me to take a step back in discussions in other domains, including technical discussions at work.
Having a framework for understanding these kinds of fallacies can help you to put a conversation in context, and be able to (more) calmly address the issues people are raising, rather than react emotionally and perhaps commit the same kind of fallacies yourself.
While I can’t speak fully to the thoroughness of the discussion of the fallacies (maybe if I had either taken the forensics class in High School, or considered the Debate Team!) I found this to be a really good bit of background. My one complaint is that some of the examples are a bit forced, but the author still makes his point most of the time.
I still plan to share the material with my 10-year-old so that he can learn how to have good discussions -- something it’s never too early to learn! This book is a tool that can help you navigate conversations (especially political ones) be they on Facebook or in person.
While the first two books are more tactical, in that they provide guidelines for how to deal with specific conversational situations, Humble Inquiry by Edgar Schein is more about mindset.
This is a short, pragmatic, easy to read, book that can help you be better at something both essential and often neglected: How to work together better. By chance I first opened the book to a page with the heading “The Main Problem: valuing tasks over relationships” and I knew that I made the right choice when I got the book. Teamwork is essential and building an environment where people trust each other enough to work as a team is hard.
This book explores a technique that can help you work across cultural, organizational, and hierarchical boundaries. With theory, examples, and practices to try, it becomes easy to understand what Humble Inquiry is. The practice will take work.
Humble Inquiry is as much a mindset as a technique since, as Schein points out, it is hard to be authentic when simply “acting humble”; people will notice, and you will thus erode rather than build trust.
Any leader, team member, spouse, or even parent can learn valuable things from reading this book. I would even argue that if you interact with anyone there are lessons you can learn, or at least have reinforced. This book is a small investment in both time and money for a large reward.
There are certainly many more books that cover this space, but these three seemed to cover a range that might help me weather the more difficult conversations.

Sunday, July 10, 2016

Starting and Closing Agile Retrospectives with People in Mind

One of the more powerful aspects of agile software development methods such as Scrum is that they acknowledge the importance of individuals and their interactions in delivering quality software. As much as it is important to review and adapt the product backlog by having sprint review meetings at the end of each sprint, it is also important to have retrospectives to inspect and adapt how the Scrum process works on a team. The Sprint Review is about the tasks and scope (the “What” of the sprint). The Sprint Retrospective is about the Scrum process (the “How”). Sadly, many teams miss out on some value by glossing over the parts of a retrospective that acknowledge the human elements of the Scrum process. By using some simple techniques teams can improve their retrospectives by putting more emphasis on people.
Allocating time for a retrospective every 2 weeks (if you use 2-week sprints) can be a challenge. The 5-step structure that Esther Derby and Diana Larsen describe in their book Agile Retrospectives is an excellent framework for making good use of retrospective time. The steps are:
  • Set the Stage, where you introduce the plan for the retrospective, and help people move towards a mindset that will help identify problems
  • Gather Data, where you collect information about what went on during a sprint. Some of the data collection can happen before the actual meeting, but people will likely think of information to add.
  • Generate Insights, where you identify patterns and connections between events, and start to consider why things may have happened.
  • Decide what to do, where you collect ideas for things to do going forward, and then focus on a handful to explore in detail.
  • Close, where you review action items, appreciate the work people did, and perhaps discuss the retrospective.
These steps create an environment where people can feel safe, and help the team to explore the real impediments to improvement. Often teams skip steps, merge steps, or don’t consider whether the exercises they use at each stage move the process forward. Using structured exercises like those in Derby and Larsen’s book helps keep the retrospective focused. Another common tendency is to problem-solve too early, combining the Gather Data, Generate Insights, and Decide What to Do steps. This mistake is often self-correcting, as teams discover that they come out of retrospectives with actions that address superficial problems.
A bigger problem is when teams skip the steps that address the humans on the agile team. In particular, some facilitators skip over Setting the Stage, or Closing, in an effort to allow time for the “significant” parts of the meeting. While only a small part of the meeting time, the Setting the Stage and Closing steps are quite valuable in terms of impact.
Setting the Stage for the retrospective can take just a few minutes, and can improve the effectiveness of the entire meeting by creating an environment where people feel comfortable collaborating. There are many reasons people may not contribute, including simple shyness or lack of attention, or even concern about getting blamed for something. Setting the Stage correctly can help engage the team more fully in the process by bootstrapping participation and emphasizing that the retrospective is about improvement not blame.
I often start a retrospective with an exercise that involves going around the room and giving people a chance to say a word or two about something, for example “one word about how they feel the sprint went,” or “how they feel about the retrospective,” or even “one thing about yourself that you’d like to share with the team.” This often helps people step out of a spectator role. Note: always give people the option to say “Pass,” since forcing people to reveal something about themselves is counter to the values of a retrospective; even saying “Pass” gets people engaged in the process.
To reinforce the constructive goals of the meeting, teams I work with sometimes start retrospectives by having someone read The Retrospective Prime Directive, and asking everyone if they agree. While some people initially feel like this process is a bit silly, many teams find it valuable, and make an effort to rotate who reads the Prime Directive.
The other part of the retrospective that can help maintain connection is the Close. I encourage teams that I work with to incorporate appreciations into their closings. Appreciations are a structured way of acknowledging the work someone did during the sprint. A quick appreciation can really help people feel engaged and valued, and the process helps the team consider the value each brings to the group.
By setting the stage and closing your retrospectives well you can help your team get more value out of retrospectives, and help form a stronger, more effective team. Inspect and Adapt isn’t just about the tasks; it’s about how the team works, too.

Sunday, March 27, 2016

Giving, Taking, and Being Successful

I’ve been making good use of my commute time recently, catching up on reading, and in particular, the stack of physical books on non-fiction topics that are somewhat relevant to my work. I was making good progress, only to have new, interesting stuff cross my path. One Sunday morning in December I caught part of an interview with Adam Grant on On Being. I wasn’t familiar with Adam Grant before this, but I’m extremely glad that I caught the show. I soon got a copy of Give and Take by Adam Grant, and I read his next book Originals shortly after it came out. In the spirit of giving, and of the serendipity that led me to learn about Adam Grant, I’ll also mention some of the other books Give and Take brought to mind.

I read Give and Take by Adam Grant as last year ended. This was a great book to end one year and start another with, as it got me thinking about the value of generosity, not just to others, but also to yourself.

Grant explains how givers (as opposed to takers and matchers) get ahead in the long run and also help their teams succeed. Teams which have people who have a positive attitude towards helping others in small ways often do better in the end, and in the long run the helpers are more successful too. This goes contrary to the idea that the way that you make progress is to focus on what you need to do. The reality is that for most complex knowledge work, you can’t do it all yourself. As Austin Kleon’s Steal Like an Artist says, the best creativity is inspired by the work of others. Helping others both enables the larger unit to make forward progress, as well as making it easier for you to get help with your work when you need it.

The idea of people who make the team better, even when their short-term contributions don’t seem as significant, brings to mind the "Catalysts" mentioned by Tom DeMarco in Peopleware. (Slack, another book by DeMarco, also came to mind because of its discussion of the willingness of volunteers to contribute to efforts.) This book also brought to mind another recent book, Invisibles, which discusses the “invisible” people who make things happen, and who are happy to be out of the spotlight. I don’t know if I can say that all invisibles are givers, but I would not be surprised if that were true.

Giving can have limits. Many people struggle with how to balance the idea that being helpful and generous is good with not overcommitting themselves. Grant explains how to be a giver and not overextend yourself. Likewise, givers often have a hard time taking care of themselves, but can advocate for themselves by leveraging their tendencies to advocate for others. Both approaches involve an “otherish” strategy, which is one of the more interesting (of many interesting) concepts in the book.

To those familiar with Jerry Weinberg, this will seem related to the Airplane Mask metaphor in Secrets of Consulting. Grant gives a more detailed model of how to think it through. Weinberg’s metaphor is still good to keep front of mind, though.

This book resonated with me on many levels. There are lessons here that will help me in my roles as an agile software developer, manager, member of my town community, and member of my UU church community. The information here resonates with, and explains, many things I've learned from Gerald Weinberg about technical leadership (as in the book Becoming a Technical Leader), from Gil Broza about the agile mindset, and from many other useful things I've read about how to be an effective team member.

This book will help you to understand why that's true, how you can be a more effective giver, and how to encourage others to give, so that you can be part of a more effective team or community. As Adam Grant says, we need more givers.

update March 28, 2016: Fixed reference to the correct Tom DeMarco book. I mentioned Slack. I meant Peopleware.



Saturday, March 22, 2014

Estimation as a Requirements Exercise

I explored the role of estimation in a couple of articles on Techwell recently.

In the first article I discussed how teams balance the cost of estimation (in terms of the time it takes) with the value it brings to the project. Some argue that estimation isn't very useful at all, while others say that it can be useful, but that all stakeholders may not have the same vision of the value estimation brings.

In a follow-up article I explore the debate about whether to estimate in hours, which reflect effort and time, or points, which reflect complexity. This is a common debate, since many articles on agile advocate points, both to step away from the concept of time estimates and focus on the complexity of a feature, and to help teams move towards the model of the estimate being valid for the team and not just a particular person. Stakeholders, on the other hand, are often concerned about schedule and deadlines, so they tend to prefer hours initially.

Different teams will come to different decisions about what works best for them. To me the most important part of the estimation discussion is the part many teams don't have, namely asking (and answering) the question of why they are estimating, and what value the estimates bring to the project given that they are now working with an agile process.

As I think more about what value estimation has brought to teams that I have worked on, I realize that the biggest value is that of having a discussion of scope. When you have a planning meeting, a few questions come up:


  • Why people have different estimates.
  • Why the estimate is larger, or smaller, than the product owner expected.
  • Whether the team really understands what the story means.
Given this, I'm leaning towards thinking of estimation as being more about requirements definition than velocity, effort, or even complexity. To that end, maybe the approach to use is one where you spend all of your planning effort defining stories in terms of small, fixed-size units, and your velocity is about how many stories you finish off of a prioritized backlog. I link to a description of what this means in the points and hours article.

I'm interested in hearing about creative approaches your teams have used for estimation. Please comment on the Techwell articles, or here if you want to share lessons you have learned.


Saturday, March 10, 2012

Have the Orders Changed?

One of the great things about being a parent is that you have an excuse to re-read some classic books. My five year old and I have been reading The Little Prince, and the story of the Lamp Lighter reminded me of a common problem teams have with organizational inertia when trying to transition to agile software development.

For those who haven't read the story, or who don't recall the details, the Little Prince relates the story of his journey to various planets. On one planet he encounters a Lamp Lighter who lights and extinguishes a lamp every minute, not having any time to rest. When the Little Prince asks how this absurd situation came to be, the Lamp Lighter explains:
"Orders are orders..." "It's a terrible job I have. It used to be reasonable enough. I put the lamp out mornings, and lit it after dark. I had the rest of the day for my own affairs and the rest of the night for sleeping." 
"And since then orders have changed?" 
"Orders haven't changed," the Lamp Lighter said. "That's just the trouble! Year by year the planet is turning faster and faster, and the orders haven't changed!"
This is a very apt metaphor for many teams that are trying to adopt agile without re-examining their entire application lifecycle. The team may be developing using good technical practices, but the requirements management process is too heavyweight to keep the backlog populated, or the release management policies discourage frequent deployments, leading to codeline policies that place a drag on the team. In addition to organizational resistance to change, some team members might also be using development tools and practices that make it hard for other agile practices to work as well as they could.

It's important to remember three things when you are trying to be more agile:
  • Agile is a system-wide change. Changing practices in one part of the development lifecycle will quickly reveal roadblocks in other parts.
  • It's important to periodically examine how well your team is doing. Iteration retrospectives are an important part of improving how you work.
  • When reviewing how you work, the practices that are now problematic are often not "bad" per se; they just don't apply to your current situation.
Jerry Weinberg makes a great point about old rules in his Quality Software Management: First-Order Measurement where he says:
Survival rules are not stupid; they are simply over generalizations of rules we once needed for survival. We don't want to simply throw them away. Survival rules can be transformed into less powerful forms, so that we can still use their wisdom without becoming incongruent. 
"Survival" sounds a bit strong until you consider that the motivation behind working a certain way is often a desire not to fail, and sometimes, "failing" is scarier when you fail when not following established practices. That's why applying techniques such as those in Agile Retrospectives: Making Good Teams Great can make it easier for teams to consider how they can improve.

Standards and conventions ("orders") can be helpful to working effectively, but it's important to review those orders from time to time to see if they still apply.


Thursday, December 8, 2011

Questioning before Answering

The other day I came across a short video in which a parent is faced with answering an unexpected question posed by her young child. I found this video amusing because, being the parent of a kindergartener, I expect to be faced with many awkward moments like this in the future. I also found it an interesting metaphor for software requirements gathering.

In the video the parent could have answered the question a whole lot more simply had she just taken a moment to try to understand what the real question was, and not taken the question at face value. (I'm being purposely vague at this point in case anyone wants to watch the video.)

Most of us have been on projects where much effort has been spent in an attempt to solve a customer's stated problem, only to discover that either:

  • The program didn't actually solve any of the customer's real problems.
  • The team didn't solve the problem that the customer wanted solved.
  • There was a much simpler solution.
I'm not suggesting that exhaustive requirements capture is the answer. Nor am I suggesting that you should not start building software until you have a complete understanding of a problem. The reason that agile methods work well is that they acknowledge uncertainty, and provide a simple path to refining understanding: evaluating working software.

Acknowledging uncertainty does not mean ignoring information. In some cases customers might not really understand what they need, and the best way for a team to progress is to take a guess based on the information they have, and iterate on a working system. In many cases a customer will know the reason they want to do something, but may not be able to step back enough to express it. In these cases, by not trying to ask "why?" your team is ignoring information that can help them do their work. In more cases than not, a customer can fill in the pieces of the user story template:

As a (ROLE) I want to (DO SOMETHING) so that (REASON)

and provide insight into the problem that she wants to solve. This will save time, effort, and help the team deliver a better system.

If your team takes customer requirements statements at face value without exploring the reasons why a customer wants a piece of functionality, they aren't holding up their end of the collaboration part of agile software development. 

Had the parent in the video thought to ask why the child asked the question that she did, the process of answering might have been simpler.


Sunday, June 7, 2009

The Value of Agile Infrastructure

Engineering practices such as Continuous Integration, Refactoring, and Unit Testing are key to the success of agile projects, and you need to develop and maintain some infrastructure to support these engineering practices. Some teams struggle with how to fit work that supports agile infrastructure into an agile planning process.

One approach is to create "infrastructure" or "technical" stories for tasks that are not directly tied to product features. In these stories the stakeholder may be a developer. For example, a user story for setting up a continuous integration server can go something like:

As a developer I can ensure that all the code integrates without errors so that I can make steady progress.

While not all work on a project leads directly to a user-visible feature, thinking of infrastructure stories differently than other stories has a number of risks, and can compromise the relationship between the development team and other stakeholders. Agile methods are about delivering value as measured by the product owner (though the team has input in helping the owner make the right cost/benefit decisions). Thinking about tools and infrastructure as having value that is independent of end-user value subjects you to the same risks that Big Design Up Front does: wasting effort and not delivering value.

I'm a fan of being able to track everything an agile team does to something that can create value for the product owner. This may be a bit idealistic, but having this ideal as a starting point can be useful; even if you just try to focus on delivering value you will be more likely to make the right decisions for your team and your project.

One way to make the relationship between "infrastructure" work and product value clear is to recast infrastructure stories with a focus on value to the product owner. For example, consider the example above for setting up a CI system:

As a project manager, I want to identify when changes break the codeline so that mistakes can be fixed quickly, and the code stays healthy.

This may not be the perfect example, but when I think of infrastructure items this way I have a better understanding of why I'm considering a change, and it prepares me to advocate for the work in case there is push-back. You can apply this approach to other "technical" items such as:
  • Investigating a new testing framework.
  • Refactoring (in the context of implementing a story).
  • Upgrading your SCM system.
  • Setting up a wiki.
  • Adding reporting to your Maven build.
The benefits of considering the value of technical tasks to all stakeholders, and not just developers, include:
  • A better relationship between engineers and the other stakeholders on a project.
  • A better allocation of resources: If you can't explain the value of something there may be a better solution, or at least a less costly one.
  • A better understanding of how to use engineering techniques to deliver value.
I admit that this approach has some challenges. The value of technical tasks can be difficult to explain, and the benefits are often long-term while the costs are short-term. Even if your project dynamics require you to address infrastructure in some other, more indirect, way you can benefit by starting to think in terms of how what you want to do adds value. Software engineering is a means towards the end of solving business problems, and as engineers we should be able to explain what we're doing to any informed stakeholder.

Friday, May 29, 2009

Really Dumb Tests

When I mention the idea of automated unit testing (or developer testing, as J.B. Rainsberger calls it in his book JUnit Recipes), people either love it or, more likely, are put off: the tests they want to write are too hard, and the tests they can write seem too simple to be of much value. With the exception of tests that "test the framework" you are using, I don't think one should worry prematurely about tests being too simple to be useful. I've seen people (myself included) spin their wheels over a coding error that could have been caught by one of these "simple" tests.

A common example is the Java programmer who accidentally overrides hashCode() and equals() in such a way that two objects that are equal do not have the same hash code. This causes mysterious behavior when you add items to a collection and try to find them later (and you don't get a "HashCodeNotImplementedCorrectly" exception). True, you can generate hashCode() and equals() with your IDE, but it's also trivial to test for this, even without a framework that does the hard work for you.
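To make this concrete, here is a minimal sketch of such a test. The AccountId class is hypothetical, invented for illustration; the check itself is just the hashCode()/equals() contract and the HashSet lookup that the classic bug breaks:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical value class used only for illustration.
class AccountId {
    private final String value;

    AccountId(String value) { this.value = value; }

    @Override
    public boolean equals(Object o) {
        return o instanceof AccountId && value.equals(((AccountId) o).value);
    }

    // Forgetting this override is the classic bug: two equal AccountIds
    // would then land in different hash buckets.
    @Override
    public int hashCode() { return Objects.hash(value); }
}

public class HashCodeContractTest {
    // The "really dumb" test: equal objects must have equal hash codes,
    // and a HashSet must be able to find an equal key.
    static boolean contractHolds() {
        AccountId a = new AccountId("42");
        AccountId b = new AccountId("42");
        if (!a.equals(b)) return false;
        if (a.hashCode() != b.hashCode()) return false;
        Set<AccountId> ids = new HashSet<>();
        ids.add(a);
        return ids.contains(b);
    }

    public static void main(String[] args) {
        System.out.println("contract holds: " + contractHolds());
    }
}
```

If someone later edits equals() without touching hashCode(), this dumb test fails immediately rather than letting the bug surface as a collection mysteriously "losing" elements.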

So when I start working with a system that needs tests, I try not to worry too much about how "trivial" the tests seem initially. I was once working on an Interactive Voice Response (IVR) application that we tested by dialing in to a special phone number. We'd go through the prompts, and about half-way into our test we'd hear "syntax error." While I really wanted to write unit tests around the voice logic, we didn't have the right tools at the time, so I tried to see if I could make our manual testing more effective.

This IVR system was basically a web application with a phone instead of a web browser as the client, and VXML instead of HTML as the "page description" language. A voice node processed the touch-tone and voice input and sent a request to a J2EE server, which sent back VoiceXML telling the voice node what to do next. The app server generated the VXML dynamically, and we'd sometimes generate invalid XML. This is a really simple problem, one that was both easy to detect and costly when we didn't find it until integration testing.

I wrote a series of tests that made requests with various request-parameter combinations and checked whether the generated VoiceXML validated against the DTD for VXML. Basically:

// Request a page from the app server and validate the VXML it returns.
String xmlResponse = appServer.login("username", "password");
try {
    XMLUtils.validate(xmlResponse);
} catch (Exception e) {
    fail("Invalid VXML: " + e);
}
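For context, a validate() helper like the one above can be written with nothing but the JDK's built-in XML APIs. This is a sketch, not the original project's code; the class and method names are my own, and it assumes the document declares a DOCTYPE for the parser to validate against:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.ErrorHandler;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class XMLValidator {
    // Parses the document with DTD validation turned on; throws on any
    // well-formedness or validation error.
    public static void validate(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setValidating(true); // validate against the DOCTYPE's DTD
        DocumentBuilder builder = factory.newDocumentBuilder();
        // By default validation errors are only reported; escalate them
        // to exceptions so the test fails loudly.
        builder.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) {}
            public void error(SAXParseException e) throws SAXException { throw e; }
            public void fatalError(SAXParseException e) throws SAXException { throw e; }
        });
        builder.parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        // A tiny document with an inline DTD, standing in for real VXML.
        validate("<?xml version=\"1.0\"?>"
            + "<!DOCTYPE note [<!ELEMENT note (#PCDATA)>]>"
            + "<note>hello</note>");
        System.out.println("document accepted");
    }
}
```

A real VXML response would reference the external VoiceXML DTD rather than an inline one, but the mechanics are the same: if the generated markup doesn't validate, the test fails before anyone dials a phone number.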

Initially people on the team dismissed this test as useless: they believed they could write code that generated valid XML. But once the test started to pick up errors during the integration build, the team realized its value in making the manual testing process more effective.

Even though people are careful when they write code, it's easy for a mistake to slip into a complex system. If you can write a test that catches an error before integration testing starts, you'll save everyone a lot of time.

I've since found a lot of value in writing tests that simply parse a configuration file and fail if there is an error. Without a test like this, the problems manifest in some seemingly unrelated way at runtime. With a test, you know the problem is in the parsing, and you also know what you just changed. And once a broken configuration is committed to your version control system, it slows down the whole team.

If your application relies on a configuration resource that changes, take the time to write some sanity tests to ensure that the app will load correctly. These are, in essence, Really Dumb smoke tests. Having written these dumb tests, you'll feel smart when you catch an error at build time instead of when someone else is trying to run the code.
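As a sketch, here is what such a smoke test can look like for a Java properties file. The keys and the inline configuration text are made up for illustration; in a real build the test would load the file your application actually ships:

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigSmokeTest {
    // Parse the configuration text; any exception means the config is broken.
    static Properties load(String text) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }

    public static void main(String[] args) throws Exception {
        // In a real build this would read, e.g., src/main/resources/app.properties.
        String config = "db.url=jdbc:h2:mem:test\ndb.poolSize=10\n";
        Properties props = load(config);

        // Really dumb checks: the file parsed, required keys exist,
        // and numeric values are actually numbers.
        if (props.getProperty("db.url") == null)
            throw new AssertionError("missing db.url");
        Integer.parseInt(props.getProperty("db.poolSize"));
        System.out.println("config OK");
    }
}
```

The test does almost nothing clever, which is the point: it moves "the config file is broken" from a runtime mystery to a build-time failure with an obvious cause.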
