Showing posts with label tools. Show all posts

Friday, September 21, 2012

Groovy Dependencies, Grape, and Firewalls

Groovy is a powerful and fun language. One of its nice features, particularly useful for internal tools, is the ability to deliver scripts that are self-contained, with the only prerequisite being that you have Groovy installed. One of the tools that enables this is the Grape dependency management mechanism. Using Grape, rather than delivering a zip file with all dependencies, or requiring that modules be installed as part of a global system installation, you can add a line like this to your script:

@Grab(group='org.springframework', module='spring', version='2.5.6')

This will grab the dependency into a local repository if it is not already present. It makes the first run of a script a bit slower, but it greatly simplifies the delivery of short, script-based tools.

If you are in a production environment behind a firewall, you might find that this mechanism doesn't work because you can't access the public repositories. You can address this by adding an annotation to the file that adds your repository to the list:

@GrabResolver(name='restlet', root='http://maven.mycompany.com/proxy')

When I first tried this, it seemed to take a very long time for the script to run, with no output.

To diagnose the problem, I needed the grab resolver to tell me what it was doing. Looking at the code, the way to do this was to set the ivy.message.logger.level system property.

My command line to get detailed output was:

groovy -Divy.message.logger.level=4 Script.groovy

This gave me enough output to see that Grape was looking in the local repository last, after timing out when trying the standard repositories. The easiest fix I found was to create a $HOME/.groovy/GrapeConfig.xml file, as described in the Grape documentation. The new local config has only one ibiblio repository element, which points at the company repository manager.

<ivysettings>
  <settings defaultResolver="downloadGrapes"/>
  <resolvers>
    <chain name="downloadGrapes">
      <filesystem name="cachedGrapes">
        <ivy pattern="${user.home}/.groovy/grapes/[organisation]/[module]/ivy-[revision].xml"/>
        <artifact pattern="${user.home}/.groovy/grapes/[organisation]/[module]/[type]s/[artifact]-[revision].[ext]"/>
      </filesystem>
      <ibiblio name="proxy-repository" root="http://maven.mycompany.com/proxy/" m2compatible="true"/>
    </chain>
  </resolvers>
</ivysettings>


This sped things up significantly.

There may be another way to address the problem, and I welcome feedback. But since this wasn't obvious from the documentation, I thought it might be worth sharing.

Note: this is based on Groovy 2.0.2.

Monday, March 14, 2011

Using Issue Tracking Systems Well

Many of us in the agile community have mixed emotions about issue tracking and issue tracking systems. Tracking is good, but issue tracking systems can become time sinks if not used well. I shared some thoughts on this topic in an article on StickyMinds.com. Enjoy, and please comment (either here or on the StickyMinds site).

Thursday, December 31, 2009

Continuous Integration of Python Code with Unit Tests and Maven

My main development language is Java, but I also do some work in Python for deployment and related tools. Being a big fan of unit testing, I write unit tests in Python using PyUnit. Being a big fan of Maven and continuous integration, I really want the Python unit tests to run as part of the build. I wanted a solution that met the following criteria:
  • Use commonly available plugins.
  • Keep the Maven structure, with test and source files in the appropriate directories.
  • Have the tests run in the test phase and fail the build when the tests fail.

The simplest approach I came up with was to use the Exec Maven Plugin, adding the following configuration to your (Python) project's POM:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>python-test</id>
      <phase>test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>python</executable>
        <workingDirectory>src/test/python</workingDirectory>
        <arguments>
          <argument>unitTests.py</argument>
        </arguments>
        <environmentVariables>
          <PYTHONPATH>../../main/python:$PYTHONPATH</PYTHONPATH>
        </environmentVariables>
      </configuration>
    </execution>
  </executions>
</plugin>

This works well enough. Setting the PYTHONPATH environment variable allows your PyUnit tests to find the modules you are building in the project. What's less than ideal is that, unlike with other Maven plugins, the person running the build needs to have Python installed and configured correctly. You can allow for some variation between environments, and if you have a developer on your project who doesn't use Python, and doesn't want to, there is a property you can set on the exec plugin to skip the tests. In the end, only those who use Python, and the continuous integration server, need the correct things installed.
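For completeness, the unitTests.py script named in the POM can be a small discovery runner. This is only a sketch of one way to write it (it assumes Python 2.7+ for unittest discovery and a test*.py naming convention); the key detail is that the script exits nonzero when tests fail, which the exec plugin turns into a failed build:

```python
# unitTests.py -- a sketch of the entry point named in the POM above.
# Requires Python 2.7+ (or the unittest2 backport) for test discovery.
import sys
import unittest

def run_tests(start_dir=".", pattern="test*.py"):
    """Discover and run all matching PyUnit test modules under start_dir.
    Returns True only when every test passed."""
    suite = unittest.TestLoader().discover(start_dir=start_dir, pattern=pattern)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

def main():
    # The exec-maven-plugin fails the build when the executable exits
    # nonzero, which is what ties a red test run to a failed test phase.
    return 0 if run_tests() else 1

# When run as a script (python unitTests.py), wire it up with:
#     sys.exit(main())
```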

This may be obvious to some, if not many, but in case anyone is looking for a way to run Python unit tests as part of a Maven build, I hope this is helpful.

Saturday, November 28, 2009

Silver Bullets and Simple Solutions

Recently I was reading the Boston Globe and saw a letter to the editor about problems with H1N1 vaccine distribution, lamenting that "We’re eight months into a pandemic, and it seems that all the government can do is tell us to wash our hands!" While I understand the writer's frustration about the availability of vaccines, and the technology used to produce them, I was struck by the writer's attitude that a simple solution couldn't possibly be effective. I'm not a medical professional, but from what I've read, hand washing, while mundane sounding, is effective in preventing the spread of disease. Since I am a software development professional, it also struck me that the attitude that the more exotic solution is always better is common in software development as well.

When people ask me about improving their release management process to be more agile,  they are disappointed when I don't focus on the latest SCM tool, but rather talk about approaches like unit testing, small and frequent commits and updates, continuous integration, and the like. All of these things are dismissed as simple and even mundane. Yet, if they are simple, why not do them? The truth is that the best way to make your release management process more agile is to have fewer codelines and keep the codeline you want to release working. There is no real magic to that, other than the magic that happens organically when a team is disciplined and committed to keeping the code high quality.

Ever since Fred Brooks wrote No Silver Bullet (which appears in The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition)), software developers have been looking for technological ways to make projects go more smoothly. Technology helps, to be sure. A good diff tool can help you merge code when you really have to, and an SCM tool that supports staged integration well can help you get to the point where you can trust your team to keep the codeline working. But the best approach is to make small, incremental changes, and to build small, comprehensible modules.

In the same vein as what the Pragmatic Programmers tell us in The Pragmatic Programmer: From Journeyman to Master, and as Bob Martin provides further guidance on in Clean Code: A Handbook of Agile Software Craftsmanship, there is a lot you can do to make your team more agile through small changes, keeping things working along the way, and iterating. Think of tools as a way to support how people work, not the other way around. Tools help you work effectively, but a good tool can't replace good technique.

Saturday, September 26, 2009

IDEs and Builds Scripts (September Better Software)

I wrote an article for the September/October issue of Better Software Magazine, IDEs and Build Scripts in Harmony, where I discuss how to use build tools and IDEs together while minimizing duplicate work and getting the benefits of both.

As usual, the editors did a great job selecting metaphorically appropriate artwork, and there is lots of other good content in the issue, including an interesting commentary by Lee Copeland on testing, Scrum, and "testing sprints."

I won't repeat anything I say in the article (since I already said it), but I want to add some philosophical comments. The question of working in environments that use IDEs for development and integration builds that are driven by other tools is one that I care about because it involves some of the issues that often cause a lot of angst and discussion on development teams:
  • Standardization and the effects of tools on individual and team productivity, and
  • Repetition: having the same information in two places.
Many of these problems can be solved by tools that separate information appropriately: for example, keeping dependency information in build scripts and formatting information in IDE settings, so you don't need duplicate configurations checked in. Or even having some canonical way to describe formatting rules, kept with the build scripts in a form that IDEs can leverage. This frees developers to use the tool they are most accustomed to (and productive in), while maintaining the amount of standardization that helps teams be more productive.

I hope that you find the article interesting and thought-provoking.



Note: There was a production error that caused an old bio to show up in the print edition. I work at Humedica, not where the bio says.

Sunday, July 5, 2009

Testing Cost Benefit Analysis

I'm probably one of the first people to advocate writing tests, even for seemingly obvious cases. (See, for example, Really Dumb Tests.) But there are some cases where I suggest that testing might best be skipped: cases where tests not only have little value but also add unnecessary cost to change. It's important to honor the principles of agile development and not let the "rule" of a test for everything get in the way of the "goal" of effective testing for higher productivity and quality.

Writing a unit test can increase the cost of a change (since you're writing both the code and the test), but the cost is relatively low because of good frameworks, and the benefits outweigh the costs:
  • The unit test documents how to use the code that you're writing,
  • The test provides a quicker feedback cycle while developing functionality than, say, running the application, and
  • The test ensures that changes that break the functionality will be found quickly during development so that they can be addressed while everyone has the proper context.
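To make the first two benefits concrete, here is what such a documenting unit test might look like in PyUnit; the function under test is invented purely for illustration:

```python
import unittest

def normalize_name(name):
    # The (hypothetical) code under test: collapse whitespace, title-case.
    return " ".join(name.split()).title()

class NormalizeNameTest(unittest.TestCase):
    """Reads as documentation: it shows exactly how callers are meant to
    use the function and what they can rely on, and it runs in milliseconds,
    giving much faster feedback than launching the application."""

    def test_collapses_whitespace_and_title_cases(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")

    def test_empty_input_yields_empty_string(self):
        self.assertEqual(normalize_name(""), "")
```

Run it with python -m unittest, or let whatever runner your build uses pick it up.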
Automated integration tests, especially those involving GUIs, are harder to write, and they cover code that was likely already tested with unit tests. So it's easy to stumble onto cases where the tests add so little value, at enough cost, that it's worth reconsidering the need for an automated test in that particular case.

On one project I worked on, the team was extremely disciplined about doing Test Driven Development. Along with unit tests, there were integration tests that exercised the display aspects of a web application. For example, a requirement that a color be changed would start with a test that checked a CSS attribute, and a requirement that two columns in a grid be swapped would result in a test that made assertions about the rendered HTML.

The test coverage sounded like a good idea, but from time to time a low-cost (5 minute), low-risk change would take much longer (an hour), as tests needed to be updated and run, and unrelated tests would break. And in many cases the tests weren't comprehensive measures of the quality of the application: I remember one time when a colleague asserted that it wasn't necessary to run the application after a change, since we had good test coverage, only to have the client inquire about some buttons that had gone missing from the interface. Also, integration-level GUI tests can be fragile, especially if they are based on textual diffs: a change to one component can cause an unrelated test to fail. (Which is why isolated unit tests are so valuable.)

I suspect the reasons for the high cost/value ratio for these UI-oriented tests had a lot to do with the tools available. It's still a lot easier to visually verify display attributes than to automate testing for them. I'm confident that tools will improve. But it's still important to consider cost in addition to benefit when writing tests.

Some thoughts:
  • Integration (especially GUI) tests tend to be high cost relative to value.
  • When in doubt, try to write an automated test. If you find that maintaining the tests, or their execution time, adds cost out of proportion to the value of a functionality change, consider another approach.
  • GUI tests can be high cost relative to value, so focus on writing code where the view layer is as simple as possible.
  • If you find yourself skipping GUI testing for these reasons, be especially diligent about writing unit tests at the business logic level. Doing this may drive you to cleaner, more testable interfaces.
  • Focus automated integration test effort on key end-to-end business functionality rather than visual aspects of an application.
Applications need to be tested at all levels, and automated testing is valuable. It's vital to have some sort of end-to-end automated smoke test. Sometimes there is no practical alternative to simply running and looking at the application. Like all agile principles, testing needs to be applied with a pragmatic perspective, and a goal of adding value, not just following rules blindly.
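One way to act on the "keep the view layer simple" advice is to pull display decisions out into plain functions that unit tests can reach without rendering anything. A sketch in Python, with the domain and all names invented for illustration:

```python
# Sketch: a thin "view model" computes what to display; the template layer
# only echoes it. The decisions become unit-testable with no GUI driver
# and no fragile rendered-HTML diffs.

def row_css_class(balance):
    """Decide how an account row should be styled."""
    if balance < 0:
        return "overdrawn"
    if balance == 0:
        return "empty"
    return "ok"

def visible_columns(user_is_admin):
    """Decide which grid columns to render, and in what order."""
    columns = ["name", "balance"]
    if user_is_admin:
        columns.append("audit-trail")
    return columns
```

With this split, swapping two columns is a one-line change to a list (and a one-line test change), rather than an update to assertions about rendered markup.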

Monday, May 25, 2009

IDEs in March

In March I wrote an article for CM Crossroads arguing that as long as a team and its individuals are productive, there isn't a lot of sense in imposing a standard IDE on a team. Organizations sometimes go overboard with standards. As long as a developer is productive and doesn't cause problems for others, why should anyone care what tools she uses? I end the article:
There is a difference between consistency in important things, which is valuable, and conformity, which is often mistaken for consistency. Focus on delivering consistent results, and respect that a team will know how to get there.

I've been thinking a fair amount about how to balance IDEs and build configurations, since this seems to be a problem teams struggle with often, though it is getting better as IDEs can model their project settings on the information in Maven POM files and the like.

Read Beware the IDEs.
