A common example is the Java programmer who accidentally overrides hashCode() and equals() in such a way that two items which are equal do not have the same hash code. This causes mysterious behavior when you add items to a collection and try to find them later (and you don't get a "HashCodeNotImplementedCorrectly" exception). True, you can generate hashCode() and equals() with your IDE, but it's also trivial to test for this, even without a framework that does the hard work for you.
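A few lines of JUnit are enough. Here's a minimal sketch; the Account class is a made-up stand-in for whatever object defines equals() in your system:

// A minimal sketch, assuming JUnit 4; Account is hypothetical.
import org.junit.Test;
import static org.junit.Assert.*;

public class AccountHashCodeTest {

    static class Account {
        private final String name;
        Account(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Account && name.equals(((Account) o).name);
        }

        // Delete this override and the test below fails: equal
        // accounts would land in different hash buckets.
        @Override
        public int hashCode() { return name.hashCode(); }
    }

    @Test
    public void equalObjectsMustHaveEqualHashCodes() {
        Account a = new Account("alice");
        Account b = new Account("alice");
        assertEquals(a, b);
        assertEquals(a.hashCode(), b.hashCode());
    }
}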
So when I start working with a system that needs tests, I try not to worry too much about how "trivial" the tests seem initially. Once I was working on an Interactive Voice Response (IVR) application which we would test by dialing in to a special phone number. We'd go through the prompts, and about half-way into our test we'd hear "syntax error." While I really wanted to write unit tests around the voice logic, we didn't have the right tools at the time, so I tried to see if I could make our manual testing more effective.
This IVR system was basically a web application with a phone instead of a web browser as the client, and VXML instead of HTML as the "page description" language. A voice node processed the touch-tone and voice input and sent a request to a J2EE server, which sent back VoiceXML telling the voice node what to do next. The app server generated the VXML dynamically, and it would sometimes generate invalid XML. This is a really simple problem, one that was easy to detect, yet costly when we didn't find it until integration testing.
I wrote a series of tests that made requests with various request parameter combinations and checked whether the generated VoiceXML validated against the DTD for VXML. Basically:
// appServer wraps a request to the J2EE server; XMLUtils.validate()
// throws if the response is not valid VXML.
String xmlResponse = appServer.login("username", "password");
try {
    XMLUtils.validate(xmlResponse);
} catch (Exception e) {
    fail("Response was not valid VXML: " + e);
}
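I haven't shown XMLUtils here, but a validate() helper along these lines would do the job. This is a sketch using the JDK's validating SAX parser, not the original implementation, and it assumes the response declares a DOCTYPE to validate against:

// Illustrative only: validates a string of XML against the DTD
// declared in its DOCTYPE, throwing on any error.
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class XMLUtils {

    public static void validate(String xml) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        // Turn on DTD validation so the parse fails on invalid VXML,
        // not just on malformed XML.
        factory.setValidating(true);
        factory.newSAXParser().parse(
                new InputSource(new StringReader(xml)),
                new DefaultHandler() {
                    @Override
                    public void error(SAXParseException e) throws SAXParseException {
                        throw e; // treat validation errors as failures
                    }
                });
    }
}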
Initially, people on the team dismissed this test as useless: they believed that they could write code that generated valid XML. Once the test started to pick up errors as it ran during the Integration Build, the team realized the value of the test in increasing the effectiveness of the manual testing process.
Even though people are careful when they write code, it's easy for a mistake to slip into a complex system. If you can write a test that catches an error before an integration test starts, you'll save everyone a lot of time.
I've since found lots of value in writing tests that simply parse a configuration file and fail if there is an error. Without a test like this, the problems manifest in some seemingly unrelated way at runtime. With a test, you know the problem is in the parsing, and you also know what you just changed. Also, once a broken configuration is committed to your version control system, it slows down the whole team.
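Such a test can be just a few lines. Here's a minimal sketch, assuming JUnit 4 and an XML configuration file named config.xml on the classpath (both assumptions; substitute your own file and parser):

import java.io.InputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import static org.junit.Assert.*;

public class ConfigSanityTest {

    @Test
    public void configFileParses() throws Exception {
        // Fail fast if the file is missing from the classpath.
        InputStream config = getClass().getResourceAsStream("/config.xml");
        assertNotNull("config.xml not found on classpath", config);
        try {
            // Any parse error fails the build, long before runtime.
            DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(config);
        } catch (Exception e) {
            fail("config.xml does not parse: " + e);
        }
    }
}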
If your application relies on a configuration resource that changes, take the time to write some sanity tests to ensure that the app will load correctly. These are, in essence, Really Dumb smoke tests. Having written these dumb tests, you'll feel smart when you catch an error at build time instead of when someone else is trying to run the code.