Testing Your Own Code

I’m sure you’ve heard people say that developers are not good at testing their own code. People seem to treat this as an axiom: an underlying, fundamental principle of software development to be taken at face value. I don’t buy that.

Now I’m not saying that it’s false. Many software developers have a poor track record when it comes to testing their own code. But why is it that when they ‘throw it over the wall’ to the testing or QA team, the code is often so bad, and so full of problems that are identified quickly?

One reason, thankfully less prominent these days, is ‘Not My Job’ syndrome. I don’t believe any developer can produce quality code without testing it, but I’m sure there are still developers out there who don’t believe it’s their job to test. “That’s QA’s job.”

The second reason is much more common: developers at many organizations are under incredible pressure to meet deadlines. These deadlines are often arbitrary, or set without the developer’s input. I’m not the only one (I hope!) who has given in to pressure and delivered code without adequate testing. Mind you, when funding is tight, sometimes inadequate testing is all you can provide.

But the biggest reason, I think, why developers are not good at testing their own code comes all the way back to poor requirements. How can you test something if you are not even sure what it’s supposed to do? Or if the requirements are vague and don’t cover the edge cases and unusual combinations? Test cases and requirements are closely linked: if the requirements are poor, the test cases will be too. And much like the previous reason, tight funding often results in poor or nonexistent requirements.

Can you think of any other reasons?
