That is most probably true if you are not unit testing it or collecting metrics.

One of the most important metrics is the Change Risk Analysis and Predictions (CRAP) index.

The CRAP index is designed to analyze and predict the amount of effort, pain, and time required to maintain code.

It is one of the most useful tools you have to check code quality and identify potentially problematic blocks.

The index is calculated taking into account the cyclomatic complexity of a code block and its coverage by automated tests.
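
In its original formulation, the index for a given method m is CRAP(m) = comp(m)^2 × (1 − cov(m)/100)^3 + comp(m), where comp(m) is the cyclomatic complexity of m and cov(m) is the percentage of its code exercised by automated tests.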

Cyclomatic complexity is a widely used metric, calculated as one plus the number of decision points (such as ifs, loops, and case branches) in the code block.
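
As a minimal, hypothetical illustration, the following Java method has three decision points (an if, a loop condition, and another if), so its cyclomatic complexity is 1 + 3 = 4:

    // Hypothetical method with three decision points: complexity = 1 + 3 = 4.
    class ComplexityExample {
        static String classify(int[] values) {
            if (values == null) return "empty";     // decision point 1
            for (int v : values) {                  // decision point 2
                if (v < 0) return "has negatives";  // decision point 3
            }
            return "non-negative";
        }
    }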

The higher the CRAP index, the crappier your code is.

As you might guess, a complex code block with lots of decision points will have a high CRAP index; that is where automated test coverage comes to the rescue.

If your code is 100% covered by unit tests, you can change it with confidence: if a modification breaks existing behavior, the tests will fail and tell you.

The CRAP index measures the effort needed to maintain and change code: the higher the test coverage, the lower your CRAP index will be, because the risk involved in changing the code is also lower.
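
To make that concrete, here is a minimal sketch in Java of the formula above; the class name and the numbers are illustrative, not taken from any particular tool:

    // A minimal sketch of the CRAP formula: comp^2 * (1 - cov/100)^3 + comp.
    public class CrapIndex {
        static double crap(int complexity, double coveragePercent) {
            double uncovered = 1.0 - coveragePercent / 100.0;
            return complexity * complexity * Math.pow(uncovered, 3) + complexity;
        }

        public static void main(String[] args) {
            // Same complexity, very different risk depending on coverage:
            System.out.println(crap(10, 0.0));   // 110.0 -> risky to change
            System.out.println(crap(10, 50.0));  // 22.5  -> better
            System.out.println(crap(10, 100.0)); // 10.0  -> only complexity remains
        }
    }

Note that full coverage does not bring the index to zero: complexity still counts, which matches the intuition that simple, well-tested code is the cheapest to change.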

Tools like PMD (for Java) and PHP Mess Detector (for PHP) will help you a lot in producing better code: simply run them and they will point out the problematic blocks in your code; then you just need to follow their recommendations and refactor a bit.
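
As an illustration, invocations look roughly like this; the exact flags, paths, and rulesets here are assumptions that vary by tool and version:

    # PHP Mess Detector: phpmd <path> <report format> <rulesets>
    phpmd src/ text codesize,unusedcode

    # PMD for Java (version 7 command line; older releases used a different entry point)
    pmd check -d src/main/java -R rulesets/java/quickstart.xml -f text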

As a side note, metrics are just metrics, not some kind of dogmatic religion, and every value should be analyzed in context. But please don't use that argument as an excuse to ignore the obvious: if one of your metrics, like the NPath complexity, is a fifteen-digit number (as I have seen), you can be sure that something is really wrong!


How many of you are using unit tests and metrics as part of the software development process?