The Dichotomy of Change Control and Quality Software
It may seem counter-intuitive to the old guard that change control can actually negatively affect the quality of software, but when implemented in such a way that it dramatically increases the cost and effort of making software improvements, it leads a team down a path of mediocrity.
You see, good developers naturally want to fix things that they find wrong with the software. Tweak it to be a little faster. Make a button bigger so it’s easier to see. Change the alignment of some form labels. There are endless ways that good engineers will reflect upon their work or the work of their peers and find better ways to deliver a superior user experience. Good teams will also find ways to quickly fold user feedback into the UX design and improve the usability of the system.
Indeed, there are times when changes are drastic and require that proportionally sufficient effort be put into testing, but there are also times when changes are small and well contained, and running the same process used for big changes simply yields frustration and drives a team to ultimately “settle” for fear of waking the slumbering giant of change control.
Consider a doctor performing open heart surgery. Of course, the protocols for sanitation and safety checks are going to be far more rigorous than those for administering a flu shot. But this makes sense because the risk is far, far greater with one than the other. If a doctor’s protocols for preparing for surgery were one-size-fits-all and applied to administering a flu shot, I suspect the quality of care in the US would decline drastically while the cost skyrocketed! Or consider the permits and documentation required if you want to add a walkout entrance to your basement. Of course you will need engineering specs, signed permits, and so on, because the risk is high. No one would think of requiring certification from a structural engineer and permits from your municipality to hang a picture frame on your wall, because that would be absurd!
Likewise, common sense, pragmatism, and a sensible balance are requirements for any software quality process; a one-size-fits-all approach creates a wall of effort in front of even minor tweaks, which means the small fixes that can make a big difference in the usability and functionality of the system aren’t made for fear of generating a ton of e-paperwork. The Broken Windows Theory, introduced by James Q. Wilson and George L. Kelling and later discussed by Steven D. Levitt and Stephen J. Dubner in Freakonomics, explains this phenomenon:
In an anonymous, urban environment, with few or no other people around, social norms and monitoring are not clearly known. Individuals thus look for signals within the environment as to the social norms in the setting and the risk of getting caught violating those norms; one of those signals is the area’s general appearance.
Under the broken windows theory, an ordered and clean environment – one which is maintained – sends the signal that the area is monitored and that criminal behavior will not be tolerated. Conversely, a disordered environment – one which is not maintained (broken windows, graffiti, excessive litter) – sends the signal that the area is not monitored and that one can engage in criminal behavior with little risk of detection.
In the education realm, the broken windows theory is used to promote order in classrooms and school cultures. The belief is that students are signaled by disorder or rule-breaking and that they, in turn, imitate the disorder.
When change is made expensive, the niggling things here and there in the code and UX that don’t get fixed simply accumulate over time, promoting discord and disorder. When the process discourages innovation and excellence, a team ends up mired in mediocrity with no real quality to show for it. It’s great that there’s now a trail of paperwork long enough to circle the Earth for the changes that were made, but the real shame is the fixes, improvements, and ideas that weren’t implemented because of the cost in paperwork.
Again, this is not to say that no quality processes or change control should exist, but that their application must be proportional to the risk and pragmatic about the priorities of the project.
One way to solve this problem is rigorous automation, for it enables teams to condense these processes into automated checks that take minutes, not hours, to execute. It enables teams to conceptualize, develop, and deliver more rapidly through continuous deployment.
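For illustration, here is a minimal sketch of what such an automated, risk-proportional gate could look like; the script, commands, file layout, and size threshold are assumptions made up for this example, not LinkedIn’s pipeline or any particular team’s tooling.

    # deploy_gate.py -- hypothetical sketch of an automated, risk-proportional release gate.
    # Assumptions: tests are split into a fast smoke suite and a slower full suite,
    # and the size of the diff is used as a rough proxy for the risk of the change.
    import subprocess
    import sys

    def changed_lines() -> int:
        """Rough count of lines changed versus the main branch (assumes a git checkout)."""
        out = subprocess.run(
            ["git", "diff", "--numstat", "origin/main"],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, deleted, _path = line.split("\t", 2)
            # Binary files report "-" for the counts; treat them as zero here.
            total += int(added) if added.isdigit() else 0
            total += int(deleted) if deleted.isdigit() else 0
        return total

    def gate(cmd: list[str]) -> None:
        """Run one pipeline step and abort the release if it fails."""
        if subprocess.run(cmd).returncode != 0:
            sys.exit("gate failed: " + " ".join(cmd))

    if __name__ == "__main__":
        size = changed_lines()
        # Every change gets the fast checks: minutes, not hours.
        gate(["pytest", "tests/smoke", "-q"])
        if size > 200:  # arbitrary threshold -- tune to the project's risk tolerance
            # Large change: proportionally heavier verification before shipping.
            gate(["pytest", "tests", "-q"])
            gate(["./run_integration_suite.sh"])
        gate(["./deploy.sh", "production"])

A gate along these lines keeps the “flu shot” changes cheap while reserving the “open heart surgery” scrutiny for the changes that genuinely warrant it.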
Wired had a good article on how LinkedIn is able to build and release new features quickly:
LinkedIn is a Wall Street darling, its stock up more than threefold in two years on soaring revenue, spiking profits, and seven straight quarters beating bankers’ estimates. But LinkedIn’s success isn’t just about numbers: an impressive acceleration of LinkedIn’s product cycle and a corresponding revolution in how LinkedIn writes software is a huge component in the company’s winning streak.
Much of LinkedIn’s success can be traced to changes made by Kevin Scott, the senior vice president of engineering and longtime Google veteran lured to LinkedIn in Feb. 2011, just before the buttoned-down social network went public. It was Scott and his team of programmers who completely overhauled how LinkedIn develops and ships new updates to its website and apps, taking a system that required a full month to release new features and turning it into one that pushes out updates multiple times per day.
By creating the tools and processes and training the team to operate under continuous deployment, LinkedIn was able to quickly bring concepts and ideas to life; it is because the cost of making changes and improvements to the software has become so low that they can be made readily and without friction.
True software quality can never be achieved solely through heavy-handed processes (unless they are automated!); such processes are simply paper tigers that create the appearance of conformance while actually impeding the creation of better software.