Thoughts From The Trenches (Giant Brain Dump Incoming!)
A random assortment of thoughts (and rants!) from the trenches…
You Know You’re In Trouble When…
- You have to convene three people to figure out how to create an instance of one of the core objects in your framework. I think this is directly related to having an anemic domain model: it just isn’t obvious which “service” you should be calling to set the properties on the object. The whole thing would be easier if you could just call a constructor or a static initializer on the class to get an instance; this is the most basic premise of an object-oriented system (and one that gets thrown by the wayside much too often). Constructors are the most natural way to create an instance of an object; why not use them?
- Your team members are afraid to update their code (in fact, they’ll wait days before updating because it’s always a painful, time-consuming excursion to get your codebase compiling, not to mention getting your environment working afterwards). This could be a symptom of many different ills. In this case, the problem is threefold:
- The source control system is painful to use. The culprit is Accurev; it is perhaps one of the worst source control systems I’ve ever used (not to mention it’s very obscure and uses obtuse terms for common source control actions). A quick search on Dice yields 6 results for the keyword “Accurev” while “svn or subversion” yields some 786 results. Of course, the big problem with this is that it takes an extraordinarily long time to ramp a new addition to the team up on the peculiarities of the source control system. (I still haven’t figured out how to look at changesets, how to run “blame” on a file, or why it’s so slow…)
- There are no automated unit tests for the most basic and important functionality: data access code. The lack of a structured way to unit test your core data access code makes the entire codebase seem… fragile. Changes that aren’t regression tested tend to break things, which tends to ruin productivity. I can understand not testing code that depends on external libraries which are difficult to test (doing that right takes a lot of thinking and work), but I can’t understand why any team wouldn’t test its core data access code.
- There is no software support for tracking breaking changes. What I mean by this is, for example, changes to a database schema or a stored procedure. The standard way some teams “resolve” this issue is by emailing people when a breaking change is entered. However, the problem with email is that it’s easy to forget someone and, even if you remember everyone, it’s not easy to backtrack and find all of the different email notices. For example, if I’m in the process of writing an intense piece of code, I’ll ignore a breaking change and deal with it the next time I update. But by that time, there could be two or three breaking changes. It’s difficult to sort these out in email and much easier to sort them out with some pretty basic software support. On FirstPoint, we used a Trac discussion to track breaking changes. Developers checking in breaking changes were required to document the steps that the other developers would need to take to ensure that the environment remained stable.
- You’re worried about deadlines, but you roll off two people who’ve been working on your project for two years and replace them with one person who’s been working on the project for two months. Fred Brooks’ The Mythical Man-Month covers this pretty succinctly:
adding manpower to a late software project makes it later
The problem is that the new resource cannot possibly have the richness of experience with the existing codebase that is required to be productive right away. In a system that’s sparsely documented (and by that I mean there is no documentation on the core object model), it means that a new developer has to interrupt the workstream of more seasoned developers to get anything done. This is probably okay when the going is slow and steady, but in crunch time, this becomes a big productivity issue. I know I hate being interrupted when I’m in the zone, so I personally hate to interrupt others, but in this scenario, I have no choice since there is no documentation, the codebase is huge, and it’s not at all obvious how to get the data that I need.
- When there are multiple ways to set the value of a property on a core object in your model. What I mean by this is: say I have an object called Document, and somehow there are two or more ways to set the value of VersionId (each getting you a different type of value) when you use a data access object to retrieve an instance. Again, this is a byproduct of an anemic domain model. Because the rules of how to use the object are external to the object itself, the proper usage of the properties becomes open to interpretation, based on the specific service populating the object.
- Your object model is littered with stuff ending in “DAO”, “Util”, “Service”, or “Manager”. It means that you haven’t really thought about your object model in terms of object interactions and the structural composition. These are suffixes that I use only when I can’t think of anything better. More often than not, when I write these classes, they truly are utility classes and are usually static classes. If this is a big portion of your codebase, you have some serious problems.
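To make the anemic-model gripes above concrete, here is a minimal sketch of the alternative. Every class and method name here is invented for illustration, not taken from any real codebase: the object offers one obvious way to be created and one obvious way for its version to change.

```java
// Hypothetical sketch: a core object that creates itself and owns
// its own rules. All names are invented for illustration.
final class Document {
    private final String id;
    private int versionId;

    private Document(String id, int versionId) {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("id is required");
        }
        this.id = id;
        this.versionId = versionId;
    }

    // The one obvious way to get a valid instance; no convening three
    // people to figure out which service performs the incantation.
    static Document create(String id) {
        return new Document(id, 1); // new documents start at version 1
    }

    // The one way to change VersionId: the rule lives in the object,
    // not in whichever service happened to populate it.
    void recordNewRevision() {
        versionId++;
    }

    String getId() { return id; }
    int getVersionId() { return versionId; }
}
```

Compare that to the anemic version, where `new Document()` hands you an empty shell and some combination of services is responsible for filling it in, each with its own idea of what VersionId means.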
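On the untested data access point: even before wiring up a full database-testing harness, the core contract can be pinned down with a plain unit test. A sketch under invented names, with a hand-rolled in-memory fake standing in for the real database-backed implementation (which you would want to run the same test against):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a data access contract and a plain regression
// test against an in-memory fake. All names are invented.
interface DocumentDao {
    void save(String id, String body);
    String findBody(String id);
}

class InMemoryDocumentDao implements DocumentDao {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String body) { rows.put(id, body); }
    public String findBody(String id) { return rows.get(id); }
}

class DocumentDaoTest {
    // Regression test: a save followed by a find must round-trip,
    // and an unknown id must come back null.
    static void testRoundTrip(DocumentDao dao) {
        dao.save("doc-1", "hello");
        if (!"hello".equals(dao.findBody("doc-1"))) {
            throw new AssertionError("round-trip failed");
        }
        if (dao.findBody("missing") != null) {
            throw new AssertionError("expected null for unknown id");
        }
    }

    public static void main(String[] args) {
        testRoundTrip(new InMemoryDocumentDao());
        System.out.println("ok");
    }
}
```

The point isn’t the fake; it’s that the contract is now written down as executable code, so a change that breaks it fails loudly instead of breaking someone else’s afternoon.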
You Can Make People Productive If…
I think the role of any senior developer, lead, or principal on a project is not to watch over everyone’s shoulder and make sure that they are writing good code. I’ve learned pretty early on that this doesn’t work; you can’t control how people write code and if you try to, you’ll just get your panties in a twist all the time, raise your blood pressure to unhealthy levels, and piss off everyone around you. So then the question is how can you get a group of diverse individuals with a diverse level of experience to write consistently good code?
It’s a hard question and one that I’m still trying to answer. However, I’ve learned a few lessons from my own experiences in working with people:
- Make an effort to educate the team. This means reading assignments, group discussions, and making learning a basic requirement of the job, not an optional extracurricular activity. Pick a book of the month and commit to reading a chapter a day.
- Have code reviews regularly. One of the surest ways to help get everyone on the same page is through code reviews. The key is to keep it focused and not let the process devolve into a back-and-forth debate regarding the little things, but rather focus on the structural elements of the objects and interactions.
- The smartest guys on the team work on the most “useless” code. What I mean by “useless” here is that the code doesn’t yield immediate benefits; in other words, framework code. Typically, this involves lots of interfaces, abstract classes, and lots of fancy-pants design patterns. The idea is to make it easy for the whole team to write structurally sound code, regardless of skill level, by modeling the core interactions between objects and the core structure of the objects. I think a key problem is that project managers see this as effort with no payoff early in the game (the most important time to establish this type of code) when in reality, it usually returns a huge ROI when done with the right amount of forethought and a proper effort to refactor when the need arises.
- Document things…thoroughly. One of the easiest ways to mitigate duplication and misuse is to use documentation in the code. For framework level code, it’s even more important to have solid documentation about the fields, what type of values to expect, how the objects should be used, how instances are created, what special actions need to be performed for cleanup, etc. Documentation done right can also help improve code consistency if you add examples into your documentation.
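The last two points reinforce each other, so here is one minimal sketch covering both (all names are invented for illustration): an abstract base class models the interaction once, and its doc comments spell out expected values and include a usage example.

```java
/**
 * Hypothetical framework sketch. Subclasses supply only {@link #load()}
 * and {@link #render(String)}; the surrounding structure is fixed here.
 *
 * <p>Usage:
 * <pre>
 * ReportJob job = new SalesReportJob();
 * String xml = job.run(); // always wrapped as &lt;report&gt;...&lt;/report&gt;
 * </pre>
 */
abstract class ReportJob {
    /** Template method: runs the whole job; never returns null. */
    public final String run() {
        String data = load();
        return "<report>" + render(data) + "</report>";
    }

    /** Fetch the raw data; a real implementation might hit the database. */
    protected abstract String load();

    /** Turn raw data into the report body; input is whatever {@link #load()} returned. */
    protected abstract String render(String data);
}

// A newer developer only has to write something this small:
class SalesReportJob extends ReportJob {
    protected String load() { return "42 units"; }
    protected String render(String data) { return "Sales: " + data; }
}
```

The structural decisions (and the documentation of them) are made once, by the people best equipped to make them; everyone else fills in small, hard-to-get-wrong pieces.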
Writing good code is productive. It becomes easier to maintain, easier to bugfix, easier to ramp up new developers, easier for one developer to take over for another, and it means a generally more pleasant and insightful workday, every day. Which brings us to…
Sound Software Engineering Is Like…
Exercise! Project managers seem to lose this very basic insight when they make the transition from developer to manager. Like exercise, it’s always easier to put in the effort to do it regularly and eat a healthy diet than to wait until you’re obese and then start worrying about your health and well-being. Sure, it feels like hard work, waking up at the crack of dawn and going out into the rain/snow/dark, eating granola and oatmeal, skipping the fries and mayonnaise, but it’s much easier to keep weight off than to lose weight once you’re 200lbs overweight!
Likewise, it’s always going to be easier to refactor daily as necessary and address glaring structural issues as soon as possible than to let them linger and keep stuffing donuts in your face. It’s like carrying around 200lbs of fat: you lose agility, it becomes difficult to move, everything seems to take more effort – even simple things like climbing the stairs becomes a chore. The lesson is to trim the fat as soon as possible; don’t let serious structural issues linger — if there’s a better, cleaner, easier way to do something, do it that way. Every excuse you make to keep fat, ugly code around will only make it heavier and harder to maintain.
How To Reinvent The Wheel…
It seems like a pretty common problem: a lead or architect doesn’t want to use a library because it’s not “mature” enough. What this means, exactly, still baffles me to this day. Mature is such an arbitrary measure that it’s hard to figure out when software becomes mature. What this usually leads to is reinventing the wheel (several times over).
When evaluating third party libraries, I really only have a handful of criteria for deciding whether I want to use one or not:
- Is it open source and is the license friendly for commercial usage? I’ll almost always take a less feature-rich, open source library over a more complete licensed library. The reason is that there’s less lock-in. I won’t feel like I’ve just wasted $1000 (or whatever) if I encounter a scenario where the library is insufficient or plain doesn’t work.
- Does it have sufficient documentation to get the basic scenarios working? This is perhaps the only measure of “maturity” that matters to me.
- Does it solve some scenario that would otherwise take the team an inordinate amount of time to implement ourselves? I hate wasting time duplicating work that’s freely available and well documented, with a community of users who can help if problems arise. And yet, time and time again, there is no end to the resistance against using third party libraries. Part of it is this very abstract definition of “maturity” (objections by technical people) and part of it is a fundamental misunderstanding of and general laziness about different licensing models (the business folks).
That’s it. I don’t need the Apache Software Foundation to tell me whether log4net is mature or not. I look at the documentation, I write some test code, I use it and evaluate it, and I incorporate it once I’m satisfied.
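For illustration, here is the shape of that evaluation step. Since log4net is a .NET library, this sketch stands in the JDK’s built-in java.util.logging instead; the point is the process, not the library: exercise the basic scenario from the docs and check that it actually behaves.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Throwaway evaluation code: drive the library's basic scenario and
// verify the result, rather than debating "maturity" in the abstract.
class LoggingSmokeTest {
    static List<String> captured = new ArrayList<>();

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("eval");
        logger.setUseParentHandlers(false);       // keep console quiet
        logger.addHandler(new Handler() {         // capture what gets logged
            public void publish(LogRecord r) { captured.add(r.getMessage()); }
            public void flush() {}
            public void close() {}
        });

        logger.info("hello from the evaluation");

        if (!captured.contains("hello from the evaluation")) {
            throw new AssertionError("basic scenario failed; library rejected");
        }
        System.out.println("basic scenario works; worth a deeper look");
    }
}
```

An afternoon of this tells me more about a library than any third party’s opinion of its maturity ever will.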
Software Estimation And Baking Cakes…
Fine grained software estimation is most assuredly the biggest waste of everyone’s time. Once it comes down to the granularity of man-hours, you know that someone has failed at their job since there is no way to even quantify that level of absurdity. Once you start having meetings about your fine-grained estimates that pull in all of the developers, then you really know that you’re FOCKED.
If I handed you a box of cake batter and asked how long it would take you to bake the cake, you’d probably take a look at the directions, read the steps, estimate how long it would take you to perform all of them, add the baking time, and come up with 50 minutes. Okay, we start the timer. You’re off and cracking eggs and cutting open pouches and what not. But wait, your mother calls and wants to talk about your trip next week. -5 minutes. You open the fridge and find that you’re half a stick of butter short, so you run to the grocery store. -30 minutes. Oh shoot! You forgot to pre-heat the oven. -5 minutes. Finally, you’ve got the batter mixed up and ready to bake. The directions say to bake for 40 minutes, but you’ve already used up 40 minutes and have only 10 minutes left of your original estimate: now what?
Well, you could turn up the heat, but that’d only serve to singe the outside of the cake while leaving the inside uncooked. You could just bake it for 10 minutes, but your cake would still be uncooked — but hey, you’d meet your estimate. More likely than not, you’d just bake the cake for 40 minutes and come in 30 minutes late since late, edible cake is better than burnt or mushy cake.
Software estimation is kinda like that (and look, in the case of baking a cake, all of the directions and exact steps are already well defined and spelled out for you; writing software is rarely so straightforward). It’s mostly an exercise in futility once it becomes too granular since there are just too many variables to account for. The answer, if it must be implemented feature complete, is that it’s going to take as long as it’s going to take (and probably longer!). For most non-trivial tasks, I feel like the only proper level of granularity is weeks. Don’t get me wrong, I’m not saying that you shouldn’t estimate, but that you should estimate at the right level of granularity and accept that once you’ve hit your estimate and the work isn’t done, your only real choices are to:
- Extend the deadline.
- Trim the unnecessary features.
So that’s it; feels good after a brain dump!