Reflecting on the past year and a half, I've come to some conclusions on how development teams can be successful in delivering software. I don't think any of them are major or revelatory, but I think each team and each project has a different take on the same ideas. Here are mine:
During our active build phase, we release a new build every week, and each build is functional software that incrementally incorporates requirements and defect fixes from previous builds. This lets our customer preview the software continuously and early in the process, helping us identify flaws and areas of the requirements that need additional clarity.
It might sound insane, but it is possible to release weekly builds because our solution incorporates a heavy dose of automation where it counts:
- We've removed raw SQL from the equation, relying purely on FluentNHibernate and NHibernate to automatically generate our schema
- We've invested in building tools to automate the export and re-import of configuration and data, allowing us to easily and quickly reset our development environments entirely with standard test data (bonus: the same tool allows us to move configuration and data from environment to environment)
- We've invested in idiot-proofing our installs so that they are boiled down to a few scripts
- We've built automated build scripts that package everything up neatly and even FTP builds to our integration and test environments
- Our domain data model is 90% generated automatically from content schemas (SharePoint content types) which we have to create anyway.
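To make the packaging-and-push step concrete, here is a simplified sketch of what such a build script can look like. This is illustrative only: the file names, directory layout, and FTP target are hypothetical, and our actual scripts are more involved.

```shell
#!/bin/sh
# Sketch of a weekly build packaging script (hypothetical layout and targets).
set -e

BUILD_LABEL="build-$(date +%Y%m%d)"
STAGE_DIR="$(mktemp -d)"

# Stage the deliverables: install scripts, solution packages, and so on.
mkdir -p "$STAGE_DIR/$BUILD_LABEL/scripts"
printf 'run install steps here\n' > "$STAGE_DIR/$BUILD_LABEL/scripts/install.txt"

# Package everything up neatly into a single archive.
tar -czf "$STAGE_DIR/$BUILD_LABEL.tar.gz" -C "$STAGE_DIR" "$BUILD_LABEL"

# Push the archive to an environment. Disabled in this sketch; a real script
# would supply credentials and targets for the integration and test servers.
# curl -T "$STAGE_DIR/$BUILD_LABEL.tar.gz" ftp://integration.example.com/builds/

echo "Packaged $STAGE_DIR/$BUILD_LABEL.tar.gz"
```

The point is less the tooling than the habit: once the packaging and push are scripted, cutting a build costs minutes, not an afternoon.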
Because of the automation, tasks which would otherwise be expensive are cheap to execute.
It also cuts down on mistakes and missed steps.
Our team is 100% geographically dispersed with developers and team members in Vietnam, Mexico, Virginia, New Jersey and California.
But relatively speaking, we meet very infrequently. We hold two development team meetings a week: one at the start of the week -- our "A" meeting -- and one towards the end of the week -- our "B" meeting. We use the "A" meeting to discuss our deliverables for the week and the "B" meeting to discuss the outcome of our weekly sprint walkthroughs, any adjustments that need to be made, and so on.
We also use these sessions as show-and-tell to let everyone see the progress and changes being made by different team members as well as to inform of upcoming breaking changes and mitigating actions required downstream.
Otherwise, developers are encouraged to have long spans of uninterrupted work time instead of constantly being pulled into meetings. One-on-one sessions and communications occur as necessary, but this recipe has been very successful in minimizing the amount of time the team spends huddled up in conference calls and gives everyone more time to solve problems.
Meet with a Purpose
Every meeting should have an agenda and an outcome (an action, a decision, or an issue for followup). Demand a bullet-listed agenda when someone sends you a meeting request and provide one if you need to schedule a meeting. Ensure that the goal and outcome of the meeting are clear to all parties, and schedule new meetings to resolve items that are not on the agenda or do not contribute to the outcome.
Additionally, create a record of every meeting. Who attended? What was covered? What was not covered? What was left open? What are the action items? Ensure that this record is easily accessible (a wiki or forum system is perfect for recording these details) and email a copy to all participants and other relevant parties to ensure that everyone has the same understanding of what was just discussed. This basic task can often clear up misunderstandings before they become systemic issues. I take the burden on myself to record and follow up with the major bullet points from meetings, and it's saved my butt many times when following up with customers.
This is the simple art of not running a terrible meeting.
Lead by Example
A bit of career advice for those with a passion for software development: never remove yourself from the process of creation.
I have watched it happen as the career ladder moves individuals up and up, further from the pure act of creation that is software development. For those of us who feel invigorated when we solve a difficult programming task, for those of us who feel a great rush of exhilaration when a machine of thousands of lines of code executes in harmony, it is our burden to tinker, to learn, and to create.
When you "ascend" from developer to architect or team lead or such, never leave your passion for creation behind; authority flows naturally from understanding, knowledge, and mastery -- not just a title.
I was inspired to reflect on this by an interview with James Dyson in Wired:
Wired: Now that Dyson is a sprawling, multinational corporation, how do you keep the spirit of innovation alive?
Dyson: We try to make the corporation like the garage. We don’t have technicians; our engineers and scientists actually go and build their own prototypes and test the rigs themselves. And the reason we do that—and I don’t force people to do that, by the way, they want to do it—is that when you’re building the prototype, you start to really understand how it’s made and what it might do and where its weaknesses might be. If you merely hand a drawing to somebody and say, “Would you make this, please?” and in two weeks he comes back with it and you hand it to someone else who does the test, you’re not experiencing it. You’re not understanding it. You’re not feeling it. Our engineers and scientists love doing that.
As a team lead, never be just a middleman between the developers and the requirements; be an active participant in the process. Work on the hard problems. Understand the creation process, understand the challenges of the team from the bottom up, and build your authority from your ability to innovate and solve problems.
If you watch shows like Iron Chef or Chopped, every one of the judges and every one of the Iron Chefs can be considered the vanguard of their craft, and it is from there that their authority flows. You would not watch Iron Chef if all the Iron Chefs did was design the menu and then watch their team cook. You would not trust the judges on Chopped if they weren't great chefs in their own right who understood the ingredients, the techniques, and the skill required to pull off a dish.
The better you understand the system and the better you understand your team, the more effective you will be at understanding the root cause of an issue and routing defects within the team.
Push Your Team Incrementally
As a young developer, I always found great satisfaction in solving new problems and new challenges. I think it's important that, throughout the process, you push your team members by giving them tasks that stretch their knowledge and abilities just a little bit.
Of course, there will be plenty of mundane code and defect fixing, but don't box in your team members intellectually. Understand their capabilities individually and push them to try things that are just beyond their current level of capability, understanding, and comfort zone. This will keep them engaged and improve their skills to boot.
Invest in Code Quality, Especially Early On
It's a lot easier to write good code early in the process than it is to come in and refactor later. Additionally, code written early tends to be reused more often, and its patterns and solutions are copied by other developers down the line. So early in the process, it is important to keep code quality in mind and correct bad habits, since the most influential code will be written earlier rather than later.
What this means is that detailed code reviews are far more important at the beginning than at any other time in the project. If you can correct bad programming practices or show a developer a better, more modular way of writing a piece of functionality early on, she will carry that knowledge throughout the project.
We rarely do code reviews now (one year in) because I focused heavily on working one-on-one with developers as they joined the team to ensure that their code was up to standards. I frequently rejected code and asked developers to rewrite whole pieces of functionality to help them understand why one design was better than another.
Put Your Best Developers on Your Least "Visible" Code
What this boils down to is the framework level components. Your best developers should be working on the deepest innards of the system that power what the rest of the developers do at the presentation and business logic layers. This code will be the most invoked and the most reused so it is important that it is:
- Easy to use and as intuitive as the platform you are working on
- Well structured and object oriented to reduce repetition of code and code complexity
- Well documented with abundant examples -- your best developers must embody and practice whatever best practices have been designated
Do not waste your best developers' time with defect fixes (unless there is sufficient bandwidth), even if they can do it better than anyone else on the team, because it will throw off the balance of the team (your more junior developers might not be able to fix a low-level defect as quickly, but there are many design issues and higher priority defects that they cannot solve effectively yet).
Document, Document, Document
Early on in our process, I had to decide between a wiki system or a Word document for our documentation. Because of the fast, iterative nature of the project, I decided to use a wiki/forum system as it was more flexible and -- in a sense -- more "visible".
While our formal documentation is trailing, it is easy to assemble it from our weekly sprint guides, which document every new feature with screenshots, details, and examples.
But at any given time, our customer and our delivery partner can load up the wiki and see exactly when we delivered a feature, how to install and configure it, how to use it, and so on. Putting it all out there in lock-step with the weekly delivery keeps the entire team aware of the status of the project and the progress being made, and it allows test teams to proceed iteratively so that, by the end of the project, most features have been tested for weeks.
Midway through the project, we moved from "status"-focused meetings to "demo"-focused meetings where we would do a weekly writeup and walkthrough of what changed, what was added, and what was fixed. It also allowed for open forums for the test and business teams to ask questions and get clarifications.
This approach allows the customer to see progress, and the customer will never be surprised at the end of the project because they will have seen the progress and documentation updates on a weekly basis.
So far, we have done well with these basic guiding principles.
I'm sure I will revise and add to my own lessons learned as the project continues, but I think that this is a good starting point!
One of the things I've had to do quite often in the last few years is conduct technical interviews.
It's always a challenge. Of course, if you want to make sure that a candidate is technically competent in the specific tasks or role for which you are staffing, that's pretty easy: just ask questions oriented around the tasks they will be responsible for. But what if you want to assess an individual's broader technical competency with, and experience on, a platform?
As a technically oriented guy myself, it's very easy to inject my bias and my core knowledge into the equation and I think that typically makes for a bad interview. There are always many things that I know well and many things that I don't know well for any given platform. So how can I go about measuring a candidate's competency without injecting my own knowledge and experience bias?
As an addendum: for technical interviews, it's sometimes difficult to come up with a good set of questions that are "fair". I don't think it's fair to ask obscure questions that few folks could answer off the top of their head but would have no problem solving with Google and StackExchange, for example. I also don't like to ask brain-bender type questions, as I don't find the outcome of those questions to be generally useful in evaluating technical expertise.
One approach is to ask open-ended design type questions. "Given this platform, how would you design a solution to meet requirement X?" "What are the benefits of approach U versus approach V for modeling this data?" These are okay, but I find that I often get clouded by situational bias. What I mean by that is that I tend to think of problems I've solved in the recent past or problems that I'm working on now. But I know that many of these design issues took me days if not weeks of research, prototyping, experimenting, and discussion to settle upon -- it simply doesn't seem fair to ask a candidate to produce a response on the spot. It's worth something, I guess, if they are able to come up with the same solution (or a better one!), but if they can't, does one hold that against them?
So in thinking about these issues, I think one good approach is to go with a reverse technical interview. What this means is that I ask the candidate to produce a list of technical and design questions for me for the interview. My thought is that this allows me to turn the bias that I would otherwise have into a tool because they will have the same biases. They will tend to ask questions around what they've worked on, what hard problems they've solved, and what experience they have. This seems like a much more dynamic approach and would seem to provide more valuable insight...I think. It's one thing to list a skill or a technology on your resume, but it's another thing to be able to ask deep, challenging, technical questions around it.
As a bonus, being able to come up with and ask good questions is itself a valuable skill.
Finally got around to cleaning up my office (after 4 years...). Love the way it turned out: much more work space with much less clutter.
For reference, this is what it looked like previously.
From an NPR interview with Walter Isaacson, Jobs' biographer:
On Jobs' father, who rebuilt cars, and held design and craftsmanship in high regard:
"He would show Steve the curve of the designs and the interiors and the shapes ... and even have pictures of the cars he liked the most in the garage. He put a little workbench in the garage, and he said, 'Steve, this is now your workbench.'
"One of the most important things he taught Steve was [that] it's important to be a great craftsman, even for the parts unseen. When they were building a fence, he said, 'You have to make the back of the fence that people won't see look just as beautiful as the front, just like a great carpenter would make the back of a chest of drawers ... Even though others won't see it, you will know it's there, and that will make you more proud of your design.'"
I can't begin to compare myself to Jobs, but I think this is a value that's important to me, personally, from a software development perspective. There is an aesthetic and a beauty to well-written code or a well-designed framework, not to mention the pride of the author in crafting a clear, concise, and elegant design -- even in the "parts unseen" by the end user.
I once got into a heated debate about code quality with a CEO who was, at the time, bent on fixing nail pops and seams on the walls of his recently built and painted house. I asked why he didn't see the same need to put an effort into addressing analogous issues in our code that disturbed the overall quality and aesthetics of the codebase, which seemed to me even more important than a few nail pops. "That's different" is the only response that I got, but to me, it's the same. There's a personal pride in building a product with exemplary craftsmanship. There's a team's pride to being able to walk into a code review with a customer and know that they will be wowed. There's a developer's own pride in writing software that takes design and usability of the framework into account.
This gets lost in the age of outsourcing and rent-a-coders, but great software is still -- and likely always will be -- a craft that requires skilled craftsmen to build (of course, great software requires more than that: vision and an understanding of the problem domain as well).
As a consultant, I feel strongly about giving sound technical advice to my clients, even if such advice means saying "no" to a client or possibly turning back a larger project for a more pragmatic one. It's about doing the right thing and offering sound technical advice to the best of my knowledge -- not just money, projects, and utilization.
The one personal example that really sticks out for me is the case where Microsoft sold a hedge fund on building a bulk import system using BizTalk that would have cost them triple the price (once licensing and hardware were factored in) of doing it with SQL Server DTS, which was easier to program and maintain and more robust in every way (not to mention that this company already had SQL Server skillsets in-house). Luckily, we were able to convince the client that DTS was purpose-built for bulk import and transformation of data before they committed the cash to BizTalk.
Recently, a friend of mine showed me a project that the Big Consulting Company he works for was delivering to their client, a public library. It looked really good for a public library website...until he dropped the bomb that it was built using Silverlight (and to top it off, he was really proud, too -- as if I was supposed to find it impressive). I don't think I've ever done a bigger facepalm in my life.
As I've stated in the past, I have a strong disdain for the misuse of Silverlight. There are certainly scenarios where it should be used for building web sites:
- Streaming media
- Scalable 2D vector graphics and animation
- 3D graphics and animation
- Interactive games
And that's it! Beyond that, if a company wants to use it in their intranet site, it doesn't concern me as much because the environment is more homogeneous and controlled in terms of having the platform to run the Silverlight applications; it's their headache going forward. Besides, if it's a private, multi-national company, then by all means; if they wish to waste their capital and resources, that's their choice.
However, it is a damn crime to recommend Silverlight to any client building basic web applications that are Internet facing, especially a public library financed by taxpayers. I mean, people should be fired and embarrassed for offering such terrible advice. To begin with, few non-Windows devices natively support Silverlight (and even folks on older Windows OSes can't natively run Silverlight apps). iPad? iPhones? Android phones? Linux based netbooks? As sales of traditional laptops and desktops decline, it's important to factor in the presence of these newer platforms when designing a publicly facing Internet site. I would think that this would be even more important for a public library.
Now, if the site were media focused -- like a YouTube -- perhaps it could be forgiven; after all, HTML5 is still a moving target and supported only by newer browser versions. But this is a public library website that was listing books...It's as bad as websites that still use Java (yes, Java without the "Script") for image galleries or raindrop effects. It's as bad as websites using Flash for menus and menu rollover animations.
I would be embarrassed to be a part of the company or the team that sold and implemented this deal. A fucking crime against the taxpayers of the township with me as the perpetrator; no better than stealing money from my neighbors. I couldn't live with myself for being so evil.
Now, he told me that it was the client who insisted it be done in Silverlight. To me, that makes no difference. As a consultant, it's my duty to provide sound technical guidance to the best of my knowledge and ability. If there is a more compatible, cheaper, easier-to-maintain solution built on a platform with greater longevity that solves the same problem, I will recommend taking that route, even if it takes me out of the running. It's our job as consultants to consult and to offer sound technical advice.
For you see, the client may not know or care about the difference between Silverlight and HTML5 or jQuery-based UIs. The client may be under the impression that a given UI or bit of functionality is only possible because of Silverlight if that's what they've been sold and demo'd. The client may not understand the alternative solutions; certainly, to a non-expert, the difference between two types of wood -- for example -- isn't perceivable. The client may be enamored with one buzzword or technology, but it is our duty and responsibility as consultants (and decent human beings) to tell the truth. I'd like to believe that when I ask a contractor to come to my house for a quote or get a diagnosis from an auto mechanic, he or she would do the same for me and give me the low-down to the best of his or her ability and knowledge.
I'm still peeved by this, as it reflects a critical misunderstanding of the Internet ecosystem and of managing device compatibility, as well as a critical misunderstanding of technologies and their suitability for a purpose. Not to mention that it's a terrible choice for audience accessibility, long-term costs, and maintenance. I really don't want to believe that my friend or his team purposefully offered bad advice for greater financial returns, as that would be a true embarrassment, and I only hope that all sides come to their senses and ditch Silverlight.
In the end, for me, consultancy is about people and treating customers with respect by offering the best technical advice to one's knowledge. Even if it costs me my job, I've always believed that I am accountable to my clients and I'm responsible for giving sound technical advice.
Caught this editorial on CNN this weekend:
Companies spend billions on rent, offices, and office equipment so their employees will have a great place to work. However, when you ask people where they go when they really need to get something done, you'll rarely hear them say it's the office.
If you ask, you'll usually get one of three kinds of responses: A place, a moving object, or a time.
They'll say their house, their back porch, an extra bedroom they've converted into a home office, a library, the coffee shop down the street, the basement. Or they'll say their car, or a train, or a plane -- basically, during their commute. Or they'll say really early in the morning, really late at night, or on the weekend. In other words, when no one else is around to bother them.
Indeed, I think it's important to realize that different individuals have different productivity models. By that I mean that certain people are "morning people" whose brains are most active and creative in the morning. Others are "night people" whose brains are most wired and effective in the evenings. Some people feel more comfortable with natural lighting during the daytime. Some prefer a bright working space while others prefer a dim one.
It seems counterproductive to force everyone into one model of the work environment when the preferences that maximize the efficiency of each individual can be vastly different.
And then there's the bigger issue of interruptions:
I don't blame people for not wanting to be at the office. I blame the office. The modern office has become an interruption factory. You can't get work done at work anymore.
People -- especially creative people -- need long stretches of uninterrupted time to get things done. Fifteen minutes isn't enough. Thirty minutes isn't enough. Even an hour isn't enough.
I believe sleep and work have a lot in common. I don't mean that you can sleep at work or you can work in your sleep. I mean sleep and work are phase-based activities. You don't just go to sleep or go to work -- you go towards sleep and towards work.
You aren't sleeping when your head hits the pillow. You start the sleep process. You have to go through phases to get to the really beneficial sleep. And if you're interrupted before you get there, you have to start over.
The same is true for work. You don't just sit down at your desk and begin working effectively. You have to get into a groove. You go towards good work. It takes some time to settle in, clear your head, and focus on what you need to do.
A very good analogy, and I wholeheartedly agree. At the same time, to make this model work, teams need the right tools (WebEx or equivalent, chat clients, VoIP, etc.) and the right people. To some extent, it takes a good amount of trust that each member of the team understands their tasks and roles and can get their jobs done without a manager or supervisor constantly badgering them for status or holding meetings to figure out where the tasks stand.
At least for myself, I find it incredibly difficult to work on any problem of moderate complexity without sitting down with a solid bloc of a few hours to work on it. There's nothing worse than having to do a mental context switch when one is working on a difficult problem. Well, it's only worse when that context switch is for a meeting that's inconsequential to the task at hand.
I didn't go, but John Peterson did.
Check out his feedback from the conference.
The session is titled "Object Oriented Development and Practices in SharePoint":
Building maintainable solutions on the SharePoint platform can be a challenge (and that might be putting it mildly). Code interspersed with CAML strings, rampant code duplication, hundred (thousand?) line methods, inconsistent code quality, and so on. How can a dev/technical lead address these problems that arise when a team of individuals with diverse experience and skill levels embarks on designing and building a solution on the SharePoint platform?
This session introduces a series of practices, tools, libraries, and techniques to support an object-oriented approach to building sustainable and maintainable solutions on the SharePoint platform. It offers an innovative approach to solving complex solution and development problems through embracing simplicity and leveraging the capabilities of the .NET Framework to build a framework for highly object-oriented, patterns based solutions.
Technologies: SharePoint 2007, Visual Studio 2010, C#, .NET, XSLT (Saxon)
Audience: SharePoint developers, SharePoint technical architects, SharePoint technical leads, .NET developers
Level: Intermediate/Advanced. Audiences with experience in design patterns, reflection, delegates, anonymous functions, and XSLT will be able to follow along and extract the most value from this session.
To expand on that, the plan is to cover some of the lessons I've learned from being deep in the code on a handful of large SharePoint projects. These lessons I've encapsulated in a framework of sorts which was designed to help:
- Accelerate development of solutions for SharePoint
- Increase developer productivity while still maintaining high levels of code consistency
- Increase adherence to the DRY (Don't Repeat Yourself) principle by leveraging patterns and object-oriented code
- Decrease the entry barrier for ASP.NET developers transitioning to SharePoint
It won't be for everyone; however, for any team that's deep into the SharePoint APIs and building custom solutions (web parts, event receivers, web pages, layout pages, and so on), I promise this will be a great session to attend. My hope is that attendees will be able to walk away with some ideas on how to make their teams more productive and to help teams write better code.
The event will take place on Saturday, October 9th at the DeVry campus in Fort Washington, PA (great campus, good presenters, free lunch!). Details here: http://codecamp.phillydotnet.org/2010-2/default.aspx
I'd be lying if I said I wasn't a bit anxious over the whole thing.
I plan on putting together a monster post before the event with the outline, details, and materials of the stuff I plan to cover. See you there (and wish me luck)!
This is one of my favorite things about ants -- the ant death spiral. Actually, it's a circular mill, first described in army ants by Schneirla (1944). A circle of army ants, each one following the ant in front, becomes locked into a circular mill. They will continue to circle each other until they all die. How crazy is that?
This is the perfect description of bad code and bad programmers (and poorly run companies!). Each development cycle that builds on bad code just compounds the problem until you're locked into a code death spiral of "we don't have time to clean it up" or "it'll take too much effort to refactor it" or "this is just how we do it here". Instead, each member of the team begrudgingly (or, even worse, dutifully and mindlessly, marching like ants) continues to use the bad code, copy and paste the bad code, and build on top of the bad code, thereby creating more bad code and more dependencies on the bad code that become ever more difficult to refactor and extract.
In programming and software development, Paul Graham captures this concept perfectly in his essay on the failure of Yahoo! and why they fell to Microsoft and Google:
In technology, once you have bad programmers, you're doomed. I can't think of an instance where a company has sunk into technical mediocrity and recovered. Good programmers want to work with other good programmers. So once the quality of programmers at your company starts to drop, you enter a death spiral from which there is no recovery.
But not all hope is lost,
Sometimes they escape, though. Beebe (1921) described a circular mill he witnessed in Guyana. It measured 1200 feet in circumference and had a 2.5 hour circuit time per ant. The mill persisted for two days, "with ever increasing numbers of dead bodies littering the route as exhaustion took its toll, but eventually a few workers straggled from the trail thus breaking the cycle, and the raid marched off into the forest."
Avoid the ant death spiral! As Fred Brooks suggests in The Mythical Man Month,
Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one set of their studies, Sackman, Erickson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space requirements! In short the $20,000/year programmer may well be 10 times as productive as the $10,000/year one. The converse may be true, too. The data showed no correlation whatsoever between experience and performance. (I doubt if that is universally true.)
Take the effort to find, work with, hire, or -- better yet -- count yourself among those programmers that can help teams avoid walking into the ant death spiral in the first place. Address lingering issues and inefficiencies as soon as possible; fixing bad code early can yield huge gains in agility and flexibility down the line. Never be afraid to break the cycle and call out bad code and poor practices.
(Alternate title: Failing Productively)
I'll repost the relevant bits of the interview here:
KK: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?
FB: You can learn more from failure than success. In failure you're forced to find out what part did not work. But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.
I think this is an important lesson. I've written about this topic before in a post about, of all things, World of Warcraft.
From the Wired article:
Where traditional learning is based on the execution of carefully graded challenges, accidental learning relies on failure. Virtual environments are safe platforms for trial and error. The chance of failure is high, but the cost is low and the lessons learned are immediate.
To expand on this: in software, I think it's important to have lots of little failures. It's the only way to discover which solutions work and which don't (hopefully on the path to one that does!). In my book, failure is good; it's a necessary part of the learning process (if I'm not failing, I'm probably not doing anything interesting or challenging). I expect to fail, I expect the developers I work with to fail, and my estimates even account for failure. The important thing, however, is to actually examine your failures and understand why you failed. More than that, it's important to understand how to fail. The key is to fail early, in small, isolated scenarios, and to extract from each failure some idea of what will and won't work; we call this iterating, or prototyping, or iterating with prototypes. Then, on a macro scale, examine your work once a project is done, identify what went wrong, what was painful, and what could have been done better, and actually make the effort to improve.
Brooks also expands on this in The Design of Design. In chapter 8, "Rationalism versus Empiricism in Design", he writes:
Can I, by sufficient thought alone, design a complex object correctly? This question, particularized to design, represents a crux between two long-established philosophical systems: rationalism and empiricism. Rationalists believe I can; empiricists believe I cannot.
The empiricist believes that man is inherently flawed, and subject repeatedly to temptation and error. Anything he makes will be flawed. The design methodology task, therefore, is to learn how to determine the flaws by experiment, so that one can iterate on the design.
Brooks boldly states: "I am a dyed-in-the-wool empiricist." I'm in Brooks' camp; I'd definitely consider myself an empiricist. It's evident in my sandbox directory, where hundreds of little experiments live that I use to rapidly iterate on an idea (and isolate the failures). If you're an empiricist, then -- as Brooks implies -- iterative models of design and development come naturally. I find it more productive to go through a series of quick, small prototypes and experiments to identify the failures than to end up discovering one big failure (or lots of small ones) late in a project! As much as we'd like software engineering to be a purely mechanical process (say, an assembly line in an automotive plant), I don't think that can ever be the case.
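For the curious, my sandbox habit is nothing fancy. A minimal sketch (the directory layout, names, and hypothesis text here are purely illustrative, not a prescribed tool) might look like this: one cheap, disposable directory per experiment, with a note recording what was being tested and how it turned out.

```shell
#!/bin/sh
# Illustrative sketch: one throwaway directory per experiment, so a
# failed experiment costs nothing to abandon and leaves a record behind.
SANDBOX="$HOME/sandbox"
EXPERIMENT="$SANDBOX/$(date +%Y-%m-%d)-retry-backoff"   # hypothetical experiment name

mkdir -p "$EXPERIMENT"                                  # isolate the experiment
cd "$EXPERIMENT" || exit 1
echo "hypothesis: exponential backoff should cap at 30s" > NOTES.md
# ...hack until it clearly works or clearly fails, then record the outcome:
echo "result: works, but needs jitter" >> NOTES.md
```

The point isn't the script; it's that the cost of starting (and abandoning) an experiment is a single `mkdir`, which keeps the failures small and frequent.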
So then it follows: if designers and developers work best with an empiricist view of the world, why do we continue to design, plan, budget, and schedule projects using a waterfall approach? Why do we continue to use a model that does not allow for failure in design or implementation, yet cannot actually prevent failure? "Sin," Brooks writes in chapter 4, "Requirements, Sin, and Contracts":
The one-word answer is sin: pride, greed, and sloth... Because humans are fallen, we cannot trust each other's motivations. Because humans are fallen, we cannot communicate perfectly.
For these reasons, "Get it in writing." We need written agreements for clarity and communication; we need enforceable contracts for protection from misdeeds by others and temptations for ourselves. We need detailed enforceable contracts even more when the players are multi-person organizations, not just individuals. Organizations often behave worse than any member would.
So it seems that the necessity for contracts best explains the persistence of the Waterfall Model for designing and building complex systems.
I find that quite disappointing and pessimistic, and yet full of truth.
On a recent project, we failed to launch entirely, even after months of designing, design reviews, sign-offs, and discussions. I had already started writing some framework-level code, fully anticipating that the project would start within weeks of the design being scrutinized ad nauseam and "finalized". The client insisted on a rigid waterfall approach and wanted to see the full solution in design documents upfront. As absurd as it sounds, by this point the client had already spent more on design artifacts (documents and UML diagrams) than they had budgeted for delivery (development, testing, validation, and deployment). It was an impossible objective to start with, but we obliged as an organization, despite my internal protests. Tedious, micro-level designs were constructed and submitted, but to what end? The project was scheduled to go live this April. It is now August and, after a change of vendors, it isn't even close to getting off the ground. Instead of many micro-failures along the path to success, this client's fear of failure (embodied by their goal of designing out all of the risk) has led them down the path of one big failure.
So the question then is: how can we overcome this? How do you negotiate and write a contract to build a solution iteratively? How can you effectively build the relationship of trust that breaks down the sins and the communication barriers? Brooks touches upon various models and why they work, but doesn't offer much insight or guidance on how to overcome the "sins" while still working within an enforceable contract. This, I think, is an important lesson to learn, not just for individuals but for organizations. A certain level of failure must be acceptable and, in fact, encouraged; this is essentially what iterative design and development means: iterate quickly and find what does and doesn't work. Make many small mistakes early instead of finding big mistakes in your design or assumptions later.
Footnote: I'm still working through the book and, so far, it has been a great read.