<CharlieDigital/> Programming, Politics, and uhh…pineapples


Why The Office Is The Worst Place To Work

Posted by Charles Chen

Caught this editorial on CNN this weekend:

Companies spend billions on rent, offices, and office equipment so their employees will have a great place to work.  However, when you ask people where they go when they really need to get something done, you'll rarely hear them say it's the office.

If you ask, you'll usually get one of three kinds of responses: A place, a moving object, or a time.

They'll say their house, their back porch, an extra bedroom they've converted into a home office, a library, the coffee shop down the street, the basement. Or they'll say their car, or a train, or a plane -- basically, during their commute. Or they'll say really early in the morning, really late at night, or on the weekend. In other words, when no one else is around to bother them.

Indeed, I think it's important to realize that different individuals have different productivity models.  By that I mean that certain people are "morning people" whose brains are most active and creative in the morning.  Others are "night people" whose brains are most wired and effective in the evenings.  Some people feel more comfortable with natural lighting during the daytime.  Some prefer a bright working space while others prefer a dim one.

It seems counterproductive to force everyone into one model of the work environment when the preferences that maximize the efficiency of each individual can be vastly different.

And then there's the bigger issue of interruptions:

I don't blame people for not wanting to be at the office. I blame the office. The modern office has become an interruption factory. You can't get work done at work anymore.

People -- especially creative people -- need long stretches of uninterrupted time to get things done. Fifteen minutes isn't enough. Thirty minutes isn't enough. Even an hour isn't enough.

I believe sleep and work have a lot in common. I don't mean that you can sleep at work or you can work in your sleep. I mean sleep and work are phase-based activities. You don't just go to sleep or go to work -- you go towards sleep and towards work.

You aren't sleeping when your head hits the pillow. You start the sleep process. You have to go through phases to get to the really beneficial sleep. And if you're interrupted before you get there, you have to start over.

The same is true for work. You don't just sit down at your desk and begin working effectively. You have to get into a groove. You go towards good work. It takes some time to settle in, clear your head, and focus on what you need to do.

A very good analogy, and I wholeheartedly agree.  At the same time, to ensure that this model works, teams need the right tools (Webex or equivalent, chat clients, VoIP, etc.) and the right people to make it work.  To some extent, it takes a good amount of trust that each member of the team understands their tasks and roles well enough to get their jobs done without a manager or supervisor constantly badgering them for status updates or calling meetings to figure out where the tasks stand.

At least for myself, I find it incredibly difficult to work on any problem of moderate complexity without sitting down with a solid block of a few hours to work on it.  There's nothing worse than having to do a mental context switch when one is working on a difficult problem.  Well, the only thing worse is when that context switch is for a meeting that's inconsequential to the tasks at hand.

Filed under: DevLife 1 Comment

jQuery Conference 2010

Posted by Charles Chen

I didn't go, but John Peterson did.

Check out his feedback from the conference.

<3 jQuery

Filed under: DevLife No Comments

Presenting at the Tri-State Code Camp 2010.2!

Posted by Charles Chen

The session is titled "Object Oriented Development and Practices in SharePoint":

Building maintainable solutions on the SharePoint platform can be a challenge (and that might be putting it mildly). Code interspersed with CAML strings, rampant code duplication, hundred (thousand?) line methods, inconsistent code quality, and so on.  How can a dev/technical lead address these problems that arise when a team of individuals with diverse experience and skill levels embarks on designing and building a solution on the SharePoint platform?

This session introduces a series of practices, tools, libraries, and techniques to support an object-oriented approach to building sustainable and maintainable solutions on the SharePoint platform.  It offers an innovative approach to solving complex solution and development problems through embracing simplicity and leveraging the capabilities of the .NET Framework to build a framework for highly object-oriented, patterns based solutions.

Technologies: SharePoint 2007, Visual Studio 2010, C#, .NET, XSLT (Saxon)

Audience: SharePoint developers, SharePoint technical architects, SharePoint technical leads, .NET developers

Level: Intermediate/Advanced.  Attendees with experience in design patterns, reflection, delegates, anonymous functions, and XSLT will be able to follow along and extract the most value from this session.

To expand on that, the plan is to cover some of the lessons I've learned from being deep in the code on a handful of large SharePoint projects.  I've encapsulated these lessons in a framework of sorts, designed to help:

  1. Accelerate development of solutions for SharePoint
  2. Increase developer productivity while still maintaining high levels of code consistency
  3. Increase adherence to the DRY (Don't Repeat Yourself) principle by leveraging patterns and object-oriented code
  4. Decrease the entry barrier for ASP.NET developers transitioning to SharePoint

It won't be for everyone; however, for any team that's deep into the SharePoint APIs and building custom solutions (web parts, event receivers, web pages, layout pages, and so on), I promise this will be a great session to attend.  My hope is that attendees will be able to walk away with some ideas on how to make their teams more productive and to help teams write better code.
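To make the DRY goal above a bit more concrete: a classic SharePoint pain point is the acquire/dispose boilerplate that gets copied and pasted around every data access call, and a common object-oriented fix is to write that lifecycle code once in a method that accepts a delegate.  The session's material is C#, so what follows is only a minimal, language-agnostic sketch of that pattern in Python; the `Site` class and `with_site` helper here are hypothetical stand-ins for illustration, not any actual SharePoint API.

```python
class Site:
    """Hypothetical stand-in for a resource (think SPSite/SPWeb) that must be
    opened and then disposed around every operation."""

    def __init__(self, url):
        self.url = url
        self.disposed = False

    def dispose(self):
        self.disposed = True


def with_site(url, action):
    """Write the acquire/dispose boilerplate exactly once; callers pass in
    only the code that varies (mirroring a C# Action<T>/Func<T, R> delegate)."""
    site = Site(url)
    try:
        return action(site)
    finally:
        site.dispose()  # runs even if the action raises


# Callers now express only their intent; the lifecycle handling lives in one
# place instead of being duplicated throughout the code base.
title = with_site("http://example/site", lambda site: site.url.upper())
```

The same shape in C# would be a helper or extension method taking a delegate; the point is that the error-prone lifecycle code exists in a single place, which is what adherence to DRY buys a team.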

The event will take place on Saturday, October 9th at the DeVry campus in Fort Washington, PA (great campus, good presenters, free lunch!).  Details here: http://codecamp.phillydotnet.org/2010-2/default.aspx

I'd be lying if I said I wasn't a bit anxious over the whole thing.

I plan on putting together a monster post before the event with the outline, details, and materials of the stuff I plan to cover.  See you there (and wish me luck)!


A New Euphemism for Bad Code

Posted by Charles Chen

Ant Death Spiral (via Cynical-C).

This is one of my favorite things about ants -- the ant death spiral. Actually, it's a circular mill, first described in army ants by Schneirla (1944). A circle of army ants, each one following the ant in front, becomes locked into a circular mill. They will continue to circle each other until they all die. How crazy is that?

This is the perfect description for bad code and bad programmers (and poorly run companies!).  Each development cycle that builds on bad code just compounds the problem until you're locked into a code death spiral of "we don't have time to clean it up" or "it'll take too much effort to refactor it" or "this is just how we do it here".  Instead, each member of the team begrudgingly (or, even worse, dutifully and mindlessly, marching like ants) continues to use the bad code, copy and paste the bad code, and build on top of the bad code, creating more bad code and more dependencies on the bad code that become increasingly difficult to refactor and extract.


In programming and software development, Paul Graham captures this concept perfectly in his essay on the failure of Yahoo! and why they fell to Microsoft and Google:

In technology, once you have bad programmers, you're doomed. I can't think of an instance where a company has sunk into technical mediocrity and recovered. Good programmers want to work with other good programmers. So once the quality of programmers at your company starts to drop, you enter a death spiral from which there is no recovery.

But not all hope is lost,

Sometimes they escape, though. Beebe (1921) described a circular mill he witnessed in Guyana. It measured 1200 feet in circumference and had a 2.5 hour circuit time per ant. The mill persisted for two days, "with ever increasing numbers of dead bodies littering the route as exhaustion took its toll, but eventually a few workers straggled from the trail thus breaking the cycle, and the raid marched off into the forest."

Avoid the ant death spiral!  As Fred Brooks suggests in The Mythical Man Month,

Programming managers have long recognized wide productivity variations between good programmers and poor ones.  But the actual measured magnitudes have astounded all of us.  In one set of their studies, Sackman, Erickson, and Grant were measuring performances of a group of experienced programmers.  Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space requirements!  In short the $20,000/year programmer may well be 10 times as productive as the $10,000/year one.  The converse may be true, too.  The data showed no correlation whatsoever between experience and performance. (I doubt if that is universally true.)

Make the effort to find, work with, hire, or -- better yet -- count yourself among those programmers who can help teams avoid walking into the ant death spiral in the first place.  Address lingering issues and inefficiencies as soon as possible; fixing bad code early can yield huge gains in agility and flexibility down the line.  Never be afraid to break the cycle and call out bad code and poor practices.

Filed under: DevLife 2 Comments

Failing (Gracefully)

Posted by Charles Chen

(Alternate title: Failing Productively)

I posted some snippets from a recent interview with Fred Brooks in the August issue of Wired (by the way, I'm working through his latest compilation of essays, The Design of Design).

I'll repost the relevant bits here:

KK: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?

FB: You can learn more from failure than success.  In failure you're forced to find out what part did not work.  But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.

I think this is an important lesson.  I've written about this topic before in a post about, of all things, World of Warcraft.

From the Wired article:

Where traditional learning is based on the execution of carefully graded challenges, accidental learning relies on failure. Virtual environments are safe platforms for trial and error. The chance of failure is high, but the cost is low and the lessons learned are immediate.

To expand on this, in software, I think it's important to have lots of little failures.  This is the only way to discover which solutions work and which don't (hopefully on the path to a solution that does!).  In my book, failure is good; it's a necessary part of the learning process (if I'm not failing, I'm probably not doing anything interesting or challenging).  I expect to fail and I expect other developers that I work with to fail.  My estimates even account for failure.  The important thing, however, is to actually examine your failures and to understand why you've failed.  More than that, it's important to understand how to fail.  The key is to fail early, fail in small, isolated scenarios, and extract from that some sense of what will work and what will not; we call this iterating or prototyping or iterating with prototypes.  Then, on a macro scale, examine your work once a project is done, identify what you did wrong, what was painful, and what could have been done better -- and actually make the effort to improve.

Brooks also expands on this in The Design of Design.  In chapter 8, "Rationalism versus Empiricism in Design", he writes:

Can I, by sufficient thought alone, design a complex object correctly?  This question, particularized to design, represents a crux between two long-established philosophical systems.  Rationalism and empiricism.  Rationalists believe I can; empiricists believe I cannot.

The empiricist believes that man is inherently flawed, and subject repeatedly to temptation and error.  Anything he makes will be flawed.  The design methodology task, therefore, is to learn how to determine the flaws by experiment, so that one can iterate on the design.

Brooks boldly states: "I am a dyed-in-the-wool empiricist."  I'm in Brooks' camp; I'd definitely consider myself an empiricist.  It's evident in my sandbox directory, where hundreds of little experiments live that I use to rapidly iterate on an idea (and isolate the failures).  If you're an empiricist, then -- as Brooks implies -- iterative models of design and development come naturally.  I find it more productive to go through a series of quick, small prototypes and experiments to identify the failures than to end up discovering one big failure (or lots of small failures) late in a project!  As much as we'd like software engineering to be a purely mechanical process (say, an assembly line in an automotive plant), I don't think that can ever be the case.

So then it follows: if designers and developers work best with an empiricist view of the world, why do we continue to design, plan, budget, and schedule projects using a waterfall approach?  Why do we continue to use a model that does not allow for failure in design or implementation, yet cannot actually prevent failure?  "Sin."  As Brooks writes in chapter 4, "Requirements, Sin, and Contracts":

The one-word answer is sin: pride, greed, and sloth... Because humans are fallen, we cannot trust each other's motivations.  Because humans are fallen, we cannot communicate perfectly.

For these reasons, "Get it in writing."  We need written agreements for clarity and communication; we need enforceable contracts for protection from misdeeds by others and temptations for ourselves.  We need detailed enforceable contracts even more when the players are multi-person organizations, not just individuals.  Organizations often behave worse than any member would.

So it seems that the necessity for contracts best explains the persistence of the Waterfall Model for designing and building complex systems.

I find that quite disappointing and pessimistic and yet, full of truth.

On a recent project, we failed to launch the project entirely, even after months of designing, design reviews, sign-offs, and discussions.  I had already started writing some framework-level code, fully anticipating the project starting within a matter of weeks after the design had been scrutinized ad nauseam and "finalized".  The client insisted on a rigid waterfall approach and wanted to see the full solution in design documents upfront.  As absurd as it sounds, the client had already spent more on design artifacts (documents and UML diagrams) by this point than they had budgeted for delivery (development, testing, validation, and deployment).  It was an impossible objective to start with, but we obliged as an organization despite my own internal protests.  Tedious, micro-level designs were constructed and submitted, but to what end?  The project was scheduled to go live this April.  It is now August and, after a change of vendors, it isn't even close to getting off the ground.  Instead of many micro-failures along the path to success, this client's fear of failure (embodied by their goal of designing out all of the risk) has led them down the path of one big failure.

So the question then is: how can we overcome this?  How do you negotiate and write a contract to build a solution iteratively?  How can you effectively build the relationship of trust needed to break down the sins and the communication barriers?  Brooks touches upon various models and why they work, but doesn't necessarily offer much insight or guidance into how to overcome the "sins" while still working within an enforceable contract.  This, I think, is an important lesson to learn, not just for individuals but for organizations.  A certain level of failure must be acceptable and, in fact, encouraged; this is essentially what iterative design and development means: iterate quickly and find what does and doesn't work.  Make many small mistakes early instead of finding big mistakes in your design or assumptions later.

Footnote: I'm still working through the book and, so far, it has been a great read.


Laptop Buying – For Developers

Posted by Charles Chen

About a year ago, I caught on to Dell's refurbished laptops over at Dell Outlet, and since then I've purchased a total of four laptops from there; each one has worked out great.

My first purchase was a Dell Latitude E6400 which I used as a primary development machine as I was traveling heavily.  At the time, as configured, the laptop that I acquired was over $500 cheaper than a brand new laptop from their business channel with the addition of a 15% off coupon (which they throw out there all the time; you can check their Twitter stream for updates).  That's a huge savings.  I used it to run Visual Studio 2008 and VMWare 6.5.  It was plenty good, but with the rollout of Visual Studio 2010 and SharePoint 2010, I definitely noticed a HUGE decrease in performance.  It was excruciating.

I was torn between upgrading the E6400, which I had owned for less than a year, by adding another 4GB of RAM and an SSD, or getting a new laptop -- but it just so happened that my mom needed a laptop for some contract work she picked up.  So I turned to Dell Outlet again and picked up a Core-i7 packing Latitude E6410, purchased an extra 4GB of RAM (for a total of 8GB), a Mushkin Callisto Deluxe from Newegg (a SandForce-based SSD), and a second drive tray from NewModeUS for somewhere around $1600 (note that this includes almost $80 in shipping and taxes from Newegg and NewModeUS) after using a 15% off coupon for the laptop.  It's a great value considering that configuring the same laptop from the business channel would have cost around $400-500 more.

The E6410, with the 8GB of RAM and the Callisto SSD, is able to lay down some serious computing power.  It handles my SharePoint 2010 Enterprise VM without breaking a sweat.  Visual Studio 2010 is far more usable now as well.  As I almost never use my DVD drive, I swapped it out for a Western Digital Scorpio Black (at $80 for 7200RPM and 320GB, it can't be beat in terms of price/performance) and store all of my large files and VM images on the second drive.

I've also purchased an E4310 for my wife this year.  My experience with the E-class Latitudes from Dell Outlet has been so overwhelmingly positive that it was a no-brainer.  It's a great little machine for the road warrior developer, and now that I've felt the heft and the size, I'd seriously consider one myself (although it doesn't have an option for a Core i7 CPU -- i3 and i5 only), especially since NewModeUS also has a drive tray for the E4310.  She tends to use laptops for far longer than I do 😀 Her last one lasted about five years, so I hope this one lasts at least as long.

Refurbished? I'm not really sure what this means.  It's a pretty broad term, I guess, but considering that I got my E6410 in July and the laptop itself was released only in April or May, I figured it had to be in pretty good shape.  How much wear could a laptop accumulate in two months?  My guess is that refurbished laptops fall into one of a few scenarios:

  1. Ordered too many -- perhaps a hiring freeze or some employees were let go before IT was notified.
  2. Not needed anymore -- perhaps a company went bankrupt or went out of business?
  3. Some malfunctioning component -- maybe the power supply didn't work or the video card was wonky and the whole chassis was returned.
  4. Misconfigured -- IT department receives shipment and finds that a batch of the laptops were misconfigured with the wrong CPU or missing other features.

I don't know the answer and I don't know why my laptop is "refurbished", but for all intents and purposes, when I pulled it out of the box, it was brand spanking new; no wear to speak of.

Dell E64xx.  I'd like to take a moment to reflect on these laptops.  I spent quite a bit of time looking into the offerings from HP as well -- in particular, the HP EliteBook 8440w and 8540w.  Ultimately, having had my experience with the E6400 the first time around and seeing the build quality of the E-class Latitudes, it was hard to justify shelling out the additional premium for the HP units (the pretty consistent 15% off coupons for the Latitudes at Dell Outlet are a big incentive).  Given that the performance difference between the two would be largely marginal, I stuck with the E-class laptop once I found out about NewModeUS (Dell doesn't let you configure a laptop with two 2.5" hard drives the way I wanted, and that was one of my key criteria as I keep several multi-GB VM images on my laptop).

Overall, these laptops have been a joy to work with -- far better than the Lenovo T series laptops (which my sister purchased despite my suggestions and which I use for some clients).  The screen is bright, the connectivity is great (though there's no USB3, it does have eSATA and a DisplayPort connector), the keyboard is excellent (especially with the backlighting), the web cam and microphone are excellent, it has a pointer "nipple", and the build quality is top notch.  I regularly pick up the laptop one-handed and there's little discernible flex; the chassis is very rigid.  I also like that the system is so easy to customize for the do-it-yourselfer.  This allows you to buy a cheap chassis (focus on the CPU) and simply replace the RAM and the HDD.  The entire underside (a thin, magnesium alloy plate) is held in by one screw (to my surprise).

Even with the Core-i7 onboard, it isn't any noisier nor does it run appreciably hotter than my Core 2 Duo packing E6400.

I've also come to really like the overall design of the E-class Latitudes.  They're relatively thin, simple, and classy looking.  Much better looking than the Lenovos.

Dual Core or Quad Core? I struggled with this for a while as I was heavily considering one of the quad core Core-i7 processors.  However, I'm glad I chose the dual core.  I've found the performance to be excellent and the price, heat, and battery life trade-offs to be the big win.  Generally speaking, in development, it would seem that your limiting factors are the disk speed and RAM rather than the number of physical cores.  Given that the dual core CPUs have faster physical cores than the quad core CPUs, my feeling is that one is probably better off with the dual core Core i7 CPUs for a development laptop.

There was some good discussion in a thread over at NotebookReview.com with great insight on the topic -- a highly recommended read for developers in the same dual core vs. quad core quandary I was in.

At the time, I was also thinking that having a quad core would help in terms of the VM (I was getting terrible performance on my SharePoint 2010 VM) by being able to assign two cores to the VM, but the VMWare documentation seems to advise against this (can't find it now, but there was a whitepaper on this very topic) in most scenarios.  In practice, with the 8GB of RAM and the SSD, the dual core Core-i7 has proven to be more than enough.

Suggestions for Developers. For any developers looking to get your own laptops or for small development shops, I'd definitely recommend looking at Dell Outlet and the E6410 and E4310 laptops.  Wait for the 15% off coupons and you'll get yourself a steal.  For the time being, unless you plan on getting the top of the line quad core Core i7 and you aren't concerned about heat or battery life, I'd stick with the dual core Core i5 or Core i7 CPUs.

Here's what I would do (once I've got a 15% off coupon code):

  1. Buy the chassis with the best CPU and ancillary features that are important to you (web cam, battery size, BlueTooth, Windows 7, x64, etc.) that you can find in their database.  For the most part, disregard the HDD, even if it comes equipped with an SSD.  You can kind of disregard the RAM, but look for something that has 4GB in one slot.
  2. Buy a SandForce-based SSD (the Callisto is a great one -- I've already purchased two).  You can check LogicBuy.com as amazing deals do occasionally surface.  Target at least 120GB.
  3. Buy an extra 4GB of RAM from Newegg.
  4. Buy a drive tray from NewModeUS for your chassis (do note that the drive tray is an actual SATA interface -- WIN!).
  5. Buy a Western Digital Scorpio Black HDD and plug that into your new drive tray (Amazon has good prices if you have Prime).  Use this drive to store your large files and your VMs (store your source files on the SSD for speed).
  6. Buy an external enclosure for whatever drive you take out of the chassis.  I've used the ACOMDATA Tango enclosures (see my review at the link) which supports eSATA.  Use this as an external drive or for backups.
  7. Do a clean install with the SSD as the primary.
  8. Once you have your system reinstalled, be sure to change the write caching policy to improve performance on the disk in the tray.  Follow these steps:
    1. Right click on Computer
    2. Select Manage
    3. Click Disk Management
    4. Right click on the disk and select Properties
    5. In the Hardware tab, select the disk and click Properties
    6. In the new dialog, select the Policies tab
    7. Here, you should enable write caching and you can also turn off the Windows write cache buffer flushing if you want.  Since it's essentially an internal drive now (unless you plan on hot swapping it) with battery backup, it should be pretty safe (but do so at your own risk!)

Write caching configuration

I'm not sure how the Seagate Momentus XT hybrid drive does with the kind of large VM files you'd be working with, but I've had pretty good success with the Scorpio Black.

Suggestions for Dell. Get some better web developers.  Seriously.  The Dell Outlet site is barely usable.  It was terrible before they fixed it up, and somehow they've made it prettier but much harder to use -- I wouldn't have thought that possible given the state of the site when I first used it.

With a bit of patience (waiting for the coupon), luck (finding the right configuration for your needs), and elbow grease (upgrading a few components yourself), you'll have yourself a killer development machine at a great, budget friendly price.  My E6410 is now my primary and only development machine.

Filed under: DevLife No Comments

Lessons from Fred Brooks

Posted by Charles Chen

Brooks is one of the writers I most revere on the subject of software engineering. The basic lessons in The Mythical Man Month are so obvious and fundamental, yet often obscured or forgotten in many of the projects that I've worked on. Certainly, even this classic is "no silver bullet", as Brooks himself would concede, but it offers sage advice for aspiring developers and architects.

In this month's Wired magazine (8.10), he dishes some more wisdom in an interview with Wired's Kevin Kelly.

KK: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?
FB: You can learn more from failure than success.  In failure you're forced to find out what part did not work.  But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.

KK: In your experience, what's the best process for design?
FB: Great design does not come from great processes; it comes from great designers.

Both these points resonate with me and I think the last point is particularly salient.  Brooks highlights an example in Steve Jobs:

KK: You're a Mac user.  What have you learned from the design of Apple products?
FB: Edwin Land, inventor of the Polaroid camera, once said that his method of design was to start with a vision of what you want and then, one by one, remove the technical obstacles until you have it.  I think that's what Steve Jobs does.  He starts with a vision rather than a list of features.

Brooks' The Mythical Man Month and The Design of Design should be on every developer, architect, and IT project manager's reading list.

Filed under: DevLife No Comments

Moving to WordPress and WebFaction

Posted by Charles Chen

For some time, I've had hosting with WebFaction for some personal python+django projects I was (well, am still...) working on while my main blog was hosted with ServerIntellect (a great hosting company, by the way).

While WebFaction breaks one of my steadfast rules of hosting by not having a plainly visible phone number (actually, I can't find one anywhere on their site), I've been incredibly pleased with the hosting overall.

I've been running my own Trac installs, SVN servers, Mercurial servers, and now WordPress for < $10/mo.

And now I'm hosting two top level domains as well.  Still < $10/mo.

While I do miss being able to call someone 24-7, their documentation on how to install and configure the various apps is great, their reps are active in the support forums, there's a plethora of out-of-the-box applications (django, Trac, SVN -- just to name a few), and I've gotten email responses to newb-ish Linux questions in about 2-3 minutes.  Overall, it's just a more useful platform for me.

The whole blog move seemed daunting when I approached it, but I really just needed to carve out a small chunk of time and get it done.  All in all, I had my content migrated and visible in, I'd say, less than 20 minutes.

I found a few online resources helpful in getting it all done.

At the end of the day, it wasn't nearly as dreadful as I feared it would be, and it was ultimately worth it to stop paying for two hosting providers and to leave behind the pains of the aging dasBlog engine.


The Math of Mediocrity

Posted by Charles Chen

Professionally, almost nothing aggravates me more than the Math of Mediocrity.  The only thing worse than observing failure based on the Math of Mediocrity is having to actively participate in it.

Steve Jobs’ Parable of the Concept Car is a perfect illustration of how companies fail to execute because they fall for the Math of Mediocrity:

"Here's what you find at a lot of companies," he says, kicking back in a conference room at Apple's gleaming white Silicon Valley headquarters, which looks something like a cross between an Ivy League university and an iPod. "You know how you see a show car, and it's really cool, and then four years later you see the production car, and it sucks? And you go, What happened? They had it! They had it in the palm of their hands! They grabbed defeat from the jaws of victory!

"What happened was, the designers came up with this really great idea. Then they take it to the engineers, and the engineers go, 'Nah, we can't do that. That's impossible.' And so it gets a lot worse. Then they take it to the manufacturing people, and they go, 'We can't build that!' And it gets a lot worse."

When Jobs took up his present position at Apple in 1997, that's the situation he found. He and Jonathan Ive, head of design, came up with the original iMac, a candy-colored computer merged with a cathode-ray tube that, at the time, looked like nothing anybody had seen outside of a Jetsons cartoon. "Sure enough," Jobs recalls, "when we took it to the engineers, they said, 'Oh.' And they came up with 38 reasons. And I said, 'No, no, we're doing this.' And they said, 'Well, why?' And I said, 'Because I'm the CEO, and I think it can be done.' And so they kind of begrudgingly did it. But then it was a big hit."

This doesn’t just happen in automobile manufacturing or engineering; it occurs just as often in software development.  One particular example is what I call the “Communism of Failure”.  In this case, the best solutions and the best ideas, the ones that will benefit the end users the most, the ones that will get users excited, the ones that will help people be more productive and more efficient, are…shelved.  Why?  Because they can’t be supported by the commoners in tech support.

Certainly, this is a valid concern; it would be absolutely foolish to believe otherwise.  A solution is useless if only the brightest and most exceptional minds can understand it, deconstruct it, rebuild it, and fix it.  But proper software engineering and project management offer ways to mitigate this through process and practice: pair programming, strict guidelines and expectations for documentation, well-documented common coding styles and techniques, leveraging automation and code generation, and encouraging reuse of code assets by thinking in terms of frameworks.  The point is, building an exceptional solution is not exclusive of building a sustainable solution.

The Legalist philosopher Han Fei-Tzu wrote:

If it were necessary to rely on a shaft that had grown perfectly straight, within a hundred generations there would be no arrow. If it were necessary to rely on wood that had grown perfectly round, within a thousand generations there would be no cart wheel. If a naturally straight shaft or naturally round wood cannot be found within a hundred generations, how is it that in all generations carriages are used and birds shot? Because tools are used to straighten and bend. But even if one did not rely on tools and still got a naturally straight shaft or a piece of naturally round wood, a skillful craftsman would not value this. Why? Because it is not just one person that needs to ride and not just one arrow that needs to be shot.

Indeed, a solution catered towards the brightest of minds is just as bad as a solution catered towards the most common of capabilities.  But with the right tools and the application of bending and straightening, we can bridge the two.  It's not a compromise, but rather an effort to build a framework that allows excellence to propagate.

As in Jobs’ parable, solution and enterprise architects succumb to the Math of Mediocrity when they cede to the concerns of the plebeians; they sacrifice excellence for “good enough” so that tech support can support the solution.  On the one hand, Solution A solves the problem for the end user in 1 click.  On the other hand, Solution B requires the end user to perform more than 20 clicks to complete the same operation but is easier for tech support.  Which is better?  Jobs would surely side with Solution A, even if it’s the more technically complex solution, because it delivers a better user experience and improves efficiency and productivity.  Amazon loved Solution A so much that they gave it a name and patented it.

The problem arises because of ambiguity in calculating the true cost of Solution A compared to Solution B.  Solution A may require $300/HR consultants and twice as much time to implement.  Solution B may require $150/HR consultants and cost only half as much as Solution A.  These costs are concrete and easy to quantify and grasp.  What escapes the calculus is the cost of lost productivity and efficiency: the end users, who have to use the application day in and day out, suffer through an inferior solution, all to avoid the marginal costs of contracting better developers, working smarter, building from a framework, and hiring more competent tech support.

The same math carries over to the support staff, who are a marginal percentage of the overall workforce of any large organization, project, or initiative.  The question is whether it’s better to hire more competent support staff who can maintain a more complex but better solution or to hire less competent support staff at lower cost.  And again, the question comes back to productivity and efficiency: compare the gains made across an organization of tens of thousands of people against the additional costs associated with a few dozen people.  No question: I'd improve the user experience, which not only improves productivity and efficiency if done right but also aids adoption and uptake; cost is but one metric of success, and perhaps not even the most important one.  In the end, it really doesn't matter how much you saved by approaching the problem with Solution B; if the end result is a clunky, hard-to-use, inefficient productivity drain, then the project has failed, regardless of how much money was saved by catering to the mediocre.
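To make this concrete, here's a rough back-of-the-envelope sketch.  All of the figures beyond the consultant rates are hypothetical, chosen only to illustrate the shape of the calculation:

```python
# Hypothetical figures illustrating how the "cheap" solution can cost more.
# Solution A: $300/hr consultants, twice the build time, 1-click workflow.
# Solution B: $150/hr consultants, half the build time, 20-click workflow.

HOURLY_RATE_A, HOURS_A = 300, 2_000
HOURLY_RATE_B, HOURS_B = 150, 1_000

build_cost_a = HOURLY_RATE_A * HOURS_A  # $600,000 up front
build_cost_b = HOURLY_RATE_B * HOURS_B  # $150,000 up front

# The part that escapes the calculus: users pay for Solution B every day.
USERS = 10_000                            # hypothetical enterprise user base
EXTRA_SECONDS_PER_OPERATION = 30          # cost of 20 clicks vs. 1 click
OPERATIONS_PER_DAY = 5
WORK_DAYS_PER_YEAR = 250
LOADED_COST_PER_USER_SECOND = 50 / 3600   # ~$50/hr fully loaded labor cost

wasted_seconds_per_year = (
    USERS * EXTRA_SECONDS_PER_OPERATION * OPERATIONS_PER_DAY * WORK_DAYS_PER_YEAR
)
productivity_cost_b = wasted_seconds_per_year * LOADED_COST_PER_USER_SECOND

print(f"Solution A up-front: ${build_cost_a:,.0f}")
print(f"Solution B up-front: ${build_cost_b:,.0f}")
print(f"Solution B lost productivity per year: ${productivity_cost_b:,.0f}")
```

With these made-up numbers, Solution B's $450,000 in up-front savings is dwarfed by roughly $5.2 million per year in lost user time; the "expensive" solution pays for itself in a matter of weeks.  The specific inputs are debatable, but the asymmetry is the point: the build cost is paid once by a few dozen people, while the interaction cost is paid daily by thousands.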

A solution architected and designed around a compromise for the average can work, but the problem must be approached differently.  Leverage better developers from the get-go, make documentation a priority, standardize code, leverage patterns, ensure that the right tools and platforms are in place, build frameworks to support the most common scenarios, use pair programming and code reviews to ensure cross-pollination of skills and knowledge, and make learning and education a primary job concern.  Find solutions through engineering and process, not through capitulation to the lowest common denominator.

Update: An article in the New York Times by Damon Darlin this weekend caught my attention and adds another layer to this:

Remember those lines? Back when commissars commanded the Soviet Union’s economy like Knute commanding the tides, people would wait for hours in long queues for free bread. Although the bread was free, people paid for it with their time.

To economists, the long lines were a real-life example of the market requirement that payment be made one way or another — in money or in time. (In this country, the long lines would be for an Apple gadget, which is neither cheap nor scarce. But explaining that mystery is for another time.)

Paying with time rather than money seems just as common on the Web...Technology could very well make the Soviet bread line disappear. Do you remember how long it took to do a Google search a dozen years ago, when the service started? Probably not, but Google engineers calculate that their refinements have saved users a billion seconds a day. Using Google to quickly make the calculation, that comes out to about 1,800 lifetimes.

Indeed, the question is whether a business wants to pay in money or in time (and for businesses, time is money).  For that reason, enterprise architects should think long and hard about the priorities of the platform or solution they are architecting.  Is it just to do the minimum and keep maintenance effort and costs low?  Or is it to actually streamline and improve business processes and improve productivity and efficiency?  And how can you achieve the latter while sacrificing as little of the former as possible?

Filed under: DevLife, Rants

Philly .NET Code Camp and Windows Azure

Posted by Charles Chen

I spent half a day at the Philly .NET Code Camp and ended up attending only two sessions (the weather was too nice outside to be sitting inside on a Saturday :-D).  By chance, I saw Alvin Ashcraft's name on the list of presenters when I showed up and was hoping I'd get to meet him in person, but he was seemingly absent from his early morning session.

One of the two sessions I attended was on Windows Azure; it was an excellent presentation given by Dave Isbitski.  I dabbled with it a bit early on during the CTP and was not particularly impressed.  Since then, I've continued to read up on it on and off.  The one thing I took away from today's session was that Azure is not enterprise ready (yet), and perhaps isn't meant to be?

To understand why, consider the account management and login experience: it's all tied to Windows Live IDs.  Yes, that's right.  Windows Live IDs.  This means that your enterprise account naming policies can't be enforced, and your enterprise password complexity and history rules don't apply.  Furthermore, what happens when the person who owns or created the Live ID leaves the company?  Perhaps she'd be nice enough to hand over the password, but what if she were hit by a bus?  What if she has a grudge?  And just how secure are Windows Live IDs in the first place?  I think this is a big problem.

Account management is another issue.  As it stands, signing up requires entering credit card information, which doesn't scream "enterprise" to me.  You'd think there would be a way to link accounts to company-level billing accounts (I dunno, maybe via a company's MSDN license?).  There's also no concept of hierarchical account linking and instance management: I can't associate multiple Live IDs with one account and set granular permissions on the instances each account can control (for example, Steve's account manages these two worker roles while Joe's account manages this web role).  What it boils down to is the wild, wild west of account management; there's no global view for a company to monitor usage across multiple accounts.

While there are a host of other issues that affect enterprise adoption (the inability to create data and image backups, for example; SQL log exports and external replication are not supported), perhaps the biggest one, in my opinion, is the big question mark of how these systems can be validated.  Whether you're working with clients in financial services, insurance, or life sciences (like me), enterprise systems need to be validated and certified.  I see this as a big challenge for adoption in life sciences due to the strict validation requirements for software systems.

At the end of the day, I can kind of see where Microsoft is going with this if you compare it to Google Apps, for example.  But the key differentiator to me has always been that Microsoft represents the enterprise while Google better represents the entrepreneur and the tinkerer.  While both approaches are needed, it does add some difficulty in evaluating Azure for enterprise usage, given that the current implementations of some of the core features are not very enterprise friendly.

That said, it's still an exciting platform.  I've got a few things brewing and I'll be keeping the blog updated as I complete my experiments.

Filed under: .Net, DevLife