About a year ago, I caught on to Dell's refurbished laptops over at Dell Outlet, and since then I've purchased a total of three laptops from there; each one has worked out great.
My first purchase was a Dell Latitude E6400, which I used as a primary development machine as I was traveling heavily. At the time, as configured, the laptop was over $500 cheaper than a brand new laptop from their business channel after a 15% off coupon (which they throw out there all the time; you can check their Twitter stream for updates). That's a huge savings. I used it to run Visual Studio 2008 and VMware 6.5. It was plenty good, but with the rollout of Visual Studio 2010 and SharePoint 2010, I noticed a HUGE decrease in performance. It was excruciating.
I was torn between upgrading the E6400, which I'd had for less than a year, by adding another 4GB of RAM and an SSD, or getting a new laptop. But it just so happened that my mom needed a laptop for some contract work that she picked up. So I turned to Dell Outlet again and picked up a Core i7-packing Latitude E6410 (using a 15% off coupon), an extra 4GB of RAM (for a total of 8GB), a Mushkin Callisto Deluxe from Newegg (a SandForce-based SSD), and a second drive tray from NewModeUS, all for somewhere around $1600 (note that this includes almost $80 in shipping and taxes from Newegg and NewModeUS). It's a great value considering that configuring the same laptop from the business channel would have cost around $400-500 more.
The E6410, with the 8GB and the Callisto SSD, is able to lay down some serious computing power. It handles my SharePoint 2010 Enterprise VM without breaking a sweat. Visual Studio 2010 is far more usable now as well. As I almost never use my DVD drive, I swapped it out for a Western Digital Scorpio Black (at $80 for 7200RPM and 320GB, it can't be beat in terms of price/performance) and store all of my large files and VM images on the second drive.
I've also purchased an E4310 for my wife this year. My experience with the E-class Latitudes from Dell Outlet has been so overwhelmingly positive that it was a no-brainer. It's a great little machine for the road warrior developer, and now that I've felt the heft and the size, I'd seriously consider one myself (although it doesn't have an option for a Core i7 CPU -- i3 and i5 only), especially since NewModeUS also has a drive tray for the E4310. She tends to use laptops for far longer than I do; her last one lasted about five years, so I hope that this one can last at least as long.
Refurbished? I'm not really sure what this means. It's pretty broad, I guess, but considering that I got my E6410 in July and the laptop itself was released only in April or May, I figured it had to be in pretty good shape. How much wear could a laptop accumulate in two months? My guess is that the refurbished laptops fall into one of a few scenarios:
- Ordered too many -- perhaps a hiring freeze or some employees were let go before IT was notified.
- Not needed anymore -- perhaps a company went bankrupt or shut down.
- Some malfunctioning component -- maybe the power supply didn't work or the video card was wonky and the whole chassis was returned.
- Misconfigured -- IT department receives shipment and finds that a batch of the laptops were misconfigured with the wrong CPU or missing other features.
I don't know the answer and I don't know why my laptop is "refurbished", but for all intents and purposes, when I pulled it out of the box, it was brand spanking new; no wear to speak of.
Dell E64xx. I'd like to take a moment to reflect on these laptops. I spent quite a bit of time looking into the offerings from HP as well, in particular the HP EliteBook 8440w and 8540w. Ultimately, having had my experience with the E6400 the first time around and having seen the build quality of the E-class Latitudes, it was hard to justify shelling out the additional premium for the HP units (the pretty consistent 15% off coupons for the Latitudes at Dell Outlet are a big incentive). Given that the performance difference between the two would be largely marginal, I stuck with the E-class laptop once I found out about NewModeUS (Dell doesn't let you configure a laptop with two 2.5" hard drives the way I wanted, and that was one of my key criteria as I keep several multi-GB VM images on my laptop).
Overall, these laptops have been a joy to work with. Far better than the Lenovo T series laptops (which my sister purchased despite my suggestions and which I use for some clients). The screen is bright, the connectivity is great (though no USB3, it does have eSATA and a DisplayPort connector), the keyboard is excellent (especially with the backlighting), the web cam and microphone are excellent, it has a pointer "nipple", and the build quality is top notch. I regularly pick up the laptop one-handed and there's little discernible flex; the chassis is very rigid. I also like that the system is so easy to customize for the do-it-yourselfer: you can buy a cheap chassis (focus on the CPU) and simply replace the RAM and the HDD. The entire underside (a thin, magnesium alloy plate) is held in by one screw (to my surprise).
Even with the Core i7 onboard, it isn't any noisier, nor does it run appreciably hotter, than my Core 2 Duo-packing E6400.
I've also come to really like the overall design of the E-class Latitudes. They're relatively thin, simple, and classy looking. Much better looking than the Lenovos.
Dual Core or Quad Core? I struggled with this for a while as I was heavily considering one of the quad core Core i7 processors. However, I'm glad I chose the dual core. I've found the performance to be excellent and the price, heat, and battery life advantages to be the big win. Generally speaking, in development, it would seem that your limiting factors are disk speed and RAM rather than the number of physical cores. Given that the dual core CPUs have faster physical cores than the quad core CPUs, my feeling is that one is probably better off with a dual core Core i7 for a development laptop.
There was some good discussion on a thread over at NotebookReview.com with great insight on the topic. Highly recommended read for developers in the same quandary as I was on dual core vs. quad core.
At the time, I was also thinking that having a quad core would help in terms of the VM (I was getting terrible performance on my SharePoint 2010 VM) by being able to assign two cores to the VM, but the VMware documentation seems to advise against this in most scenarios (can't find it now, but there was a whitepaper on this very topic). In practice, with the 8GB of RAM and the SSD, the dual core Core i7 has proven to be more than enough.
Suggestions for Developers. For any developers looking to get your own laptops or for small development shops, I'd definitely recommend looking at Dell Outlet and the E6410 and E4310 laptops. Wait for the 15% off coupons and you'll get yourself a steal. For the time being, unless you plan on getting the top of the line quad core Core i7 and you aren't concerned about heat or battery life, I'd stick with the dual core Core i5 or Core i7 CPUs.
Here's what I would do (once I've got a 15% off coupon code):
- Buy the chassis in their database with the best CPU and the ancillary features that are important to you (web cam, battery size, Bluetooth, Windows 7, x64, etc.). For the most part, disregard the HDD, even if it comes equipped with an SSD. You can kind of disregard the RAM, but look for something that has 4GB in one slot.
- Buy a SandForce-based SSD (the Callisto is a great SSD -- I've already purchased two of these). You can check LogicBuy.com as amazing deals do occasionally surface. Target at least 120GB.
- Buy an extra 4GB of RAM from Newegg.
- Buy a drive tray from NewModeUS for your chassis (do note that the drive tray is an actual SATA interface -- WIN!).
- Buy a Western Digital Scorpio Black HDD and plug that into your new drive tray (Amazon has good prices if you have Prime). Use this drive to store your large files and your VMs (store your source files on the SSD for speed).
- Buy an external enclosure for whatever drive you take out of the chassis. I've used the ACOMDATA Tango enclosures (see my review at the link), which support eSATA. Use this as an external drive or for backups.
- Do a clean install with the SSD as the primary.
- Once you have your system reinstalled, be sure to change the write caching policy to improve performance on the disk in the tray. Follow these steps:
- Right click on Computer
- Select Manage
- Click Disk Management
- Right click on the disk and select Properties
- In the Hardware tab, select the disk and click Properties
- In the new dialog, select the Policies tab
- Here, you should enable write caching, and you can also turn off the Windows write cache buffer flushing if you want. Since it's essentially an internal drive now (unless you plan on hot swapping it) and it's backed by the laptop's battery, it should be pretty safe (but do so at your own risk!).
I'm not sure how the Seagate Momentus XT hybrid drive fares with the kind of large files you'd be working with for VMs, but I've had pretty good success with the Scorpio Black.
Suggestions for Dell. Get some better web developers. Seriously. The Dell Outlet site is barely usable. It was terrible before they fixed it up; somehow they've made it prettier but even harder to use -- I wouldn't have thought that possible given the state the site was in when I first used it.
With a bit of patience (waiting for the coupon), luck (finding the right configuration for your needs), and elbow grease (upgrading a few components yourself), you'll have yourself a killer development machine at a great, budget friendly price. My E6410 is now my primary and only development machine.
Brooks is one of my most revered writers on the subject of software engineering. The basic lessons in The Mythical Man Month are so obvious and fundamental, yet often obscured or forgotten in many of the projects that I've worked on. Certainly, even this classic is "no silver bullet", as Brooks himself would concede, but it offers sage advice for aspiring developers and architects.
In this month's Wired magazine (8.10), he dishes some more wisdom in an interview with Wired's Kevin Kelly.
KK: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?
FB: You can learn more from failure than success. In failure you're forced to find out what part did not work. But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.
KK: In your experience, what's the best process for design?
FB: Great design does not come from great processes; it comes from great designers.
Both these points resonate with me and I think the last point is particularly salient. Brooks highlights an example in Steve Jobs:
KK: You're a Mac user. What have you learned from the design of Apple products?
FB: Edwin Land, inventor of the Polaroid camera, once said that his method of design was to start with a vision of what you want and then, one by one, remove the technical obstacles until you have it. I think that's what Steve Jobs does. He starts with a vision rather than a list of features.
Brooks' The Mythical Man Month and The Design of Design should be on every developer, architect, and IT project manager's reading list.
For some time, I've had hosting with WebFaction for some personal python+django projects I was (well, am still...) working on while my main blog was hosted with ServerIntellect (a great hosting company, by the way).
While WebFaction breaks one of my steadfast rules of hosting by not having a plainly visible phone number (actually, I can't find one anywhere on their site), I've been incredibly pleased with the hosting overall.
I've been running my own Trac installs, SVN servers, Mercurial servers, and now WordPress for < $10/mo.
And now I'm hosting two top level domains as well. Still < $10/mo.
While I do miss being able to call someone 24-7, their documentation on how to install and configure the various apps is great, their reps are active in the support forums, the plethora of out-of-the-box applications (django, Trac, SVN -- just to name a few) is a huge plus, and I've gotten email responses to newb-ish Linux questions in about 2-3 minutes. But overall, it's just a more useful platform for me.
The whole blog move seemed daunting when I approached it, but I really just needed to slice out a small chunk of time and get it done. All in all, I had my content migrated and visible in, I'd say, less than 20 minutes.
I found these resources helpful in getting it all done:
- Aaron Lerch - Breaking Up: Moving Blog Engines
- DasBlog to BlogML Converter (extract and convert my current blog contents; posts, categories, comments -- everything)
- Sean Patterson's BlogML Importer Plugin
- John Godley's Redirection Plugin (redirect my .aspx URLs; I couldn't get it to work using .htaccess alone)
At the end of the day, it wasn't nearly as dreadful as I feared it would be, and it was ultimately worth it to stop paying for two hosting providers and to escape the pains of the aging dasBlog engine.
Professionally, almost nothing aggravates me more than the Math of Mediocrity. The only thing worse than observing failure based on the Math of Mediocrity is having to actively participate in it.
Steve Jobs’ Parable of the Concept Car is a perfect illustration of how companies fail to execute because they fall for the Math of Mediocrity:
"Here's what you find at a lot of companies," he says, kicking back in a conference room at Apple's gleaming white Silicon Valley headquarters, which looks something like a cross between an Ivy League university and an iPod. "You know how you see a show car, and it's really cool, and then four years later you see the production car, and it sucks? And you go, What happened? They had it! They had it in the palm of their hands! They grabbed defeat from the jaws of victory!
"What happened was, the designers came up with this really great idea. Then they take it to the engineers, and the engineers go, 'Nah, we can't do that. That's impossible.' And so it gets a lot worse. Then they take it to the manufacturing people, and they go, 'We can't build that!' And it gets a lot worse."
When Jobs took up his present position at Apple in 1997, that's the situation he found. He and Jonathan Ive, head of design, came up with the original iMac, a candy-colored computer merged with a cathode-ray tube that, at the time, looked like nothing anybody had seen outside of a Jetsons cartoon. "Sure enough," Jobs recalls, "when we took it to the engineers, they said, 'Oh.' And they came up with 38 reasons. And I said, 'No, no, we're doing this.' And they said, 'Well, why?' And I said, 'Because I'm the CEO, and I think it can be done.' And so they kind of begrudgingly did it. But then it was a big hit."
This doesn't just happen in automobile manufacturing or engineering; it occurs plenty often in software development as well. One particular example is what I call the "Communism of Failure". In this case, the best solutions and ideas, the ones that will benefit the end users the most, the ones that will get the users excited, the ones that will help people be more productive or more efficient, are...shelved. Why? Because they can't be supported by the commoners in tech support.
Certainly, this is a valid concern; it would be absolutely foolish to believe otherwise. A solution is useless if only the brightest and most exceptional of minds can understand it, deconstruct it, rebuild it, fix it, and so on. But proper software engineering and project management offer ways to mitigate this through process and practice: pair programming, strict guidelines and expectations for documentation, well documented common coding styles and techniques, leveraging automation and code generation, and encouraging reuse of code assets by thinking in terms of frameworks. The point is, building an exceptional solution does not preclude building a sustainable solution.
If it were necessary to rely on a shaft that had grown perfectly straight, within a hundred generations there would be no arrow. If it were necessary to rely on wood that had grown perfectly round, within a thousand generations there would be no cart wheel. If a naturally straight shaft or naturally round wood cannot be found within a hundred generations, how is it that in all generations carriages are used and birds shot? Because tools are used to straighten and bend. But even if one did not rely on tools and still got a naturally straight shaft or a piece of naturally round wood, a skillful craftsman would not value this. Why? Because it is not just one person that needs to ride and not just one arrow that needs to be shot.
Indeed, a solution catered to the brightest of minds is just as bad as a solution catered to the most common of capabilities. But through the use of the right tools and the application of bending and straightening, we can bridge the two. It's not a compromise, but rather an effort to build a framework that allows excellence to propagate.
As in Jobs' parable, where solution and enterprise architects fail at the Math of Mediocrity is that they cede to the concerns of the plebeians; they sacrifice excellence for "good enough" so that tech support can support the solution. On the one hand, Solution A can solve the problem for the end user in 1 click. On the other hand, Solution B requires the end user to perform more than 20 clicks to complete the same operation but is easier for tech support. Which is better? Jobs would surely side with Solution A, even if it's the more technically complex solution, because it delivers a better user experience and improves efficiency and productivity. Amazon loved Solution A so much that they gave it a name and patented it.
The problem arises because of ambiguity in calculating the true cost of Solution A compared to Solution B. Solution A may require $300/HR consultants and twice as much time to implement; Solution B may require $150/HR consultants and cost only half as much as Solution A. These costs are concrete and easy to quantify and grasp. What escapes the calculus is the cost of lost productivity and efficiency when the end users, who have to use the application day in and day out, suffer through an inferior solution, all to avoid the marginal costs of contracting better developers, working smarter, building from a framework, and hiring more competent tech support.
The same math carries over to the support staff, who are a marginal percentage of the overall workforce for any large organization or for a particular project or initiative. The question is whether it's better to hire more competent support staff who can maintain a more complex but better solution or to hire less competent support staff at lower cost. And again, the question comes back to productivity and efficiency: compare the gains made across an organization of tens of thousands of people against the additional costs associated with a few dozen people. For me there's no question: improve the user experience, which not only boosts productivity and efficiency if done right, but also aids adoption and uptake; cost is but one metric of success and perhaps not even the most important one. In the end, it really doesn't matter how much you saved by approaching the problem using Solution B; if the end result is a clunky, hard to use, inefficient productivity drain, then the project has failed, regardless of how much money was saved by catering to the mediocre.
A solution architected and designed around a compromise for the average can work, but the problem must be approached differently. Leverage better developers from the get-go, make documentation a priority, standardize code, leverage patterns, ensure that the right tools and platforms are in place, build frameworks to support the most common scenarios, use pair programming and code reviews to ensure cross pollination of skills and knowledge, and make learning and education a primary job concern; find solutions through engineering and process, not through capitulation to the lowest common denominator.
Update: An article in the New York Times by Damon Darlin this weekend caught my attention and adds another layer to this:
Remember those lines? Back when commissars commanded the Soviet Union’s economy like Knute commanding the tides, people would wait for hours in long queues for free bread. Although the bread was free, people paid for it with their time.
To economists, the long lines were a real-life example of the market requirement that payment be made one way or another — in money or in time. (In this country, the long lines would be for an Apple gadget, which is neither cheap nor scarce. But explaining that mystery is for another time.)
Paying with time rather than money seems just as common on the Web...Technology could very well make the Soviet bread line disappear. Do you remember how long it took to do a Google search a dozen years ago, when the service started? Probably not, but Google engineers calculate that their refinements have saved users a billion seconds a day. Using Google to quickly make the calculation, that comes out to about 1,800 lifetimes.
Indeed, the question is whether a business wants to make payment in money or in time (well, for businesses, time is money). For that reason, enterprise architects should think long and hard about the priorities of the platform or solution they are architecting. Is it just to do the minimum and keep maintenance effort and costs low? Or is it to actually streamline and improve the business processes and improve productivity and efficiency? How can you achieve the latter while sacrificing minimally of the former?
I spent half a day at the Philly .NET Code Camp and ended up attending only two sessions (the weather was too nice to be sitting inside on a Saturday). By chance, I saw Alvin Ashcraft's name on the list of presenters when I showed up, so I was hoping I'd get to meet him in person. But he was seemingly absent from his early morning session.
One of the two sessions I attended was on Windows Azure; it was an excellent presentation given by Dave Isbitski. I've dabbled with it a bit early on in the CTP and was not particularly impressed. Since then, I've continued to read up on it on and off. The one thing that I took away from today's session was that Azure is not enterprise ready (yet) and perhaps isn't meant to be?
To understand why, consider your account management and login experience: it's all tied to Windows Live IDs. Yes, that's right. Windows Live IDs. This means that your enterprise password and account naming policies can't be enforced; your enterprise password complexity and history rules are not applicable. Furthermore, what happens if the person who owns the Live ID leaves the company? Perhaps she'd be nice enough to hand over the password info, but what if this person were hit by a bus? What if she has a grudge? I think this is a big problem. And just how secure are Windows Live IDs?
Account management is another issue. As it is, signing up requires entering credit card information; this doesn't scream "enterprise" to me. You'd think there would be a way to link accounts to a company's billing account (I dunno, maybe via the company's MSDN license?). There's also no concept of hierarchical account linking and instance management, which means I can't even associate multiple Live IDs with one account and set granular permissions on the instances that each account can control (for example, Steve's account manages these two worker roles while Joe's account manages this web role). What it boils down to is the wild, wild west of account management; there's no global view for a company to monitor usage across multiple accounts.
While there are a host of other issues that affect enterprise adoption (no support for SQL log exports or external replication, for example, and no way to create data and image backups), perhaps the biggest one, in my opinion, is the big question mark over how these systems can be validated. Whether you're working with clients in the financial industry, insurance, or life sciences (like me), enterprise systems will need to be validated and certified. I see this as a big challenge for adoption in life sciences due to the strict validation requirements for software systems.
At the end of the day, I can kind of see where Microsoft is going with this if you compare it to Google Apps, for example. But the key differentiator to me has always been that Microsoft represents the enterprise while Google perhaps better represents the entrepreneur and the tinkerer. While both approaches are needed, it does add some difficulty in terms of evaluating Azure for enterprise usage given that the current implementation of some of the core features is not very enterprise friendly.
That said, it's still an exciting platform. I've got a few things brewing and I'll be keeping the blog updated as I complete my experiments.
A random assortment of random thoughts (and rants!) from the trenches...
You Know You're In Trouble When...
- You have to convene three people to figure out how to create an instance of one of the core objects in your framework. I think this is directly related to having an anemic domain model: it just isn't obvious which "service" you should be calling to set the properties on the object. It seems like the whole thing would be easier if you could just call the constructor or a static initializer on the class to get an instance; this is the most basic premise of an object oriented system (and one that falls by the wayside much too often). Constructors are the most natural way to create an instance of an object; why not use them? (See the sketch after this list.)
- Your team members are afraid to update their code (in fact, they'll wait days before updating because it's always a painful, time-consuming excursion to get the codebase compiling, not to mention the environment working afterwards). This could be a symptom of many different ills. In this case, the problem is threefold:
- The source control system is painful to use. The culprit is Accurev; it is perhaps one of the worst source control systems I've ever used (not to mention that it's very obscure and uses obtuse terms for common source control actions). A quick search on Dice yields 6 results for the keyword "Accurev" while "svn or subversion" yields some 786 results. Of course, the big problem with this is that it takes an extraordinarily long time to ramp up a new addition to the team on the peculiarities of the source control system. (I still haven't figured out how to look at changesets or run "blame" on a file, or why it's so slow...)
- There are no automated unit tests for the most basic and important of functionality: data access code. The lack of a structured way to unit test your core data access code makes the entire codebase seem...fragile. Changes in code that are not regression tested tend to break things, which tends to ruin productivity. I can understand not testing code that is dependent on external libraries which are difficult to test (it really requires a lot of thinking and work to do right), but I can't understand why any team wouldn't test their core data access code.
- There is no software support for tracking breaking changes. What I mean by this is, for example, changes to a database schema or a stored procedure. The standard way some teams "resolve" this issue is by emailing people when a breaking change is entered. However, the problem with email is that it's easy to forget someone and, even if you remember everyone, it's not easy to backtrack and find all of the different email notices. For example, if I'm in the process of writing an intense piece of code, I'll ignore a breaking change and deal with it the next time I update. But by that time, there could be two or three breaking changes. It's difficult to sort these out in email and much easier to sort them out with some pretty basic software support. On FirstPoint, we used a Trac discussion to track breaking changes. Developers checking in breaking changes were required to document the steps that the other developers would need to take to ensure that the environment remained stable.
- You're worried about deadlines, but you roll off two people who've been working on your project for two years and replace them with one person who's been working on the project for two months. Fred Brooks' The Mythical Man-Month covers this pretty succinctly:
adding manpower to a late software project makes it later
The problem is that the new resource cannot possibly have the richness of experience with the existing codebase that is required to be productive right away. In a system that's sparsely documented (and by that I mean there is no documentation on the core object model), it means that a new developer has to interrupt the workstream of more seasoned developers to get anything done. This is probably okay when the going is slow and steady, but in crunch time, it becomes a big productivity issue. I know I hate being interrupted when I'm in the zone, so I personally hate to interrupt others, but in this scenario, I have no choice since there is no documentation, the codebase is huge, and it's not at all obvious how to get the data that I need.
- When there are multiple ways to set the value of a property on a core object in your model. What I mean by this is, say I have an object called Document and there are two or more ways to set the value of VersionId (each getting you a different type of value) when you use a data access object to retrieve an instance. Again, this is a byproduct of an anemic domain model. Because the rules of how to use the object are external to the object itself, the proper usage of the properties becomes open to interpretation, based on the specific service populating the object.
- Your object model is littered with stuff ending in "DAO", "Util", "Service", or "Manager". It means that you haven't really thought about your object model in terms of object interactions and the structural composition. These are suffixes that I use only when I can't think of anything better. More often than not, when I write these classes, they truly are utility classes and are usually static classes. If this is a big portion of your codebase, you have some serious problems.
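To make the contrast concrete, here's a minimal C# sketch. The Document class, its properties, and its rules are all hypothetical, made up for illustration; the point is only the shape of the API:

```csharp
using System;

// Hypothetical domain object -- names and rules invented for the example.
public class Document
{
    public string Title { get; private set; }
    public int VersionId { get; private set; }

    // The constructor states exactly what a valid instance requires;
    // no hunting through "services" to figure out which properties to set.
    public Document(string title, int versionId)
    {
        if (string.IsNullOrEmpty(title))
            throw new ArgumentException("A document requires a title.", "title");
        Title = title;
        VersionId = versionId;
    }

    // A named static initializer for a common creation scenario.
    public static Document CreateDraft(string title)
    {
        return new Document(title, 0);
    }
}

class Demo
{
    static void Main()
    {
        // One obvious way to get a valid instance -- no meeting required.
        Document draft = Document.CreateDraft("Q3 Report");
        Console.WriteLine("{0} (version {1})", draft.Title, draft.VersionId);
    }
}
```

The invariants live with the object, so there's exactly one obvious way to create one, and it's impossible to end up with a half-populated instance.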
You Can Make People Productive If...
I think the role of any senior developer, lead, or principal on a project is not to watch over everyone's shoulder and make sure that they are writing good code. I've learned pretty early on that this doesn't work; you can't control how people write code and if you try to, you'll just get your panties in a twist all the time, raise your blood pressure to unhealthy levels, and piss off everyone around you. So then the question is how can you get a group of diverse individuals with a diverse level of experience to write consistently good code?
It's a hard question and one that I'm still trying to answer. However, I've learned a few lessons from my own experiences in working with people:
- Make an effort to educate the team. This means reading assignments, group discussions, and making learning a basic requirement of the job, not an optional extracurricular activity. Pick a book of the month and commit to reading a chapter a day.
- Have code reviews regularly. One of the surest ways to help get everyone on the same page is through code reviews. The key is to keep it focused and not let the process devolve into a back-and-forth debate regarding the little things, but rather focus on the structural elements of the objects and interactions.
- The smartest guys on the team work on the most "useless" code. What I mean by "useless" here is that the code doesn't yield immediate benefits; in other words, framework code. Typically, this involves lots of interfaces, abstract classes, and lots of fancy-pants design patterns. The idea here is to make it easy for the whole team to write structurally sound code, regardless of skill level, by modeling the core interactions between objects and the core structure of the objects. I think a key problem is that project managers see this as a zero-sum activity early on in the game (the most important time to establish this type of code) when in reality, it usually returns a huge ROI when done with the right amount of forethought and proper effort to refactor when the need arises.
- Document things...thoroughly. One of the easiest ways to mitigate duplication and misuse is to use documentation in the code. For framework level code, it's even more important to have solid documentation about the fields, what type of values to expect, how the objects should be used, how instances are created, what special actions need to be performed for cleanup, etc. Documentation done right can also help improve code consistency if you add examples into your documentation, as in the sketch below.
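As an illustration, here's what that might look like on a hypothetical repository method using standard C# XML doc comments (the types and method are made up for the example):

```csharp
using System;

public class Document
{
    public int Id { get; set; }
}

public class DocumentRepository
{
    /// <summary>
    /// Opens the document with the given identifier.
    /// </summary>
    /// <param name="documentId">The identifier assigned at creation; must be positive.</param>
    /// <returns>The loaded <see cref="Document"/>; never null.</returns>
    /// <exception cref="ArgumentOutOfRangeException">
    /// Thrown when <paramref name="documentId"/> is zero or negative.
    /// </exception>
    /// <example><code>Document doc = new DocumentRepository().Open(42);</code></example>
    public Document Open(int documentId)
    {
        if (documentId <= 0)
            throw new ArgumentOutOfRangeException("documentId");
        return new Document { Id = documentId };
    }
}
```

The payoff is that IntelliSense surfaces the expectations and the example right at the call site, so a new developer doesn't have to interrupt anyone to learn how to use the API.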
Writing good code is productive. It becomes easier to maintain, easier to bugfix, easier to ramp up new developers, easier for one developer to take over for another, and it means a generally more pleasant and insightful workday, every day. Which brings us to...
Sound Software Engineering Is Like...
Exercise! Project managers seem to lose this very basic insight when they make the transition from being a developer. Like exercise, it's always easier to put in the effort to do it regularly and eat a healthy diet than to wait until you're obese and then start worrying about your health and well-being. Sure, it feels like hard work, waking up at the crack of dawn and going out into the rain/snow/dark, eating granola and oatmeal, skipping the fries and mayonnaise, but it's much easier to keep weight off than to lose weight once you're 200lbs overweight!
Likewise, it's always going to be easier to refactor daily as necessary and address glaring structural issues as soon as possible than to let them linger and keep stuffing donuts in your face. It's like carrying around 200lbs of fat: you lose agility, it becomes difficult to move, everything seems to take more effort - even simple things like climbing the stairs becomes a chore. The lesson is to trim the fat as soon as possible; don't let serious structural issues linger -- if there's a better, cleaner, easier way to do something, do it that way. Every excuse you make to keep fat, ugly code around will only make it heavier and harder to maintain.
How To Reinvent The Wheel...
It seems like a pretty common problem: a lead or architect doesn't want to use a library because it's not "mature" enough. What this means, exactly, still baffles me to this day. Mature is such an arbitrary measure that it's hard to figure out when software becomes mature. What this usually leads to is reinventing the wheel (several times over).
When evaluating third party libraries, I really only have a handful of criteria for deciding whether I want to use one or not:
- Is it open source and is the license friendly for commercial usage? I'll almost always take a less feature-rich, open source library over a more complete licensed library. The reason is that there's less lock-in. I won't feel like I've just wasted $1000 (or whatever) if I encounter a scenario where the library is insufficient or plain doesn't work.
- Does it have sufficient documentation to get the basic scenarios working? This is perhaps the only measure of "maturity" that matters to me.
- Does it solve some scenario that would otherwise take the team an inordinate amount of time to implement ourselves? I hate wasting time duplicating work that's freely available and well documented, with a community of users who can help if a problem arises. And yet, time and time again, there is no end to the resistance against using third party libraries. Part of it is this very abstract definition of "maturity" (objections from technical people) and part of it is a fundamental misunderstanding of, and general laziness about, different licensing models (the business folks).
That's it. I don't need the Apache Software Foundation to tell me whether log4net is mature or not. I look at the documentation, I write some test code, I use it and I evaluate it, and I incorporate it once I'm satisfied.
Software Estimation And Baking Cakes...
Fine-grained software estimation is most assuredly the biggest waste of everyone's time. Once it comes down to the granularity of man-hours, you know that someone has failed at their job, since nothing can be meaningfully quantified at that level of detail. Once you start having meetings about your fine-grained estimates that pull in all of the developers, then you really know that you're FOCKED.
If I handed you a box of cake batter and asked how long it would take you to bake the cake, you'd probably take a look at the directions, read the steps, estimate how long it would take you to perform all of them, add the baking time, and come up with 50 minutes. Okay, we start the timer. You're off and cracking eggs and cutting open pouches and what not. But wait, your mother calls and wants to talk about your trip next week. -5 minutes. You open the fridge and find that you're half a stick of butter short, so you run to the grocery store. -30 minutes. Oh shoot! You forgot to pre-heat the oven. -5 minutes. Finally, you've got the batter mixed up and ready to bake. The directions say to bake for 40 minutes, but you've already used up 40 minutes and have only 10 minutes left of your original estimate: now what?
Well, you could turn up the heat, but that'd only serve to singe the outside of the cake while leaving the inside uncooked. You could just bake it for 10 minutes, but your cake would still be uncooked -- but hey, you'd meet your estimate. More likely than not, you'd just bake the cake for 40 minutes and come in 30 minutes late since late, edible cake is better than burnt or mushy cake.
Software estimation is kinda like that (and look, in the case of baking a cake, all of the directions and exact steps are already well defined and spelled out for you -- writing software is rarely so straightforward). It's mostly an exercise in futility once it becomes too granular since there are just too many variables to account for. The answer -- if it must be implemented feature complete -- is that it's going to take as long as it's going to take (and probably longer!). For most non-trivial tasks, I feel like the only proper level of granularity is weeks. Don't get me wrong, I'm not saying that you shouldn't estimate, but that you should estimate at the right level of granularity and accept that once you've reached your estimate and the work isn't done, your only real choices are to:
- Extend the deadline.
- Trim the unnecessary features.
So that's it; feels good after a brain dump!
A friend passed along a quote the other day:
So I just picked up this book today...and found this quote in the foreword: "The truth of the matter is, if you need to “save” your job, I can’t help you. This book isn’t about struggling to maintain the level of mediocrity required not to get fired. It’s about being awesome. It’s about winning. You don’t win a race by trying not to lose. And you don’t win at life by trying not to suck. Fortunately, the content of the book has never been about trying not to suck. I can’t think that way, and neither should you."
One of my favorite all time sports quotes:
You PLAY to win the game. -- Herm Edwards
It's so simple and so obvious, and yet, so easy to lose sight of. Herm is right, but it doesn't just apply to sports, it applies to product development as well. You play to win, otherwise, get off the team...you're just dead weight taking up budget.
One of my pet peeves is lack of passion. It seems that many folks just don't get it; they're not playing to win. They're playing for their next paycheck.
In product development, like sports, sometimes, you have to take risks. Get too conservative, and you may see your lead evaporate. Like sports, you play to win...always. No team is perfect and no product is perfect, but that simply means that you play to your strengths. In product development, this means selling to your strengths. There's no such thing as a perfect team (as the Patriots proved) and there is no such thing as a perfect product.
The question for product managers and coaches is how you can work with what you have, maximize your resources, and work around your weaknesses. Fail to do this, and you have failed your team as a leader. If your running game is weak, don't force your team to rely on running plays as a primary option. If your corners are weak, don't force them to have to cover deep. If your defense is weak, run your offense to maximize possession time. Work to the strengths of your team or your product; don't try to play a style that doesn't fit your personnel (analogously, don't use methodologies that don't fit your resources).
You PLAY to win the game. So simple and yet so easy to get into a mindset where you play for that paycheck instead of playing to win.
I recently finished up Eric Brechner's I.M. Wright's Hard Code.
One of the more interesting aspects of development and project management that he brings up is the concept of working depth first as opposed to breadth first. Too often, I think management gets this crazy idea in their head that progress is best served by having all hands on code at the same time; that to make the best progress, we should all be tapping away at our keyboards and churning code. I think this mindset is a mistake, especially in small teams.
In his October 1, 2004 article titled "Lean: More than good pastrami", Brechner (as I.M. Wright) writes:
Of course, you can use Scrum and XP poorly by making the customer wait for value while you work on "infrastructure." There is a fundamental premise behind quick iterations built around regular customer feedback: develop the code depth first, not breadth first.
Breadth first in the extreme means spec every feature, then design every feature, then code every feature, and then test every feature. Depth first in the extreme means spec, design, code, and test one feature completely, and then when you are done move on to the next feature. Naturally, neither extreme is good, but depth first is far better. For most teams, you want to do a high-level breadth design and then quickly switch into depth-first, low-level design and implementation.
This is just what Microsoft Office is doing with feature crews. First, teams plan what features they need and how the features go together. Then folks break up into small multidiscipline teams that focus on a single spec at a time, from start to finish. The result is a much faster delivery of fully implemented and stable value to demonstrate for customers.
To me, the key sentence is the last one: as software developers, project managers, and delivery teams, our goal should be to deliver demonstrable value to the customer in the fastest manner possible while maintaining a sufficient level of quality.
In small teams which work breadth first from start to finish, it becomes more difficult to accomplish this. (Well, I guess this is true for large teams, too. But in a larger team, you have more opportunities to modularize large stacks of the application.)
One of the core problems with working breadth first is that it assumes that everyone is a developer and everyone is equally skilled at every type of development task. This forces developers into roles and tasks which they are perhaps not comfortable with, not proficient with, or perhaps not even very good at. That may be a good thing in the general case, acting as a driver for learning, but at the same time, it's not very conducive to delivering quality (especially on production code).
In a sense, this approach assumes that everyone in the kitchen is a chef when this may not be the optimal usage of the resources at hand. No kitchen is comprised of all chefs; there is a head chef, a few sous chefs, a pastry/dessert chef, there are guys working on prepping ingredients, resources prepping the plates and dishes, resources optimizing the orders as they come from waiters, and there are chefs who are not cooking but experimenting or learning new techniques. The point is that too often, management assumes that everyone in the kitchen is a chef and everyone in the kitchen should be cooking. The reality is that no kitchen runs that way, just as no development team can be run that way. Fred Brooks captured this in The Mythical Man Month with the concept of a Surgical Team.
For the sake of efficiency and quality, it seems that product development would be better served by using the proper resource for the development task at hand. Certainly, this raises the issue of pigeonholing developers into certain roles, but that's what downtime is for: cross training and developer education.
The second problem with working breadth first is code and functional duplication. In a multi-tiered architecture, if everyone is working on every tier, it becomes increasingly likely that certain functionality will end up being duplicated simply due to lack of immediate visibility. In a depth first approach, a team might be responsible for writing the service interface; they will know intimately which services already exist and which services can be reutilized. In a breadth first approach, in any non-trivial code base, functional duplication becomes rampant without a lot of non-productive effort.
The third major issue I have observed with a breadth first approach is that it causes stress to the test and quality assurance teams. Instead of having testable features trickle in as they are finished, pieces tend to all end up in their queue in a giant tidal wave. This puts more strain on small test and QA teams as it means that they are forced to work in a breadth first manner as well. Instead of having multiple eyes running one test script to ensure that the component is compliant with the design requirements, you end up with testers rushing through test scripts by themselves trying to catch up. Would it not be more desirable to have features delivered in a linear fashion to the test and QA teams so that their work can be more rigorous and comprehensive? I think this helps improve quality by finding usability issues and bugs earlier on in the development cycle as opposed to finding all the bugs near the end of the formal testing phase.
Just as you wouldn't deliver appetizers, the main course, and dessert to the table all at once, it doesn't make sense to drop every module on your test and QA team at the same time; it makes more sense to hand them deliverables early and often so that their work and effectiveness are not constrained. This is beneficial to dev as well, since bugs and usability issues can be returned to the team earlier in the cycle. Of course, the beauty of working depth first is that if you cannot push back the release date, it's easier to still ship on time with completed and tested modules, minus any features that couldn't be finished -- say, because test or QA invalidated some earlier design assumption and the fix or reimplementation requires significant time. In other words, a depth first approach gives you the flexibility to deliver finished and tested code even if some features must be left out for a later release.
The fourth major issue is exactly as Brechner writes: by working breadth first, you are not delivering value to the business users or your customers. Features are never in a fully tested and qualified state until the very end of the development cycle. Using Brechner's suggestion to design breadth first and implement depth first, it is possible to move completed pieces through test and QA (and fix bugs which may return) and deliver a working, functional module to business users or to customers, even if the particular product milestone is not complete. This decreases the feedback cycle and, again, allows usability and functional issues to be caught earlier in a smaller number of cases, rather than later in one huge bucket of tickets.
The fifth major problem with a breadth first approach is that you spread your resources thin. This means that you may not have sufficient resources who have knowledge of the code for a particular component. This can be mitigated to some degree if you run tight code reviews where group A peers into group B's code on a regular basis and has an understanding of the codebase, or if group B generates impeccable documentation, but more than likely, you end up with small silos of knowledge where a small number of developers hold most of the knowledge regarding a module. This is bad for any number of reasons, as you can imagine.
So next time management peeks its head into the kitchen asking why everyone isn't in the act of cooking, perhaps you can sit them down and have a little talk with them; I think there is a lot of value to be found in Brechner's suggestion to perform high-level feature design breadth first, but low-level design and implementation depth first.
Once in a while, I toss my resume out there on Dice to see what the market's like and what opportunities are out there; you never know, right?
It always amazes me how terrible the process is and how hard it is to find a job that's just right. At the top of my list of pet peeves -- in so far as head hunters are concerned -- are:
- A total lack of reading skills. I put the resume and profile together to help save time not just for me, but also for the head hunters. No need to send me a requirement for a position in California, you know, especially when I've listed NYC and Princeton as my preferred locations.
- A total lack of courtesy. While my profile asks nicely to use email as the preferred contact method, that doesn't stop head hunters from calling. More annoying is that some just keep on talking, and talking, and talking...for me, it's a waste of time to listen to them read requirements to me; I can read just fine, thank you.
- Asking invasive questions. I always get the "So how much are you making now?" question and I hate it. None of your damn business, buddy! Your client can either meet the salary requirements or not...why in the world should I tell some total stranger how much I make? The bottom line is that whatever my current salary is has no bearing on the conversation. If a client can't meet my salary requirements and the job just isn't that interesting, I'm not interested. Not only that, it's not like whatever number I give can be verified easily; asking this question is pointless because you'll get a batch of people who'll just make up numbers anyways. What if I said I made a million dollars last year? Would anyone believe me? What if I said I made $500,000 last year, would anyone believe me? What if I said I made $250,000 last year, would anyone believe me? Then why should anyone believe any unverified income number over the phone (isn't this how we got into this sub-prime mess)?
Well, anyways, once you make it past that morass, then you have to deal with the actual companies and phone interviews. This is where the fun begins (no, really)! I, for one, love being teched out. There is nothing more enjoyable than a match of wits to see if the person on the other end of the line can actually out tech me (and believe me, it would make me incredibly happy and excited if that were the case).
My new favorite part of the process is when the interviewer gets to the "So, do you have any questions for me?" part. Instead of asking boring, standard fare questions, I've decided that this is my opening to gauge the technical skill of the developers in the organization. There is positively nothing more satisfying than doing some in depth tech grilling to figure out whether an environment is right for you. I've come to the conclusion that I can really only be happy where I can be out-teched; you know, an environment where I can learn from those around me, one that drives me to continue diving into the technologies. It's a way to make sure that you're going to end up in a position where you'll feel challenged and look forward to learning and solving new problems.
In any case, here are some grilling points which I've come up with:
What's your approach to designing data access?
I like this question because it tells me a lot about the interviewer. In my opinion, data access is essentially the core of any application; it must be simple in design and easy to extend. It must be easy to understand and easy to use by the application layers above. Yet it must not be so basic that it's raw and verbose.
The worst possible answer is a response with any mention of datasets (even worse - and an absolute deal breaker - if that's not prefixed with "strongly typed").
Ideally, I'd like to hear something like:
- "We usually build a domain model on top of Microsoft Enterprise Library." Most companies don't have the liberty of working with open source libraries; Enterprise Library is at least a baseline. The great thing about it is that it's very well documented and a well understood quantity. Plus, having EL as a base encourages or enables at least a baseline level of uniformity in the code.
- "We use NHibernate (or substitute another ORM/persistence library)." I realize that not all companies and all projects have the freedom to use open source 3rd party libraries, but it's nice to see if they do or have used them in the past.
- "We're using LINQ." If you get a response like this, you know you're dealing with a group on the cutting edge of technologies and you're dealing with a group that doesn't mind the challenge of designing around new technologies; the developers probably read up on this stuff and work on it in their spare time. This is a group that you want to work with.
The answers to this question offer a rich view into the development resources that a company has and whether they show a strong tendency toward Not Invented Here syndrome.
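For what it's worth, here's a small sketch of the kind of answer I'm hoping for: strongly typed, compiler-checked data access. The Order entity and the in-memory data source are stand-ins for illustration; in practice the IQueryable would come from LINQ to SQL, NHibernate, or a similar layer:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity; normally mapped by an ORM rather than hand-built.
public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

class Demo
{
    static void Main()
    {
        IQueryable<Order> orders = new List<Order>
        {
            new Order { Id = 1, Customer = "Contoso", Total = 250m },
            new Order { Id = 2, Customer = "Contoso", Total = 75m }
        }.AsQueryable();

        // Strongly typed, composable, and checked by the compiler --
        // no magic column-name strings like raw dataset code relies on.
        var bigContosoOrders =
            from o in orders
            where o.Customer == "Contoso" && o.Total > 100m
            orderby o.Total descending
            select o;

        foreach (var o in bigContosoOrders)
            Console.WriteLine("{0}: {1:C}", o.Id, o.Total);
    }
}
```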
I tend to think that it takes a more advanced developer and development team to understand the landscape of libraries out there and how to utilize them since learning a new library is usually far more challenging, not to mention productive, than hacking together an inferior custom solution. As such, I also like to ask the following:
What's your approach to runtime logging?
It's a shock to me that many consultants I've worked with in the past either:
- Incorporate no runtime logging or tracing capabilities into their code or...
- Roll their own logging library.
If you've been writing applications without logging, then you haven't written any applications of any worth. If you're rolling your own, it means you're wasting your clients' time and money instead of delivering value or, even worse, you don't know any better; you've never spent the time to look into the various off the shelf logging options. log4net would be a great answer, but the Microsoft Enterprise Library Logging Application Block would be awesome as well.
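For anyone who hasn't used it, here's roughly what log4net usage looks like. This is a minimal sketch: the class names are hypothetical, and BasicConfigurator is used so the example runs standalone; a real application would more likely configure appenders via XmlConfigurator and app.config:

```csharp
using System;
using log4net;
using log4net.Config;

public class OrderProcessor
{
    // One static logger per class is the usual log4net convention.
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(int orderId)
    {
        Log.InfoFormat("Processing order {0}", orderId);
        try
        {
            // ... real work here ...
        }
        catch (Exception ex)
        {
            // The exception overload preserves the stack trace in the log.
            Log.Error("Failed to process order " + orderId, ex);
            throw;
        }
    }
}

class Demo
{
    static void Main()
    {
        // BasicConfigurator writes to the console appender by default.
        BasicConfigurator.Configure();
        new OrderProcessor().Process(42);
    }
}
```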
I've never gotten to a point yet where I've been able to pop this next question, but I think that nothing would be a better indication of "this is a place where I want to work and these are people that I want to work with" than if I got a satisfying answer to this:
Are you familiar with Inversion of Control/Dependency Injection? Do you use any libraries to implement it?
This is a great question because there are some design challenges in software, particularly around extensibility and orthogonality, that can really only be cleanly addressed by using the Inversion of Control (IoC) pattern. It enables the creation of far more extensible frameworks, libraries, and applications.
Developers on simple projects may have no need for IoC, but without it they end up writing a lot of code that's not extensible or modular. The more complex the application and the greater the need for extensibility, the more important IoC becomes as part of the glue that makes it all work together without a huge mess of dependencies.
If you get a response of "We use Spring.NET/Castle Project/Unity/CAB", then you know you're not dealing with some junior programmers. (CAB isn't really an IoC/DI container, but it utilizes some of the concepts of IoC).
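To make the pattern concrete, here's a minimal constructor injection sketch in C#. The types are hypothetical, and the object graph is wired by hand to show exactly what a container like Spring.NET, Castle Windsor, or Unity would automate from configuration:

```csharp
using System;

// The consumer depends only on an abstraction...
public interface IMessageSender
{
    void Send(string to, string body);
}

public class SmtpSender : IMessageSender
{
    public void Send(string to, string body)
    {
        Console.WriteLine("SMTP -> {0}: {1}", to, body);
    }
}

public class OrderNotifier
{
    private readonly IMessageSender _sender;

    // ...and the concrete implementation is injected from outside.
    // The class never news up its own dependencies, so it can be
    // retargeted (SMTP, SMS, a test fake) without modification.
    public OrderNotifier(IMessageSender sender)
    {
        if (sender == null) throw new ArgumentNullException("sender");
        _sender = sender;
    }

    public void OrderShipped(string customer)
    {
        _sender.Send(customer, "Your order has shipped.");
    }
}

class Demo
{
    static void Main()
    {
        // Wired by hand here; a container resolves this graph for you.
        var notifier = new OrderNotifier(new SmtpSender());
        notifier.OrderShipped("jane@example.com");
    }
}
```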
Speaking of late binding, I'm also thinking this would be a good question as well:
Are you familiar with the Fusion Log and why you would need to use it?
Only developers who've worked extensively with late binding would ever have a need to enable this, and you can tell the experience level of a developer by whether he or she can give you a straight answer on what late binding is.
If I could make it past that point, I'd pretty much surely ask:
What's the difference between an interface and an abstract class? Which do you prefer when you design a framework or application? Why would you choose an interface over an abstract class? Why would you choose an abstract class over an interface?
In general, I don't like being asked or asking low level nitty-gritty questions like "Can you explain how CLR garbage collection works?" or "How many generations does the CLR GC have?" or "Explain the ASP.NET page lifecycle"; these questions aren't useful in the big picture and most of the answers can be looked up. They are mere facts, and knowing them doesn't indicate much.
On the other hand, knowing the similarities and differences between interfaces and abstract classes and how to use them properly gives insight into a developer's approach to object oriented programming. It's a great question and a tough one as well. Nothing would be more awesome than a reply of "Well, according to Cwalina and Abrams in 'Framework Design Guidelines'...". Knowledge of interfaces and abstract classes is foundational to an understanding of good object oriented design and programming. You cannot write a well designed object oriented system without the judicious use of abstractions.
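For illustration, here's a small hypothetical sketch of how the two play different roles: the abstract class supplies a shared skeleton of behavior that can evolve, while the interface declares a capability that any class, anywhere in a hierarchy, can adopt:

```csharp
using System;

// An interface defines a pure contract with no implementation.
public interface IExportable
{
    void Export(string path);
}

// An abstract class can ship shared behavior -- here, a template method
// with a fixed skeleton and one step deferred to subclasses. It can also
// add virtual members later without breaking existing subclasses, which
// an interface cannot do without breaking every implementer.
public abstract class ReportBase
{
    public void Run()
    {
        Console.WriteLine("Gathering data...");
        Render(); // variable step supplied by the subclass
    }

    protected abstract void Render();
}

public class SalesReport : ReportBase, IExportable
{
    protected override void Render()
    {
        Console.WriteLine("Rendering sales figures.");
    }

    public void Export(string path)
    {
        Console.WriteLine("Exported to " + path);
    }
}

class Demo
{
    static void Main()
    {
        var report = new SalesReport();
        report.Run();
        ((IExportable)report).Export("sales.pdf");
    }
}
```

Note that SalesReport can inherit from only one base class, but it can adopt as many interface capabilities as it needs; that tension is usually where the interesting discussion starts.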
Another great question along these lines is:
Do you or does your team use any code generation tools?
I think that in general, it takes a great deal of thought to utilize code generation. It means that the developer or team in question understands the value proposition that it brings to development. Of note:
- It leads to more consistent code which means that in the long run, it's more maintainable and easier to document; it leads to repeatable and predictable results from every developer on the team, regardless of whether they've been writing code professionally for 10 years or 10 months.
- It leads to less error prone code, since it's easy to fix small errors across the board by fixing the templates or the driver. For example, writing data contracts by hand is extremely error prone since it's easy to forget to put a [DataMember] attribute on a property which needs to be serialized (see the sketch after this list). Generating them from a template mitigates these types of simple mistakes.
- It increases productivity by allowing developers to get away from writing the plumbing and focusing on the business logic and UI, places where the ROI on code generation is lower.
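Here's a hypothetical sketch of the kind of mechanical code I mean, a WCF-style data contract, along with the exact hand-writing mistake that generation prevents (CustomerDto and its members are made up for the example):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

// The sort of class one would generate from a template rather than write
// by hand: it's pure plumbing, and one forgotten attribute silently
// changes the wire format.
[DataContract]
public class CustomerDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }

    // The easy hand-written mistake: no [DataMember] here means Email is
    // never serialized, and nothing warns you at compile time.
    public string Email { get; set; }
}

class Demo
{
    static void Main()
    {
        var dto = new CustomerDto { Id = 1, Name = "Contoso", Email = "a@b.com" };
        var serializer = new DataContractSerializer(typeof(CustomerDto));
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, dto);
            ms.Position = 0;
            // Email is absent from the output XML.
            Console.WriteLine(new StreamReader(ms).ReadToEnd());
        }
    }
}
```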
It indicates to me that the developer or team is able to take abstraction to the next level; not only are they abstracting in their object models, they are also abstracting the very act of coding. To successfully utilize code generation means that the developer or team is able to see the big picture and not just a class here and a class there. The developer or team has analyzed the code, identified the patterns, and encapsulated those patterns into templates and drivers. You know you're dealing with a high level team if they properly utilize code generation tools to reduce the amount of time spent doing grunt work.
Well, I'm sure I'll think of more if I can ever get past these questions. But overall, this outing has been disappointing. In general, I don't think interviewers take well to a technical grilling; whether they are unprepared to answer these types of questions or they simply don't know the answers, it hasn't been too promising. For me, it's important in gauging the technical competence of my potential colleagues and the type of technical training/staff development that the company provides and/or encourages. It's one way to avoid ending up in a company staffed by 5:01 developers.
I still haven't figured out how to respond to the "So what year did you graduate" question, as it's clearly a form of age discrimination but I'm not sure how to call someone out on that yet. More importantly, it implies that the groups and personnel are not necessarily organized by merit, but by seniority or, even worse, cronyism. I think next time, I'll just be blunt about it and ask if the interviewer realizes that it can be construed as age discrimination.