An interesting story (with some heated comments) regarding a recent move by Spansion:
While Spansion Inc. was cutting 35 percent of its workforce, or 3,000 jobs on Monday, the company’s board was restoring full pay to its top executives.
The company had imposed a 10 percent pay cut last Oct. 6 for top executives. But in a securities filing, it said it was returning the executives to full pay as part of an “employee retention program.”
So I guess this is called "taking one for the team"? Seems more like a cash grab by the executives of a company that's about to go under.
One intrepid (and sorely misguided) commenter, "Todd Fletcher", writes:
This story is quite biased. Understand that Spansion is in a very tight position right now, and they must remain competitive within their industry. This means not only attracting the very best executives, but retaining them.
Of course, the logical question is what metric are you using to define "the very best executives". Why are they in a "tight position"? Could it be because management and these very same executives made some bad decisions? Perhaps they lacked foresight into the market and the technologies? Nah, surely, it's the fault of the guys at the bottom, right? It's always their fault, right?
Some of the better rebuttals include:
Market guy: Todd, I would stop defending these executives at Spansion. They picked the wrong strategy and direction in 2003 and now the NAND Flash Memory guys are killing NOR Flash Memory manufacturers like Spansion. That’s why Spansion is in so much trouble today. Why would you want to reward Executives for steering the boat into an iceberg? Would you invest your money on a sinking ship?
Shamless1: If you want to pay to bring in new talent I say lets pay, however increasing compensation to people who walked you down the garden path is no different then chasing down the guy who just stole your wallet to give him your watch.
OneOfTheConcerned: Having top quality executives means that Spansion can strategically lay people off and reinstate the executive’s pay…all the while ignoring that they owe Travis County overdue taxes. Ah, the ground is quickly approaching those smug noses.
Brett Stroud: What does this story have to do with retaining top executives? It’s quite clear that the executives they have led them to a position where they have to lay off 35% of their work force. Seems to me like they’re retaining failed executives.
So yeah, let's see how much longer these guys last. Incidentally, the stock (SPSN) is currently trading at about $0.06, a far cry from the $17 range it was in when I sold my shares a few years back.
I recently finished up Eric Brechner's I.M. Wright's Hard Code.
One of the more interesting aspects of development and project management that he brings up is the concept of working depth first as opposed to breadth first. Too often, I think management gets this crazy idea in their head that progress is best served by having all hands on code at the same time; that to make the best progress, we should all be tapping away at our keyboards and churning code. I think this mindset is a mistake, especially in small teams.
In his October 1, 2004 article titled "Lean: More than good pastrami", Brechner (as I.M. Wright) writes:
Of course, you can use Scrum and XP poorly by making the customer wait for value while you work on "infrastructure." There is a fundamental premise behind quick iterations built around regular customer feedback: develop the code depth first, not breadth first.
Breadth first in the extreme means spec every feature, then design every feature, then code every feature, and then test every feature. Depth first in the extreme means spec, design, code, and test one feature completely, and then when you are done move on to the next feature. Naturally, neither extreme is good, but depth first is far better. For most teams, you want to do a high-level breadth design and then quickly switch into depth-first, low-level design and implementation.
This is just what Microsoft Office is doing with feature crews. First, teams plan what features they need and how the features go together. Then folks break up into small multidiscipline teams that focus on a single spec at a time, from start to finish. The result is a much faster delivery of fully implemented and stable value to demonstrate for customers.
To me, the key is that last sentence: as software developers, project managers, and delivery teams, our goal should be to deliver demonstrable value to the customer as quickly as possible while maintaining a sufficient level of quality.
In small teams which work breadth first from start to finish, it becomes more difficult to accomplish this. (Well, I guess this is true for large teams, too. But in a larger team, you have more opportunities to modularize large stacks of the application.)
One of the core problems with working breadth first is that it assumes everyone is a developer and that everyone is equally skilled at every type of development task. This forces developers into roles and tasks with which they are not comfortable, not proficient, or perhaps not even very good. In the general case, this may be a good thing as a driver for learning, but it's not very conducive to delivering quality (especially on production code).
In a sense, this approach assumes that everyone in the kitchen is a chef, when that may not be the optimal use of the resources at hand. No kitchen is staffed entirely by chefs: there is a head chef, a few sous chefs, a pastry/dessert chef, people prepping ingredients, people plating dishes, people sequencing the orders as they come in from the waiters, chefs who aren't cooking at all but experimenting with or learning new techniques, and so on. The point is that too often, management assumes that everyone in the kitchen is a chef and that everyone in the kitchen should be cooking. The reality is that no kitchen runs that way, just as no development team can be run that way. Fred Brooks captured this in The Mythical Man Month with the concept of a Surgical Team.
For the sake of efficiency and quality, it seems that product development would be better served by using the proper resource for the development task at hand. Certainly, this raises the issue of pigeonholing developers into certain roles, but that's what downtime is for: cross training and developer education.
The second problem with working breadth first is code and functional duplication. In a multi-tiered architecture, if everyone is working on every tier, it becomes increasingly likely that certain functionality will be duplicated simply due to a lack of immediate visibility. In a depth first approach, one team might be responsible for writing the service interface; they will know intimately which services already exist and which can be reused. In a breadth first approach, in any non-trivial code base, functional duplication becomes rampant unless you spend a lot of otherwise non-productive effort preventing it.
The third major issue I have observed with a breadth first approach is that it causes stress to the test and quality assurance teams. Instead of having testable features trickle in as they are finished, pieces tend to all end up in their queue in a giant tidal wave. This puts more strain on small test and QA teams as it means that they are forced to work in a breadth first manner as well. Instead of having multiple eyes running one test script to ensure that the component is compliant with the design requirements, you end up with testers rushing through test scripts by themselves trying to catch up. Would it not be more desirable to have features delivered in a linear fashion to the test and QA teams so that their work can be more rigorous and comprehensive? I think this helps improve quality by finding usability issues and bugs earlier on in the development cycle as opposed to finding all the bugs near the end of the formal testing phase.
Just as you wouldn't deliver the appetizers, the main course, and dessert to the table all at once, it doesn't make sense to drop every module on your test and QA team at the same time; it makes more sense to hand them deliverables early and often so that their work and effectiveness are not constrained. This benefits dev as well, since bugs and usability issues can be returned to the team earlier in the cycle. And the beauty of working depth first is that if test or QA invalidates some earlier design assumption that would take significant time to fix or reimplement, and you don't have the option of pushing back the release date, it's easier to still ship on time with the completed and tested modules, minus the features that couldn't be finished. In other words, a depth first approach gives you the flexibility to deliver finished and tested code even if some features must be left out for a later release.
The fourth major issue is exactly as Brechner writes: by working breadth first, you are not delivering value to the business users or your customers. Features are never in a fully tested and qualified state until the very end of the development cycle. Using Brechner's suggestion to design breadth first and implement depth first, it is possible to move completed pieces through test and QA (and fix bugs which may return) and deliver a working, functional module to business users or to customers, even if the particular product milestone is not complete. This decreases the feedback cycle and, again, allows usability and functional issues to be caught earlier in a smaller number of cases, rather than later in one huge bucket of tickets.
The fifth major problem with a breadth first approach is that you spread your resources thin. This means that you may not have sufficient resources with knowledge of the code for a particular component. This can be mitigated to some degree if you run tight code reviews where group A peers into group B's code on a regular basis and maintains an understanding of the codebase, or if group B generates impeccable documentation; but more than likely, you end up with small silos of knowledge where a small number of developers hold most of the knowledge regarding a module. This is bad for any number of reasons, as you can imagine.
So next time management peeks its head into the kitchen asking why everyone isn't in the act of cooking, perhaps you can sit them down and have a little talk with them; I think there is a lot of value to be found in Brechner's suggestion to perform high-level feature design breadth first, but low-level design and implementation depth first.
I encountered an interesting problem while working with our FirstPoint Office add-in (one of the many joys of working with the Office API!).
In this case, a call to ActiveDocument.CanCheckin() would return true even when the value was clearly false. Testing in the VB macro editor in Word would consistently return the correct value, and so would invoking the method from the Visual Studio command window.
In Word 2007, this API call seems to work just fine. However, in Word 2003, I had to use reflection to invoke the method to get a proper result:
// Work around the stale CanCheckin() result in Word 2003 by invoking
// the method late-bound via reflection (requires System.Reflection):
Type type = _host.ActiveDocument.GetType();
MethodInfo m = type.GetMethod("CanCheckin");
object result = m.Invoke(_host.ActiveDocument, null);
canCheckInDocument = Convert.ToBoolean(result);
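For what it's worth, here's a sketch of how the workaround might be wrapped up so the direct interop call is still used where it behaves. The `CanCheckInActiveDocument` helper and the version check are my own illustration (Word reports version "11.x" for 2003 and "12.x" for 2007), not an official Office pattern:

```csharp
using System;
using System.Reflection;
using Word = Microsoft.Office.Interop.Word;

// Illustrative helper: prefer the direct interop call, but fall back to
// a late-bound reflection call on Word 2003, where CanCheckin() returned
// a stale value for us.
private static bool CanCheckInActiveDocument(Word.Application host)
{
    Word.Document doc = host.ActiveDocument;

    if (!host.Version.StartsWith("11."))  // not Word 2003
        return doc.CanCheckin();

    // Word 2003: invoke CanCheckin through reflection to get a
    // correct result.
    Type type = doc.GetType();
    MethodInfo m = type.GetMethod("CanCheckin");
    object result = m.Invoke(doc, null);
    return Convert.ToBoolean(result);
}
```

This is untested outside our add-in scenario; depending on how the interop assemblies are loaded, `GetMethod` on a COM wrapper may behave differently, so treat the version gate as a starting point rather than a rule.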
Got myself a Bialetti cappuccino/latte set this week. I have to say, I'm pretty impressed. Check out the results for yourself:
It comes with a milk frother cup, which worked out really well.
Check out my Amazon review:
I'm a pretty "average" coffee drinker; I'm not so into it that I'm going to be roasting my own beans anytime soon. On the other hand, I've also had my share of watered-down, bitter-tasting sludge from the national chains, and I can appreciate a good cup of coffee.
This little device seems like a good middle ground. Not so steep in price that you feel like you need to be a coffee snob to really appreciate it, and yet it produces an above average cup of cappuccino. You can certainly spend a lot more on a coffee preparation device, but there's no guarantee that you're going to get results that really justify the extra cost.
Compared to some devices I've used in the past, I would list the key pros of this one as:
1. Very easy to clean. The frothing cup has a non-stick coating and it's easy to rinse out. The percolator is pretty easy to assemble/disassemble once it's cool. All the parts are easy to remove and rinse clean. The coffee grinds are very easy to remove as well (one of my main concerns).
2. The frothing cup works GREAT. I was a bit hesitant to try it out with some organic skim milk, but it worked out great! I set about 3/4 cup of milk over very low heat until I saw a bit of steam coming off the cup, then worked the plunger, and within 10-15 pumps I had a nice, thick, frothy mixture (even with skim milk!).
3. Paired with some Illy coffee, the coffee came out very, very well. Perhaps the best coffee I've had in quite a while. No bitterness and, to my surprise, even with the fine grind of the beans, the coffee was pretty much free of grinds.
4. To my surprise, I was able to get a bit of crema! Yes, it's possible even with this relatively cheap device (there are a few videos on YouTube demonstrating this).
A few things held this combo back from being a 5 star product:
1. The directions are TERRIBLE. No suggestions on the amount of grinds to use (yes, to some degree, this is really dependent on your personal taste, but at least give me a baseline!). There's also no suggestion on the grind to use either (I ended up using a fine grind Illy). There's no measuring cup and at least on mine, there were no water level markers on the inside of the percolator. I ended up filling it about 3/4 of the way to the valve. As for grinds, I ended up using a bit more than 2 tablespoons and it seemed to work out well for me.
2. Unless you're standing next to the thing the whole time, there's really no way of telling when it's done; you have to kind of stand there and watch it. With enough usage, I assume you'll get the timing down, but it would be nice if it had some mechanism to alert you when it's done.
3. At least on my gas stove, it takes a while to heat up. Compared to a drip machine or the electric percolators I've used in the past, this device takes a bit more time since you can't really use high heat on your stove (unless you have an electric one). The base is rather small, so you have to use a smaller burner, and even then you may need a fairly low heat setting. While the slow heat-up might contribute to a better-tasting coffee, it also means more time.
All in all, I think it was well worth the $40 or so I paid for it. It's a great weekend companion; it's very satisfying to wake up on a lazy Sunday morning, make a cup of cappuccino for my wife and me, and sit back and relax without having to get dressed and rush out into the cold winter air. It's probably not ideal for everyday use, as it is a bit more time consuming and there is a bit more cleanup involved compared to a paper-filter drip machine. But then again, if your working life is busy and hectic, it might be just the thing you need to slow down for a moment and enjoy a hot cup of cappuccino!