<CharlieDigital/> Programming, Politics, and uhh…pineapples

26Aug/10

Event Receivers on Content Types

Posted by Charles Chen

Adding this to the category of things-that-I-didn't-know-but-would-have-made-a-lot-of-stuff-I-previously-wrote-much-more-elegant-and-awesome.

You should add it to yours, too!

As a quick summary, it's common knowledge (well, amongst SharePoint developers at least) that you can associate event receivers with a list template type.  However, an interviewer recently brought to light that one can also associate an event receiver directly with a content type.  This is immensely useful for anyone building custom solutions on SharePoint, especially if you make heavy use of content types in your design.

Here's an example using the same basic content type:

<?xml version="1.0" encoding="utf-8" ?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Field DisplayName="Model Code"
    Name="Model_Code"
    StaticName="Model_Code"
    ID="{F0000000-0000-0000-0000-000000000001}"
    Type="Integer"
    SourceID="http://schemas.someusedcarinventory.com"
    Group="My Custom Columns"/>
  <Field DisplayName="VIN"
    Name="VIN"
    StaticName="VIN"
    ID="{F0000000-0000-0000-0000-000000000002}"
    Type="Text"
    SourceID="http://schemas.someusedcarinventory.com"
    Group="My Custom Columns"/>
  <Field DisplayName="Make"
    Name="Make"
    StaticName="Make"
    ID="{F0000000-0000-0000-0000-000000000003}"
    Type="Text"
    SourceID="http://schemas.someusedcarinventory.com"
    Group="My Custom Columns"/>
  <ContentType Name="Vehicle"
    ID="0x0100FC000000000000000000000000000001"
    Description="Used car inventory"
    Group="My Custom Content Types" >
    <FieldRefs>
      <FieldRef ID="{c042a256-787d-4a6f-8a8a-cf6ab767f12d}" Name="ContentType" />
      <FieldRef ID="{fa564e0f-0c70-4ab9-b863-0177e6ddd247}" Name="Title" 
        Required="TRUE" ShowInNewForm="TRUE" ShowInEditForm="TRUE" />
      <FieldRef ID="{F0000000-0000-0000-0000-000000000001}" Name="Model_Code"/>
      <FieldRef ID="{F0000000-0000-0000-0000-000000000002}" Name="VIN"/>
      <FieldRef ID="{F0000000-0000-0000-0000-000000000003}" Name="Make"/>
    </FieldRefs>
    <XmlDocuments>
      <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/events">
        <spe:Receivers xmlns:spe="http://schemas.microsoft.com/sharepoint/events">
          <Receiver>
            <Name>VehicleCreatedHandler</Name>
            <Type>ItemAdded</Type>
            <SequenceNumber>1</SequenceNumber>
            <Assembly>My.Library, Version=1.0.0.0, Culture=neutral, PublicKeyToken=c5168cbbbb64acf7</Assembly>
            <Class>My.Library.EventReceivers.VehicleCreatedHandler</Class>
            <Data />
            <Filter />
          </Receiver>
        </spe:Receivers>
      </XmlDocument>
    </XmlDocuments>
  </ContentType>  
</Elements>
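For reference, the receiver itself is just an ordinary class deriving from SPItemEventReceiver, compiled into the assembly named in the XML and deployed to the GAC.  Here's a minimal sketch of what VehicleCreatedHandler might look like -- only the namespace and class name come from the example above; the body is purely hypothetical:

```csharp
using Microsoft.SharePoint;

namespace My.Library.EventReceivers
{
    public class VehicleCreatedHandler : SPItemEventReceiver
    {
        // Fires after an item based on the Vehicle content type is created
        // in any list the content type is attached to.
        public override void ItemAdded(SPItemEventProperties properties)
        {
            SPListItem item = properties.ListItem;

            // Hypothetical logic: default the title from the VIN field.
            if (item["VIN"] != null && string.IsNullOrEmpty(item.Title))
            {
                item["Title"] = item["VIN"].ToString();
                item.SystemUpdate(false); // don't bump the item version
            }
        }
    }
}
```

The nice part is that the receiver follows the content type wherever it's added, instead of being bound to a single list template type.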

Oddly enough, I was trying to figure out what the Filter element does when I stumbled upon the StackOverflow posting by accident.  As it stands, I still can't figure out what the element does or what XML is valid for it.  Perhaps some CAML query filter?

13Aug/10

Failing (Gracefully)

Posted by Charles Chen

(Alternate title: Failing Productively)

I posted some snippets from a recent interview with Fred Brooks in the August issue of Wired (by the way, I'm working through his latest compilation of essays, The Design of Design).

I'll repost the relevant bits here:

KK: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?

FB: You can learn more from failure than success.  In failure you're forced to find out what part did not work.  But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.

I think this is an important lesson.  I've written about this topic before in a post about, of all things, World of Warcraft.

From the Wired article:

Where traditional learning is based on the execution of carefully graded challenges, accidental learning relies on failure. Virtual environments are safe platforms for trial and error. The chance of failure is high, but the cost is low and the lessons learned are immediate.

To expand on this, in software, I think it's important to have lots of little failures.  This is the only way to discover which solutions work and which don't (hopefully on the path to a solution that does work!).  In my book, failure is good; it's a necessary part of the learning process (if I'm not failing, I'm probably not doing anything interesting or challenging).  I expect to fail and I expect other developers that I work with to fail.  My estimations even account for failure.  The important thing, however, is to actually examine your failures and to understand why you've failed.  More than that, it's important to understand how to fail.  The key is to fail early and fail in small, isolated scenarios and be able to extract from that some concept of what will work and what will not; we call this iterating or prototyping or iterating with prototypes.  Then, on a macro scale, examine one's work once a project is done, identify what one did wrong, what was painful, and what could have been done better, and actually make the effort to improve.

Brooks also expands on this in The Design of Design.  In chapter 8, "Rationalism versus Empiricism in Design", he writes:

Can I, by sufficient thought alone, design a complex object correctly?  This question, particularized to design, represents a crux between two long-established philosophical systems: rationalism and empiricism.  Rationalists believe I can; empiricists believe I cannot.

The empiricist believes that man is inherently flawed, and subject repeatedly to temptation and error.  Anything he makes will be flawed.  The design methodology task, therefore, is to learn how to determine the flaws by experiment, so that one can iterate on the design.

Brooks boldly states: "I am a dyed-in-the-wool empiricist."  I'm in Brooks' camp; I'd definitely consider myself an empiricist.  It's evident in my sandbox directory, where hundreds of little experiments live that I use to rapidly iterate on an idea (and isolate the failures).  If you're an empiricist, then -- as Brooks implies -- iterative models of design and development come naturally.  I find it more productive to go through a series of quick, small prototypes and experiments to identify the failures than to end up discovering one big failure (or lots of little failures) late in a project!  As much as we'd like software engineering to be a purely mechanical process (say, an assembly line in an automotive plant), I don't think that this can ever be the case.

So then it follows: if designers and developers work best with an empiricist view of the world, then why do we continue to design, plan, budget, and schedule projects using a waterfall approach?  Why do we continue to use a model that does not allow for failure in design or implementation, yet cannot actually prevent failure?  "Sin," Brooks writes in chapter 4, "Requirements, Sin, and Contracts":

The one-word answer is sin: pride, greed, and sloth... Because humans are fallen, we cannot trust each other's motivations.  Because humans are fallen, we cannot communicate perfectly.

For these reasons, "Get it in writing."  We need written agreements for clarity and communication; we need enforceable contracts for protection from misdeeds by others and temptations for ourselves.  We need detailed enforceable contracts even more when the players are multi-person organizations, not just individuals.  Organizations often behave worse than any member would.

So it seems that the necessity for contracts best explains the persistence of the Waterfall Model for designing and building complex systems.

I find that quite disappointing and pessimistic and yet, full of truth.

On a recent project, we failed to launch the project entirely, even after months of designing, design reviews, sign-offs, and discussions.  I had already started writing some framework-level code, fully anticipating the project starting within a matter of weeks after the design had been scrutinized ad nauseam and "finalized".  The client insisted on a rigid waterfall approach and wanted to see the full solution in design documents upfront.  As absurd as this sounds, the client had already spent more on design artifacts (documents and UML diagrams), by this point, than they had budgeted for delivery (development, testing, validation, and deployment).  It was an impossible objective to start with, but we obliged as an organization, despite my own internal protests.  Tedious, micro-level designs were constructed and submitted, but to what end?  The project was scheduled to go live this April.  It is now August and, after a change of vendors, it isn't even close to getting off the ground.  Instead of many micro-failures along the path to success, this client's fear of failure (embodied by their goal of designing out all of the risk) has led them down the path of one big failure.

So the question then is: how can we overcome this?  How do you negotiate and write a contract to build a solution iteratively?  How can you effectively build that relationship of trust to break down the sins and the communication barriers?  Brooks touches upon various models and why they work, but doesn't necessarily offer much insight or guidance on how to overcome the "sins" while still working within an enforceable contract.  This, I think, is an important lesson to learn, not just for individuals, but for organizations.  A certain level of failure must be acceptable and, in fact, encouraged; this is essentially what iterative design and development means: iterate quickly and find what does and doesn't work.  Make many small mistakes early instead of finding big mistakes in your design or assumptions later.

Footnote: I'm still working through the book and, so far, it has been a great read.

12Aug/10

Laptop Buying – For Developers

Posted by Charles Chen

About a year ago, I caught on to Dell's refurbished laptops over at Dell Outlet and since then I've purchased a total of four laptops from there; each one has worked out great.

My first purchase was a Dell Latitude E6400 which I used as a primary development machine as I was traveling heavily.  At the time, as configured, the laptop that I acquired was over $500 cheaper than a brand new laptop from their business channel with the addition of a 15% off coupon (which they throw out there all the time; you can check their Twitter stream for updates).  That's a huge savings.  I used it to run Visual Studio 2008 and VMWare 6.5.  It was plenty good, but with the rollout of Visual Studio 2010 and SharePoint 2010, I definitely noticed a HUGE decrease in performance.  It was excruciating.

I was torn between upgrading the E6400, which I had for less than a year, by adding another 4GB of RAM and an SSD or getting a new laptop, but it just so happened that my mom needed a laptop for some contract work that she picked up.  So I turned to Dell Outlet again and picked up a Core i7-packing Latitude E6410, purchased an extra 4GB of RAM (for a total of 8GB), a Mushkin Callisto Deluxe from Newegg (a SandForce-based SSD), and a second drive tray from NewModeUS for somewhere around $1600 (note that this includes almost $80 in shipping and taxes from Newegg and NewModeUS) after using a 15% off coupon for the laptop.  It's a great value considering that configuring the same laptop from the business channel would have cost around $400-500 more.

The E6410, with the 8GB and the Callisto SSD, is able to lay down some serious computing power.  It handles my SharePoint 2010 Enterprise VM without breaking a sweat.  Visual Studio 2010 is far more usable now as well.  As I almost never use my DVD drive, I swapped it out for a Western Digital Scorpio Black (at $80 for 7200RPM and 320GB, it can't be beat in terms of price/performance) and store all of my large files and VM images on the second drive.

I've also purchased an E4310 for my wife this year.  My experience with the E-class Latitudes from Dell Outlet has been so overwhelmingly positive that it was a no-brainer.  It's a great little machine for the road warrior developer and, now that I've felt the heft and the size, I'd seriously consider one myself (although it doesn't have an option for a Core i7 CPU -- i3 and i5 only), as NewModeUS also has a drive tray for the E4310.  She tends to use laptops for far longer than I do :-D  Her last one lasted about five years, so I hope that this one can last at least as long.

Refurbished? I'm not really sure what this means.  It's pretty broad, I guess, but considering that I got my E6410 in July and the laptop itself was released only in April or May, I figured that it had to be in pretty good shape.  How much wear could a laptop accumulate in two months?  My guess is that the refurbished laptops fall into one of a few scenarios:

  1. Ordered too many -- perhaps a hiring freeze or some employees were let go before IT was notified.
  2. Not needed anymore -- perhaps a company went bankrupt or went out of business?
  3. Some malfunctioning component -- maybe the power supply didn't work or the video card was wonky and the whole chassis was returned.
  4. Misconfigured -- IT department receives shipment and finds that a batch of the laptops were misconfigured with the wrong CPU or missing other features.

I don't know the answer and I don't know why my laptop is "refurbished", but for all intents and purposes, when I pulled it out of the box, it was brand spanking new; no wear to speak of.

Dell E64xx.  I'd like to take a moment to reflect on these laptops.  I spent quite a bit of time looking into the offerings from HP as well, in particular the HP EliteBook 8440w and 8540w.  Ultimately, having had my experiences with the E6400 the first time and seeing the build quality of the E-class Latitudes, it was hard to justify shelling out the additional premium for the HP units (the pretty consistent 15% off coupons for the Latitudes at Dell Outlet are a big incentive).  Given that the performance difference between the two would be largely marginal, I stuck with the E-class laptop once I found out about NewModeUS (Dell doesn't let you configure a laptop with two 2.5" hard drives the way I wanted it configured, and this was one of my key criteria, as I keep several multi-GB VM images on my laptop).

Overall, these laptops have been a joy to work with.  Far better than the Lenovo T series laptops (which my sister purchased herself despite my suggestions and which I use for some clients).  The screen is bright, the connectivity is great (though there's no USB 3.0, it does have eSATA and a DisplayPort connector), the keyboard is excellent (especially with the backlighting), the web cam and microphone are excellent, it has a pointer "nipple", and the build quality is top notch.  I regularly pick up the laptop one-handed and there's little discernible flex; the chassis is very rigid.  I also like that the system is so easy to customize for the do-it-yourselfer.  This allows you to buy a cheap chassis (focus on the CPU) and simply replace the RAM and the HDD.  The entire underside (a thin, magnesium alloy plate) is held in by one screw (to my surprise).

Even with the Core-i7 onboard, it isn't any noisier nor does it run appreciably hotter than my Core 2 Duo packing E6400.

I've also come to really like the overall design of the E-class Latitudes.  They're relatively thin, simple, and classy looking.  Much better looking than the Lenovos.

Dual Core or Quad Core? I struggled with this for a while as I was heavily considering one of the quad core Core-i7 processors.  However, I'm glad I chose the dual core.  I've found the performance to be excellent and the price, heat, and battery life trade-offs to be the big win.  Generally speaking, in development, it would seem that your limiting factors are the disk speed and RAM rather than the number of physical cores.  Given that the dual core CPUs have faster physical cores than the quad core CPUs, my feeling is that one is probably better off with the dual core Core i7 CPUs for a development laptop.

There was some good discussion on a thread over at NotebookReview.com with great insight on the topic.  Highly recommended read for developers in the same quandary as I was on dual core vs. quad core.

At the time, I was also thinking that having a quad core would help in terms of the VM (I was getting terrible performance on my SharePoint 2010 VM) by being able to assign two cores to the VM, but the VMWare documentation seems to advise against this (can't find it now, but there was a whitepaper on this very topic) in most scenarios.  In practice, with the 8GB of RAM and the SSD, the dual core Core-i7 has proven to be more than enough.

Suggestions for Developers. For any developers looking to get your own laptops or for small development shops, I'd definitely recommend looking at Dell Outlet and the E6410 and E4310 laptops.  Wait for the 15% off coupons and you'll get yourself a steal.  For the time being, unless you plan on getting the top of the line quad core Core i7 and you aren't concerned about heat or battery life, I'd stick with the dual core Core i5 or Core i7 CPUs.

Here's what I would do (once I've got a 15% off coupon code):

  1. Buy the chassis with the best CPU and ancillary features that are important to you (web cam, battery size, Bluetooth, Windows 7, x64, etc.) that you can find in their database.  For the most part, disregard the HDD, even if it comes equipped with an SSD.  You can more or less disregard the RAM, but look for something that has 4GB in one slot.
  2. Buy a SandForce-based SSD (the Callisto is a great SSD -- I've already purchased two of these).  You can check LogicBuy.com, as amazing deals do occasionally surface.  Target at least 120GB.
  3. Buy an extra 4GB of RAM from Newegg.
  4. Buy a drive tray from NewModeUS for your chassis (do note that the drive tray is an actual SATA interface -- WIN!).
  5. Buy a Western Digital Scorpio Black HDD and plug that into your new drive tray (Amazon has good prices if you have Prime).  Use this drive to store your large files and your VMs (store your source files on the SSD for speed).
  6. Buy an external enclosure for whatever drive you take out of the chassis.  I've used the ACOMDATA Tango enclosures (see my review at the link), which support eSATA.  Use this as an external drive or for backups.
  7. Do a clean install with the SSD as the primary.
  8. Once you have your system reinstalled, be sure to change the write caching policy to improve performance on the disk in the tray.  Follow these steps:
    1. Right click on Computer
    2. Select Manage
    3. Click Disk Management
    4. Right click on the disk and select Properties
    5. In the Hardware tab, select the disk and click Properties
    6. In the new dialog, select the Policies tab
    7. Here, you should enable write caching and you can also turn off the Windows write cache buffer flushing if you want.  Since it's essentially an internal drive now (unless you plan on hot swapping it) with battery backup, it should be pretty safe (but do so at your own risk!)

Write caching configuration

I'm not sure how the Seagate Momentus XT hybrid drive does with the sort of large files that you'd be working with for VMs, but I've had pretty good success with the Scorpio Black.

Suggestions for Dell. Get some better web developers.  Seriously.  The Dell Outlet site is barely usable.  It was terrible before they fixed it up, but they've somehow made it prettier, but much harder to use -- I wouldn't have thought that possible given the state the site was in when I first used it.

With a bit of patience (waiting for the coupon), luck (finding the right configuration for your needs), and elbow grease (upgrading a few components yourself), you'll have yourself a killer development machine at a great, budget-friendly price.  My E6410 is now my primary and only development machine.

Filed under: DevLife
4Aug/10

Book Review: Building Solutions for SharePoint 2010

Posted by Charles Chen

I've been working my way through Sahil Malik's Microsoft SharePoint 2010: Building Solutions for SharePoint 2010 and I'm almost finished now.  Just a quick review for anyone working on ramping up on 2010 or considering this book.

First, this book is good. I would recommend it without question to developers who are working on the SharePoint platform. Sahil Malik covers many of the new features as well as goes over some of the basics in a practical, mostly easy to read manner; there's practical advice in every chapter that you'll want to highlight and tuck away in your brain.

However, it's pretty apparent that the book was rushed (at least editorially) due to the large number of grammar mistakes, awkward sentences, terrible analogies (some of them a bit inappropriate in any textbook), and somewhat questionable structure of the content.

Again, Sahil does a good job of capturing a lot of the key changes in 2010 and gives good examples (chapter 5 being the most interesting to me, personally).  If I were grading on content alone, the book would be closer to 5 stars.  It's the editing team at Apress that has let down Mr. Malik by not putting enough attention into properly structuring and organizing the content and not performing adequate proofreading.  I feel that many of the concepts and ideas could have been organized a bit differently and more coherently to help the reader better link the concepts together and have a clearer path to ramping up.  It may have helped if the book had a more focused target audience (Mr. Malik himself points out that the book is broad in nature).

That said, as I mentioned in the opening of the review, I would definitely recommend this for SharePoint developers who are transitioning or preparing to transition from 2007.  Despite its flaws, it's still worthy of the time and money that you'll invest in it.  I do wish that the author had gone into more detail regarding best practices and design patterns for developing solutions in SharePoint; this is an area that is sorely lacking in terms of publications.  Mr. Malik might have made the book even better by incorporating the different examples around a central solution or problem instead of the scattershot approach of one-off examples.  In other words, his examples (in a book with "Building Solutions" in the title) would have been better served in the context of a more comprehensive, overarching example.

Update:  Finished the book.  A copy of the email I sent out to the folks in our practice:

Malik's book is worth the time to work through.  I'd recommend it for all of the devs here.  It provides good coverage of the most important topics in 2010 and is very readable (suffering somewhat from poor editing at Apress and questionable organization of content).  Chapters 5 (client object model), 8 (ECM), 9 (BCS), 10 (Workflow), and 11 (BI) all demonstrate some of the core new features in 2010 and should be required reading for any dev or architect.  Chapter 3 covers some of the core restrictions of the new sandboxed solutions model.  It's an important new feature, but it's equally important to understand the walls that are thrown up when using this model.

The book is generally light on code and, in most of the later chapters, covers how tasks can be accomplished from SharePoint Designer as well as from Visual Studio.  It doesn't really go into development practices and things like building custom content type forms and so on.  It doesn't cover building applications for the new services infrastructure.  Again, it's light on code and should be readable by non-devs as well.

It's short by tech book standards, so it is understandably lacking a bit in depth.  For example, some of the BI features in chapter 11 are really, really killer, but Malik doesn't dive into the technical details (one example would be the new REST APIs for Excel Services, which are supremely powerful, but Malik only gives a few examples without details on the API).  The BCS chapter (9) is also light and could have used much more meat, since it covers the 80% scenario but leaves a lot to be desired on the dirty work that would be required to build the 20% scenario.  On the other hand, the ECM chapter (8), while light like the other chapters, provides information that is particularly important for life sciences (and maybe financials).

In summary, I recommend getting this book used and going through it front-to-back.  It's not that long, but surfaces a lot of the important features that are new to 2010.