Effective Hiring for Small (or All?) Teams
I was recently reflecting with a former collaborator on the product I’m working on now. The team I lead has only 4 engineers. Yet the product we’ve developed transacts upwards of a million documents a year and is used by tens of thousands of clinical trial investigators and trial site support staff around the world, across hundreds of clinical trials and tens of thousands of investigative sites. While progress sometimes seems glacial, when I reflect on what we’ve actually achieved, even I am caught off guard.
I think that a key for us is this idea of social capital. I have worked with the core engineering team here for almost 6 years now and I have worked with some of the folks in the company for even longer than that, spanning previous endeavors and companies long gone. I have also been fortunate to have hired some really, really good teammates. As I’ve reflected on this, it has been important to think about why things worked out the way they did. Is it just pure chance that we made good hires? Or was there some underlying driver that increased our odds of finding people that fit our team and our mission?
One story I like to tell is that one of the key engineers on my team failed every item on our standard interview assessment (though he begs to differ), and yet we hired him. I often get asked why I gave the OK, and I say that it’s the way he failed. Several years ago, when we were looking to expand, I developed a simple assessment for an offshore team; the team was 12 hours ahead, which made it difficult for me to interactively screen each candidate myself. I ended up liking the assessment so much that I’ve since used it myself across several dozen interviews (at least 50+ candidates). On the one hand, it’s meant to test for key fundamentals; on the other, it’s designed to be challenging enough that people will get things wrong, especially on open-ended, design-oriented questions.
What I’ve found is that I value how people think about the challenging questions. I’ve been in screenings and interviews where a paper or digital test is provided and you are given time to complete the questions. In these cases, a lot of the richness of the individual is lost; such interactions test what an individual knows now, but they don’t capture the intangibles or subjective aspects of a candidate, such as whether the individual has a capacity for growth, whether the candidate’s ego will impede decision making and thinking, or whether the candidate is simply a good teammate. As Todd Rose covers in The End of Average, “talent is always jagged”. He cites Jim Sinegal, the co-founder and former CEO of Costco:
“Fit is everything,” Sinegal explained to me. “We look beyond simplistic ideas like a [college] transcript or things like that for hiring…. There are attributes that matter at Costco, like being industrious. But how do you see that on a resume?”
There are aspects of talent and fit that may simply be impossible to determine from a static assessment. For that reason, every assessment I give is interactive; there is no score cutoff or pass/fail metric. In terms of measurement, the goal of the assessment is purely to measure current aptitude, but the interactive manner in which it is completed provides insight into the candidate’s character. I have used a combination of jsfiddle.net and dotnetfiddle.net with live collaboration as tools to work with candidates in real time remotely. If an individual fails an assessment item, it’s just as important, if not more so, for me to understand the way in which the candidate failed.
Early on, I hadn’t given this enough thought to formalize the approach, but six years on, I think I can summarize it in a simple graphic:
Our society and professional environments place great value on aptitude, but there is a separate characteristic I find just as important: drive. A candidate with low aptitude and high drive can become high aptitude with time and training, but a candidate with low drive will always be limited. To me, the value placed on aptitude should be lower than the value placed on drive. The reason is simple: ideas and technologies change and iterate so quickly now that aptitude in any given technology or platform will expire in a flash if it isn’t paired with a drive to learn, adapt, and adopt new technologies and approaches. It’s not that I don’t value high-aptitude individuals, but a high-aptitude individual without drive will never be a source of innovation and value creation; such individuals will contribute solid work, but cannot be relied upon to bring change and efficiencies.
I would strongly recommend Todd Rose’s The End of Average and Carol Dweck’s Mindset; both have been influential in helping me better understand the “luck” I’ve had in building a great team.