<CharlieDigital/> Programming, Politics, and uhh…pineapples


The Future of the Automotive Industry

Posted by Charles Chen

Lately, I've been having a lot of discussions with various folks about my thoughts on the future of the automotive industry.

Within the next 10 years, we will see a huge transformation in the industry and how consumers use cars.

Every manufacturer has announced or is working on some variant of ride sharing and autonomous driving.


Driverless cars could imminently be operating on London's streets, after Nissan announced it had been cleared by the UK government to commence limited trials.

While Google has been testing its own autonomous vehicles on public roads near its Californian headquarters, Nissan claimed that its driverless cars will be the first to hit public roads in Europe—if, that is, the Japanese manufacturer receives final approval from an undisclosed local authority in the UK's capital.

Volvo ride sharing and Drive Me:

DETROIT—Volvo is among the leaders of the pack of automakers when it comes to autonomous driving. The various advanced driver assists in its current XC90 and S90 are some of the best we've tested, and the carmaker recently linked up with Uber to develop redundant systems in self-driving cars. But before there was the Uber collaboration, there was Drive Me, a multiyear research program that the company will use to look at how it, as a car maker, can contribute to a "sustainable society." In the video above, we speak to Trent Victor, senior technical leader of crash avoidance at Volvo, about the program.

Volvo chose this year's North American International Auto Show to hand over the first set of keys in the Drive Me program. It's in the process of recruiting 100 families in Gothenburg, Sweden, but the first lucky family is the Hains. Over the next few years, the Hains and the other participating families will be testing out a number of different research vehicles like the XC90 SUV seen in the video. In addition to testing out new iterations of self-driving systems, the vehicles will also be fitted with sensors and data loggers in the cabin to monitor the occupants.

GM and Lyft, in which GM has invested $500 million:

General Motors Co. and Lyft Inc. within a year will begin testing a fleet of self-driving Chevrolet Bolt electric taxis on public roads, a move central to the companies’ joint efforts to challenge Silicon Valley giants in the battle to reshape the auto industry.

Cadillac (GM) has started its BOOK program:

A flat monthly fee of $1,500 eliminates the hassles of car ownership so members can experience uninhibited driving. Membership is month-to-month with no long-term commitment required. Members can use a mobile app to reserve vehicles that will be delivered to their specified locations via a white-glove concierge service. Certain location restrictions apply. Members will have access to the current year Platinum Level Trim Cadillacs, including the XT5, CT6, Escalade and V Series. Registration, taxes, insurance and maintenance costs are included in the monthly rate and there is no limit on mileage.

And of course, there are Tesla, Google, and Nvidia.

While there has been a lot of skepticism among the people I've talked to, the reality is that we are at a convergence of technology and engineering that will transform the automotive industry -- and many ancillary industries -- in the next 10 years.  Leaders in the auto industry deserve some credit: we've seen industries like music and television wholly unprepared for the transition wrought upon them by technology, yet by and large it seems clear that the auto industry sees the shift coming and has invested in shaping its own future.

This moment is the convergence of computer vision, improvements in mobile computing, and maturity in the fields of neural networks and deep learning -- the latter two are now commodities that anyone can take advantage of for pennies on Amazon or Azure.  Just as we saw a shift in enterprise infrastructure once compute became a commodity, so too will we see a shift in the prevalence of "AI" given the advancement and commoditization of these capabilities.

There are powerful business drivers, of course.  From the perspective of companies like Google, it allows them to achieve higher engagement and serve more ads.  It also opens up new models of revenue and advertising; imagine a smart car that can suggest "sponsored" restaurants in the area if you are heading out for a meal.  For companies like GM, Nissan, Volvo, etc., it is an evolve or die scenario as the industry is transformed.

The degree of transformation in this industry will be massive.  The first wave will be an increase in programs like BOOK that continue the trend of unbinding the need for transportation from the need for ownership.  We are already seeing this with the tremendous growth of Uber and Lyft in the last few years.  BOOK is currently only available for Cadillac's most premium cars with "white glove" concierge services -- and the price reflects that!  However, I think we will see this model move downmarket.

One might fairly ask how this is any different from renting a car.  I think there's one key difference: the model is to "rent" directly through the manufacturer and this is a precursor to an all new model.  In a future when autonomous driving is mainstreamed, these services will reach their full potential whereby manufacturers will sell services directly to consumers.  You won't buy or lease a car, but rather summon an autonomous car from a local hub that will come and transport you for your trip in much the same way that you would hail an Uber ride.  It is this transformation that every manufacturer sees and it is this inevitability that they are prototyping and investing in.

In some sense, Tesla has been at the forefront of some aspects of this model.  Obviously, Autopilot is one of the most competent semi-autonomous driving systems currently on the market.  But beyond that, Tesla has already done away with the traditional dealership model, though not without legal challenges from entrenched dealerships.  While Tesla has been fighting these legal battles, traditional manufacturers like GM, Renault-Nissan, and Ford have been watching from the sidelines, waiting to see the result.  After all, there is no logical reason why Toyota can't sell directly to you except that an entrenched model and legal framework prevent it from doing so.  But they have been preparing, learning, and now experimenting, as we see with BOOK.

With the inevitable reduction in car ownership, we can expect quite a bit of fallout across many industries.

For starters, the model of automotive insurance will need to change.  Consumers may carry additional personal injury insurance, but the cost of insurance will be shifted to the service provider in much the same way that the insurance on your Uber ride is the responsibility of the Uber driver.

Automotive dealerships will also need to evolve.  It is likely that we will see dealerships transform into hubs that largely provide service and maintenance as well as a central distribution point (though a fully distributed model is certainly also possible).  Many dealerships will likely go out of business in this process as profitability drops and more competitive options arise for manufacturers.

Used car dealerships and the entire infrastructure supporting them will need to evolve as there will be fewer buyers of used cars.  The value of used cars themselves may take a large hit as the market of buyers shrinks and the cost of ownership increases.

Municipalities will need to rethink their model of infrastructure planning.  Just last year, a township in New Jersey piloted a program that reimbursed residents for Uber rides instead of investing in a new parking lot.  What effect will a new model have on bond commitments related to existing infrastructure?  How will it affect zoning and planning?  Municipalities will also have another challenge: how will autonomous vehicles affect their revenues from traffic violations when these cars obey posted speed limits and stop at every stop sign and red light?  What happens when no one needs to park at meters because the cars operate on demand?  How will they make up this gap in their revenue stream?

Small businesses like car washes and even big businesses like auto parts stores will need to plan for a future where ownership decreases.  Many will likely go out of business, though probably over a span of 15-20 years.

There will be new industries and new opportunities as well that will transform local businesses.  For example, an autonomous food or parcel delivery vehicle can be configured very differently from a typical vehicle designed to transport humans.  Will we have a need for pizza or takeout delivery drivers when an autonomous vehicle can deliver the food more cost effectively?  It is not likely that a small local restaurant would buy these vehicles; rather, it would rent them from a vendor that specializes in them and increase its delivery capacity on demand.

Manufacturers themselves will need to figure out how to shift their business and resource models in a future where the ratio of cars to riders is significantly lower.  How will it affect their current investments in manufacturing facilities?  What about their commitments to their human resources?  What will the effect be on their current real estate investments?  What types of resources will they need in the future when they transition into not only a manufacturer, but also a service provider?  Or will the model be altogether different, with the service provider spun off from the manufacturer?  Uber will be redundant when manufacturers can provide the services directly without a middleman, much like how streaming has shifted the relationship between content producers and consumers.  Perhaps Uber will become more like a Hulu, providing a consortium of manufacturers a platform for managing services.  Tesla is already heading down this route.

Maybe a more important question for manufacturers is how the market will shift once ownership is a thing of the past.  Will people still care about brands?  Or will they care more about the purpose?  I need to transport 6 adults.  I need to transport sheets of plywood and drywall.  I want something a bit flashier for my date.  By and large, I think most folks don't care what their Uber driver is driving; rather, they care about the class of vehicle: luxury for a high-end experience, minivan or SUV for carrying people and luggage to the airport, typical sedan for lowest cost.  Perhaps we will see the death of a few brands as the market contracts in reaction to this new model.

The fossil fuel industry will also be impacted as transportation models become more efficient (even setting aside electrification) and fewer cars are needed to meet the same demand.  For example, we could see lower-priced services that allow multiple riders per vehicle based on smart routing.  We could even see models like those in the airline industry, where smart routing will pool and "transfer" riders to maximize efficiency and provide a lower-cost service.  Google already has a patent for smart pickup and dropoff locations.  It's not difficult to make the leap that they could more intelligently route pickups and dropoffs to maximize efficient routing of the vehicles matched to demand.  "Passenger Charles Chen, please exit the vehicle here.  A blue Prius with license plate 247X3K will be here shortly to continue your trip.  Thank you for using Google Transit; you saved $3.50 and 4,000 grams of CO2 by using Google Transit Eco today!  You should arrive at your destination in 15 minutes; you are still on time for your 6:00 PM appointment.  Would you like to stop at Starbucks for coffee first?"

In discussions with skeptics, one argument that comes up is the ownership experience.  There are various aspects to this, such as status or pride of ownership (Americans do have a strong history of sentiment attached to their cars) or even conveniences such as keeping your things in your car.  But I think that this, too, will change culturally as a matter of convenience, in much the same way that we have shifted away from physical media for music and video.  My 2016 Mazda CX-9 doesn't even come with a CD player.  How did this happen?  After all, there is a deep culture associated with physical media, from vinyl to mix tapes to CD jackets.  The same goes for books; there is a certain experience associated with reading a physical book that is strongly ingrained in our culture: libraries, the smell of books, taking notes and dog-earing pages, passing a book down from generation to generation.  And yet, e-books are here to stay despite their limitations and restricted ownership models (Amazon can wipe your account at any time, after all).  The answer is partly convenience enabled by technology; with the availability of always-connected devices, the need to even carry digital media around is redundant.  Why do so when you can stream any song, wherever you may be?  I've seen this shift even in flights, where airlines are no longer investing in screens on their planes and are instead investing in streaming to personal devices.

It is not the extension or progression of a model such as renting a car, as suggested by one of my counterparties, but rather an all new model.  In much the same way, Uber is not an extension of the model of taxis; it entirely disrupts the business model by removing the barrier of medallions and licenses.  Even if you slapped the Uber app on top of existing taxi businesses, it would not be the same model.  Likewise, Airbnb is not an extension of the hotel business model; it is an entirely new model that disrupts the existing business model.  The shift we will see in the automotive industry is not an extension or progression of an existing model, it is a wholly new model which will come to dominate how we consume transportation services.

It will be far more convenient for a generation of consumers who have no interest in maintaining cars.  For parents who are too busy to shuttle their kids around to this practice or that lesson.  For business travelers who need a vehicle all over the country but don't want the hassle of booking rentals.  For restaurants that can rent delivery capacity on demand instead of hiring drivers.  For a generation that will grow up with devices and have no desire to suffer boredom and tedium when they have a choice.  The spaces that we reserve for parking cars can be used for better purposes.  Townships will not need to make heavy capital investments in wasted "dead zones" like parking decks.

I look forward to this future and it will be interesting to observe what other types of fallout we see from this shift in the next decade.


The Science of Organic Milk

Posted by Charles Chen

If you're like me, you've noticed that organic milk tends to have a longer shelf life and tastes better, so I tend to spend the extra money: aside from our 1-year-old, the family drinks milk erratically, and I prefer the taste.  But are these properties a result of the organic nature of the milk?  It turns out that there is a simple explanation for both that is quite interesting and may (or may not) change your mind about spending extra on organic milk.

First is the question of longer shelf life.  This is actually a result of logistics.  There are fewer farms producing organic milk, so it often has to travel further, and it therefore undergoes a high-temperature pasteurization process that kills all bacteria:

The process that gives the milk a longer shelf life is called ultrahigh temperature (UHT) processing or treatment, in which milk is heated to 280 degrees Fahrenheit (138 degrees Celsius) for two to four seconds, killing any bacteria in it.

Compare that to pasteurization, the standard preservation process. There are two types of pasteurization: "low temperature, long time," in which milk is heated to 145 degrees F (63 degrees C) for at least 30 minutes*, or the more common "high temperature, short time," in which milk is heated to roughly 160 degrees F (71 degrees C) for at least 15 seconds.

The different temperatures hint at why UHT-treated milk lasts longer: Pasteurization doesn’t kill all bacteria in the milk, just enough so that you don't get a disease with your milk mustache. UHT, on the other hand, kills everything.

Interestingly, UHT-treated milk no longer needs refrigeration (prior to opening).  Your grocer keeps it refrigerated as a matter of consumer expectation (how silly we Americans are).

The answer to the second question actually arises from the answer to the first.  UHT processing changes the chemical nature of the milk by breaking down some proteins and cooking some of the sugars.  Organic milk tastes different not because it's organic, but because of the pasteurization process, which happens to change some of the molecular structure of the milk:

UHT sweetens the flavor of milk by burning some of its sugars (caramelization)....UHT also destroys some of the milk’s vitamin content—not a significant amount—and affects some proteins

So there you have it: organic milk does indeed taste different from non-organic milk -- it's not a placebo effect -- but it's not because it's organic.  If you're a European in the US and you find our milk tastes funny, try the organic milk.

I may take this article up on its suggestion and give non-organic UHT milk a try.


Why Cursive Should be Taught in the Age of STEM

Posted by Charles Chen

It's a topic that comes up from time to time in various channels on the topic of education and modern curricula.  NPR just had an article today on it and how Alabama passed a law requiring it to be taught.  Some might see that as a backwards policy from a backwards state.  Do we have a need nowadays for something as outdated as cursive?  Wouldn't it be better to spend that time focusing on math, science, or reading instead?  Won't our kids in the future just use voice dictation or typing -- why bother with handwriting?

Actually, I think there is a very good reason why cursive handwriting should continue to be taught and graded: fine motor skills and dexterity.  Even in the digital age, we still need these skills; they are precursors to touch typing and to manipulating the small objects that are still very relevant in the age of STEM (motors, wires, circuitry, microscopes, pipettes, scalpels, transistors, etc.).  It brings to mind an awesome video from a show called "Supreme Skills" out of Japan that pitted aerospace engineers against machinists to see who could design and build a more precise spinning top:

Master Craftsmen Vs Rocket Engineers: The... by GAG_TV

(I've sometimes wondered if the reason why Asians are stereotypically good at musical instruments or manufacturing electronics and goods is because Asian languages are much more difficult to write with more patterns and strokes required to be learned and executed.)

Some argue that you can develop those skills through other means, like using a mouse instead.  Imagine a simple, timed game where a child has to click or touch precisely to score.  But there are practical reasons why this doesn't work so well: cost, for one, and having a mouse available at every desk (which implies a laptop or workstation at every desk) would be a logistical nightmare.  Cursive?  It's cheap and practical; all you need is a $0.10 pencil and a $0.001 sheet of paper to teach and practice.  Every child can practice writing cursive at home, regardless of their socioeconomic background, and it doesn't require much of an expense at all.  Even as tablets and phones become ubiquitous and ever cheaper, it's hard to beat practically free.

Observing my 5-year-old, I can see the purpose in a lot of activities that otherwise seem like they are just for fun.  Even as a child colors or cuts shapes or glues balls of cotton to a piece of paper, all of these activities are training for the precision and fine motor skills required for all sorts of more complex activities, from playing an instrument, to touch typing, to manipulating minute electronics.  These activities are not just for fun; they are how children learn to precisely control their fingers for pressure and motion.  Anyone who's had a child knows that young kids will have trouble pushing together or separating Legos, or they'll squeeze out too much glue, or they'll have trouble drawing a straight line, or they can't copy shapes precisely, or their cuts will be jagged and off the line.  In repeatedly performing these types of activities through the course of play, they develop the fine motor skills they need later in life; I tend to see cursive handwriting as a natural extension of these types of activities and necessary for all young children.

So even in the digital age, as a parent, I would welcome cursive into my child's curriculum because the purpose isn't to learn cursive, but to develop the fine motor skills that will be required to perform more complex digital manipulations as my child matures.

Filed under: Uncategorized

Thought of the Day

Posted by Charles Chen

From this post:


Filed under: Uncategorized

Adventures in Single-Sign-On: SharePoint Login Via Office 365

Posted by Charles Chen

If you are still working with on-premises SharePoint 2010/2013, or your application only supports SAML 1.1, but you'd like to leverage your new and shiny Office 365 accounts for single sign-on, you can achieve this relatively painlessly by using Windows Azure ACS as a federation provider (FP).  It's possible to use Office 365 (Azure AD) directly as your identity provider (IdP), but for SharePoint 2010, this involves writing custom code since SharePoint 2010 can't consume SAML 2.0 tokens without custom code (only SAML 1.1 via configuration).

The overall flow for the scenario is diagrammed below:

Using Windows Azure ACS as a Federation Provider for Azure AD

In this scenario, the trust relationship from SharePoint is only to the FP (Azure ACS) which acts as a proxy for the trust to the IdP (Azure AD).  Azure AD contains the actual credentials, but we proxy those credentials through ACS to take advantage of the SAML 2.0 -> SAML 1.1 translation without writing code (otherwise, it would be possible to directly establish trust to Azure AD or through AD FS).  When the user accesses a protected resource in SharePoint, the user is redirected first to the FP which then redirects to the IdP and proxies that response via a series of redirects to the relying party (RP), SharePoint.

The first step is to create an Azure ACS namespace.  You'll need your ACS namespace URL, which should look like: https://{NAMESPACE}.accesscontrol.windows.net

Next, in Azure AD, create a new application which will allow your ACS to use Azure AD as an IdP.  On the APPLICATIONS tab, click ADD at the bottom to add a new application.  Enter a descriptive name for the application (note: you may want a more "friendly" name -- see last picture):


Then enter the following for the SIGN-ON URL and APP ID URI:


Before leaving the Azure Portal, click on the VIEW ENDPOINTS button at the bottom of the dashboard and copy the URL for the FEDERATION METADATA DOCUMENT.

Now hop back over to Azure ACS management and add a new identity provider.  Select WS-Federation identity provider and on the next screen enter a descriptive name and paste the URL into the field:


Once you've set up the IdP, the next step is to set up the relying party (RP).  The key is to get the following settings correct:


In this example, I'm just using my local development environment as an example, but you must specify the _trust URL and explicitly select SAML 1.1 from the Token format dropdown.  Additionally, uncheck Windows Live ID under Identity providers so that only the one configured previously remains checked.

Finally, on this screen, you will need to specify a certificate to use for signing.  Under Token signing, select Use a dedicated certificate and then either use an existing valid X.509 certificate or create one for testing purposes.

Create the certificate using the following command:

MakeCert.exe -r -pe -n "CN={ACS_NAMESPACE}.accesscontrol.windows.net" -sky exchange -ss my -len 2048 -e 06/01/2017
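As an aside, MakeCert has since been deprecated; on newer versions of Windows, a similar certificate can be created natively in PowerShell.  This is just a sketch under the assumption that the New-SelfSignedCertificate cmdlet (Windows 8.1/Server 2012 R2 and later) is available in your environment:

```powershell
# Create a self-signed key-exchange certificate in the current user's
# personal store; the subject must match your ACS namespace
$cert = New-SelfSignedCertificate `
    -Subject "CN={ACS_NAMESPACE}.accesscontrol.windows.net" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyLength 2048 `
    -KeySpec KeyExchange `
    -NotAfter (Get-Date "2017-06-01")
```

Either approach yields an equivalent certificate for testing purposes.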

You will need to export the certificate from the certificate store with the private key.  Press WIN+R and run mmc.exe.  In the MMC, click File and then Add or Remove Snap-ins.  Select the Certificates snap-in and click OK.  Locate the certificate and export it with the private key.  While you're here, export it again without the private key; you will need this second certificate when setting up the authentication provider in SharePoint.
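If you prefer to skip the MMC, the two exports can also be scripted.  This is a sketch assuming the certificate was created in the current user's personal store with the subject shown (the file paths and password are placeholders):

```powershell
# Locate the certificate created earlier by its subject name
$cert = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN={ACS_NAMESPACE}.accesscontrol.windows.net" }

# Export with the private key (PFX) -- this is the copy uploaded to ACS
$password = ConvertTo-SecureString "{PFX_PASSWORD}" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath "c:\temp\cert-with-private-key.pfx" -Password $password

# Export without the private key (CER) -- this is the copy SharePoint will trust
Export-Certificate -Cert $cert -FilePath "c:\temp\cert-no-private-key.cer"
```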

Back in the ACS management app, upload the first certificate that was exported and enter the password.  An important note is that ACS will always append a "/" after your realm; we will need to make sure that when we register the authentication provider, we include this in the login URL.

Before leaving the ACS management app for good, we need to update the rule group to pass through the claims.  On the left hand side, click on Rule groups and select the default rule group created for our RP.  Now click Generate to create the default rule set.

One thing I discovered through trial and error (mostly error) is that Azure AD does not seem to provide a value for the emailaddress claim, which we will be using later (you don't technically have to use this as the identifying claim, but I did in SharePoint before discovering that its absence causes an error).  So we'll remap the "name" claim to "emailaddress".  Click on the name claim (http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name) and select the emailaddress claim type (http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress) as the output claim type.

Back in the SharePoint environment, we'll use the certificate exported without the private key to create a new trusted root authority for SharePoint, then create and register the identity provider.  Fire up a management shell and enter the following commands:

# Register the ACS token signing certificate (exported without the private key)
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("c:\temp\cert-no-private-key.cer")
New-SPTrustedRootAuthority -Name "ACS Token Signing Certificate" -Certificate $certificate

# Map the incoming emailaddress claim (remapped from "name" in the ACS rule group)
$cm0 = New-SPClaimTypeMapping "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

# Note the trailing "/" after the realm -- ACS always appends it
$loginUrl = "https://{ACSNAMESPACE}.accesscontrol.windows.net/v2/wsfederation?wa=wsignin1.0&wtrealm=https://sso.dev.local/&redirect=false"
$realm = "https://sso.dev.local/"
$issuer = New-SPTrustedIdentityTokenIssuer -Name "ACS" -Description "ACS" -Realm $realm -ImportTrustCertificate $certificate -ClaimsMappings $cm0 -SignInUrl $loginUrl -IdentifierClaim $cm0.InputClaimType
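As a sanity check, you can confirm the registrations from the same shell (a sketch, assuming the names used above):

```powershell
# Verify that the trusted root authority and the token issuer were registered
Get-SPTrustedRootAuthority -Identity "ACS Token Signing Certificate"
Get-SPTrustedIdentityTokenIssuer -Identity "ACS" |
    Select-Object Name, ProviderUri, IdentityClaimTypeInformation
```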

And finally, from SharePoint Central Admin, we can now add the "ACS" authentication provider:


The name that we gave earlier to the application in Azure AD can now also be configured to display in the app drawer of Office 365:


Filed under: Azure, SharePoint, SSO

Pau Gasol on Kobe Bryant

Posted by Charles Chen

SI has an excellent "farewell" from Pau Gasol to Kobe Bryant.

One of the key takeaways, for me:

He was challenging me because he expected more from me. When somebody cares about you, that’s when they challenge you. When they don’t care about you, they ignore you. That’s when you should worry.

It is natural for people to react negatively to critical feedback, especially when that feedback addresses a shortcoming or a weakness in some work, some effort that one has invested heavily in.

It is important to step back and detach the emotional aspect of coming up short of perfection and consider whether this critical feedback is dismissive or whether it is a revelation of some unmet potential that we, ourselves, have not yet recognized.

The whole thing is a good read and great piece of motivation to understand what it takes to succeed.



A Recipe for Execution in Crunch Time

Posted by Charles Chen

For many small software teams with loose (or non-existent) project management, "crunch time" usually leads to the unraveling of the project to some degree and inevitable crisis management.

In most of these cases, crunch time is the result of:

  1. Poor initial scoping - putting too much work into too little time or insufficient resource load
  2. Poor project methodology - poor tracking of estimates to reality, no mechanism of measuring progress
  3. Bad luck (or good luck?) - team is spread too thin due to unforeseen circumstances like a critical bug or landing a big contract with a new customer
  4. Poor communication and setting of customer expectations - if your team can't deliver, always communicate reality (assuming you know it) as early as possible so that there is no surprise and all parties can make sound, rational decisions.

So what do you do once you find yourself in crunch time, up against a deadline?  My recipe is simple:

  1. Find the gap - get the stakeholders together and identify the functional gap that remains.  The BAs, the SMEs, the developers, the testers, maybe the customer -- get everyone together in a room (or on a conference call) and identify the work that remains.  There must be agreement so that nothing is left unconsidered.
  2. Estimate the gap - once the gap is identified, the next job is to estimate how much work remains in the functional gap.  Again, all stakeholders should be involved as testing and documentation must also be factored into the remaining work.
  3. Prioritize the remaining work - whether the gap estimate is less than the remaining time or greater than the remaining time, it is important to prioritize the remaining work so that the most important functional gaps are addressed first, just in case other unforeseen circumstances arise.
  4. De-scope or move timelines - if more work remains than the time available, there are really only two options: de-scope some functionality or move the timeline.  It is rarely the case that you can solve issues in crunch time by adding more resources.  Even if there is less work than time available, it often makes sense to create some breathing room by de-scoping.
  5. Communicate - once reality has been clearly mapped out, communicate with the client to manage expectations and make sure all parties agree that it makes sense to move some less important features off into a future release.  Focus the conversation and communications on quality and ensure that the customer understands that the decisions are for the sake of software quality and the end users.

Fred Brooks summarized this decades ago in The Mythical Man-Month:

More software projects have gone awry for lack of calendar time than for all other causes combined.  Why is this cause of disaster so common?

First, our techniques for estimating are poorly developed.  More seriously, they reflect an unvoiced assumption which is quite untrue, i.e., that all will go well.

Second, our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable.

Third, because we are uncertain of our estimates, software managers often lack the courteous stubbornness of Antoine's chef.

Fourth, schedule progress is poorly monitored.  Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering.

Fifth, when schedule slippage is recognized, the natural (and traditional) response is to add manpower.  Like dousing a fire with gasoline, this makes matters worse, much worse.

Brooks' reference to "Antoine's chef" is a reference to a quote:

Good cooking takes time.  If you are made to wait, it is to serve you better, and to please you.

- Menu of Restaurant Antoine, New Orleans

He adds, later:

Observe that for the programmer, as for the chef, the urgency of the patron may govern the scheduled completion of the task, but it cannot govern the actual completion.  An omelette promised in two minutes, may appear to be progressing nicely.  But when it has not set in two minutes, the customer has two choices -- wait or eat it raw.  Software customers have had the same choices.

The cook has another choice; he can turn up the heat.  The result is often an omelette nothing can save -- burned in one part, raw in another.

Brooks provides other thoughts on this topic, of course, but my take is to realize that the "omelette" will be delayed and communicate the reality to the customer and let her make the call: does she still want it if it will be late?  Can we bring out other parts of the meal before the omelette?  Or will she sacrifice the "done-ness" to get it on schedule?

But of course, job one is to recognize that it will be late and then identify how late; nothing can progress until those two activities are accomplished.

Filed under: Uncategorized

Adventures in Single-Sign-On: Cross Domain Script Request

Posted by Charles Chen

Consider a scenario where a user authenticates with ADFS (or an equivalent identity provider (IdP)) when accessing a domain such as https://www.domain.com (A) and then, from this page, a request is made to https://api.other-domain.com/app.js (B) to download a set of application scripts that will then interact with a set of REST-based web services in the B domain.  We'd like to have SSO so that claims provided to A are also available to B and so that the downloaded application scripts can subsequently make requests with an authentication cookie.

Roughly speaking, the scenario looks like this:

Depiction of the scenario

It was straightforward enough to set up the authentication with ADFS using WIF 4.5 for each of A and B following the MSDN "How To"; I had each of the applications separately working with the same ADFS instance.  However, the cross domain script request from A to B at step 5 for the script file generated an HTTP redirect sequence (302) that resulted in an XHTML form from ADFS with JavaScript that attempts to execute an HTTP POST for the last leg of the authentication.  This was good news because it meant that ADFS recognized the user session and tried to issue another token for the user in the other domain without requiring a login.

However, this obviously posed a problem as, even though it appeared as if it were working, the request for the script could not succeed because of the text/html response from ADFS.

Here's what https://www.domain.com/default.aspx looks like in this case:

    <script type="text/javascript" src="https://api.other-domain.com/app.js"></script>

This obviously fails because the HTML content returned from the redirect to ADFS cannot be consumed.

I scratched my head for a bit and dug into the documentation for ADFS, trawled online discussion boards, and tinkered with various configurations trying to figure this out with no luck.  Many examples online discuss this scenario in the context of making a web service call from the backend of one application to another using bearer tokens or WIF ActAs delegation, but these were ultimately not suited for what I wanted to accomplish as I didn't want to write any tokens into the page (for example, adding a URL parameter to the app.js request), make a backend request for the resource, or use a proxy.

(I suspect that using the HTTP GET binding for SAML would work, but for the life of me, I can't figure out how to set this up on ADFS...)

In a flash of insight, it occurred to me that if I used a hidden iframe to load another page in B, I would then have a session cookie with which to make the request for app.js!

Here's what the page in A looks like:

<script type="text/javascript">
    function loadOtherStuff() {
        var script = document.createElement('script');
        script.setAttribute('type', 'text/javascript');
        script.setAttribute('src', 'https://api.other-domain.com/appscript.js');
        document.body.appendChild(script);
    }
</script>
<iframe src="https://api.other-domain.com" style="display: none"
        onload="loadOtherStuff()"></iframe>

Using the iframe, the HTTP 302 redirect is allowed to complete and ADFS is able to set the authentication cookie without requiring a separate sign on since it's using the same IdP, certificate, and issuer thumbprint.  Once the cookie is set for the domain, then subsequent browser requests in the parent document to the B domain will carry along the cookie!

The request for appscript.js is intercepted by an IHttpHandler and authentication can be performed to check for the user claims before returning any content. This then allows us to stream back the client-side application scripts and templates via AMD through a single entry point (e.g. appscript.js?app=App1 or a redirect to establish a root path depending on how you choose to organize your files).
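To illustrate the single-entry-point idea, here is a rough sketch (plain JavaScript, with hypothetical app names and module paths; the actual mapping would live in the server-side handler) of how the `app` parameter on `appscript.js?app=App1` might resolve to the AMD module roots streamed back to the client:

```javascript
// Hypothetical manifest: maps the ?app= parameter of the appscript.js entry
// point to the AMD module roots that should be streamed back for that app.
const appManifests = {
    App1: ['app1/main', 'app1/templates'],
    App2: ['app2/main', 'app2/templates']
};

function resolveModules(queryString) {
    const app = new URLSearchParams(queryString).get('app');
    // Unknown (or missing) app names resolve to nothing rather than leaking scripts.
    return appManifests[app] || [];
}
```

The point of funneling everything through one entry point is that the claims check happens exactly once, before any module list is resolved at all.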

Any XHR requests made subsequently still require proper configuration of CORS on the calling side:

$.ajax({
    url: 'https://api.other-domain.com/api/Echo',
    type: 'GET',
    crossDomain: true,
    xhrFields: {
        withCredentials: true
    },
    success: function (result) {
        console.log('RETRIEVED');
        console.log(result);
    }
});

And on the service side:

    <!-- Needed to allow cross domain requests -->
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="https://www.domain.com" />
        <add name="Access-Control-Allow-Credentials" value="true" />
        <add name="Access-Control-Allow-Headers" value="accept,content-type,cookie" />
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
      </customHeaders>
    </httpProtocol>

    <!-- Allow the CORS pre-flight OPTIONS verb through request filtering -->
    <security>
      <requestFiltering allowDoubleEscaping="true">
        <verbs>
          <add verb="OPTIONS" allowed="true" />
        </verbs>
      </requestFiltering>
    </security>

    <!-- Handle the CORS pre-flight request -->
    <modules>
      <add name="CorsOptionsModule" type="WifApiSample1.CorsOptionsModule" />
    </modules>
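As an aside on why the OPTIONS verb matters here: the browser only sends a pre-flight for "non-simple" requests.  A simplified sketch of that decision (real browsers apply more rules, e.g. around content types):

```javascript
// Simplified sketch of when a browser sends a CORS pre-flight (OPTIONS) request.
const SIMPLE_METHODS = ['GET', 'HEAD', 'POST'];
const SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language', 'content-type'];

function needsPreflight(method, headers) {
    if (!SIMPLE_METHODS.includes(method.toUpperCase())) {
        return true;
    }
    // Any header outside the CORS-safelisted set forces a pre-flight.
    return Object.keys(headers).some(
        h => !SIMPLE_HEADERS.includes(h.toLowerCase())
    );
}
```

A plain GET with cookies is a simple request, which is why the script and XHR calls above mostly work once the headers are in place; it's custom headers and verbs that bring OPTIONS into the picture.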

The options handler module is a simple class that responds to OPTIONS requests and also dynamically adds a header to the response:

    /// <summary>
    ///     <c>HttpModule</c> to support CORS.
    /// </summary>
    public class CorsOptionsModule : IHttpModule
    {
        #region IHttpModule Members

        public void Dispose()
        {
            // Clean-up code here.
        }

        public void Init(HttpApplication context)
        {
            context.BeginRequest += HandleRequest;
            context.EndRequest += HandleEndRequest;
        }

        #endregion

        private void HandleEndRequest(object sender, EventArgs e)
        {
            string origin = HttpContext.Current.Request.Headers["Origin"];

            if (string.IsNullOrEmpty(origin))
            {
                return;
            }

            // Echo the origin back for non-WCF POST requests.
            if (HttpContext.Current.Request.HttpMethod == "POST"
                && HttpContext.Current.Request.Url.OriginalString.IndexOf(".svc") < 0)
            {
                HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", origin);
            }
        }

        private void HandleRequest(object sender, EventArgs e)
        {
            // Respond to the CORS pre-flight OPTIONS request immediately.
            if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
            {
                HttpContext.Current.Response.Flush();
            }
        }
    }

The end result is that single-sign-on is established across two domains for browser to REST API calls using simple HTML-based trickery (only tested in FF!).

Filed under: .Net, Identity, SSO

5 Pitfalls to Software Project Failure

Posted by Charles Chen

Poorly Controlled Scope

Scope is enemy number 1; it is the amorphous blob that threatens to grow until it is an uncontrollable monster, swallowing all of your carefully planned man-hours.

Increases in scope are often the result of failure to manage the customer and expectations.  In any given project, there are only so many levers that can be used to control the successful delivery and it is up to the skilled project manager or client interface to toggle these levers of team size, timelines, requirements, and so on.

The worst is when growth of scope originates from within the team as it is a form of cancer that only causes teams to compromise on quality to meet timelines promised to the customer.  You see, when scope creep originates from the customer, there is a certain expectation that of course, costs will increase or timelines will need to be shifted.  After all, they are asking you to do more than was initially agreed upon.  But when new scope originates from the team itself, the customer will not readily accept this delay.

The cost of scope increases is often not well accounted for.  A change that takes a developer 2 days to make will cause ripples that force test teams to adjust their scripts, documentation teams to update their documents, and possibly trigger expensive regression testing.

Smart teams and leaders will understand that these can be controlled, in many cases, by simply creating a roadmap and understanding that desired features and capabilities that don't fit into existing timelines can be added to "v.next".

(Over) Reliance on Manual Effort

To a certain extent, software engineering requires raw manpower to execute large projects that require many hundreds of thousands of lines of code and lots of moving parts.

But within the lifecycle of a project, there are many activities that can be simplified by the use of automation.  Teams must judiciously balance the cost and effort of the automation versus the savings gained, but more often than not, even a little bit of automation is better than none.  It's crazy to think that it was once the case that all phone calls were manually routed between parties.
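The balance of cost versus savings can be put in rough numbers.  A back-of-the-envelope sketch (all figures hypothetical):

```javascript
// Back-of-the-envelope break-even for test automation (all numbers hypothetical).
// Returns how many regression runs it takes before automation pays for itself.
function breakEvenRuns(automationCostHours, manualRunHours, automatedRunHours) {
    const savingsPerRun = manualRunHours - automatedRunHours;
    if (savingsPerRun <= 0) {
        return Infinity; // Automation never pays off in this simple model.
    }
    return Math.ceil(automationCostHours / savingsPerRun);
}
```

For example, a suite that costs 80 hours to automate but cuts a 20-hour manual regression pass down to 2 hours pays for itself after the fifth run; every run after that is nearly free.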


Can you imagine if we never evolved past this?

Nowadays, the idea seems crazy!  Imagine if the billions of people on this Earth were to rely on the same processes to connect phone calls today!

Testing is a great example where failure to automate creates a bottleneck to progress.  It increases the cost of changes and bug fixes because it increases the cost of regression testing.  Make regression testing virtually free and the cost of introducing changes (whether small scope creep or critical bug fixes) decreases dramatically.

Technologies like Selenium WebDriver and Visual Studio's built-in tooling make it possible to achieve significant gains in productivity when it comes to testing.  Don't let excuses hold your team back.


Author's depiction of trying to convince test teams to automate

One skilled test automation engineer is worth her weight in gold!

Poor Communication and Collaboration

Strong and open channels of communication are critical for the success of projects, especially so when some or all of the resources are remote.

The flow of information and feedback from the customer to the design and engineering teams must be swift and clear so that expectations are known and any roadblocks can be communicated back.  Engineering teams will often have insights into the challenges and nuances of a customer's input and it can be dangerous to agree to timelines or make promises without clearly engaging the teams executing the implementation. Ideas that seem simple on paper or in concept can require massive engineering changes or sacrifices to achieve and not properly estimating this work is a common pitfall.

DeMarco and Lister's Peopleware offers excellent insight into how to foster better communication and collaboration between teams.

Often, one of the simplest solutions is to simply talk to each other instead of using emails, chat messages, and worst of all: assumption ("Oh, I thought you already knew that"; we've all heard that one before!).  Get in front of a whiteboard and draw out ideas, deadlines, goals, and so on.  Go out to eat lunch together.  Plan team activities that engage everyone.  Make sure that everyone is on the same page on a professional level as well as a personal level.

Not Keeping Your Eyes on the Prize

It's easy for a team to get distracted and lose their focus on the goals of the project and the conditions of victory.

It is therefore critical that teams focus on a goal-oriented approach to the delivery of software projects.  This is a mind-set that scales up from daily scrums to weekly reviews and so on.  Even a short coffee break can be used to re-orient a wandering team member towards the goal posts.  Small, daily victories can help teams build momentum and continuously align towards the long term milestones.

It's important that individuals and teams know, at any given time, what is expected of them and what the priorities of the project are.  This allows individuals to make decisions autonomously and with little managerial overhead as they understand how to align themselves with the goals of the project and team.  Clear communication of goals allows any misunderstandings to surface early by pinning expectations to milestones -- be they simply daily ones, weekly ones, or project level milestones.

Teams and leaders that are poor at communication and collaboration will often lose their focus on the prize because there is a lack of understanding about shifting goals and priorities; there is a dependence on assumption instead of clearly aligning all parties to a set of well-defined conditions of victory.  These anti-leaders will focus on the tasks instead of the goals; it should be the other way around - focus on the goals and derive your tasks from them.

Unwillingness to Compromise

Teams must always be ready to compromise: this is the real world, where timelines and the successful delivery of usable software matter, but people also have families and lives outside of work.  Unplanned circumstances arise that challenge the best-laid blueprints.

If it is discovered that a feature will negatively impact performance of the system in the current architecture, compromise must be made on either the feature or the timelines to ensure that the desired capability can be delivered as usable software.

If unforeseen circumstances eat into the project timelines, compromise must be made to clearly redefine the scope and conditions of victory.

This is the real world; man-hours are not unlimited, and an unwillingness to compromise when necessary leads to poor quality as a team pushes to make up time.

In many cases, it is a bitter pill to swallow as it may mean telling a customer that a feature must be delayed or built into the next release, but I find that more often than not, openness and clearly communicating these issues as early as reasonable is productive and allows for rational decision making.


Adding Support for Azure AD Login (O365) to MVC Apps

Posted by Charles Chen

I spent the day toying around with ASP.NET MVC 5 web applications and authentication.  I won't cover the step-by-step as there are plenty of blogs that have it covered.

It seems that online, most examples and tutorials show you either how to use your organizational Azure AD account or social identity providers but not both.

I wanted to be able to log in using Facebook, Google, and/or the organizational account I use to connect to Office 365.

This requires that you select Individual User Accounts when prompted to change the authentication mode (whereas most tutorials have you select "Organization Accounts"):


This will give you the baseline needed to add the social login providers (more on that later).

To enable Windows Azure AD, you will need to first log in to Azure and add an application to your default AD domain.  In the management portal:

  1. Click on ACTIVE DIRECTORY in the left nav
  2. Click the directory
  3. Click the APPLICATIONS link at the top
  4. Now at the bottom, click ADD to add a new application
  5. Select Add an application my organization is developing
  6. Enter an arbitrary name and click next
  7. Now in the App properties screen, you will need to enter your login URL (e.g. https://localhost:4465/Account/Login) and for the APP ID URI, you cannot use "localhost".  You should use your Azure account info like: https://myazure.onmicrosoft.com/MyApp.  The "MyApp" part is arbitrary, but the domain portion ("myazure.onmicrosoft.com") must match your directory identifier.

Most importantly, once you've created it, you need to click on the CONFIGURE link at the top and turn on the setting APPLICATION IS MULTI-TENANT:


If you fail to turn this on, the logins are limited to the users that are in your Azure AD instance only; you will not be able to log on with accounts you use to connect to Office 365.  You'll get an error like this:

Error: AADSTS50020: User account ‘jdoe@myo365domain.com’ from external identity provider ‘https://sts.windows.net/1234567e-b123-4123-9112-912345678e51/’ is not supported for application ‘2123456f-b123-4123-9123-4123456789e5'. The account needs to be added as an external user in the tenant. Please sign out and sign in again with an Azure Active Directory user account.

An important note is that if you used "localhost" in step 7, the UI will not allow you to save the settings with an error "The App ID URI is not available. The App ID URI must be from a verified domain within your organization's directory."

Once you've enabled this, we're ready to make the code changes required.

First, you will need to install the OpenId package from nuget using the following command:

install-package microsoft.owin.security.openidconnect

Next, in the default Startup.Auth.cs file generated by the project template, you will need to add some additional code.

First, add the using statement for the namespace that contains the OpenID Connect middleware and options:

using Microsoft.Owin.Security.OpenIdConnect;
Then, add this:

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    ClientId = "138C1130-4B29-4101-9C84-D8E0D34D222A",
    Authority = "https://login.windows.net/common",
    PostLogoutRedirectUri = "https://localhost:44301/",
    Description = new AuthenticationDescription
    {
        AuthenticationType = "OpenIdConnect",
        Caption = "Azure OpenId Connect"
    },
    TokenValidationParameters = new TokenValidationParameters
    {
        // If you don't add this, you get IDX10205
        ValidateIssuer = false
    }
});
There are two very important notes.  The first is that the Authority must have the /common path and not your Azure AD *.onmicrosoft.com path.

The second note is that you must add the TokenValidationParameters and set ValidateIssuer to false.

If you don't set this to false, you'll get the following 500 error after you successfully authenticate against Azure AD with your organizational O365 account:

IDX10205: Issuer validation failed. Issuer: ‘https://sts.windows.net/F92E09B4-DDD1-40A1-AE24-D51528361FEC/’. Did not match: validationParameters.ValidIssuer: ‘null’ or validationParameters.ValidIssuers: ‘https://sts.windows.net/{tenantid}/’

I think that this is a hack and to be honest, I'm not quite certain of the consequences of not validating the issuer, but it seems that there aren't many answers on the web for this scenario yet.  Looking at the source code where the exception originates, you'll see the method that generates it:

public static string ValidateIssuer(string issuer, SecurityToken securityToken, TokenValidationParameters validationParameters)
{
    if (validationParameters == null)
        throw new ArgumentNullException("validationParameters");

    if (!validationParameters.ValidateIssuer)
        return issuer;

    if (string.IsNullOrWhiteSpace(issuer))
        throw new SecurityTokenInvalidIssuerException(string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10211));

    // Throw if all possible places to validate against are null or empty
    if (string.IsNullOrWhiteSpace(validationParameters.ValidIssuer) && (validationParameters.ValidIssuers == null))
        throw new SecurityTokenInvalidIssuerException(string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10204));

    if (string.Equals(validationParameters.ValidIssuer, issuer, StringComparison.Ordinal))
        return issuer;

    if (null != validationParameters.ValidIssuers)
    {
        foreach (string str in validationParameters.ValidIssuers)
        {
            if (string.Equals(str, issuer, StringComparison.Ordinal))
                return issuer;
        }
    }

    throw new SecurityTokenInvalidIssuerException(
        string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10205, issuer, validationParameters.ValidIssuer ?? "null", Utility.SerializeAsSingleCommaDelimitedString(validationParameters.ValidIssuers)));
}

We're simply short circuiting the process.  It's clear that there is no matching issuer, but it's not quite clear to me yet where/how to configure that.
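To make the short circuit concrete, here is the same decision flow translated to JavaScript; this is an illustration of the logic above, not the actual library code:

```javascript
// JavaScript translation of the ValidateIssuer flow, for illustration only.
function validateIssuer(issuer, params) {
    if (!params.validateIssuer) {
        return issuer; // ValidateIssuer = false short-circuits everything below.
    }
    if (!issuer) {
        throw new Error('IDX10211: issuer is null or empty');
    }
    if (!params.validIssuer && !params.validIssuers) {
        throw new Error('IDX10204: no valid issuers configured');
    }
    if (params.validIssuer === issuer) {
        return issuer;
    }
    if ((params.validIssuers || []).includes(issuer)) {
        return issuer;
    }
    throw new Error('IDX10205: issuer validation failed');
}
```

With `validateIssuer: false`, any issuer is returned untouched; with validation on and no configured issuers, you land on the IDX10204/IDX10205 paths, which is exactly the failure mode described above.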

So what about the other social IdPs?  It's important to note that for Google, not only do you have to create a new client ID in the Google Developer Console, but you also need to enable the Google+ API:


You'll just get a bunch of useless error messages if you don't enable the API.

If you manage to get it all working, you should see the following options in the login screen:


And when you click it, you should be able to log in using the same organizational credentials that you use to connect to Office 365:


Filed under: .Net, MVC