<CharlieDigital/> Programming, Politics, and uhh…pineapples


Adventures in Single-Sign-On: SharePoint Login Via Office 365

Posted by Charles Chen

If you are still working with on-premises SharePoint 2010/2013 or your application only supports SAML 1.1, but you'd like to leverage your new and shiny Office 365 accounts for single-sign-on, you can achieve this relatively painlessly by using Windows Azure ACS as a federation provider (FP).  It's possible to use Office 365 (Azure AD) directly as your identity provider (IdP), but for SharePoint 2010 this involves writing custom code, since SharePoint 2010 can only consume SAML 1.1 tokens via configuration; SAML 2.0 requires code.

The overall flow for the scenario is diagrammed below:

Using Windows Azure ACS as a Federation Provider for Azure AD

In this scenario, the trust relationship from SharePoint is only to the FP (Azure ACS) which acts as a proxy for the trust to the IdP (Azure AD).  Azure AD contains the actual credentials, but we proxy those credentials through ACS to take advantage of the SAML 2.0 -> SAML 1.1 translation without writing code (otherwise, it would be possible to directly establish trust to Azure AD or through AD FS).  When the user accesses a protected resource in SharePoint, the user is redirected first to the FP which then redirects to the IdP and proxies that response via a series of redirects to the relying party (RP), SharePoint.

The first step is to create an Azure ACS namespace.  You'll need your ACS namespace URL, which should look like: https://{NAMESPACE}.accesscontrol.windows.net

Next, in Azure AD, create a new application which will allow your ACS to use Azure AD as an IdP.  On the APPLICATIONS tab, click ADD at the bottom to add a new application.  Enter a descriptive name for the application (note: you may want a more "friendly" name -- see last picture):


Then enter the following for the SIGN-ON URL and APP ID URI:


Before leaving the Azure Portal, click on the VIEW ENDPOINTS button at the bottom of the dashboard and copy the URL for the FEDERATION METADATA DOCUMENT.

Now hop back over to Azure ACS management and add a new identity provider.  Select WS-Federation identity provider and on the next screen enter a descriptive name and paste the URL into the field:


Once you've set up the IdP, the next step is to set up the relying party (RP).  The key is to get the following settings correct:


In this example, I'm just using my local development environment, but you must specify the _trust URL and explicitly select SAML 1.1 from the Token format dropdown.  Additionally, uncheck Windows Live ID under Identity providers so that only the one configured previously remains checked.

Finally, on this screen, you will need to specify a certificate to use for signing.  Under Token signing, select Use a dedicated certificate and then either use an existing valid X.509 certificate or create one for testing purposes.

Create the certificate using the following command:

MakeCert.exe -r -pe -n "CN={ACS_NAMESPACE}.accesscontrol.windows.net" -sky exchange -ss my -len 2048 -e 06/01/2017

You will need to export the certificate from the certificate store with the private key.  So WIN+R and type in mmc.exe.  From the MMC, click File and then Add or Remove Snap-ins.  Select the Certificates snap-in and click OK.  Locate the certificate and export it with the private key.  While you're here, export it again without the private key; you will need this certificate when setting up the authentication provider in SharePoint.

Back in the ACS management app, upload the first certificate that was exported and enter the password.  An important note is that ACS will always append a "/" after your realm; we will need to make sure that when we register the authentication provider, we include this in the login URL.

Before leaving the ACS management app for good, we need to update the rule group to pass through the claims.  On the left hand side, click on Rule groups and select the default rule group created for our RP.  Now click Generate to create the default rule set.

One thing I discovered through trial and error (mostly error) is that Azure AD does not seem to be providing a value for the emailaddress claim which we will be using later (you don't technically have to use this as the identifying claim, but I did in SharePoint before discovering that this causes an error).  So we'll remap the "name" claim to "emailaddress".  Click on the name claim (http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name) and select the emailaddress claim type (http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress) as the output claim type.
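Conceptually, the rule group is just a claim transform: every claim passes through unchanged except name, which is re-emitted as emailaddress.  A sketch of that transform in JavaScript (the claim type URIs are the real ones; the function shape is mine, for illustration only):

```javascript
const NAME = 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name';
const EMAIL = 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress';

function transformClaims(inputClaims) {
  return inputClaims.map(function (claim) {
    // Azure AD sends the user's sign-in name in the "name" claim but no
    // emailaddress claim, so remap name -> emailaddress; everything else
    // passes through as-is (the "Generate" default rule set).
    return claim.type === NAME ? { type: EMAIL, value: claim.value } : claim;
  });
}
```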

Back in the SharePoint environment, we'll now use the certificate exported without the private key to create a new trusted root authority for SharePoint and then create and register the identity provider.  Fire up a management shell and enter the following commands:

# Register the ACS token signing certificate (public key only) as a trusted root authority
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("c:\temp\cert-no-private-key.cer")
New-SPTrustedRootAuthority -Name "ACS Token Signing Certificate" -Certificate $certificate

# Map the incoming emailaddress claim; this will be the identifying claim
$cm0 = New-SPClaimTypeMapping "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

# Note the trailing "/" on the realm -- ACS appends it, so it must appear in both the realm and the wtrealm parameter
$loginUrl = "https://{ACSNAMESPACE}.accesscontrol.windows.net/v2/wsfederation?wa=wsignin1.0&wtrealm=https://sso.dev.local/&redirect=false"
$realm = "https://sso.dev.local/"
$issuer = New-SPTrustedIdentityTokenIssuer -Name "ACS" -Description "ACS" -Realm $realm -ImportTrustCertificate $certificate -ClaimsMappings $cm0 -SignInUrl $loginUrl -IdentifierClaim $cm0.InputClaimType
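For clarity, here's how that $loginUrl is composed.  This is just an illustrative sketch in JavaScript (the namespace and realm are placeholder values from the example, not real endpoints), but it shows why the trailing slash on the realm matters:

```javascript
function buildWsFedSignInUrl(acsNamespace, realm) {
  // ACS registers the realm with a trailing "/", and a realm without it is
  // treated as a different RP, so normalize to the registered form.
  const normalizedRealm = realm.endsWith('/') ? realm : realm + '/';
  const url = new URL(`https://${acsNamespace}.accesscontrol.windows.net/v2/wsfederation`);
  url.searchParams.set('wa', 'wsignin1.0');         // WS-Federation sign-in action
  url.searchParams.set('wtrealm', normalizedRealm); // must match the RP realm in ACS
  url.searchParams.set('redirect', 'false');
  return url.toString();
}

const loginUrl = buildWsFedSignInUrl('contoso-sso', 'https://sso.dev.local');
// e.g. https://contoso-sso.accesscontrol.windows.net/v2/wsfederation?wa=wsignin1.0&wtrealm=https%3A%2F%2Fsso.dev.local%2F&redirect=false
```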

And finally, from SharePoint Central Admin, we can now add the "ACS" authentication provider:


The name that we gave earlier to the application in Azure AD can now also be configured to display in the app drawer of Office 365:


Filed under: Azure, SharePoint, SSO

Pau Gasol on Kobe Bryant

Posted by Charles Chen

SI has an excellent "farewell" from Pau Gasol to Kobe Bryant.

One of the key takeaways, for me:

He was challenging me because he expected more from me. When somebody cares about you, that’s when they challenge you. When they don’t care about you, they ignore you. That’s when you should worry.

It is natural for people to react negatively to critical feedback, especially when that feedback addresses a shortcoming or a weakness in some work, some effort that one has invested heavily in.

It is important to step back and detach the emotional aspect of coming up short of perfection and consider whether this critical feedback is dismissive or whether it is a revelation of some unmet potential that we, ourselves, have not yet recognized.

The whole thing is a good read and great piece of motivation to understand what it takes to succeed.



A Recipe for Execution in Crunch Time

Posted by Charles Chen

For many small software teams with loose (or non-existent) project management, "crunch time" usually leads to the unraveling of the project to some degree and inevitable crisis management.

In most of these cases, crunch time is the result of:

  1. Poor initial scoping - putting too much work into too little time or insufficient resource load
  2. Poor project methodology - poor tracking of estimates to reality, no mechanism of measuring progress
  3. Bad luck (or good luck?) - team is spread too thin due to unforeseen circumstances like a critical bug or landing a big contract with a new customer
  4. Poor communication and setting of customer expectations - if your team can't deliver, always communicate reality (assuming you know it) as early as possible so that there is no surprise and all parties can make sound, rational decisions.

So what do you do once you find yourself in crunch time, up against a deadline?  My recipe is simple:

  1. Find the gap - get the stakeholders together and identify the functional gap that remains.  The BAs, the SMEs, the developers, the testers, maybe the customer -- get everyone together in a room (or on a conference call) and identify the work that remains.  There must be agreement so that nothing is left unconsidered.
  2. Estimate the gap - once the gap is identified, the next job is to estimate how much work remains in the functional gap.  Again, all stakeholders should be involved as testing and documentation must also be factored into the remaining work.
  3. Prioritize the remaining work - whether the gap estimate is less than the remaining time or greater than the remaining time, it is important to prioritize the remaining work so that the most important functional gaps are addressed first, just in case other unforeseen circumstances arise.
  4. De-scope or move timelines - if more work remains than the time available, there are really only two options: de-scope some functionality or move the timeline.  It is rarely the case that you can solve issues in crunch time by adding more resources.  Even if there is less work than time available, it often makes sense to create some breathing room by de-scoping.
  5. Communicate - once reality has been clearly mapped out, communicate with the client to manage expectations and make sure all parties agree that it makes sense to move some less important features off into a future release.  Focus the conversation and communications on quality and ensure that the customer understands that the decisions are for the sake of software quality and the end users.

Fred Brooks summarized this decades ago in The Mythical Man-Month:

More software projects have gone awry for lack of calendar time than for all other causes combined.  Why is this cause of disaster so common?

First, our techniques for estimating are poorly developed.  More seriously, they reflect an unvoiced assumption which is quite untrue, i.e., that all will go well.

Second, our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable.

Third, because we are uncertain of our estimates, software managers often lack the courteous stubbornness of Antoine's chef.

Fourth, schedule progress is poorly monitored.  Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering.

Fifth, when schedule slippage is recognized, the natural (and traditional) response is to add manpower.  Like dousing a fire with gasoline, this makes matters worse, much worse.

Brooks' reference to "Antoine's chef" is a reference to a quote:

Good cooking takes time.  If you are made to wait, it is to serve you better, and to please you.

- Menu of Restaurant Antoine, New Orleans

He adds, later:

Observe that for the programmer, as for the chef, the urgency of the patron may govern the scheduled completion of the task, but it cannot govern the actual completion.  An omelette promised in two minutes, may appear to be progressing nicely.  But when it has not set in two minutes, the customer has two choices -- wait or eat it raw.  Software customers have had the same choices.

The cook has another choice; he can turn up the heat.  The result is often an omelette nothing can save -- burned in one part, raw in another.

Brooks provides other thoughts on this topic, of course, but my take is to realize that the "omelette" will be delayed and communicate the reality to the customer and let her make the call: does she still want it if it will be late?  Can we bring out other parts of the meal before the omelette?  Or will she sacrifice the "done-ness" to get it on schedule?

But of course, job one is to recognize that it will be late and then identify how late; nothing can progress until those two activities are accomplished.

Filed under: Uncategorized

Adventures in Single-Sign-On: Cross Domain Script Request

Posted by Charles Chen

Consider a scenario where a user authenticates with ADFS (or equivalent identity provider (IdP)) when accessing a domain such as https://www.domain.com (A) and then, from this page, a request is made to https://api.other-domain.com/app.js (B) to download a set of application scripts that would then interact with a set of REST based web services in the B domain.  We'd like to have SSO so that claims provided to A are available to B and that the application scripts downloaded can then subsequently make requests with an authentication cookie.

Roughly speaking, the scenario looks like this:

Depiction of the scenario

It was straightforward enough to set up the authentication with ADFS using WIF 4.5 for each of A and B following the MSDN "How To"; I had each of the applications separately working with the same ADFS instance, but the cross domain script request from A to B at step 5 for the script file generated an HTTP redirect sequence (302) that resulted in an XHTML form from ADFS with Javascript that attempts to execute an HTTP POST for the last leg of the authentication.  This was good news because it meant that ADFS recognized the user session and tried to issue another token for the user in the other domain without requiring a login.

However, this obviously posed a problem as, even though it appeared as if it were working, the request for the script could not succeed because of the text/html response from ADFS.

Here's what https://www.domain.com/default.aspx looks like in this case:

    <script type="text/javascript" src="https://api.other-domain.com/app.js"></script>

This obviously fails because the HTML content returned from the redirect to ADFS cannot be consumed.

I scratched my head for a bit and dug into the documentation for ADFS, trawled online discussion boards, and tinkered with various configurations trying to figure this out with no luck.  Many examples online discuss this scenario in the context of making a web service call from the backend of one application to another using bearer tokens or WIF ActAs delegation, but these were ultimately not suited for what I wanted to accomplish as I didn't want to have to write out any tokens into the page (for example, adding a URL parameter to the app.js request), make a backend request for the resource, or use a proxy.

(I suspect that using the HTTP GET binding for SAML would work, but for the life of me, I can't figure out how to set this up on ADFS...)

In a flash of insight, it occurred to me that if I used a hidden iframe to load another page in B, I would then have a cookie in session to make the request for the app.js!

Here's what the page in A looks like:

<script type="text/javascript">
    // Called once the hidden iframe below has finished the ADFS redirect
    // sequence and the authentication cookie for the B domain has been set.
    function loadOtherStuff() {
        var script = document.createElement('script');
        script.setAttribute('type', 'text/javascript');
        script.setAttribute('src', 'https://api.other-domain.com/appscript.js');
        document.body.appendChild(script);
    }
</script>
<iframe src="https://api.other-domain.com" style="display: none"
    onload="loadOtherStuff()"></iframe>

Using the iframe, the HTTP 302 redirect is allowed to complete and ADFS is able to set the authentication cookie without requiring a separate sign on since it's using the same IdP, certificate, and issuer thumbprint.  Once the cookie is set for the domain, then subsequent browser requests in the parent document to the B domain will carry along the cookie!

The request for appscript.js is intercepted by an IHttpHandler and authentication can be performed to check for the user claims before returning any content. This then allows us to stream back the client-side application scripts and templates via AMD through a single entry point (e.g. appscript.js?app=App1 or a redirect to establish a root path depending on how you choose to organize your files).

Any XHR requests made subsequently still require proper configuration of CORS on the calling side:

    $.ajax({
        url: 'https://api.other-domain.com/api/Echo',
        type: 'GET',
        crossDomain: true,
        xhrFields: {
            withCredentials: true // send the authentication cookie cross-domain
        },
        success: function(result){ window.alert('HERE'); console.log('RETRIEVED'); console.log(result); }
    });
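If you'd rather not use jQuery, the same request can be sketched with the Fetch API; credentials: 'include' plays the role of xhrFields: { withCredentials: true }.  The endpoint URL is the illustrative one from above, not a real service:

```javascript
function echoRequestOptions() {
  return {
    method: 'GET',
    // 'include' is the fetch equivalent of withCredentials = true:
    // it attaches cookies to the cross-origin request.
    credentials: 'include'
  };
}

function callEcho() {
  return fetch('https://api.other-domain.com/api/Echo', echoRequestOptions())
    .then(function (response) { return response.json(); });
}
```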

And on the service side:

    <!-- Needed to allow the cross domain request -->
    <httpProtocol>
        <customHeaders>
            <add name="Access-Control-Allow-Origin" value="https://www.domain.com" />
            <add name="Access-Control-Allow-Credentials" value="true" />
            <add name="Access-Control-Allow-Headers" value="accept,content-type,cookie" />
            <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
        </customHeaders>
    </httpProtocol>

    <!-- Allow the CORS pre-flight OPTIONS verb -->
    <security>
        <requestFiltering allowDoubleEscaping="true">
            <verbs>
                <add verb="OPTIONS" allowed="true" />
            </verbs>
        </requestFiltering>
    </security>

    <!-- Handle the CORS pre-flight request -->
    <modules>
        <add name="CorsOptionsModule" type="WifApiSample1.CorsOptionsModule" />
    </modules>

The options handler module is a simple class that responds to OPTIONS requests and also dynamically adds a header to the response:

    /// <summary>
    ///     <c>HttpModule</c> to support CORS.
    /// </summary>
    public class CorsOptionsModule : IHttpModule
    {
        public void Dispose()
        {
            // Clean-up code here.
        }

        public void Init(HttpApplication context)
        {
            context.BeginRequest += HandleRequest;
            context.EndRequest += HandleEndRequest;
        }

        private void HandleEndRequest(object sender, EventArgs e)
        {
            string origin = HttpContext.Current.Request.Headers["Origin"];

            if (string.IsNullOrEmpty(origin))
            {
                return; // Not a cross-origin request; nothing to add.
            }

            // Echo the caller's origin back for POSTs that are not WCF (.svc) endpoints.
            if (HttpContext.Current.Request.HttpMethod == "POST"
                && HttpContext.Current.Request.Url.OriginalString.IndexOf(".svc") < 0)
            {
                HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", origin);
            }
        }

        private void HandleRequest(object sender, EventArgs e)
        {
            // Short-circuit the pre-flight request; the static headers
            // configured in web.config are returned with the empty response.
            if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
            {
                HttpContext.Current.Response.End();
            }
        }
    }

The end result is that single-sign-on is established across two domains for browser to REST API calls using simple HTML-based trickery (only tested in FF!).

Filed under: .Net, Identity, SSO

5 Pitfalls to Software Project Failure

Posted by Charles Chen

Poorly Controlled Scope

Scope is enemy number 1; it is the amorphous blob that threatens to consume and grow until it is an uncontrollable monster, swallowing all of your carefully planned man hours.

Increases in scope are often the result of failure to manage the customer and expectations.  In any given project, there are only so many levers that can be used to control the successful delivery and it is up to the skilled project manager or client interface to toggle these levers of team size, timelines, requirements, and so on.

The worst is when growth of scope originates from within the team, as it is a form of cancer that only causes teams to compromise on quality to meet timelines promised to the customer.  You see, when scope creep originates from the customer, there is a certain expectation that, of course, costs will increase or timelines will need to be shifted.  After all, they are asking you to do more than was initially agreed upon.  But when new scope originates from the team itself, the customer will not readily accept this delay.

The cost of scope increases is often not well accounted for.  A change that takes a developer 2 days to make will cause ripples that force test teams to adjust their scripts, documentation teams to update their documents, and possibly trigger expensive regression testing.

Smart teams and leaders will understand that these can be controlled, in many cases, by simply creating a roadmap and understanding that desired features and capabilities that don't fit into existing timelines can be added to "v.next".

(Over) Reliance on Manual Effort

To a certain extent, software engineering requires raw manpower to execute large projects that require many hundreds of thousands of lines of code and lots of moving parts.

But within the lifecycle of a project, there are many activities that can be simplified by the use of automation.  Teams must judiciously balance the cost and effort of the automation versus the savings gained, but more often than not, even a little bit of automation is better than none.  It's crazy to think that it was once the case that all phone calls were manually routed between parties.


Can you imagine if we never evolved past this?

Nowadays, the idea seems crazy!  Imagine if the billions of people on this Earth were to rely on the same processes to connect phone calls today!

Testing is a great example where failure to automate creates a bottleneck to progress.  It increases the cost of changes and bug fixes because it increases the cost of regression testing.  Make regression testing virtually free and the cost of introducing changes (whether small scope creep or critical bug fixes) decreases dramatically.

Technologies like Selenium WebDriver and Visual Studio's built-in tooling make it possible to achieve significant gains in productivity when it comes to testing.  Don't let excuses hold your team back.


Author's depiction of trying to convince test teams to automate

One skilled test automation engineer is worth her weight in gold!

Poor Communication and Collaboration

Strong and open channels of communication are critical for the success of projects, especially so when some or all of the resources are remote.

The flow of information and feedback from the customer to the design and engineering teams must be swift and clear so that expectations are known and any roadblocks can be communicated back.  Engineering teams will often have insights into the challenges and nuances of a customer's input and it can be dangerous to agree to timelines or make promises without clearly engaging the teams executing the implementation. Ideas that seem simple on paper or in concept can require massive engineering changes or sacrifices to achieve and not properly estimating this work is a common pitfall.

Demarco and Lister's Peopleware offers excellent insight into how to foster better communication and collaboration between teams.

Often, one of the simplest solutions is to simply talk to each other instead of using emails, chat messages, and worst of all: assumption ("Oh, I thought you already knew that"; we've all heard that one before!).  Get in front of a whiteboard and draw out ideas, deadlines, goals, and so on.  Go out to eat lunch together.  Plan team activities that engage everyone.  Make sure that everyone is on the same page on a professional level as well as a personal level.

Not Keeping Your Eyes on the Prize

It's easy for a team to get distracted and lose their focus on the goals of the project and the conditions of victory.

It is therefore critical that teams focus on a goal-oriented approach to the delivery of software projects.  This is a mind-set that scales up from daily scrums to weekly reviews and so on.  Even a short coffee break can be used to re-orient a wandering team member towards the goal posts.  Small, daily victories can help teams build momentum and continuously align towards the long term milestones.

It's important that individuals and teams know, at any given time, what is expected of them and what the priorities of the project are.  This allows individuals to make decisions autonomously and with little managerial overhead as they understand how to align themselves with the goals of the project and team.  Clear communication of goals allows any misunderstandings to surface early by pinning expectations to milestones -- be they simply daily ones, weekly ones, or project level milestones.

Teams and leaders that are poor at communication and collaboration will often lose their focus on the prize because there is a lack of understanding about shifting goals and priorities; there is a dependence on assumption instead of clearly aligning all parties to a set of well-defined conditions of victory.  These anti-leaders will focus on the tasks instead of the goals; it should be the other way around - focus on the goals and derive your tasks from them.

Unwillingness to Compromise

Teams must always be ready to compromise because this is the real world where timelines and successful delivery of usable software matters, but people also have families and life outside of work.  Unplanned circumstances arise that challenge the best laid blueprints.

If it is discovered that a feature will negatively impact performance of the system in the current architecture, compromise must be made on either the feature or the timelines to ensure that the desired capability can be delivered as usable software.

If unforeseen circumstances eat into the project timelines, compromise must be made to clearly redefine the scope and conditions of victory.

This is the real world; man-hours are not unlimited, and an unwillingness to compromise when necessary leads to poor quality as a team pushes to make up time.

In many cases, it is a bitter pill to swallow as it may mean telling a customer that a feature must be delayed or built into the next release, but I find that more often than not, openness and clearly communicating these issues as early as reasonable is productive and allows for rational decision making.


Adding Support for Azure AD Login (O365) to MVC Apps

Posted by Charles Chen

I spent the day toying around with ASP.NET MVC 5 web applications and authentication.  I won't cover the step-by-step as there are plenty of blogs that have it covered.

It seems that online, most examples and tutorials show you either how to use your organizational Azure AD account or social identity providers but not both.

I wanted to be able to log in using Facebook, Google, and/or the organizational account I use to connect to Office 365.

This requires that you select Individual User Accounts when prompted to change the authentication mode (whereas most tutorials have you select "Organization Accounts"):


This will give you the baseline needed to add the social login providers (more on that later).

To enable Windows Azure AD, you will need to first log in to Azure and add an application to your default AD domain.  In the management portal:

  1. Click on ACTIVE DIRECTORY in the left nav
  2. Click the directory
  3. Click the APPLICATIONS link at the top
  4. Now at the bottom, click ADD to add a new application
  5. Select Add an application my organization is developing
  6. Enter an arbitrary name and click next
  7. Now in the App properties screen, you will need to enter your login URL (e.g. https://localhost:4465/Account/Login) and for the APP ID URI, you cannot use "localhost".  You should use your Azure account info like: https://myazure.onmicrosoft.com/MyApp.  The "MyApp" part is arbitrary, but the domain portion (myazure.onmicrosoft.com) must match your directory identifier.

Most importantly, once you've created it, you need to click on the CONFIGURE link at the top and turn on the setting APPLICATION IS MULTI-TENANT:


If you fail to turn this on, the logins are limited to the users that are in your Azure AD instance only; you will not be able to log on with accounts you use to connect to Office 365.  You'll get an error like this:

Error: AADSTS50020: User account ‘jdoe@myo365domain.com’ from external identity provider ‘https://sts.windows.net/1234567e-b123-4123-9112-912345678e51/’ is not supported for application ‘2123456f-b123-4123-9123-4123456789e5'. The account needs to be added as an external user in the tenant. Please sign out and sign in again with an Azure Active Directory user account.

An important note is that if you used "localhost" in step 7, the UI will not allow you to save the settings with an error "The App ID URI is not available. The App ID URI must be from a verified domain within your organization's directory."

Once you've enabled this, we're ready to make the code changes required.

First, you will need to install the OpenId package from nuget using the following command:

install-package microsoft.owin.security.openidconnect

Next, in the default Startup.Auth.cs file generated by the project template, you will need to add some additional code.

First, add a using directive for the OpenId Connect middleware at the top of the file:

using Microsoft.Owin.Security.OpenIdConnect;
Then, add this:

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    ClientId = "138C1130-4B29-4101-9C84-D8E0D34D222A",
    Authority = "https://login.windows.net/common",
    PostLogoutRedirectUri = "https://localhost:44301/",
    Description = new AuthenticationDescription
    {
        AuthenticationType = "OpenIdConnect",
        Caption = "Azure OpenId Connect"
    },
    TokenValidationParameters = new TokenValidationParameters
    {
        // If you don't add this, you get IDX10205
        ValidateIssuer = false
    }
});

There are two very important notes.  The first is that the Authority must have the /common path and not your Azure AD *.onmicrosoft.com path.

The second note is that you must add the TokenValidationParameters and set ValidateIssuer to false.

If you don't set this to false, you'll get the following 500 error after you successfully authenticate against Azure AD with your organizational O365 account:

IDX10205: Issuer validation failed. Issuer: ‘https://sts.windows.net/F92E09B4-DDD1-40A1-AE24-D51528361FEC/’. Did not match: validationParameters.ValidIssuer: ‘null’ or validationParameters.ValidIssuers: ‘https://sts.windows.net/{tenantid}/’

I think that this is a hack and to be honest, I'm not quite certain of the consequences of not validating the issuer, but it seems that there aren't many answers on the web for this scenario yet.  Looking at the source code where the exception originates, you'll see the method that generates it:

public static string ValidateIssuer(string issuer, SecurityToken securityToken, TokenValidationParameters validationParameters)
{
    if (validationParameters == null)
        throw new ArgumentNullException("validationParameters");

    if (!validationParameters.ValidateIssuer)
        return issuer;

    if (string.IsNullOrWhiteSpace(issuer))
        throw new SecurityTokenInvalidIssuerException(string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10211));

    // Throw if all possible places to validate against are null or empty
    if (string.IsNullOrWhiteSpace(validationParameters.ValidIssuer) && (validationParameters.ValidIssuers == null))
        throw new SecurityTokenInvalidIssuerException(string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10204));

    if (string.Equals(validationParameters.ValidIssuer, issuer, StringComparison.Ordinal))
        return issuer;

    if (null != validationParameters.ValidIssuers)
    {
        foreach (string str in validationParameters.ValidIssuers)
        {
            if (string.Equals(str, issuer, StringComparison.Ordinal))
                return issuer;
        }
    }

    throw new SecurityTokenInvalidIssuerException(
        string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10205, issuer, validationParameters.ValidIssuer ?? "null", Utility.SerializeAsSingleCommaDelimitedString(validationParameters.ValidIssuers)));
}

We're simply short-circuiting the process.  It's clear that there is no matching issuer, but it's not quite clear to me yet where/how to configure that.
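The matching logic above is simple enough to restate.  Here is a sketch in JavaScript (function and parameter names are mine, not from the library) of what the check is doing, which suggests that populating ValidIssuers with the issuer URIs of the tenants you trust would let you keep ValidateIssuer on:

```javascript
function validateIssuer(issuer, validIssuer, validIssuers) {
  if (!issuer) throw new Error('IDX10211: issuer is null or empty');
  if (!validIssuer && !validIssuers) {
    // Nothing configured to validate against.
    throw new Error('IDX10204: no valid issuers configured');
  }
  if (validIssuer === issuer) return issuer;               // single configured issuer
  if (validIssuers && validIssuers.indexOf(issuer) >= 0) {
    return issuer;                                         // one of a list of tenants
  }
  throw new Error('IDX10205: issuer validation failed: ' + issuer);
}
```

Under that reading, setting TokenValidationParameters.ValidIssuers to the https://sts.windows.net/{tenantid}/ URIs of the tenants you trust should satisfy the check without disabling validation entirely, though I haven't verified this against the library.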

So what about the other social IdP's?  It's important to note that for Google, not only do you have to create a new client ID in the Google Developer Console, but you also need to enable the Google+ API:


You'll just get a bunch of useless error messages if you don't enable the API.

If you manage to get it all working, you should see the following options in the login screen:


And when you click it, you should be able to log in using the same organizational credentials that you use to connect to Office 365:


Filed under: .Net, MVC

One of the More Creative Ways to Advertise Career Opportunities

Posted by Charles Chen

Soundcloud Console

As seen on Soundcloud.com as I was examining why the page wasn't loading...

Filed under: Uncategorized

Adding The Google Test to Your Interviews

Posted by Charles Chen

On a message board, I read a thread where a poster -- a research scientist -- was describing how he ended up becoming the de facto IT guy in his department simply because of his superior Google skills and willingness to Google for and apply solutions to fix issues for his colleagues.

This is something I've personally never been asked to do in an interview, nor have I thought to ask others when I interview them, but being able to Google quickly and sift through the results to separate the wheat from the chaff is a skill that is supremely underrated in today's world of software engineering.

The fact is that developers and technology specialists today need to deal with so many technologies and understand so many deep nuances that Google is often the only way any of us can get anything done, especially with the obscure errors and whatnot that Microsoft and SharePoint loooove to throw at you.

In fact, I'm quite surprised that I've never been asked to do a Google search speed and accuracy test.

How would one design such a test to effectively measure a candidate's speed and accuracy with Google?  Should the topics be relevant to the candidate's job domain, or should they be more generic?  Should it test a candidate's knowledge of Google's advanced search features?


Indoor Rock Climbing – Try It!

Posted by Charles Chen

One of my recently discovered activities that I'm falling in love with is indoor rock climbing (though I suppose I may try outdoor rock climbing and bouldering one day, too).

In a weird way, it's the ultimate thinking person's sport: physically demanding, but mentally challenging as well.  Climbers like to talk in terminology like "problems", "projects", and "solutions", and it's an entirely accurate and applicable way to describe what climbing is all about.  If you walk into a bouldering area in a gym, you will see climbers just sitting around, planning routes, thinking about how to position their bodies to make the right move, and attacking routes over and over again.  Difficult routes demand that you plan and think about how you can make your way up a vertical face while expending the least amount of energy.

It's odd, but I also think that it's a very "romantic" (or "bromantic"?) activity because you'll have the most fun climbing with someone else.  There is a lot of communication and trust involved when one person is controlling the safety and well-being of another person suspended 40 feet in the air.  For that same reason, it's a great team building activity for companies because to climb, you need to be able to work together, communicate, and have trust in your partners.

To get started, you can look up nearby rock climbing gyms on Google Maps and just call and take a class.  I took my first class at Rockreation in Costa Mesa, CA, where you have to schedule ahead and the classes are far more formalized, but there are also places like Rockville back home in NJ, where the classes are much more informal and you can just show up and take a short intro class.

Most intro classes will teach you the basic elements of indoor climbing:

  • Using harnesses and shoes
  • Basic double-figure-8 knot tying
  • Belaying
  • Basic safety, including verbal commands and communication

But in looking through some videos, I've found that there is a LOT more to learn, and I've developed an even deeper appreciation for it. Take a look for yourself:

Five Fundamentals of Indoor Rock Climbing

How to Grip Indoor Climbing Holds

Footwork for Climbing

Five Advanced Bouldering Techniques

What I hope you can take from this is that there is a real art to climbing that is beautiful to watch in action.  In that last video, the Bat Hang at 1:45 is a thing of beauty.  Seasoned climbers make it look easy, but it really takes a lot of practice, experience, and creativity to move around like Cliff Simanski does in the video.

Charlotte and Sandra working a wall.


I've also learned that I've been wrecking my forearms because I've basically been muscling my way up the walls with my upper body strength alone.  A strong grip and upper body are certainly beneficial for climbing, but you need far more than that to advance in the sport.

In a sense, rock climbing has a lot in common with dance or gymnastics: it demands creative body movement, flexibility, balance, body awareness, and spatial awareness (maybe even more so because your life is on the line in some cases).

It's a great activity for kids of all ages (Charlotte is 3.5 years old) to enjoy.


An Alternate Meaning for FOCKED

Posted by Charles Chen

Eric Brechner came up with one of my favorite acronyms of all time in software development: FOCKED.


I want to add an alternate: Failure to Orchestrate Collective Knowledge Effectively for Delivery.

Successful delivery of software requires that different members of the team come together and understand the goals that have to be achieved and the priorities of those goals.

It's as simple as communicating to the team on a regular basis (no more often than once a week, but at least once a month):

  • where we are,
  • where we want to go,
  • when we have to get there,
  • how we're getting there,
  • who's driving

It can make the whole process of delivering software much less stressful, and perhaps more successful, simply by aligning all of the stakeholders periodically.

Hey, maybe you learned this in some fancy MBA class or something, but I'm starting to appreciate -- more and more -- that the real secret to successfully delivering software is driving the collaboration and communication of people and the alignment of all the pieces to a strategy, vision, or goal.  Having a bunch of smart, capable people doesn't help you much if no one knows what's going on.

Filed under: DevLife, Rants