<CharlieDigital/> Programming, Politics, and uhh…pineapples

18 Feb 2017

.NET Core MVC on Azure Container and Registry Service

Posted by Charles Chen

I've recently been playing around with .NET Core MVC and trying to get it running on Azure Container Service (ACS).

Why .NET Core?  Because you'll be able to run those containers on Linux environments, which will generally be lower cost.

It has been quite challenging as there is quite a bit of documentation out there and some of it is already out of date (I certainly expect this post to be out of date as well as we march towards release versions) due to the pace of change with Docker, Docker on Windows, Azure, and the Azure CLI tools.  There are some tutorials which show you how to do it via VSTS and some that seem to show VS2017 tooling for deployment (I sure hope it's easier...); I have not been able to activate any tooling support for Docker in VS2015.

Some of the newer capabilities, like managing Azure Container Registries (ACR), are only available in the CLI 2.0 preview.  For example, one would think that once you've created a container registry in Azure, you'd easily be able to see a listing of the images you have registered in the Azure dashboard; not so!  The images can seemingly only be seen via the CLI 2.0 commands for now.

There are also multiple ways to take advantage of containers in Azure, but we'll walk through the easiest way using "Web App on Linux" to handle most of the Docker configuration.

Before you get started, you'll need the following:

  • Docker Toolbox (or Docker for Windows)
  • An Azure subscription
  • The Azure CLI 2.0 preview
  • A basic .NET Core web application (we'll create one below)

Now be warned that Docker for Windows runs on Hyper-V while Docker Toolbox uses VirtualBox.  If you run VMware locally on your workstation, you won't be able to have Hyper-V enabled, so you'll have to settle for Docker Toolbox.  This walkthrough assumes you're using Docker Toolbox, but I assume there's not too much deviation.

For this walkthrough, we're going to focus on the ACR/ACS side of things, so we're going to leave the .NET Core side as just a basic site.  Follow the instructions here to create a .NET Core web application.

Once you've got that set up and compiled, we need to add one file to the mix: the Dockerfile.  At the root of your web project, add a file named "Dockerfile":

My file contents, based off of this Stormpath tutorial, look like so:

FROM microsoft/dotnet:1.0-sdk-projectjson
COPY . /app
WORKDIR /app
 
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
 
EXPOSE 5000/tcp
ENV ASPNETCORE_URLS http://*:5000
 
ENTRYPOINT ["dotnet", "run"]

Nate Barbettini has a good description of what each of the lines means.  Pay attention to the 5000; when we move this into Azure, we'll want to change this to port 80.
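
For context, here's roughly where that port binding surfaces on the .NET side.  This is a minimal sketch of a project.json-era Program.cs; the ASPNETCORE_URLS environment variable set in the Dockerfile is picked up by the WebHostBuilder automatically, and the explicit UseUrls call below is just the in-code equivalent (my addition for illustration, not something the container requires):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            // ASPNETCORE_URLS from the Dockerfile is read automatically; this call
            // is the equivalent in code and takes precedence if both are set.
            .UseUrls("http://*:5000")
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}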

For production environments, you'll want to revisit this for sure and build an optimized image using Steve Lasker's walkthrough (I have not tried it yet!) as it will yield a significantly smaller image.  The base image size for the dotnet image is 542MB.  The final size of my basic web application image is a whopping 683MB!

Now if you are using Docker Toolbox, you need to click on the Docker Quickstart Terminal icon which should be on your desktop.  This launches the terminal that we'll be using to:

  • Build the image
  • Tag the image
  • Upload to Azure

We're going to follow along with Barbettini's tutorial.  From the Docker Terminal (terminal), change directories into your .NET application directory (where the Dockerfile was created) and build the image:

docker build -t mywebapp:latest .

This builds a Docker "repository" with the name "mywebapp" and "tag" value of "latest".  Important: note the trailing "." which indicates "from the current directory".

While not strictly necessary, I would recommend running it at this point locally to make sure that it actually works.   Again, we follow along with Barbettini's tutorial and run:

docker run -d -p 8080:5000 -t mywebapp:latest

The application should load at the IP address assigned to the Docker virtual machine and not necessarily localhost.  To see and manage the running instance, you can use the Kitematic application that was installed with Docker Toolbox.  It will generate a random name for the running container, which will show up on the left-hand side.

Hopefully, you've got it running and you've confirmed that it works.  While you're in Kitematic, terminate the container as we're going to now prep it for Azure.  For Azure Web App on Linux (WAL), we will need to map the port directly to 80 (I'm sure there's a way to instruct WAL to map it at runtime, but I haven't figured that out yet).  So update your Dockerfile and replace 5000 with 80.

Now we're going to remove the previous image before we compile a new one.  From the terminal, type:

docker images

This will list the images we have locally. Then:

docker rmi mywebapp:latest

Verify that the image has been removed.  Now we run the build command again and it's time to move onto Azure.

Our first step is to create a private ACR where we can store our image to be used in Azure.  Log into the portal and type in "registry" in the search to find it:

Add a new registry and configure the settings:

I recommend creating a new resource group so that you can more easily delete all of the artifacts later.  Also be sure to enable "Admin user".  The registry should take a few moments to finish provisioning; once it's done, go back to Azure Container Registry, find your newly created registry, and select it.  Click on the Access key tab and grab the Username and Password:

Next, click on the Quick start tab.  This tab provides the customized commands that we'll need to access our ACR from our local Docker terminal.

Run the following command from the terminal:

docker login mywebappreg-innovocommerce.azurecr.io

The "mywebappreg" will be whatever name you gave to your ACR registry and the "innovocommerce" will be your Azure domain.  This is where you'll need the Username and Password from above.  Next, we're going to tag or rename the image we built earlier:

docker tag mywebapp mywebappreg-innovocommerce.azurecr.io/mywebapp

And finally, we're going to push it into the repository:

docker push mywebappreg-innovocommerce.azurecr.io/mywebapp

This will move the image into your repository -- though there's no way to actually see it in there without the CLI 2.0 preview.  The command should be:

az acr repository list -n mywebappreg -o json

(You'll need to log in first!  Use az login and follow the instructions)

Now we're ready to create our Docker container instance!  You have two options at this point: use the actual Azure Container Service (ACS) OR use Web App on Linux (WAL).  We'll go the WAL route since it's significantly easier!  From the search bar, type "web app" and you should see the Web App On Linux option:

Now we configure the web app:

Here, enter:

  • Your App name for the desired external URL
  • Select your existing resource group (again, for easier cleanup later!)
  • And click the Configure container option
  • Select Private registry for the Image source
  • Now enter the tag of the image we pushed earlier: mywebappreg-innovocommerce.azurecr.io/mywebapp:latest
  • Enter the URL of the ACR we created: https://mywebappreg-innovocommerce.azurecr.io
  • Enter the login name and password for the admin user you used to log into the terminal.
  • Leave Startup Command blank (note to Microsoft: inconsistent casing!); I suppose this may be where you could potentially map the ports (maybe?)

If everything works out, you'll be able to click OK and it will go off and provision your container!

To test it out, you can try hitting your site.  As per my example, that would be http://mywebapp5542.azurewebsites.net.  Be patient as it may take a bit of time for the site to come online.  Remember that this isn't an optimized image.

Next time, we'll explore how to achieve the same result with ACS and maybe optimizing the image for release!

Filed under: .Net, Azure

17 Jul 2015

Adventures in Single-Sign-On: Cross Domain Script Request

Posted by Charles Chen

Consider a scenario where a user authenticates with ADFS (or equivalent identity provider (IdP)) when accessing a domain such as https://www.domain.com (A) and then, from this page, a request is made to https://api.other-domain.com/app.js (B) to download a set of application scripts that would then interact with a set of REST based web services in the B domain.  We'd like to have SSO so that claims provided to A are available to B and that the application scripts downloaded can then subsequently make requests with an authentication cookie.

Roughly speaking, the scenario looks like this:

Depiction of the scenario

It was straightforward enough to set up the authentication with ADFS using WIF 4.5 for each of A and B following the MSDN "How To"; I had each of the applications separately working with the same ADFS instance, but the cross domain script request from A to B at step 5 for the script file generated an HTTP redirect sequence (302) that resulted in an XHTML form from ADFS with JavaScript that attempts to execute an HTTP POST for the last leg of the authentication.  This was good news because it meant that ADFS recognized the user session and tried to issue another token for the user in the other domain without requiring a login.

However, this obviously posed a problem as, even though it appeared as if it were working, the request for the script could not succeed because of the text/html response from ADFS.

Here's what https://www.domain.com/default.aspx looks like in this case:

<html>
  <body>
    ...
    <script type="text/javascript" src="https://api.other-domain.com/app.js"></script>
  </body>
</html>

This obviously fails because the HTML content returned from the redirect to ADFS cannot be consumed.

I scratched my head for a bit and dug into the documentation for ADFS, trawled online discussion boards, and tinkered with various configurations trying to figure this out with no luck.  Many examples online discuss this scenario in the context of making a web service call from the backend of one application to another using bearer tokens or WIF ActAs delegation, but these were ultimately not suited for what I wanted to accomplish as I didn't want to have to write out any tokens into the page (for example, adding a URL parameter to the app.js request), make a backend request for the resource, or use a proxy.

(I suspect that using the HTTP GET binding for SAML would work, but for the life of me, I can't figure out how to set this up on ADFS...)

In a flash of insight, it occurred to me that if I used a hidden iframe to load another page in B, I would then have a cookie in session to make the request for the app.js!

Here's what the page in A looks like:

<script type="text/javascript">
    function loadOtherStuff()
    {
        var script = document.createElement('script');
        script.setAttribute('type', 'text/javascript');
        script.setAttribute('src', 'https://api.other-domain.com/appscript.js');
        document.body.appendChild(script);
    }
</script>
<iframe src="https://api.other-domain.com" style="display: none" 
    onload="javascript:loadOtherStuff()"></iframe>  

Using the iframe, the HTTP 302 redirect is allowed to complete and ADFS is able to set the authentication cookie without requiring a separate sign on since it's using the same IdP, certificate, and issuer thumbprint.  Once the cookie is set for the domain, then subsequent browser requests in the parent document to the B domain will carry along the cookie!

The request for appscript.js is intercepted by an IHttpHandler and authentication can be performed to check for the user claims before returning any content. This then allows us to stream back the client-side application scripts and templates via AMD through a single entry point (e.g. appscript.js?app=App1 or a redirect to establish a root path depending on how you choose to organize your files).
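
As a rough sketch of what that interception could look like (the handler name and script path here are hypothetical, not the actual implementation):

using System.Security.Claims;
using System.Web;

/// <summary>
///     Hypothetical handler that gates the application scripts behind the WIF session.
/// </summary>
public class AppScriptHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        ClaimsPrincipal principal = context.User as ClaimsPrincipal;

        // Only stream the scripts back if the FedAuth session produced an authenticated principal.
        if (principal == null || !principal.Identity.IsAuthenticated)
        {
            context.Response.StatusCode = 401;
            return;
        }

        context.Response.ContentType = "text/javascript";
        context.Response.TransmitFile(context.Server.MapPath("~/Scripts/appscript.js"));
    }
}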

Any XHR requests made subsequently still require proper configuration of CORS on the calling side:

$.ajax({
    url: 'https://api.other-domain.com/api/Echo', 
    type: 'GET',
    crossDomain: true,
    xhrFields: {
        withCredentials: true
    },
    success: function(result){ window.alert('HERE'); console.log('RETRIEVED'); console.log(result); }
});

And on the service side:

<!--//
    Needed to allow cross domain request.
    configuration/system.webServer/httpProtocol
//-->
<httpProtocol>
    <customHeaders>
        <add name="Access-Control-Allow-Origin" value="https://www.domain.com" />
        <add name="Access-Control-Allow-Credentials" value="true" />
        <add name="Access-Control-Allow-Headers" value="accept,content-type,cookie" />
        <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
    </customHeaders>
</httpProtocol>

<!--//
    Allow CORS pre-flight
    configuration/system.webServer/security
//-->
<security>
    <requestFiltering allowDoubleEscaping="true">
        <verbs>
            <add verb="OPTIONS" allowed="true" />
        </verbs>
    </requestFiltering>
</security>

<!--//
    Handle CORS pre-flight request
    configuration/system.webServer/modules
//-->
<add name="CorsOptionsModule" type="WifApiSample1.CorsOptionsModule" />

The options handler module is a simple class that responds to OPTIONS requests and also dynamically adds a header to the response:

    /// <summary>
    ///     <c>HttpModule</c> to support CORS.
    /// </summary>
    public class CorsOptionsModule : IHttpModule
    {
        #region IHttpModule Members
        public void Dispose()
        {
            //clean-up code here.
        }

        public void Init(HttpApplication context)
        {
            context.BeginRequest += HandleRequest;
            context.EndRequest += HandleEndRequest;
        }

        private void HandleEndRequest(object sender, EventArgs e)
        {
            string origin = HttpContext.Current.Request.Headers["Origin"];

            if (string.IsNullOrEmpty(origin))
            {
                return;
            }

            if (HttpContext.Current.Request.HttpMethod == "POST" && HttpContext.Current.Request.Url.OriginalString.IndexOf(".svc") < 0)
            {
                HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", origin);
            }
        }

        private void HandleRequest(object sender, EventArgs e)
        {
            if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
            {
                HttpContext.Current.Response.End();
            }
        }

        #endregion
    }

The end result is that single-sign-on is established across two domains for browser to REST API calls using simple HTML-based trickery (only tested in FF!).

Filed under: .Net, Identity, SSO

14 Mar 2015

Adding Support for Azure AD Login (O365) to MVC Apps

Posted by Charles Chen

I spent the day toying around with ASP.NET MVC 5 web applications and authentication.  I won't cover the step-by-step as there are plenty of blogs that have it covered.

It seems that online, most examples and tutorials show you either how to use your organizational Azure AD account or social identity providers but not both.

I wanted to be able to log in using Facebook, Google, and/or the organizational account I use to connect to Office 365.

This requires that you select Individual User Accounts when prompted to change the authentication mode (whereas most tutorials have you select "Organization Accounts"):

mvc-use-individual-account

This will give you the baseline needed to add the social login providers (more on that later).

To enable Windows Azure AD, you will need to first log in to Azure and add an application to your default AD domain.  In the management portal:

  1. Click on ACTIVE DIRECTORY in the left nav
  2. Click the directory
  3. Click the APPLICATIONS link at the top
  4. Now at the bottom, click ADD to add a new application
  5. Select Add an application my organization is developing
  6. Enter an arbitrary name and click next
  7. Now in the App properties screen, you will need to enter your login URL (e.g. https://localhost:4465/Account/Login) and for the APP ID URI, you cannot use "localhost".  You should use your Azure account info like: https://myazure.onmicrosoft.com/MyApp.  The "MyApp" part is arbitrary, but the "myazure.onmicrosoft.com" part must match your directory identifier.

Most importantly, once you've created it, you need to click on the CONFIGURE link at the top and turn on the setting APPLICATION IS MULTI-TENANT:

mvc-multi-tenant

If you fail to turn this on, the logins are limited to the users that are in your Azure AD instance only; you will not be able to log on with accounts you use to connect to Office 365.  You'll get an error like this:

Error: AADSTS50020: User account ‘jdoe@myo365domain.com’ from external identity provider ‘https://sts.windows.net/1234567e-b123-4123-9112-912345678e51/’ is not supported for application ‘2123456f-b123-4123-9123-4123456789e5'. The account needs to be added as an external user in the tenant. Please sign out and sign in again with an Azure Active Directory user account.

An important note is that if you used "localhost" in step 7, the UI will not allow you to save the settings with an error "The App ID URI is not available. The App ID URI must be from a verified domain within your organization's directory."

Once you've enabled this, we're ready to make the code changes required.

First, you will need to install the OpenID Connect package from NuGet using the following command:

install-package microsoft.owin.security.openidconnect

Next, in the default Startup.Auth.cs file generated by the project template, you will need to add some additional code.

First, add this line:

app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

Then, add this:

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    ClientId = "138C1130-4B29-4101-9C84-D8E0D34D222A",
    Authority = "https://login.windows.net/common",
    PostLogoutRedirectUri = "https://localhost:44301/",                
    Description = new AuthenticationDescription
    {
        AuthenticationType = "OpenIdConnect",
        Caption = "Azure OpenId  Connect"
    },
    TokenValidationParameters = new TokenValidationParameters
    {
        // If you don't add this, you get IDX10205
        ValidateIssuer = false   
    }
});

There are two very important notes.  The first is that the Authority must have the /common path and not your Azure AD *.onmicrosoft.com path.

The second note is that you must add the TokenValidationParameters and set ValidateIssuer to false.

If you don't set this to false, you'll get the following 500 error after you successfully authenticate against Azure AD with your organizational O365 account:

IDX10205: Issuer validation failed. Issuer: ‘https://sts.windows.net/F92E09B4-DDD1-40A1-AE24-D51528361FEC/’. Did not match: validationParameters.ValidIssuer: ‘null’ or validationParameters.ValidIssuers: ‘https://sts.windows.net/{tenantid}/’

I think that this is a hack and to be honest, I'm not quite certain of the consequences of not validating the issuer, but it seems that there aren't many answers on the web for this scenario yet.  Looking at the source code where the exception originates, you'll see the method that generates it:

public static string ValidateIssuer(string issuer, SecurityToken securityToken, TokenValidationParameters validationParameters)
{
    if (validationParameters == null)
    {
        throw new ArgumentNullException("validationParameters");
    }
    
    if (!validationParameters.ValidateIssuer)
    {
        return issuer;
    }
    
    if (string.IsNullOrWhiteSpace(issuer))
    {
        throw new SecurityTokenInvalidIssuerException(string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10211));
    }
    
    // Throw if all possible places to validate against are null or empty
    if (string.IsNullOrWhiteSpace(validationParameters.ValidIssuer) && (validationParameters.ValidIssuers == null))
    {
        throw new SecurityTokenInvalidIssuerException(string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10204));
    }
    
    if (string.Equals(validationParameters.ValidIssuer, issuer, StringComparison.Ordinal))
    {
        return issuer;
    }
    
    if (null != validationParameters.ValidIssuers)
    {
        foreach (string str in validationParameters.ValidIssuers)
        {
            if (string.Equals(str, issuer, StringComparison.Ordinal))
            {
                return issuer;
            }
        }
    }
    
    throw new SecurityTokenInvalidIssuerException(
        string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10205, issuer, validationParameters.ValidIssuer ?? "null", Utility.SerializeAsSingleCommaDelimitedString(validationParameters.ValidIssuers)));
}

We're simply short-circuiting the process.  It's clear that there is no matching issuer configured, but it's not quite clear to me yet where or how to configure that.
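
Based on that source, one presumably cleaner alternative (untested on my part) would be to leave validation on and explicitly list the tenant issuers you trust via ValidIssuers; the tenant GUID below is just a placeholder:

TokenValidationParameters = new TokenValidationParameters
{
    ValidateIssuer = true,
    // Placeholder tenant ID; use the sts.windows.net issuer URI(s) of the tenant(s) you trust.
    ValidIssuers = new[]
    {
        "https://sts.windows.net/00000000-0000-0000-0000-000000000000/"
    }
}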

So what about the other social IdPs?  It's important to note that for Google, not only do you have to create a new client ID in the Google Developer Console, but you also need to enable the Google+ API:

mvc-google-api

You'll just get a bunch of useless error messages if you don't enable the API.

If you manage to get it all working, you should see the following options in the login screen:

mvc-azure

And when you click it, you should be able to log in using the same organizational credentials that you use to connect to Office 365:

mvc-login-azure

Filed under: .Net, MVC

12 Aug 2014

Invoking Custom WCF Services in SharePoint with Claims

Posted by Charles Chen

In SharePoint, if you host a custom WCF service in a claims-enabled web application, authenticating via NTLM is actually quite tricky if you are attempting to invoke the service from, say, a console application.

There are various articles and Stack Overflow entries on using System.ServiceModel.Description.ClientCredentials on either the ChannelFactory or the client instance, but none of these worked in the sense that, on the server side, SPContext.Current.Web.CurrentUser was null and ServiceSecurityContext.Current.IsAnonymous returned true.

It seems like it should be possible to invoke the service authenticating through NTLM as if the user were accessing it through the web site.

In fact, it is possible, but it involves some manual HTTP requests to get this to work without doing Windows Identity Foundation programming and consequently setting up tons of infrastructure for what seems like a relatively simple and straightforward scenario.

The first step is to actually manually retrieve the FedAuth token:

/// <summary>
///     Gets a claims based authentication token by logging in through the NTLM endpoint.
/// </summary>
/// <returns>The FedAuth token required to connect and authenticate the session.</returns>
private string GetAuthToken()
{
    string authToken = string.Empty;

    CredentialCache credentialCache = new CredentialCache();
    credentialCache.Add(new Uri(_portalBaseUrl), "NTLM", new NetworkCredential(_username, _password, _domain));

    HttpWebRequest request = WebRequest.Create(string.Format("{0}/_windows/default.aspx?ReturnUrl=%2f_layouts%2fAuthenticate.aspx%3fSource%3d%252F&Source=%2F ", _portalBaseUrl)) as HttpWebRequest;
    request.Credentials = credentialCache;
    request.AllowAutoRedirect = false;
    request.PreAuthenticate = true;

    // SharePoint doesn't like it if you don't include these (403 Forbidden)?
    request.UserAgent = "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko";
    request.Accept = "text/html, application/xhtml+xml, */*";

    HttpWebResponse response = request.GetResponse() as HttpWebResponse;

    authToken = response.Headers["Set-Cookie"];

    return authToken;
}

There are three keys here:

  1. The first is that AllowAutoRedirect must be false or you will get an error that you are getting too many redirects.  What seems to happen is that the cookies are not set correctly when using auto redirect so the chain will continue until an exception is thrown.  In Fiddler, you will see this as a long cycle of requests and redirects.
  2. The second is that the URL must be the NTLM authentication endpoint (/_windows/...) as any other URL will return a 302, and for that you would need to set AllowAutoRedirect to true.
  3. The third is that it seems as if SharePoint really doesn't like it when the user agent and accept headers are not included.  I tried it later without these and it seemed to work, but I could not get it to work without them initially (403 Forbidden).

Once you have the FedAuth token, you are able to basically impersonate the user.  To do so, you will need to include a cookie in your HTTP header request:

// Get the FedAuth cookie
var authToken = GetAuthToken();

// Create the connection artifacts.            
EndpointAddress endpointAddress = new EndpointAddress(endpointUrl);
BasicHttpBinding binding = new BasicHttpBinding();            

ChannelFactory<ISomeService> channelFactory = 
    new ChannelFactory<ISomeService>(binding, endpointAddress);

// Initiate the client proxy using the connection and binding information.
ISomeService client = channelFactory.CreateChannel();

using (new OperationContextScope((IContextChannel) client))
{
    // Set the authentication cookie on the outgoing WCF request.
    WebOperationContext.Current.OutgoingRequest.Headers.Add("Cookie", authToken);

    // YOUR API CALLS HERE    
}

The key is to add the header on the outgoing request before making your service API calls.

With this, you should see that you are able to invoke SharePoint hosted custom WCF service calls in claims-based web applications with NTLM authentication.

Filed under: .Net, SharePoint, WCF

26 Nov 2013

Preventing the Garbage Collector From Ruining Your Day

Posted by Charles Chen

If you're working with ZeroMQ, you may run into an exception with the message "Context was terminated".

It turns out that this is due to the garbage collector cleaning up (or attempting to clean up?) the ZmqContext.

Found this out via this handy thread on Stack Overflow, but what about cases where you can't use a using statement?

For example, in a Windows Service, I create the context on the OnStart method and destroy the context on the OnStop method.

In this case, an alternative is to use the GC.KeepAlive(Object obj) method to prevent the garbage collector from collecting the object until after the call to this method.  It seems counterintuitive, but the call is actually a signal to tell the garbage collector that it can collect the object at any point after this call.
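
Here's a rough sketch of what I mean for the Windows Service case (the class and member names are illustrative, and this assumes the clrzmq binding's ZmqContext):

using System;
using System.ServiceProcess;
using ZeroMQ;

public class WorkerService : ServiceBase
{
    private ZmqContext _context;

    protected override void OnStart(string[] args)
    {
        _context = ZmqContext.Create();
        // ...spin up worker threads that use _context...
    }

    protected override void OnStop()
    {
        // ...signal the workers to finish...
        _context.Dispose();

        // Tells the GC that _context must be treated as reachable up to this point;
        // it only becomes eligible for collection after this call.
        GC.KeepAlive(_context);
    }
}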

Filed under: .Net, Self Note, ZeroMQ

12 Nov 2013

An Architecture for High-Throughput Concurrent Web Request Processing

Posted by Charles Chen

I've been working with ZeroMQ lately and I think I've fallen in love.

It's rare that a technology or framework just jumps out at you, but here is one that will get your head spinning on the different ways that it can make your architecture more scalable, more powerful, and all the while offering a frictionless way of achieving this.

I've been building distributed, multi-threaded applications since college, and ZeroMQ has changed everything for me.

It initially started with a need to build a distributed event processing engine.  I had wanted to try implementing it in WCF using peer-to-peer and/or MSMQ endpoints, but the thought of the complexity of managing that stack along with the configuration and setup seemed like it would be at least fruitful to look into a few other alternatives.

RabbitMQ and ZeroMQ were the clear front-runners for me.  I really liked the richness of documentation and examples with RabbitMQ, and if you look at some statistics, it has a much greater rate of mentions on Stack Overflow, so we can assume that it has a higher rate of adoption.  But at the core of it, I think that there really is no comparison between these two except for the fact that they both have "MQ" in their names.

It's true that one could build RabbitMQ like functionality on top of ZeroMQ, but to a degree, I think that would be defeating the purpose.  The beauty of ZeroMQ is that it's so lightweight and so fast that it's really hard to believe; there's just one reference to add to your project.  No central server to configure.  No single point of failure.  No configuration files.   No need to think about failovers and clustering.  Nothing.  Just plug and go.  But there is a cost to this: a huge tradeoff in some of the higher level features that -- if you want -- you have to build yourself.

If you understand your use cases and you understand the limitations of ZeroMQ and where it's best used, you can find some amazing ways to leverage it to make your applications more scalable.

One such use case I've been thinking about is using it to build a highly scalable web-request processing engine which would allow scaling by adding lots of cheap, heterogeneous nodes.  You see, with ASP.NET, unless you explicitly build a concurrency-oriented application, your web server processing is single-threaded per request and you can only ever generate output HTML at the sum of the costs of generating each sub part of your view.  To get around this, we could consider a processing engine that would be able to parse controls and send the processing off -- in parallel -- to multiple processors and then reassemble the output HTML before feeding it back to the client.  In this scenario, the cost of rendering the page is the overhead of the request plus the cost of the most expensive part of the view generation.

The following diagram conceptualizes this in ZeroMQ:

zmq-processing

Still a work in progress...

Even if an ASP.NET application is architected and programmed for concurrency from the get-go, you are limited by the constraints of the hardware (# of concurrent threads).  Of course, you can add more servers and put a load balancer in front of them, but this can be an expensive proposition.  Perhaps a better architecture would be to design a system that allows adding cheap, heterogeneous server instances that do nothing but process parts of a view.

In such an architecture, it would be possible to scale the system by simply adding more nodes -- at any level.  They could be entirely heterogeneous; there's no need for IIS, and in fact, the servers don't even have to be Windows servers.  The tradeoff is that you have to manage the session information yourself and push the relevant information down through the pipeline or at least make it accessible via a high speed interface (maybe like Redis or Memcached?).

But the net gain is that it would allow for concurrent processing of a single web request and build an infrastructure for handling web requests that is easily scaled with cheap, simple nodes.
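
To make the idea a bit more concrete, here's a minimal sketch of the scatter side using clrzmq's PUSH/PULL pipeline (the endpoint and message format are placeholders, not the actual engine):

using System.Text;
using ZeroMQ;

class FragmentVentilator
{
    static void Main()
    {
        using (ZmqContext context = ZmqContext.Create())
        using (ZmqSocket sender = context.CreateSocket(SocketType.PUSH))
        {
            sender.Bind("tcp://*:5557");

            // Each message represents one fragment of the view; any number of PULL
            // workers can connect to this endpoint and render fragments in parallel.
            for (int fragment = 0; fragment < 10; fragment++)
            {
                sender.Send("render-fragment:" + fragment, Encoding.Unicode);
            }
        }
    }
}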

Filed under: .Net, Awesome, ZeroMQ

16 Aug 2012

SharePoint ListData.svc 500 Error

Posted by Charles Chen

If you're fighting with the SharePoint ListData.svc with an odd error:

An error occurred while processing this request.

And you are using an OData operator like endswith, you may encounter this error and be puzzled with why it works for some fields but not others.

We tried various theories -- indexed column?  use the column in a view?  maybe an error with the column? -- with no love until Rob thought that it might have to do with empty values.

Turns out that the underlying implementation of ListData.svc doesn't quite like it if you have un-set or "null" values in your text fields.  So a query like this:

http://collab.dev.com/_vti_bin/ListData.svc/Test?$filter=endswith(PrimaryFaxNumber, '6481099') eq true

Will fail if there is an item in the list with an empty value for PrimaryFaxNumber.

However, using a nullity check will fix the issue:

http://collab.dev.com/_vti_bin/ListData.svc/Test?$filter=PrimaryFaxNumber ne null and endswith(PrimaryFaxNumber, '6481099') eq true
Filed under: .Net, SharePoint

15 Sep 2011

Now I REALLY Can’t be Bothered to Learn Silverlight

Posted by Charles Chen

I've blogged about it before, but seriously, the question has to be asked: if you're a developer with limited bandwidth to focus on mastering new technologies, why would you spend that time on Silverlight?

Not only is WP7 floundering, but now the news is out: the Metro version of IE 10 in Windows 8 won't support any plugins - including Silverlight:

Windows 8 will have two versions of Internet Explorer 10: a conventional browser that lives on the legacy desktop, and a new Metro-style, touch-friendly browser that lives in the Metro world. The second of these, the Metro browser, will not support any plugins. Whether Flash, Silverlight, or some custom business app, sites that need plugins will only be accessible in the non-touch, desktop-based browser.

Should one ever come across a page that needs a plugin, the Metro browser has a button to go to that page within the desktop browser. This yanks you out of the Metro experience and places you on the traditional desktop.

The rationale is a familiar one: plugin-based content shortens battery life, and comes with security, reliability, and privacy problems. Sites that currently depend on the capabilities provided by Flash or Silverlight should switch to HTML5.

If you're not on the HTML5 boat yet, I think the writing is on the wall: the Silverlight party is over (thank goodness).

Filed under: .Net, Awesome

13 Sep 2011

Lesson Learned on SharePoint Service Applications

Posted by Charles Chen

If you're setting out on writing your own SharePoint service applications, there is an important lesson that you should keep in mind (instead of learning it the hard way): ensure that all of your proxy, application proxy, service, service application, and service instance classes have public parameterless (default) constructors.

Otherwise, you'll have a heck of a time starting, instantiating, and uninstalling services with lots of MissingMethodExceptions and "{class} cannot be deserialized because it does not have a public default constructor" error messages.
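
The underlying reason is that these classes are all persisted objects (SPPersistedObject descendants) that get rehydrated from the configuration database by the serializer, which needs a public default constructor.  As a minimal illustration of the pattern (using a plain persisted object here rather than a full service application, and a made-up class name):

using Microsoft.SharePoint.Administration;

public class MyServiceSettings : SPPersistedObject
{
    // Required: the configuration database deserializer needs a public default constructor.
    public MyServiceSettings()
    {
    }

    public MyServiceSettings(string name, SPPersistedObject parent)
        : base(name, parent)
    {
    }
}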

Oddly enough, one thing I've learned from this is that the STSADM commands are often more "powerful" than the equivalent PowerShell commands.  For example, Remove-SPSolution, even with the -Force parameter, still failed with the aforementioned exceptions.  On the other hand, stsadm -o deletesolution {name} -override seemed to work fine.  Puzzling, for the moment, but it got the job done.  Similarly, stopping a service application that's AWOL (stuck on the processing screen) can be accomplished with stsadm -o provisionservice.  Deleting it can be done using stsadm -o deleteconfigurationobject (though this one does seem to have side effects...).

It seems that PowerShell is still a second-class citizen when it comes to basic SharePoint command-line management.

But in any case, if you set out building your own service applications (<rant>and damn it Microsoft, can't you put some better examples out there?!  Even the few that are out there are convoluted, missing key details, hard to follow...</rant>), be sure to include public, default, parameterless constructors.

Filed under: .Net, Rants, SharePoint

29 Jul 2011

Working with GUIDs in MongoDB and ASP.NET MVC3

Posted by Charles Chen

Just a small tip for those looking to use GUIDs as document IDs in MongoDB in conjunction with ASP.NET MVC3: it's a lot more straightforward than it may seem at the onset.

These examples are based off of the ASP.NET MVC3 tutorials...except with MongoDB instead of EF+SQL Server.

I've set up my model class like so:

public class Movie
{
    [BsonId]
    public Guid ID { get; set; }
    public string Title { get; set; }
    public DateTime ReleaseDate { get; set; }
    public string Genre { get; set; }
    public decimal Price { get; set; }
}

When the application creates an object and persists it to the database, you'll see that it shows up like this in the Mongo console (I've formatted the JSON for clarity):

> db.movies.find()
{
   "_id":BinData(3,"n2FLBkAkhEOCkX42BGXRqg=="),
   "Title":"Test",
   "ReleaseDate":   ISODate("2011-05-11T04:00:00   Z"),
   "Genre":"Comedy",
   "Price":"9.99"
}

If you try to serialize this to JSON, instead of getting a GUID string, you'll get:

// Get a document
BsonDocument document = movies.FindOneAs<BsonDocument>();

// Direct to JSON
document.ToJson();
/*
{
   "_id":new BinData(3,
   "n2FLBkAkhEOCkX42BGXRqg=="   ),
   "Title":"Test",
   "ReleaseDate":   ISODate("2011-05-11T04:00:00   Z"),
   "Genre":"Comedy",
   "Price":"9.99"
}
*/

// With settings
JsonWriterSettings settings = new JsonWriterSettings{OutputMode = JsonOutputMode.JavaScript };
document.ToJson(settings);
/*
{
   "_id":{
      "$binary":"n2FLBkAkhEOCkX42BGXRqg==",
      "$type":"03"
   },
   "Title":"Test",
   "ReleaseDate":Date(1305086400000),
   "Genre":"Comedy",
   "Price":"9.99"
}
*/

This is somewhat inconvenient if you want to work with it from a pure JavaScript perspective; I was hoping that it would have returned the GUID as a string instead.  I was also concerned that this meant I'd have to manage this manually on the server side in my actions as well, but it turns out that it works better than expected.  The only caveat is that you have to use "_id" when creating queries; otherwise, you can use the GUID as-is and the Mongo APIs will convert it behind the scenes:

public ActionResult Details(Guid id)
{
    MongoCollection<Movie> movies = _database.GetCollection<Movie>("movies");

    // No need to mess with the GUID; use it as is.
    Movie movie = movies.FindOneAs<Movie>(Query.EQ("_id", id)); 

    return View(movie);
}
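
If you do want a plain GUID string on the client, one simple workaround (just a sketch; the anonymous projection is mine, not part of the tutorial) is to shape the response yourself before serializing:

// Requires the MongoDB.Bson, MongoDB.Driver.Builders, and System.Web.Mvc namespaces.
public ActionResult DetailsJson(Guid id)
{
    MongoCollection<Movie> movies = _database.GetCollection<Movie>("movies");

    BsonDocument document = movies.FindOneAs<BsonDocument>(Query.EQ("_id", id));

    // Convert the BinData _id back into a friendly GUID string for JavaScript consumers.
    var dto = new
    {
        Id = document["_id"].AsGuid.ToString(),
        Title = document["Title"].AsString
    };

    return Json(dto, JsonRequestBehavior.AllowGet);
}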

You can see the result below in the browser:

Note the properly formatted GUID in the URL

So far, so good with my little Mongo+MVC3 experiment 😀

Filed under: .Net, Mongo, MVC