Step-by-Step Guide to Getting SSL for Free using LetsEncrypt on Windows IIS

SSL should be simpler and cheaper, and Let’s Encrypt is a great project to make this happen, but it’s biased toward *nix systems. So what do you do if you want to install a free Let’s Encrypt SSL certificate on your Microsoft IIS server?

Well I did this today, and even using LEWS (see below) there does not seem to be a clear step-by-step guide. So having stumbled and googled my way through the process, I thought I’d document it for all.

This approach uses the letsencrypt-win-simple tool (LEWS) to validate and set up the SSL certificates.

A couple of prerequisites. First, you’re going to need admin console access to the hosting server – if you don’t have this, you won’t be able to follow this guide. Second, the DNS names must already be configured externally, e.g. mysite.example.org must resolve to your server over HTTP for this to work.
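A quick sanity check for the second prerequisite, run from a machine outside your network (using the example name above):

    nslookup mysite.example.org

If the name resolves to your server’s public IP and the site loads over http in a browser, you’re good to go.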

Assuming you do, here is what you need to do:

  1. Log onto the console of the Windows Server that is hosting the site you want to add SSL to
  2. Run Internet Information Services (IIS) Manager (I’ll call it IISM for short)
  3. Select the site you want to add SSL to in the list of Sites
  4. Click the Bindings button in the menu on the right
  5. Ensure your site has an http binding with a host name filled in, even if you only have one site and have so far left the host name blank (to accept all requests). LEWS uses this host name to determine what certificate name to create
  6. Close the bindings dialog box
  7. Download the LEWS client from https://github.com/Lone-Coder/letsencrypt-win-simple/releases – the ZIP file contains the client. This guide was written using version 1.7
  8. Unzip the contents to a folder on the server, e.g. C:\SSL
  9. Start a command prompt in Administrator mode (right click the Command Prompt on Start and select Run as Administrator)
  10. Navigate to the folder you unzipped LEWS to, e.g. cd \ssl
  11. Type the command letsencrypt and press enter
  12. If this is your first time, you’ll be prompted to enter your email address to register and accept the EULA – do this to continue (subsequent runs will bypass this)
  13. You should now have a text menu, with a list of numbered Site Bindings and options for M, A and Q (manual, all and quit)
  14. Select the site (1-9) you want to add SSL to, or use A if you want to do all. If your site does not appear it may be because you forgot to add a host-name binding.
  15. The process will now create a verification file in a subfolder of the site, and use this to try to authorize the SSL certificate (see Note A below).
  16. If this is successful, you should see a confirmation. A block of red text indicates an error.
  17. Assuming it worked, go back to IISM and select “Bindings” on the site – you should now have an SSL (https) binding with a dated certificate name: note that LetsEncrypt certificates are limited to 90 days for security reasons, but of course renewals are free.

Possible problems you might encounter:

Authorization Failures

If you’ve recently added a DNS entry, or recently changed DNS configurations (e.g. an IP address change), these changes can take time to become effective (lots of systems cache DNS or only update infrequently). This can cause the authorization to fail even though your configuration is correct – if in doubt, wait for the DNS changes to settle and run LEWS again.

Note A

The authorization process creates a subfolder .well-known/acme-challenge containing a verification file on your site, and the LetsEncrypt service then tries to retrieve this static text file over HTTP (resolving your hostname via public DNS).

For example, if I want to set up SSL for mydomain.example.org it might create a file called 7sYBFMggYCsR3roQ2SqpNkgwXCs8aD1NoXaUnnZDdQ0 and then attempt to access http://mydomain.example.org/.well-known/acme-challenge/7sYBFMggYCsR3roQ2SqpNkgwXCs8aD1NoXaUnnZDdQ0 

Because IIS by default refuses to serve extensionless URLs (it has no MIME type for such files), this request would normally fail, so LEWS also adds a simple web.config file in the acme-challenge folder to allow it.
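I haven’t compared it byte-for-byte with the file LEWS writes, but the override is essentially a static-content MIME mapping along these lines (the exact MIME type may differ):

    <configuration>
      <system.webServer>
        <staticContent>
          <mimeMap fileExtension="." mimeType="text/plain" />
        </staticContent>
      </system.webServer>
    </configuration>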

If you still run into issues, check the LEWS issue tracker on GitHub.

RequireJS, TypeScript and Knockout Components

Despite the teething problems I had getting my head around RequireJS, I had another go this week at sorting it out. The motivation for this was Knockout Components – re-usable asynchronous web components that work with Knockout. This article details the steps required to make this work on a vanilla ASP.NET MVC website.

Components

If you’re not familiar with components, they are similar to ASP.NET Web Controls – re-usable, modular components that are loosely coupled – but all using client-side JS and HTML.

Although you can use and implement components without an asynchronous module loader, it makes much more sense to modularise your code and templates with one. This means the JS code and/or HTML template is loaded at runtime, asynchronously, and on demand.

Demo Project

To show how to apply this to a project, I’ve created a GitHub repository with a web app. Step one was to create the ASP.NET MVC application. I used .NET 4.5 and MVC version 5 for this but older versions would work just as well.

Next I upgraded all the default Nuget packages including JQuery so that we have the latest code, and then amended the home page to remove the standard ASP.NET project home page. So far, nothing new.

RequireJS

Next step is to add RequireJS support. At present our app loads the JavaScript modules synchronously from the HTML pages, using the ASP.NET bundler.

        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                        "~/Scripts/jquery-{version}.js"));
            // ... further bundles elided ...
        }

These are inserted via the _Layout.cshtml template:

    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)

First we add RequireJS to the web app from Nuget, along with the Text Addon for RequireJS. The Text Addon is required to load non-JavaScript items (e.g. CSS and HTML) via RequireJS, which we’ll need for the HTML templates. Both packages install into the /Scripts folder.

Configuring RequireJS

Next, we create a configuration script file for RequireJS. Mine looks like this, yours may be different depending on how your scripts are structured:

require.config({
    baseUrl: "/Scripts/",
    paths: {
        jquery: "jquery-2.1.1.min",
        bootstrap: "bootstrap.min"
    },
    shim: {
        "bootstrap": ["jquery"]
    }
});

This configuration uses the default Scripts folder, and maps the module ‘jquery’ to the version of JQuery we have updated to. We’ve also mapped ‘bootstrap’ to the minified version of the Bootstrap file, and added a shim that tells RequireJS that we need to load JQuery first if we use Bootstrap.

I created this file in /Scripts/require/config.ts using TypeScript. At this point TypeScript is flagging an error saying that require is not defined, so we need to add some TypeScript definition files. The best resource for these is the GitHub project DefinitelyTyped, and they are all on Nuget to make it even easier. We can install them from the Package Manager console:

    Install-Package jquery.TypeScript.DefinitelyTyped
    Install-Package bootstrap.TypeScript.DefinitelyTyped
    Install-Package requirejs.TypeScript.DefinitelyTyped

Implementing RequireJS

At this point we have added the scripts but not actually changed our application to use RequireJS. To do this, we open the _Layout.cshtml file, and change the script segment to read as follows:

    <script src="~/Scripts/require.js"></script>
    <script src="~/Scripts/require/config.js"></script>
    @* load JQuery and Bootstrap *@
    <script>
        require(["jquery", "bootstrap"]);
    </script>
    @RenderSection("scripts", required: false)

This segment loads require.js first, then runs the config.js file which configures RequireJS.

Important: Do not use the data-main attribute to load the configuration as I originally did –  you’ll find that you cannot guarantee that RequireJS is properly configured before the require([…]) method is called. If the page loads with errors, e.g. if the browser tries to load /Scripts/jquery.js or /Scripts/bootstrap.js then you’ve got it wrong.
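For contrast, this is the race-prone pattern to avoid – data-main fetches the configuration script asynchronously, so the inline require() call can run before the configuration has been applied:

    <script src="~/Scripts/require.js" data-main="/Scripts/require/config"></script>
    <script>
        require(["jquery", "bootstrap"]); // may run before config.js has loaded
    </script>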

Check the network load for 404 errors in the browser developer tools.

Adding Knockout

We add Knockout version 3.2 from the package manager along with the TypeScript definitions as follows:

Install-Package knockoutjs
Install-Package knockout.TypeScript.DefinitelyTyped

You need at least version 3.2 to get the Component support.

I then modified the configuration file to add a mapping for knockout:

require.config({
    baseUrl: "/Scripts/",
    paths: {
        jquery: "jquery-2.1.1.min",
        bootstrap: "bootstrap.min",
        knockout: "knockout-3.2.0"
    },
    shim: {
        "bootstrap": ["jquery"]
    }
});

Our last change is to switch the TypeScript compilation settings to produce AMD output instead of the default module format. We do this on the web app’s properties page:

[Screenshot: the TypeScript build settings page, with the module system set to AMD]
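If you prefer editing the project file directly, I believe the equivalent MSBuild property (assuming the standard TypeScript targets) is:

    <TypeScriptModuleKind>AMD</TypeScriptModuleKind>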

This allows us to make use of require via the import and export keywords in our TypeScript code.

We are now ready to create some components!

Demo1 – Our First Component: ‘click-to-edit’

I am not going to explain how components work, as the Knockout website does an excellent job of this as do Steve Sanderson’s videos and Ryan Niemeyer’s blog.

Instead we will create a simple page with a view model with an editable first and last name. These are the steps we take:

  1. Add a new ActionMethod to the Home controller called Demo1
  2. Add a view with the same name
  3. Create a simple HTML form, and add a script section as follows:
    <p>This demonstrates a simple viewmodel bound with Knockout, and uses a Knockout component to handle editing of the values.</p>
    <form role="form">
        <label>First Name:</label>
        <input type="text" class="form-control" placeholder="" data-bind="value: FirstName" />
        @* echo the value to show databinding working *@
        <p class="help-block">
            Entered: {<span data-bind="text: FirstName"></span>}
        </p>
        <button data-bind="click: Submit">Submit</button>
    </form>
    
    @section scripts {
        <script>
            require(["views/Demo1"]);
        </script>
    }
  4. We then add a folder views to the Scripts folder, and create a Demo1.ts TypeScript file.
// we need knockout
import ko = require("knockout");

export class Demo1ViewModel {
    FirstName = ko.observable<string>();

    Submit() {
        var name = this.FirstName();
        if (name)
            alert("Hello " + this.FirstName());
        else
            alert("Please provide a name");
    }
}

ko.applyBindings(new Demo1ViewModel());

This is a simple viewmodel, but it demonstrates using RequireJS via an import statement. If you watch the network log in the browser developer tools, you’ll notice that the demo1.js script is loaded asynchronously after the page has finished loading.

So far so good, but now to add a component. We’ll create a click-to-edit component where we show a text observable as a label that the user has to click to be able to edit.

Click to Edit

Our component will show the current value as a span control, but if the user clicks this, it changes to show an input box, with Save and Cancel buttons. If the user edits the value then clicks cancel, the changes are not saved.

[Screenshots: the component in view mode and in edit mode]

To create this we’ll add three files in a components subfolder of Scripts:

  • click-to-edit.html – the template
  • click-to-edit.ts – the viewmodel
  • click-to-edit-register.ts – registers the component

The template and viewmodel should be simple enough if you’re familiar with knockout: we have a viewMode observable that is true if in view mode, and false if editing. Next we have value – an observable string that we will point back to the referenced value. We’ll also have a newValue observable that binds to the input box, so we only change value when the user clicks Save.
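To make this concrete, here is a minimal sketch of what click-to-edit.ts might look like – the observable names mirror the description above, but the method names and details are illustrative rather than the repository’s exact code:

    import ko = require("knockout");

    class ClickToEditViewModel {
        // true = showing the label, false = showing the input box
        viewMode = ko.observable(true);

        // the observable passed in from the host page
        value: KnockoutObservable<string>;

        // scratch value bound to the input box; only copied back on Save
        newValue = ko.observable<string>();

        constructor(params: { value: KnockoutObservable<string> }) {
            this.value = params.value;
        }

        edit() {
            this.newValue(this.value());
            this.viewMode(false);
        }

        save() {
            this.value(this.newValue());
            this.viewMode(true);
        }

        cancel() {
            this.viewMode(true);
        }
    }

    // under AMD output this compiles to "return ClickToEditViewModel;",
    // which is what Knockout expects (see the TypeScript note further down)
    export = ClickToEditViewModel;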

TypeScript note: I found that TypeScript helpfully removes ‘unused’ imports from the compiled code, so if you don’t use the resulting clickToEdit object in the code, it gets removed from the resulting JavaScript output. I added the line var tmp = clickToEdit; to force it to be included.

Registering and Using the Component

To use the component in our view, we need to register it, then we use it in the HTML.

Registering is done via import clickToEdit = require("click-to-edit-register"); at the top of our Demo1 view model script. The registration script is pretty short and looks like this:

import ko = require("knockout");

// register the component
ko.components.register("click-to-edit", {
    viewModel: { require: "components/click-to-edit" },
    template: { require: "text!components/click-to-edit.html" }
});

The .register() method has several different ways of being used. This version is the fully-modular version where the script and HTML templates are loaded via the module loader. Note that the viewmodel script has to return a function that is called by Knockout to create the viewmodel.

TypeScript note: the viewmodel code for the component defines a class, but to work with Knockout components the script has to return a function that Knockout can use to instantiate the viewmodel. To do this, I added a line return ClickToEditViewModel; at the end of the module (the export = line in the sketch above compiles to exactly this). This returns the class, which of course is actually the constructor function. The constructor takes a params parameter that should have a value property: the observable we want to edit.

Using the component is easy: we use the name that we used when it was registered (click-to-edit) as if it were a valid HTML tag.

    <label>First Name:</label>
    <div class="form-group">
        <click-to-edit params="value: FirstName"></click-to-edit>
    </div>

We use the params attribute to pass the observable value through to the component. When the component changes the value, you will see this reflected in the page’s model.

Sequence of Events

It’s interesting to follow through what happens in sequence:

  1. We navigate to the web URL /Home/Demo1, which maps to the controller action Demo1
  2. The view is returned, which only references one script “views/Demo1” using require()
  3. RequireJS loads the Demo1.js script after the page has finished loading
  4. This script references knockout and click-to-edit-register, which are both loaded before the rest of the script executes
  5. The viewmodel binds using applyBindings(). Knockout looks for registered components and finds the <click-to-edit> tags in our view, so it makes requests to RequireJS to load the viewModel script and the HTML template.
  6. When both have been loaded, the component binding is completed.

It’s interesting to watch the network graph of the load:

[Screenshot: network waterfall of the Demo1 page load]

The purple vertical line marks when the page finished loading, after about 170ms – roughly half way through the full sequence of requests. In this case I had cleared the cache, so everything was loaded fresh. The initial page load transferred just over 33KB of data, whereas the total eventually loaded was 306KB. This really helps make sites more responsive on first load.

Another feature of Knockout components is that they are dynamically loaded only when they are used. If I had a component used for items in an observable array, and the array was empty, then the component’s template and viewModel would never be requested. This is really useful if you’re creating Single Page Applications.

Re-using the Component

One of the biggest benefits of components is reuse. We can now extend our viewModel in the page to add a LastName property, and use the click-to-edit component in the form again:

    <label>First Name:</label>
    <div class="form-group">
        <click-to-edit params="value: FirstName"></click-to-edit>
    </div>
    <label>Last Name:</label>
    <div class="form-group">
        <click-to-edit params="value: LastName"></click-to-edit>
    </div>

Now the page has two values using the same control, independently bound.

Using Google Drive API with C# – Part 2

Welcome to Part 2 which covers the authorization process. If you have not yet set up your Google API access, please read part 1 first.

OpenAuth

OpenAuth initially seems pretty complicated, but once you get your head around it, it’s not that scary, honest!

If you followed the steps in Part 1 you should now have a Client ID and Client Secret, which act as the ‘username’ and ‘password’. However, these by themselves will not get you access directly.

Hotel OpenAuth

You can think of OpenAuth as being a bit like a hotel, where the room is your Google Drive. To get access you need to check in at the hotel and obtain a key.

When you arrive at reception, they check your identity, and once they know who you are, they issue a card key and a PIN number to renew it. This hotel uses electronic card keys and for security reasons they stop working after an hour.

When the card stops working you have to get it re-enabled. There is a machine in the lobby where you can insert your card, enter the PIN and get the card renewed, so you don’t have to go back to the reception desk and ask for access again.

Back To OpenAuth

In OpenAuth, ‘Reception’ is the authorization request that appears in the browser the first time you attempt to use a Client ID and Client Secret. This happens when you call GoogleWebAuthorizationBroker.AuthorizeAsync for the first time.

This allows the user to validate the access being requested from your application. If the access is approved, the client receives a TokenResponse object.

The OpenAuth ‘key card’ is called an AccessToken, and will work for an hour after being issued. It’s just a string property in the TokenResponse. This is what is used when you try to access files or resources on the API.

When the AccessToken expires you need to request a new one, and the ‘PIN number’ is a RefreshToken (another property in TokenResponse) which also got issued when the service validated you. You can save the refresh token and re-use it as many times as you need. It won’t work without the matching Client ID and Client Secret, but you should still keep it confidential.

With the .NET API this renewal process is automatic – you don’t need to request a new access key if you’ve provided a RefreshToken. If access is revoked by the Drive’s owner, the RefreshToken will stop working, so you need to handle this situation when you attempt to gain access, as sketched below.
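A sketch of the kind of guard I mean – here secrets, scopes and store are stand-ins for your own setup, and blocking on .Result wraps the failure in an AggregateException:

    try
    {
        var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            secrets, scopes, "user", CancellationToken.None, store).Result;
    }
    catch (AggregateException ex)
    {
        if (ex.InnerException is TokenResponseException)
        {
            // the refresh token has been revoked or is invalid:
            // clear the stored token and send the user back through
            // the browser authorisation step
        }
        else
        {
            throw;
        }
    }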

Token Storage

The first time you call AuthorizeAsync, the web authorization screen pops up; on subsequent requests it doesn’t, even if you restart the application. How does this happen?

The Google .NET client API stores these token responses using an interface called IDataStore. This is an optional parameter of the AuthorizeAsync method, and if you don’t provide one, a default FileDataStore (on Windows) is used. This stores the TokenResponse in a file under [userfolders]\[yourname]\AppData\Roaming\Drive.Auth.Store.

When you call AuthorizeAsync a second time, the OpenAuth API uses the key provided to see if there is already a TokenResponse available in the store.

Key, what key? The key is the third parameter of the AuthorizeAsync method, which in most code samples is just “user”.

    var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                    secrets,
                    new string[] { DriveService.Scope.Drive },
                    "user",
                    CancellationToken.None).Result;

It follows that if you run your Drive API application on a different PC, or logged in as a different user, the folder is different and the stored TokenResponse isn’t accessible, so the user will be prompted to authorise again.

Creating Your Own IDataStore

Since Google uses an interface, you can create your own version of the IDataStore. For my application, I would only be using a single client ID and secret for the application, but I wanted it to work on the live server without popping up a web browser.

I’d already obtained a TokenResponse by calling the method without a store, and authorised the application in the browser. This generated the TokenResponse in the file system as I just described.

I copied the value of just the RefreshToken, and created a MemoryDataStore that stores the TokenResponses in memory, along with a key value to select them. Here’s the sequence of events:

  1. When my application starts and calls AuthorizeAsync the first time, I pass in MemoryDataStore.
  2. The Google API then calls the .GetAsync<T> method in my class, so I hand back a TokenResponse object where I’ve set only the RefreshToken property.
  3. This prompts the Google OAuth API to go and fetch an AccessToken (no user interaction required) that you can use to access the drive.
  4. Then the API calls StoreAsync<T> with the resulting response. I then replace the original token I created with the fully populated one.
  5. This means the API won’t keep requesting AccessTokens for the next hour, as the next call to GetAsync<T> will return the stored token (just like the FileDataStore does).

Note that the TokenResponse we get back has an ExpiresInSeconds value and an Issued date. The OAuth system auto-renews (although I’ve not confirmed this yet), so when your AccessToken expires it gets a new one without you needing to intervene.

My code for the MemoryDataStore is as follows:

using Google.Apis.Auth.OAuth2.Responses;
using Google.Apis.Util.Store;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Anvil.Services.FileStorageService.GoogleDrive
{
    /// <summary>
    /// Handles internal token storage, bypassing filesystem
    /// </summary>
    internal class MemoryDataStore : IDataStore
    {
        private Dictionary<string, TokenResponse> _store;

        public MemoryDataStore()
        {
            _store = new Dictionary<string, TokenResponse>();
        }

        public MemoryDataStore(string key, string refreshToken)
        {
            if (string.IsNullOrEmpty(key))
                throw new ArgumentNullException("key");
            if (string.IsNullOrEmpty(refreshToken))
                throw new ArgumentNullException("refreshToken");

            _store = new Dictionary<string, TokenResponse>();

            // add new entry
            StoreAsync<TokenResponse>(key,
                new TokenResponse() { RefreshToken = refreshToken, TokenType = "Bearer" }).Wait();
        }

        /// <summary>
        /// Remove all items
        /// </summary>
        /// <returns></returns>
        public async Task ClearAsync()
        {
            await Task.Run(() =>
            {
                _store.Clear();
            });
        }

        /// <summary>
        /// Remove single entry
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <returns></returns>
        public async Task DeleteAsync<T>(string key)
        {
            await Task.Run(() =>
            {
                // check type
                AssertCorrectType<T>();

                if (_store.ContainsKey(key))
                    _store.Remove(key);
            });
        }

        /// <summary>
        /// Obtain object
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <returns></returns>
        public async Task<T> GetAsync<T>(string key)
        {
            // check type
            AssertCorrectType<T>();

            if (_store.ContainsKey(key))
                return await Task.Run(() => { return (T)(object)_store[key]; });

            // key not found
            return default(T);
        }

        /// <summary>
        /// Add/update value for key/value
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        public Task StoreAsync<T>(string key, T value)
        {
            return Task.Run(() =>
            {
                if (_store.ContainsKey(key))
                    _store[key] = (TokenResponse)(object)value;
                else
                    _store.Add(key, (TokenResponse)(object)value);
            });
        }

        /// <summary>
        /// Validate we can store this type
        /// </summary>
        /// <typeparam name="T"></typeparam>
        private void AssertCorrectType<T>()
        {
            if (typeof(T) != typeof(TokenResponse))
                throw new NotImplementedException(typeof(T).ToString());
        }
    }
}
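For completeness, this is roughly how the store gets wired into the authorisation call – the "user" key must match the one the store was constructed with, and the client ID/secret/application name values are placeholders:

    // refreshToken is the saved RefreshToken string from the earlier browser run
    var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        new ClientSecrets { ClientId = "your-client-id", ClientSecret = "your-secret" },
        new string[] { DriveService.Scope.Drive },
        "user",
        CancellationToken.None,
        new MemoryDataStore("user", refreshToken)).Result;

    // the credential can then be used to build the Drive service as usual
    var service = new DriveService(new BaseClientService.Initializer
    {
        HttpClientInitializer = credential,
        ApplicationName = "MyDriveApp" // placeholder
    });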

This sample uses the following Nuget package versions:

    <package id="Google.Apis" version="1.8.2" targetFramework="net45" />
    <package id="Google.Apis.Auth" version="1.8.2" targetFramework="net45" />
    <package id="Google.Apis.Core" version="1.8.2" targetFramework="net45" />
    <package id="Google.Apis.Drive.v2" version="1.8.1.1270" targetFramework="net45" />
    <package id="log4net" version="2.0.3" targetFramework="net45" />
    <package id="Microsoft.Bcl" version="1.1.9" targetFramework="net45" />
    <package id="Microsoft.Bcl.Async" version="1.0.168" targetFramework="net45" />
    <package id="Microsoft.Bcl.Build" version="1.0.14" targetFramework="net45" />
    <package id="Microsoft.Net.Http" version="2.2.22" targetFramework="net45" />
    <package id="Newtonsoft.Json" version="6.0.3" targetFramework="net45" />
    <package id="Zlib.Portable" version="1.9.2" targetFramework="net45" />

Raspberry Pi Wallboard System

One of the directors wanted to have a wallboard displaying real-time numbers from our new VoIP phone system. We had a web page which could show the stats, so we now had to decide how to get these onto a wall-mounted display in the office.

The display part was easy: there are quite a few 32-inch or larger LCD TVs with HDMI input. The question was what to use to display and update the web page.

At first we considered setting up a PC to do this but even the Intel NUC devices are quite expensive – £250-plus for a device and a disk. They are also much more powerful than we need.

My colleague on this project looked at the Google ChromeCast device, but it is designed for media streaming rather than web pages, so I decided to explore the Raspberry Pi as an alternative.

Toys!

To be honest, I’d been itching to mess about with a Pi since they came out. But with so much else to do I couldn’t justify the time to myself. This was a good opportunity to experiment with a specific, worthwhile goal in mind.


I chose Pimoroni as my supplier as they had a good selection of starter kits. Our “production” unit would consist of a Pi model B, a case, 8GB SD card, WiFi dongle, 5v USB power adapter and a short HDMI cable.

This comes to £67 including VAT in the UK – a lot less than the NUC option. Add that to a 32” Samsung TV for about £219 including VAT. These are excellent as they also have a powered USB 5V connector – so the Pi runs off the TV power supply.

So a wallboard system for less than £300! A wall mounting bracket is extra – these vary from £10 upward, so we might just break the £300 limit for that.

I got two “production” units and a “Deluxe Raspberry Pi Starter Kit” which includes a simple USB keyboard, mouse, USB hub – this one was to act as our development box.

Configuration and Setup

The SD cards come pre-installed with NOOBS so I selected Raspbian and got to a working desktop. After configuring the WiFi we had network access.

The main requirement was hands-free boot-to-display operation. Fortunately someone else had done the heavy lifting for this part of the design by configuring their Pi to act as a wall-mounted Outlook calendar – a tip of the hat to Piney for his excellent guide.
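I won’t repeat Piney’s guide here, but the gist – assuming Raspbian’s LXDE desktop, with the URL as a placeholder for your own stats page – is an autostart file that disables screen blanking and launches a browser full-screen, e.g. in /etc/xdg/lxsession/LXDE/autostart:

    @xset s off
    @xset -dpms
    @xset s noblank
    @midori -e Fullscreen -a http://yourserver/wallboard

Details vary between guides (some use Chromium’s kiosk mode instead of Midori), so treat this as a sketch rather than the exact configuration we used.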

Once I had a working development system, I purchased a USB card reader (that supports SD card format – almost all do) for my PC and installed Win32 Disk Imager. I copied the working SD card from the development Pi to an image, and then wrote this image back to the two production SD cards.

Testing

So far I had only run this on my office WiFi with a computer monitor, so I unhooked the development setup and took it home, where the children’s 42” TV sat in their den, practically begging me to try it out.

[Photo: the Pi plugged into the den TV]

I was impressed that I didn’t need to touch the screen setup for the HDMI to work correctly. I had to reconfigure the WiFi, but once that was done I could plug the Pi directly into the TV’s powered USB connector, and the HDMI cable. 

Installation

Installation at site isn’t totally shrink-wrapped unless you know the WiFi configuration in advance. Our production Pis were delivered to my office, and our excellent support tech Cristian used a USB mouse and keyboard to configure them with the correct WiFi setup and key. He then mounted the two Pis behind the TVs (you could use Velcro tape for this), connecting their power leads to the TVs’ powered USB ports and their HDMI cables to the TVs’ HDMI inputs.

Operation

The TV is switched on in the morning, which powers up the USB port. The Pi boots up directly into a full-screen display of the web page that shows our call stats.

[Photo: the wallboard displaying live call stats]

ASP.NET MVC attribute routing

As we add more pages to the ASP.NET MVC section of our application, I’ve found the MVC routing table in version 4 somewhat constraining. I’m not alone – it seems others find it awkward for more intelligent handling of routes.

Problem

Our site has customers, and we have different services for these customers. I’ve partitioned everything customer-related into its own area, so that the base URL for it is /Customer/

We use integer account IDs from the database as our primary key, so logically if I want to refer to a specific customer I prefer the URL

/Customer/123

as the root. Following the standard MVC default, this would instead be

/Customer/Home/Index/123

Ugh. That just sounds long-winded and over-complicated.

So for example, the home page should be

/Customer/123/Home

The default ASP.NET route won’t work for this, of course: /Customer/{controller}/{action}/{id} will interpret this as a controller called 123 and an action Home, with no ID. Easy enough to change the routing, as we have done until now:

/Customer/{customerID}/{controller}/{action}

The first problem with this approach is that customerID is now a required value: what if I want to create a search page to find a customer? I don’t have a customerID in this context, but it’s still customer-related. What I’d like is a URL like

/Customer/Search

While this is possible in the MVC4 routing engine, it leads to a longer and longer set of routes to be registered in our RegisterRoutes method. What we actually want to specify is some kind of regex-style pattern matching, e.g.

/Customer/{CustomerID:\d+}/{controller}/{action}
/Customer/{controller:[^\d*]}/{action}

Attribute Based Routing

Alas, the current MVC4 routing engine does not support this. However, there is an alternative that the ASP.NET team seems to be adopting for MVC 5: attribute-based routing (ABR).

Installing is a breeze: install the Nuget package, and it adds a setup file to the App_Start folder which wires up the routing configuration. That’s it.
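From memory, the generated setup looks roughly like this (MyWebApp is a placeholder namespace; the package uses WebActivator to run it at application start):

    using System.Web.Routing;
    using AttributeRouting.Web.Mvc;

    [assembly: WebActivator.PreApplicationStartMethod(
        typeof(MyWebApp.AttributeRoutingConfig), "Start")]

    namespace MyWebApp
    {
        public static class AttributeRoutingConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // scan this assembly's controllers for routing attributes
                routes.MapAttributeRoutes();
            }

            public static void Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }
    }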

Each controller/action can then be decorated with attributes which define the routing. Using my example above, all my controllers live in the Customer area, and I generally want to prefix each route/action with the customerID. In ABR, I can define the area and a prefix at the controller level:

    [RouteArea("Customer")]
    [RoutePrefix("{CustomerID:int}/Issue")]
    public class IssueController : Controller

In the above sample, the URL

~/Customer/{CustomerID}/Issue

is now the base URL for all actions on this controller. ABR will extract and parse the customer ID as an integer from the URL, and I can use it directly in my action methods, e.g.

        [GET("")]
        [GET("List")]
        public ActionResult List(int CustomerID)
        {
            return View();
        }

The List action works with a blank or “List” URL part, so either of the following routes will map to it:

~/Customer/{CustomerID}/Issue

~/Customer/{CustomerID}/Issue/List

No More IDs

When I refer to a specific Issue by ID, I can avoid the generic ‘{ID}’ we get in the old routing engine:

        // GET: /Customer/{CustomerID}/Issue/{IssueID}
        [GET("{IssueID:int}")]
        public ActionResult Index(int CustomerID, int IssueID)
        {

Now let’s say I want my search page, which does not have a customer ID:

~/Customer/Search

We can define a Search controller by dropping the CustomerID part:

    [RouteArea("Customer")]
    [RoutePrefix("Search")]
    public class SearchController : Controller
    {

This will map to the URL

~/Customer/Search

I won’t go into the ABR specification and all the options it provides, as the library is well documented and logical.

A big thanks to Tim McCall for his work!

IT Developer Search – Job Site Comparison for Employers

Our company had need of another developer (recession.. what recession?) so I decided to review the current players and try to get some comparisons before I posted my advert, and search CVs. Here are my research notes. I’ve tried to compare like-with-like where that is possible. This is not intended to be an exhaustive list, I tried to stick with the bigger on-line players.

| Site | 1 month advert | 1 week advert | CV search | Sample job search¹ | Sample CV search² | Score (out of 5) |
|------|----------------|---------------|-----------|--------------------|-------------------|------------------|
| Monster UK | £199 | only as part of packages | £799 for month, £249 for week | 278 | 7 candidates (4 in London) | 3 |
| JobServe | £299 | £120 | £200/month, 200 CV views | 698 | 55 | 4 |
| JobSite | £198³ | £99 | £600 (24hrs) | 61 | n/a | 2 |
| TotalJobs | £99 | n/a | £250 (week) | 11 | n/a | 2 |
| Jobs | £199 | n/a | £499? | 20 (see notes) | n/a | 1 |
| Indeed | Free/CPC | Free/CPC | Free/$1 per CV | 971 | about 350 | 5 |

Notes

All prices quoted correct at time of research (24th June 2013) and are exclusive of VAT.

¹ Sample job search is “asp.net web developer” in “Guildford, Surrey”. Where a distance can be specified, 20 miles is used.

² CV search price is for one month’s access (where available).

³ The JobSite advert was for two weeks, so the £99 cost is doubled to match a month.

Monster

I had recruited one person through a recruitment consultant, who it seems used Monster as a CV source and sent me candidates from there, so it was a logical place to start.

There are a lot of different options including discount combined advert + CV searches, and also geographic filters with discounted pricing, which looked good. After a bit of searching I found their CV search test-drive facility, which allows you to run your queries and get real results back with the contact details removed. The search found only seven candidates, four of which were in “London”, so if I used the service for a week (assuming no new candidates appeared in that time) that would be a cost of £35 per CV!

The job search returned a respectable number of positions, but was priced at a similar level to the other services and wasn’t especially tempting. In their favour they did have regionally-restricted packages, where the searches and listings were for a smaller area. This makes a lot of sense – many smaller employers (like me) won’t be willing to interview staff from the other side of the UK, and it wastes my time and theirs.

The only criticism of this offering from Monster is that the regions are fixed areas: so “South East” includes people from Essex, as well as Guildford, but not someone from Crawley. They should have had regional listings based on a radius from the job location, which would make more sense.

JobServe

Founded in the earliest days of the UK Internet, they used to dominate the UK IT recruitment industry, but everyone else has caught up a bit. The job listing test returned almost 700 hits, although the distance was set for 25 miles, which brings much of central London into scope. I changed it to 15 (there is no 20 option) and this dropped to just 153.

The CV search facility is not competitive at £198 for a month (and 200 CV downloads), but there is no online test-drive facility. I was contacted by a sales manager from JobServe who offered to do a demo, which involved using a remote-client view whilst he talked me through the demonstration. My CV search test returned 55 CVs, which is a more reasonable £3.60 per CV.

JobSite

I included this site as I had the impression they had scale on their side. Their job listing prices were similar to Monster and JobServe, but the number of job matches was pretty derisory at only 61, suggesting their database isn’t very large.

The CV search facility was laughably over-priced: £600 for 24 hours access. How many CVs are in the database? No idea. CV search test-drive? Nope. Good luck with selling that, JobSite.

Jobs

Not a good start for this company: the basic job prices are available on the site but access to anything else as an employer requires you to register. So one set of fake login details later, I gain access.

Initial impressions are that this is a company focused on payment and your details. The services they offer (job listings, CV search etc.) are the fourth and fifth items on the employers menu. Clicking the first item, “My Account”, seems to be broken – until I realise it’s not a tab or a link, it’s a title for the section. I also notice that the site is classic ASP, which suggests this is a company that has not invested in its website infrastructure in the last ten years.

The job search test returned 20 results, none of which were in or near Guildford as requested. Then I noticed the results were actually coming from a different site: Indeed – an aggregator. This suggests the level of service offered by Jobs is very basic: I did the same search directly on the Indeed site and got far more jobs than the Jobs.co.uk search, which suggests they’re not even using it properly.

The CV search function only works if you’ve purchased it. I can’t tell if it’s any good, or how many CVs I can search, or do a trial run. I have to purchase a subscription first. Kind of like meeting a bloke in a pub who wants to sell you a mobile phone. Except the phone is in an unmarked box, and he won’t let you see what you’re buying until you’ve paid him for it. Do you think I’m going to take a punt for £… for… er – how much is it?

And there is the second problem.. there is no mention of CV search in the purchase options. There is something called “Full Internet Sweep” for £499.. is that it? I guess it must be, since there’s little else in this site.

Given the job search just subcontracts the list to Indeed, one wonders if the “CV search” would do the same – a service you can get for free from Indeed directly? This site is a thin veneer of not-very-much, trying to trade on an easily-remembered domain name. It’s no wonder the recruitment industry has such a bad name.

Indeed

I am embarrassed to say I’d not heard of Indeed before. I stumbled on their name from jobs.co.uk (so at least that site had one redeeming feature!). Their model is so different and refreshing that it took me a while to get it. They are “Google for Jobs”.

If you think about it, it’s a totally logical concept. Google helps you find web pages, and Indeed helps you find jobs (if you’re an employee) or find staff (if you’re an employer). Their business model is the same too, with job adverts charged on a pay-per-click basis. However there is an option to turn off the “sponsor” option so it’s possible to list jobs for free.

The test job search produced an excellent 971 results. This is partly because Indeed acts as an aggregator for jobs from other locations and sites, so it gives a wider choice – a very good reason for a potential employee to use it over the competition.

The CV search seems to be totally free, although I’ve not tried contacting anyone through it. It took a Google search to determine that contacting people via the CV service costs about $1 per CV. The CV search provides so much information that finding the person with a Google search for their name wasn’t that difficult to do: frequently these people also appeared on LinkedIn, Google+ and Facebook.

However, at a cost of $1 per CV I’m happy to pay for the service and use the online contact facility. I can also have new CVs that meet my criteria sent to me, at no extra charge unless I choose to contact the person.

I have only two criticisms of Indeed: first, that they make it a bit difficult to work out what their services cost for the CV-resume service, and secondly that they need to spread the word of their existence more!

Conclusion

It’s a bit of a no-brainer really. The days are numbered for the “classified-ad” listing model of recruitment: why pay £199 for a 30-day listing when possibly no-one will search for or see your vacancy? The CV search products on most sites are overpriced rip-offs, with only JobServe’s offering looking like it would be worth a try.

Excel Art – The New ASCII Art?

Reading my daily CodeProject newsletter today, I came across this article about a Japanese artist who uses the graphic editing tools in Excel to create art.

Like many commenters, I was disappointed that he hadn’t actually used Excel cells to create the image in the style of Pointillism.

If you know C# this is not a problem, and a few minutes later we have this code:

            // gembox spreadsheet key set
            GemBox.Spreadsheet.SpreadsheetInfo.SetLicense("[redacted]");

            // create the Excel output file
            var xl = new GemBox.Spreadsheet.ExcelFile();
            var ws = xl.Worksheets.Add("IBMPC");

            const string imageResource = "ExcelPaint.IBM PC 5150.jpg";

            var thisEXE = System.Reflection.Assembly.GetExecutingAssembly();
            using (var resource = thisEXE.GetManifestResourceStream(imageResource)) 
            {
                var img = new Bitmap(resource);
                for (int x = 0; x < img.Width; x++)
                {
                    for (int y = 0; y < img.Height; y++)
                    {
                        // get pixel colour from image
                        Color pixel = img.GetPixel(x, y);

                        // write to Excel cell
                        ws.Cells[y, x].Style.FillPattern.PatternForegroundColor = pixel;
                        ws.Cells[y, x].Style.FillPattern.PatternStyle = GemBox.Spreadsheet.FillPatternStyle.Solid;
                    }
                }
            }

            xl.SaveXlsx("C:\\IBMPC5050.xlsx");

I use the excellent GemBox SpreadSheet library for my Excel interaction. The code should not really need any explanation: read a pixel, write a “pixel”.

The resulting output is an Excel Spreadsheet, as shown below. Initially the aspect-ratio of the image was distorted as the cells were not square. I manually resized the column widths to 2, and it looks correct.
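If I recall the GemBox API correctly, the manual step could be scripted too – column width is set in 1/256ths of a character width, so something along these lines (untested, and it would need to sit inside the using block where img is in scope) should square the cells up:

    // make every used column narrow so the cells are roughly square
    for (int x = 0; x < img.Width; x++)
        ws.Columns[x].Width = 2 * 256;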

[Screenshot: the photo rendered as coloured Excel cells]

You can download the resulting output yourself here. I have to admit this is not going to win awards for image compression techniques: the original JPG was only 43KB and the Excel sheet was 1291KB. But hey, it’s 100% cooler.

For reference, this is the original image: [Photo: the IBM PC 5150 test picture]

In case you’re wondering why an old PC as a test picture: it was the first suitable image I found, from my collection of old computer photos. It’s also a nod to the ASCII art of days gone by.