Knockout-Validation 101

As a user of KnockoutJS you quickly realise that the one thing you’re going to need is a way to validate user input. Fortunately the excellent Knockout-Validation library exists to fill that gap.

While this library does have a getting started page and some documentation, those pages aren’t particularly clear (at least to me!) about exactly how you can use the library and how it works. So this post summarises my testing and experimentation. For brevity I will refer to the Knockout-Validation library as “KOV”.


To use KOV you first need both the Knockout and KOV scripts loaded in the page (in that order). KOV depends on Knockout and will complain if it’s not present.

Very Basic Introduction

KOV allows you to define validation rules that apply to a Knockout observable property, an observable array, and even Knockout computed properties. You do this by using the extend method on an observable to define one or more rules. For example:

var viewModel = {
    name: ko.observable().extend({ required: true })
};

This is the simplest example. We have specified a rule that the name field is required.

What Actually Happened?

KOV extends our observable to apply this rule, and adds the following sub-properties to name:

Property       | Type             | Purpose
error          | string property  | the error message for this field
rules()        | observable array | the list of rules for the property
isModified()   | observable       | set if the value has changed since original load
isValidating() | computed         | set if validation has started but not finished (e.g. it may be performing an asynchronous AJAX call)
isValid()      | computed         | evaluates the rules and returns true if they all pass

The error message is set using the default message for each rule. For example, the required extender has a default message of “This field is required.”. You can override this message by specifying different options when you call .extend().

var viewModel = {
    name: ko.observable().extend({
        required: {
            message: 'Required!',
            params: true
        }
    })
};

Triggering Validation

The validation rules for an observable are triggered when an observable value is modified. In most cases for controls, this is when an input control loses focus (e.g. a textbox) or a related control is clicked (e.g. a radio button or listbox).

The validation rules are checked, and the first one to fail generates the error message.
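To make the ordering concrete, here is a simplified, stand-alone sketch of “first failing rule wins”. This is illustrative only – it is not KOV’s actual source, and the rule-object format is invented for the example:

```javascript
// Illustrative only: a miniature version of "the first failing rule
// produces the error message". Not KOV's internal implementation.
function firstError(value, rules) {
    for (var i = 0; i < rules.length; i++) {
        if (!rules[i].test(value)) {
            return rules[i].message; // first failure wins; later rules are not reported
        }
    }
    return null; // all rules passed
}

var rules = [
    { test: function (v) { return v !== undefined && v !== null && v !== ''; },
      message: 'This field is required.' },
    { test: function (v) { return !v || v.length <= 10; },
      message: 'Please enter no more than 10 characters.' }
];

firstError('', rules);      // 'This field is required.'
firstError('hello', rules); // null (valid)
```

An empty value never reaches the length rule: the required rule fails first and supplies the message, which matches the behaviour described above.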

Note: isModified(), isValid() and isValidating() are all observable or computed-observable values. That means you can bind to these values just like any other KO observable value. For example, you could bind the css for an input field to the isModified sub-property to change colour when a field is changed.
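As a concrete illustration, the markup might look like this (‘field-changed’ is an assumed CSS class name, not something KOV provides):

```html
<!-- css binding driven by the isModified sub-property -->
<input type="text" data-bind="value: name, css: { 'field-changed': name.isModified() }" />
```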

Out-of-the-Box Behaviours

In our example above we have just one field, but some useful behaviours are added by default. The first is that the validation message is appended after the input field when the user leaves the textbox.

Message Options

For simple cases this may be fine. However, in most cases you will want to control KOV’s behaviour using the configuration options.

You specify these using ko.validation.init(). Note these are global options, so they apply to all validations on the page.

To turn off the message insertion, set insertMessages to false (it is true by default).

If you keep the messages, a CSS class is applied when the error is shown. The default class is ‘validationMessage’; you can change it with the errorMessageClass setting.

If you want fine-grained control there is also a messageTemplate setting, where you can provide the ID of a template to use.

The last option is messagesOnModified, which is true by default. This prevents any messages being shown until the user has modified a value. If set to false, all the errors are shown when the form is created, before the user has a chance to do anything. Normally this is bad practice, but there are cases when it might be useful (e.g. returning to a previously edited screen which has errors).
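Putting the message options together, a global configuration call might look like this (the values shown are the defaults described above, except the template ID, which is a made-up example):

```javascript
ko.validation.init({
    insertMessages: true,                 // set false to stop messages being appended
    errorMessageClass: 'validationMessage',
    messageTemplate: 'myMessageTemplate', // made-up ID of a template
    messagesOnModified: true              // set false to show errors before any edits
});
```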

Element Options

Another capability (not enabled by default) is that KOV can apply a CSS class to the invalid input element. This is controlled by the options decorateElement and errorElementClass. Setting decorateElement to true makes KOV add the errorElementClass to the element’s class list. I use this to bind with a Bootstrap form to set the entire control-group’s CSS to the error state, so that it works in the Bootstrap manner.
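As a sketch of the element options for the Bootstrap case (the ‘error’ class here is an assumption about your stylesheet – it is Bootstrap 2’s error state):

```javascript
ko.validation.init({
    decorateElement: true,     // off by default
    errorElementClass: 'error' // class added to the failing input element
});
```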

However, note the default behaviour is to decorate just the element that failed, whereas Bootstrap forms often have a series of DIVs containing the controls, so you may need extra work to decorate the containing control-group rather than the input itself.

Validating the Whole ViewModel

Validating single fields is useful, but most ViewModels have more than one field being validated.

This is where the KOV function validatedObservable is useful. You call ko.validatedObservable(…) and pass in your own view model; the viewmodel has the following methods added:

Added               | Type                | Purpose
errors()            | computed observable | a list of any errors on the current viewmodel
isAnyMessageShown() | function            | returns true if any messages are shown
isValid()           | function            | returns false if any validation errors are set

Important: only the errors() method is a Knockout computed observable; isAnyMessageShown and isValid are plain functions. However, the errors() method does not seem to work with .subscribe().
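A minimal usage sketch (the field names are invented):

```javascript
var viewModel = ko.validatedObservable({
    name: ko.observable().extend({ required: true }),
    email: ko.observable().extend({ required: true })
});

viewModel.isValid(); // plain function: false while either field fails
viewModel.errors();  // computed observable: the current error messages
```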

Validation Bindings

If you don’t like the validation messages appended after the input, you can create your own message placement point and use the validationMessage binding handler: see Validation-Bindings.

This displays the validation messages in the content of the bound control. It also hides the control when no messages exist: this means it can be bound to a formatted control (e.g. a div with a coloured background, or a popup control), which will disappear when there is no error.
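For example (the alert div and the ‘name’ field are assumptions for illustration):

```html
<input type="text" data-bind="value: name" />
<!-- shown only while 'name' has a validation error -->
<div class="alert" data-bind="validationMessage: name"></div>
```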

How KOV Works and Things To Watch

KOV overrides the applyBindings and the value binding handler. It adds its own validation binding handler, then calls the original value binding. This means when an observable is changed KOV knows about it and can trigger the validation.

One side effect of this is that custom binding handlers will not trigger validation on-screen, and you need to consider triggering the validation yourself.


String.Join in .NET 4

Are you a .NET developer? Got a list of strings you want to join? This is what I, and almost every other developer I’ve seen, do:

var values = new List<string>();
// add some values...
string csv = string.Join(",", values.ToArray());

Turns out someone in the .NET framework team thought this was a bit dumb, so they quietly added a new overload in version 4.0 of the framework. This overload takes an IEnumerable<string> so now you can write:

var values = new List<string>();
// add some values...
string csv = string.Join(",", values);

Just thought I’d blog that in case you’re doing the same thing!

ASP.NET Razor Views in Class Libraries

One of my biggest complaints about WebForms in ASP.NET was that you couldn’t easily put interface code such as .ascx controls into DLLs and then use these in an ASP.NET website.

This meant that virtually all the interface code had to reside in the ASP.NET web project – which kills reuse and modularity. You end up with monolithic ASP.NET web applications containing all the UI code.

It was possible to create .ascx controls in DLLs, but it was a horrible kludge.

Enter the Razor

With the introduction of the Razor view engine we can now create views in class libraries, with Intellisense, compilation and design-time support. However, it’s not built-in functionality and takes some work to set up. There are two basic approaches, and you need to decide which is appropriate for you.

I’ve worked on this previously with MVC v3 and MVC v4, but I’ve decided to write this blog post purely from the MVC v5 and Razor v3 perspective. If you’re interested in older versions I recommend the following:

Full-Fat MVC Project

The easiest way to do this is to create the class library using the ASP.NET Web Application project type and select the MVC template. This creates a web project which can be compiled down to a single pre-compiled DLL. You can then reference this in your main ASP.NET web application.

Indeed if you follow this approach you can use the routes, controllers and views as if they were part of the main web app.

However, if you’re looking to do something like create HTML email templates (as I was) you get a lot of overhead and stuff you don’t need thrown into the project (Controllers, Models, Scripts etc.).

Razor-Thin Class Library

      The alternative approach I’ve investigated is to start with a basic C# class library for .NET 4.5 and then add just enough functionality to make Razor views work in the editor. I can then include the Razor content as an embedded resource and combine with a library like RazorEngine or RazorGenerator to run the view in code and generate the HTML.

      Edit: as a helpful resource I’ve also created a GitHub project which is a working example, and has a commit for each of the changes made.

      The steps required to make this work:

      1. Create a new C# project using the Class Library template, for framework version 4.5
      2. Using Nuget Package manager add the following packages:
        • Microsoft.AspNet.Razor
        • Microsoft.AspNet.WebPages
        • Microsoft.Web.Infrastructure
        • Microsoft.AspNet.Mvc

        I also added in the .NET framework libraries for System.Web.Abstractions (4.0) and System.Data.Linq (4.0)

      3. Open the .csproj file in a text editor, and add (or edit) the ProjectTypeGuids setting to the following:

        If the setting is not present in your project file, then add it after the <ProjectGuid> setting. This makes the MVC Razor views appear in the Add New Item dialog box.

      4. Add a web.config file to the project, with the content required to support views. The Visual Studio editor expects to see this file to render Razor Views. I’ve attached the one I created at the end of this post. This was based on the web.config that MVC creates in the Views folder of an MVC project, but has been edited to get intellisense working.

      5. My final step is to save everything, then close and re-open the solution. This is to ensure the VS IDE is properly aware that the project supports Razor pages and loads the correct intellisense. It’s possible it might work without this step, but if you have problems, give it a try.
      6. Having done this, Intellisense works for the model and inline code. You can specify @model <yourclass> at the top and get @Model.<someproperty> intellisense. You can also use all the Razor code functionality. However, HtmlHelper extension methods such as @Html.Raw(..), although recognised in the editor, won’t work in RazorEngine, for example, as the Html helper isn’t present in its rendering engine by default. Read this article to find out more.

      web.config file:

      <?xml version="1.0"?>
      <configuration>
        <configSections>
          <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
            <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
            <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
          </sectionGroup>
        </configSections>
        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <pages pageBaseType="System.Web.Mvc.WebViewPage">
            <namespaces>
              <add namespace="System.Web.Mvc" />
              <add namespace="System.Web.Mvc.Ajax" />
              <add namespace="System.Web.Mvc.Html" />
              <add namespace="yourDLLhere" />
            </namespaces>
          </pages>
        </system.web.webPages.razor>
        <appSettings>
          <add key="webpages:Version" value=""/>
          <add key="webpages:Enabled" value="false" />
        </appSettings>
        <system.web>
          <compilation targetFramework="4.5">
            <assemblies>
              <add assembly="System.Web.Abstractions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
              <add assembly="System.Web.Routing, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
              <add assembly="System.Data.Linq, Version=, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
              <add assembly="System.Web.Mvc, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            </assemblies>
          </compilation>
          <httpRuntime targetFramework="4.5" />
        </system.web>
        <system.webServer>
          <handlers>
            <remove name="BlockViewHandler"/>
            <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
          </handlers>
        </system.webServer>
      </configuration>

      ASP.NET MVC attribute routing

      As we add more pages to the ASP.NET MVC section of our application, I’ve found that the MVC routing table in version 4 is somewhat constraining. I’m not alone: it seems others also find it awkward for more intelligent handling of routes.


      Our site has customers, and we have different services for these customers. I’ve partitioned everything customer-related into its own area, so that the base URL for it is /Customer/

      We use integer account IDs from the database as our primary key, so logically if I want to refer to a specific customer I prefer the URL


      as our root. The standard MVC default would be


      Ugh. That just sounds long-winded and over-complicated.

      So for example, the home page should be


      The default ASP.NET route won’t work for this of course: /Customer/{controller}/{action}/{id} will interpret this as a controller called 123 and an action Home, with no ID. Easy enough to change the routing as we have until now:


      First problem with this approach is that customerID is now a required value: what if I want to create a search page, to find a customer? I don’t have a customerID in this context, but it’s still customer related. What I’d like is a URL like


      While this is possible in the MVC4 routing engine, it leads to a longer and longer set of routes registered in our RegisterRoutes method. What we actually want to specify is some kind of pattern matching à la regex, e.g.


      Attribute Based Routing

      Alas, the current MVC4 routing engine does not support this. However, there is an alternative that the ASP.NET team seems to be adopting for MVC 5: attribute-based routing (ABR).

      Installing is a breeze: install the NuGet package, and it adds a setup file to the App_Start folder which loads the configuration. That’s it.

      Each controller/action can then be decorated with attributes which define the routing. Using my example above, all my controllers live in the Customer area. However, I generally want to prefix each route/action with the customerID. In ABR, I can define the area and a prefix at the controller level:

          // area and prefix defined once for the whole controller
          [RouteArea("Customer")]
          [RoutePrefix("{customerID}")]
          public class IssueController : Controller

      In the above sample, the URL


      is now the base URL for all actions on this controller. ABR will extract and parse the customer ID as an integer from the URL, and I can use it directly in my action methods, e.g.

              public ActionResult List(int CustomerID)
              {
                  return View();
              }

      The List action works with a blank or “List” URL part, so either of the following routes will map to it:



      No More IDs

      When I refer to a specific Issue by ID, I can avoid the generic ‘{ID}’ we get in the old routing engine:

              // GET: /Customer/{CustomerID}/Issue/{IssueID}
              public ActionResult Index(int CustomerID, int IssueID)

      Now let’s say I want my search page, which does not have a customer ID:


      We can define a Search controller by dropping the CustomerID part:

          public class SearchController : Controller

      This will map to the URL


      I won’t go into the ABR specification and all the options it provides, as the library is well documented and logical.

      A big thanks to Tim McCall for his work!

      Async File Uploads with MVC, WebAPI and Bootstrap

      This is a ‘how-to’ aide-mémoire on my research into file uploading within MVC/client applications. Our application has a form with a couple of fields, and we want to allow the user to upload a file as part of the form submission.

      Uploading a file in the old WebForm days was pretty simple. But now we want asynchronous uploads sent via WebAPI, so we need to handle things better.

      My requirements are as follows:

      1. We don’t want a traditional post-and-redirect model. It must use an async upload method
      2. We need the whole form (file+form fields) in one go: file-only uploads are of no use in this case
      3. We might want KnockoutJS integration if appropriate.
      4. We don’t want the uploaded data stored in files on the webserver: we want memory streams written to our database object
      5. Will be on a Bootstrap page so ideally needs to blend in (although Bootstrap 2.3 does not itself render these controls)


      Initial Investigation

      There are two parts to the operation: server-side and client-side.

      A bit of Googling for the server-side gets us to a blog post by Henrik Nielsen (from 2012):

      Asynchronous File Upload using ASP.NET Web API. This ticks two of our boxes:  first it uses WebAPI to handle the post, and secondly it does so asynchronously. It’s not Knockout-bound but that’s not such a big issue at this stage. The example he provides runs as a console app (!) self-hosted web server, so it’s a bit.. interesting.

      Except it won’t work. It’s an old beta-version sample with no completed version, it’s missing a lot of using directives, and.. the list goes on.

      There are other versions of this around and most of them either don’t work or store the uploads to files, which is contrary to requirements, and not something I’d ever do. I needed something that used memory or streams.


      I also found a Knockout solution in Khayrov’s GitHub project. This is a great tool using Knockout to upload files on the client – and, in theory, then post onward to the server. The only problem is that it uses the File API, which won’t work on most older browsers. Strictly speaking KO isn’t needed for posting, but it helps if the form has behaviours on it.

      Back to WebAPI and Memory-only

      Much more Googling and I found MultipartMemoryStreamProvider, which reads the form data into memory. One issue here is that it treats normal form values and files the same, so you have to differentiate. A quick Google suggested a solution which looks much better and uses async methods. I amended the code to make it easier to get the files out directly, by parsing the HttpContent list into a list of files. I have attached a source listing at the end of this article, along with a sample Post method that uses it.

      Client Side

      That seems to take care of the server side. The client side for testing was a standard HTML form and a submit method, which is not very async. I needed to use a jQuery .ajax call to post the form. Posting forms with file uploads via AJAX isn’t supported directly by .ajax, but after several experiments I settled on the jQuery.Form plugin. This seems to work very well on the versions I tested: IE8, IE10, Chrome v28, and Firefox v22.
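A sketch of the client side using the plugin’s ajaxSubmit method (the form ID and URL here are assumptions for illustration):

```javascript
$('#uploadForm').submit(function () {
    $(this).ajaxSubmit({
        url: '/api/upload', // the WebAPI Post method
        success: function (response) {
            // server reply, e.g. the "uploaded: ..." summary string
        },
        error: function () {
            // fires on failure, though the status code is hard to recover
        }
    });
    return false; // suppress the normal (synchronous) form post
});
```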

      I did have a couple of errors on Firefox, but that was because I tried to upload a 22MB file as a test: IIS was set to accept the default maximum upload size of 4096K, which is only 4MB. Setting maxRequestLength to a higher value fixed this:

          <httpRuntime maxRequestLength="32768" />

      The only thing the form plugin does not do well is handle errors: there seems to be no error code available in the .ajax model. We can handle an error event from the jQuery AJAX call, but I wasn’t able to determine which error code was being returned.

      Binding the Form Values to the Model

      As we are processing the multi-part form ourselves, we get a list of files and a list of form values. This means we have to process the values manually. We could extract them one-by-one using the name, but we should really use a model binder.

      It does not seem to be possible to use the DefaultModelBinder from MVC, as its constructors take controller contexts, so I added a simple binder to the provider. This uses a simple reflection-based name match and set operation.
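The name-match-and-set idea is simple enough to sketch. Here it is in JavaScript for brevity (the real binder does this with .NET reflection against the model type; the function and field names are invented for illustration):

```javascript
// Hypothetical illustration of name-match-and-set model binding.
// The C# version reflects over the model's properties; this shows the shape of the idea.
function bindModel(model, formValues) {
    Object.keys(formValues).forEach(function (name) {
        // only copy values whose names match an existing model property
        if (Object.prototype.hasOwnProperty.call(model, name)) {
            model[name] = formValues[name];
        }
    });
    return model;
}

var bound = bindModel({ title: '', notes: '' },
                      { title: 'My upload', notes: 'test', rogueField: 'ignored' });
// bound.title === 'My upload'; rogueField was not copied across
```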


      Source code of the amended WebAPI post and supporting classes:

      using System;
      using System.Collections.Generic;
      using System.Collections.ObjectModel;
      using System.Collections.Specialized;
      using System.IO;
      using System.Linq;
      using System.Net;
      using System.Net.Http;
      using System.Net.Http.Headers;
      using System.Threading.Tasks;
      using System.Web.Http;
      using System.Web.Mvc;

      namespace MyWebApp.Api
      {
          public class UploadController : ApiController
          {
              /// <summary>
              /// Accept form post with files
              /// </summary>
              /// <returns></returns>
              public async Task<string> Post()
              {
                  if (!Request.Content.IsMimeMultipartContent("form-data"))
                      throw new HttpResponseException(HttpStatusCode.BadRequest);

                  var provider = await Request.Content.ReadAsMultipartAsync<InMemoryMultipartFormDataStreamProvider>(new InMemoryMultipartFormDataStreamProvider());

                  // access form data
                  FormCollection formData = provider.FormData;

                  // access files
                  IList<HttpContent> fileContentList = provider.Files;
                  var fileDataList = provider.GetFiles();

                  // get the files
                  var files = await fileDataList;

                  // formulate the response
                  if (files.Any())
                  {
                      return string.Join("; ",
                          (from f in files
                           select string.Format("uploaded: {0} ({1} bytes = '{2}')", f.FileName, f.Size, f.ContentType)).ToArray());
                  }
                  return "No files attached";
              }

              public class InMemoryMultipartFormDataStreamProvider : MultipartStreamProvider
              {
                  private FormCollection _formData = new FormCollection();
                  private List<HttpContent> _fileContents = new List<HttpContent>();

                  // Set of indexes of which HttpContents we designate as form data
                  private Collection<bool> _isFormData = new Collection<bool>();

                  /// <summary>
                  /// Gets a <see cref="NameValueCollection"/> of form data passed as part of the multipart form data.
                  /// </summary>
                  public FormCollection FormData
                  {
                      get { return _formData; }
                  }

                  /// <summary>
                  /// Gets list of <see cref="HttpContent"/>s which contain uploaded files as in-memory representation.
                  /// </summary>
                  public List<HttpContent> Files
                  {
                      get { return _fileContents; }
                  }

                  /// <summary>
                  /// Convert list of HttpContent items to FileData class task
                  /// </summary>
                  /// <returns></returns>
                  public async Task<FileData[]> GetFiles()
                  {
                      return await Task.WhenAll(Files.Select(f => FileData.ReadFile(f)));
                  }

                  public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
                  {
                      // For form data, Content-Disposition header is a requirement
                      ContentDispositionHeaderValue contentDisposition = headers.ContentDisposition;
                      if (contentDisposition != null)
                      {
                          // Parts without a filename are form fields: we will post process these as form data
                          _isFormData.Add(String.IsNullOrEmpty(contentDisposition.FileName));
                          return new MemoryStream();
                      }

                      // If no Content-Disposition header was present.
                      throw new InvalidOperationException(string.Format("Did not find required '{0}' header field in MIME multipart body part.", "Content-Disposition"));
                  }

                  /// <summary>
                  /// Read the non-file contents as form data.
                  /// </summary>
                  /// <returns></returns>
                  public override async Task ExecutePostProcessingAsync()
                  {
                      // Find instances of non-file HttpContents and read them asynchronously
                      // to get the string content and then add that as form data
                      for (int index = 0; index < Contents.Count; index++)
                      {
                          if (_isFormData[index])
                          {
                              HttpContent formContent = Contents[index];

                              // Extract name from Content-Disposition header. We know from earlier that the header is present.
                              ContentDispositionHeaderValue contentDisposition = formContent.Headers.ContentDisposition;
                              string formFieldName = UnquoteToken(contentDisposition.Name) ?? String.Empty;

                              // Read the contents as string data and add to form data
                              string formFieldValue = await formContent.ReadAsStringAsync();
                              FormData.Add(formFieldName, formFieldValue);
                          }
                          else
                          {
                              // Everything else is treated as an uploaded file
                              _fileContents.Add(Contents[index]);
                          }
                      }
                  }

                  /// <summary>
                  /// Remove bounding quotes on a token if present
                  /// </summary>
                  /// <param name="token">Token to unquote.</param>
                  /// <returns>Unquoted token.</returns>
                  private static string UnquoteToken(string token)
                  {
                      if (String.IsNullOrWhiteSpace(token))
                          return token;
                      if (token.StartsWith("\"", StringComparison.Ordinal) && token.EndsWith("\"", StringComparison.Ordinal) && token.Length > 1)
                          return token.Substring(1, token.Length - 2);
                      return token;
                  }
              }

              /// <summary>
              /// Class to store attached file info
              /// </summary>
              public class FileData
              {
                  public string FileName { get; set; }
                  public string ContentType { get; set; }
                  public byte[] Data { get; set; }
                  public long Size { get { return (Data != null ? Data.LongLength : 0L); } }

                  /// <summary>
                  /// Create a FileData from HttpContent
                  /// </summary>
                  /// <param name="file"></param>
                  /// <returns></returns>
                  public static async Task<FileData> ReadFile(HttpContent file)
                  {
                      var data = await file.ReadAsByteArrayAsync();
                      var result = new FileData()
                      {
                          FileName = FixFilename(file.Headers.ContentDisposition.FileName),
                          ContentType = file.Headers.ContentType.ToString(),
                          Data = data
                      };
                      return result;
                  }

                  /// <summary>
                  /// Amend filenames to remove surrounding quotes and remove path from IE
                  /// </summary>
                  /// <param name="original"></param>
                  /// <returns></returns>
                  private static string FixFilename(string original)
                  {
                      var result = original.Trim();

                      // remove leading and trailing quotes
                      if (result.StartsWith("\""))
                          result = result.TrimStart('"').TrimEnd('"');

                      // remove full path versions (IE supplies the full client path)
                      if (result.Contains("\\"))
                          // parse out path
                          result = new System.IO.FileInfo(result).Name;

                      return result;
                  }
              }
          }
      }

      IT Developer Search – Job Site Comparison for Employers

      Our company had need of another developer (recession.. what recession?) so I decided to review the current players and try to get some comparisons before I posted my advert, and search CVs. Here are my research notes. I’ve tried to compare like-with-like where that is possible. This is not intended to be an exhaustive list, I tried to stick with the bigger on-line players.

      Site       | 1 month advert | 1 week advert              | CV search                     | sample job search [1] | sample CV search [2]       | Score (out of 5)
      Monster UK | £199           | (only as part of packages) | £799 for month, £249 for week | 278                   | 7 candidates (4 in London) | –
      JobServe   | £299           | £120                       | £200/month, 200 CV views      | 698                   | 55                         | –
      JobSite    | £198 [3]       | £99                        | £600 (24hrs)                  | 61                    | n/a                        | –
      TotalJobs  | £99            | n/a                        | £250 (week)                   | 11                    | n/a                        | –
      Jobs       | £199           | n/a                        | £499?                         | 20 (see notes)        | n/a                        | –
      Indeed     | Free/CPC       | Free/CPC                   | Free/$1 per CV                | 971                   | about 350                  | –

      All prices quoted correct at time of research (24th June 2013) and are exclusive of VAT.

      [1] sample job search is “web developer” with location “Guildford, Surrey”. If a distance is specified we use 20 miles.

      [2] CV search price for one month access (if available)

      [3] JobSite advert was for two weeks, so the £99 cost is doubled to match a month.


      Monster UK
      I had recruited one person through a recruitment consultant, who it seems used Monster as a CV source and sent me candidates from there, so it was a logical place to start.

      There are a lot of different options including discount combined advert + CV searches, and also geographic filters with discounted pricing, which looked good. After a bit of searching I found their CV search test-drive facility, which allows you to run your queries and get real results back with the contact details removed. The search found only seven candidates, four of which were in “London”, so if I used the service for a week (assuming no new candidates appeared in that time) that would be a cost of £35 per CV!

      The job search returned a respectable number of positions, but was priced at a similar level to the other services and wasn’t especially tempting. In their favour they did have regionally-restricted packages, where the searches and listings were for a smaller area. This makes a lot of sense – many smaller employers (like me) won’t be willing to interview staff from the other side of the UK, and it wastes my time and theirs.

      The only criticism of this offering from Monster is that the regions are fixed areas: so “South East” includes people from Essex, as well as Guildford, but not someone from Crawley. They should have had regional listings based on a radius from the job location, which would make more sense.


      JobServe
      Founded in the earliest days of the UK Internet, they used to dominate the UK IT recruitment industry, but everyone else has caught up a bit. The job listing test returned almost 700 hits, although the distance was set to 25 miles, which brings much of central London into scope. I changed it to 15 (there is no 20 option) and the count dropped to just 153.

      The CV search facility is not competitive at £198 for a month (and 200 CV downloads), but there is no online test-drive facility. I was contacted by a sales manager from JobServe who offered to do a demo, which involved using a remote-client view whilst he talked me through the demonstration. My CV search test returned 55 CVs, which is a more reasonable £3.60 per CV.


      JobSite
      I included this site as I had the impression they had scale on their side. Their job listing prices were similar to Monster and JobServe, but the number of job matches was pretty derisory at only 61, suggesting their database isn’t very large.

      The CV search facility was laughably over-priced: £600 for 24 hours access. How many CVs are in the database? No idea. CV search test-drive? Nope. Good luck with selling that, JobSite.


      Jobs

      Not a good start for this company: the basic job prices are available on the site, but access to anything else as an employer requires you to register. So, one set of fake login details later, I gain access.

      Initial impressions are that this is a company focused on payment and your details. The services they actually offer (job listings, CV search etc.) are the fourth and fifth items on the employers menu. Clicking the first item, “My Account”, seems to be broken – until I realise it’s not a tab or a link: it’s a title for the section. I also notice that the site is classic ASP, which suggests this is a company that has not invested in its website infrastructure in the last ten years.

      The job search test returned 20 results, none of which were in or near Guildford as requested. Then I noticed the results were actually coming from a different site: Indeed, an aggregator. This suggests the level of service being offered by Jobs is very basic. I did the same search directly on the Indeed site and got far more jobs than the Jobs search returned, which suggests they’re not even using the aggregator properly.

      The CV search function only works if you’ve purchased it. I can’t tell if it’s any good, or how many CVs I can search, or do a trial run: I have to purchase a subscription first. It’s rather like meeting a bloke in a pub who wants to sell you a mobile phone, except the phone is in an unmarked box and he won’t let you see what you’re buying until you’ve paid him for it. Do you think I’m going to take a punt for £… for… er – how much is it?

      And there is the second problem: there is no mention of CV search in the purchase options. There is something called “Full Internet Sweep” for £499… is that it? I guess it must be, since there’s little else on this site.

      Given that the job search just subcontracts the listing to Indeed, one wonders if the “CV search” would do the same – a service you can get for free from Indeed directly. This site is a thin veneer of not-very-much, trying to trade on an easily-remembered domain name. It’s no wonder the recruitment industry has such a bad name.


      Indeed

      I am embarrassed to say I’d not heard of Indeed before; I stumbled on their name via the Jobs search results (so at least that site had one redeeming feature!). Their model is so different and refreshing that it took me a while to get it. They are “Google for Jobs”.

      If you think about it, it’s a totally logical concept: Google helps you find web pages, and Indeed helps you find jobs (if you’re an employee) or staff (if you’re an employer). Their business model is the same too, with job adverts charged on a pay-per-click basis. However, there is an option to turn off the “sponsor” setting, so it’s possible to list jobs for free.

      The test search for jobs produced an excellent 971 results. This is partly because Indeed acts as an aggregator for jobs from other sites, so it gives a wider choice – a very good reason for a potential employee to use it over the competition.

      The CV search itself seems to be totally free, although I’ve not tried contacting anyone through it; it took a Google search to determine that contacting people via the CV service costs about $1 per CV. The CV search provides so much information that finding a person with a Google search for their name wasn’t difficult: frequently these people also appeared on LinkedIn, Google+ and Facebook.

      However, at a cost of $1 per CV I’m happy to pay for the service and use the online contact facility. I can also have new CVs that meet my criteria sent to me, at no extra charge unless I choose to contact that person.

      I have only two criticisms of Indeed: first, they make it a bit difficult to work out what their CV/resumé service costs, and secondly, they need to spread the word of their existence more!


      Conclusion

      It’s a bit of a no-brainer really. The days are numbered for the “classified-ad” listing model for recruitment agencies: why pay £199 for a 30-day listing when possibly no-one will ever search for or see your vacancy? The CV search products on most sites are overpriced rip-offs, with only JobServe’s offering looking like it would be worth a try.

      Excel Art – The New ASCII Art?

      Reading my daily CodeProject newsletter today, I came across this article about a Japanese artist who uses the graphic editing tools in Excel to create art.

      Like many commenters, I was disappointed that he hadn’t actually used Excel cells to create the image, in the style of Pointillism.

      If you know C#, this is not a problem, and a few minutes later we have this code:

            // set the GemBox.Spreadsheet licence key ("FREE-LIMITED-KEY" selects the free tier)
            GemBox.Spreadsheet.SpreadsheetInfo.SetLicense("FREE-LIMITED-KEY");

            // create the Excel output file and a worksheet to draw on
            var xl = new GemBox.Spreadsheet.ExcelFile();
            var ws = xl.Worksheets.Add("IBMPC");

            // load the source image from an embedded resource in this assembly
            const string imageResource = "ExcelPaint.IBM PC 5150.jpg";
            var thisEXE = System.Reflection.Assembly.GetExecutingAssembly();
            using (var resource = thisEXE.GetManifestResourceStream(imageResource))
            {
                var img = new Bitmap(resource);
                for (int x = 0; x < img.Width; x++)
                {
                    for (int y = 0; y < img.Height; y++)
                    {
                        // get pixel colour from image
                        Color pixel = img.GetPixel(x, y);
                        // write it to the matching Excel cell as a solid fill
                        ws.Cells[y, x].Style.FillPattern.PatternForegroundColor = pixel;
                        ws.Cells[y, x].Style.FillPattern.PatternStyle = GemBox.Spreadsheet.FillPatternStyle.Solid;
                    }
                }
            }

            // save the finished "picture"
            xl.Save("IBMPC.xlsx");

      I use the excellent GemBox.Spreadsheet library for my Excel interaction. The code shouldn’t really need any explanation: read a pixel, write a “pixel”.

      The resulting output is an Excel spreadsheet, as shown below. Initially the aspect ratio of the image was distorted because the cells are not square; I manually resized the column widths to 2 and it looked correct.
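      The manual resize could equally be done at the end of the program. A minimal sketch, assuming GemBox’s `ExcelColumn.Width` property is measured in units of 1/256th of a character width (worth checking against the GemBox documentation for the version you are using):

```csharp
// Make the painted cells roughly square so the image keeps its aspect ratio.
// Assumption: in this GemBox version ExcelColumn.Width is in internal units
// of 1/256th of a character width, so a width of "2 characters" is 2 * 256.
for (int x = 0; x < img.Width; x++)
    ws.Columns[x].Width = 2 * 256;
```

      With that in place the saved spreadsheet should open with the correct proportions, with no manual fiddling required.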


      You can download the resulting output yourself here. I have to admit this is not going to win awards for image compression: the original JPEG was only 43KB and the Excel sheet is 1,291KB. But hey, it’s 100% cooler.

      For reference this is the original image:

      In case you’re wondering why I chose an old PC as a test picture: it was the first suitable image I found, it’s one of my collection of old computers, and it’s also a nod to the ASCII art of days gone by.