An IT Perspective on Scottish Independence

Lots of column inches on paper and blogs about what Scottish Independence would mean for the economy, ordinary people, etc. Not much I could see that looked at this from an IT perspective. If you run an IT operation in the UK (or internationally) what are the implications?

With so many unknowns about what an independent Scotland would look like, we cannot be certain about the precise impacts, but we can make educated guesses about the most likely issues.

Timing

For the purpose of this article we'll assume that the result declared on 19th September is a 'Yes'. Nothing will change immediately, since it will take a lot of political horse-trading to settle exactly how independence will happen, and of course when. The SNP is aiming for March 2016, but considering that is just 18 months after the vote, it's possible that it will be later than this.

Eighteen months might be enough for some companies to adapt, but bear in mind that the negotiations on what independence means in practice will take time. Given the size and scope of the issues, I would expect that process to take many months, which could leave as little as six months between the negotiations completing and the date of independence. You should start your planning on 20th September.

Currency

Leaving the political arguments aside, Scotland has three choices: the Euro, sharing the UK Pound, or creating a new Scottish Pound.

Euro

The Euro is politically unpalatable given the current state of the Eurozone, and the SNP itself says there is “no prospect” of using the Euro. So we can rule this one out, yes?

Maybe not. Scotland will want to join the EU, and the EU membership rules for new members require adoption of the Euro as currency, so the circle has to be squared somehow. Scotland negotiating an opt-out as part of the membership negotiations is possible, but unlikely given its small size and therefore weak position. Set against this is the fact that many of the more recent members (Romania, Bulgaria, the Czech Republic, Croatia, Poland and Hungary) have not yet joined the Euro. However, in most cases these countries will have to adopt the Euro eventually; it's more a question of when than if.

If we look at the Baltic states as an example, Estonia and Latvia joined the EU on 1st May 2004 but only adopted the Euro in 2011 and 2014 respectively.

Interestingly, the Convergence Criteria for the Euro require that the country in question has "participated in ERM II for two years without severe tensions". Assuming Scotland shares the pound, it has no way to join ERM II without the UK's agreement (which we can safely assume it is not going to get), and so it would be both required to join the Euro and prevented from doing so.

So don’t assume this will never happen. As other countries have shown, you don’t have to convert to the Euro immediately and the rules would suggest that Scotland isn’t going to be eligible anyway unless they create their own currency first.

From an IT perspective, though, the Euro is almost certainly four or five years away at the earliest, so we can discount it for now.

Sharing UK Pound

This is the best outcome from an IT viewpoint – it means all systems can operate unchanged in currency terms at least.

But how likely is it? And would it last?

Sharing the UK Pound would mean that Scotland would lose control over its currency and therefore over interest rates, and to a degree over fiscal policy. This is not a political argument, but a logical outcome of simple economics. To explain:

Scotland’s economy is about 9% of the UK’s total GDP. If the pound is shared, the decisions about interest rates and exchange rate control will have to reflect what happens in the 91% of the economy more than what happens in Scotland. Interest rates in the rest of the UK would therefore determine the interest rates in Scotland, since you can’t have different interest rates in the same currency for very long.

A currency union also directly impacts fiscal policy, as the Eurozone has so ably demonstrated, with Greece being told how much it can spend and borrow. Scotland could not set its spending and debt plans with complete independence. Agreed or informal currency sharing is quite likely immediately after independence, but I don't expect it to last in the medium term.

So from an IT perspective, you can probably assume with a high degree of confidence that the pound will remain as Scotland’s currency for the immediate future, whether by agreement or not. How long this would last is an interesting question.

Scottish Pound

The third alternative is for Scotland to create its own currency (we'll call it the Scottish Pound for now). This scenario has a similar level of impact as the Euro from an IT perspective – it's a separate currency with its own exchange rates, symbols etc. The exchange rate might be set to shadow the pound initially, but don't bank on that lasting: assume that any initial 1:1 exchange rate could change in the future.

Is this a probable outcome? No, I don't think a "Scottish Pound" is very likely. It would be a very weak currency, the cost of setting it up would be high, and it would introduce transaction costs for Scottish businesses dealing with the UK. And once Scotland re-joins the EU it would have to dump it and start again with the Euro. It would be the worst-case scenario in the short term.

Legal

Scotland already has its own legal system (with some overlap with the UK) and its laws are broadly similar to those in England and Wales, so it's likely that not much would change. Cross-border contracts might be affected if the pound isn't shared post-independence (e.g. pricing), but since I believe that to be unlikely it's not an immediate problem.

Regulation

Data protection might suddenly become an issue for some IT operations. An independent Scotland would not be part of the EU, so EU rules about storing data on EU customers outside the EU would apply to data held there. This might result in repatriation of data to the UK, although I don't believe it's a common scenario.

Other regulatory bodies might also have an impact. Our business is in telecommunications, so we fall under Ofcom's remit. A Scottish Ofcom (SofCom?) might impose different regulations and require operational changes for customers based in Scotland.

Taxation

An independent Scotland would be free to set its own tax rates and regulations. The VAT rate might change, and differences in rules on things like exemptions and applicability would mean that Scottish and UK customers need to be treated differently.

The rules on employment and employment taxes would also probably change. These would obviously impact on transactional and accounting systems.

Internet

The top-level domain (TLD) .uk is currently used by many UK businesses and organisations. A new TLD for Scotland, .scot, has already been created, and a Yes vote could see a rush of requests to grab the good names.

At present the domain is in a "sunrise phase" where trademark holders can pre-register domains, but this expires on 23rd September. So if you want a .scot domain, now is the time to register it.

Conclusion

The independence vote has been a long time coming, but it’s only now with opinion polls putting the Yes camp in front that it is being considered as a possible reality.

The fundamental problem is that none of the key issues, from the currency downwards, is clearly defined. In the event of a Yes vote, IT departments might want to start the planning process, but you don't need to panic about it.

Disclaimer

I’m not trying to argue a pro- or anti- independence case here, just trying to work out what the likely outcomes would be and the impacts on IT. I welcome any comments that point out any factual errors, or that add any key areas I’ve missed (I’m sure there are lots!).

RequireJS – Shooting Yourself in the Foot Made Simple

I’ve run across a few JavaScript dependency issues when creating pages, where you have to ensure the right libraries are loaded in the right order before the page-specific stuff is loaded.

RequireJS seems like the solution to this problem as you can define requirements in each JavaScript file. It uses the AMD pattern and TypeScript supports AMD, so it looked like the obvious choice.

I’ve tried looking at RequireJS before briefly. A quick look was enough to make me realise this wasn’t a simple implementation, so until this weekend I had not made a serious attempt to learn it. Having tried to do this over a weekend, I’ve begun to realise why it’s not more widely used.

Introduction

On the face of it, it seems that it should be simple. You load the require.js script and give it a config file in the data-main attribute, to set it up. When you write JavaScript code, you do so using AMD specifications so it can be loaded asynchronously and in the correct order.

For TypeScript users like myself this presents a big barrier to entry (not RequireJS's fault), as you need to specify the --module AMD flag on the compiler. This means all TypeScript code in the application is now compiled with AMD support, so existing non-RequireJS pages won't work: you have to migrate everything in one go.

Setting Up a Test App

I created a simple ASP.NET MVC web application to test RequireJS, and followed the (many) tutorials, which explain how to configure the baseUrl, paths and so on. I then added a script tag to the _Layout.cshtml template to load and configure RequireJS:

<script data-main="/Scripts/Config.js" src="~/Scripts/require.js"></script>

Seems simple enough, doesn't it? In fact I'd just shot myself in the foot; I didn't realise it yet.

Since my original layout loaded jQuery and Bootstrap, I needed to replicate that, so I added the following code:

    <script>
        require(['jquery', 'bootstrap'], function ($, bootstrap) {
        });
    </script>

This tells RequireJS to look for a JavaScript file called /Scripts/jquery.js, but since I loaded jQuery using NuGet the file is actually /Scripts/jquery-2.1.1.min.js.

I obviously don't want to bake that version number into every require() call, so RequireJS supports an 'aliasing' feature: in the config you can specify a path mapping, in this format:

    paths: {
        "jquery": "jquery-2.1.1.min",
        "bootstrap": "bootstrap.min"
    }

Requesting the ‘jquery’ module should now actually ensure that jquery-2.1.1.min.js is loaded.

Except that it doesn’t… most of the time.

Loading the page in both IE and Chrome with the developer tools open, I could see that most of the time require.js was trying to load /Scripts/jquery.js – what gives‽ Changing the settings and trying different combinations seemed to have no bearing on what actually happened.

I felt like a kid who’d picked up the wrong remote control for his toy car: it didn’t matter what buttons I pressed, the car seemed to ignore me.

StackOverflow to the Rescue (again)

I don’t like going to StackOverflow too quickly. Often, even just writing the question prompts enough ideas to try other solutions and you find the answer yourself.

In this case I'd already spent part of my weekend and a further two days hunting through tutorials and existing StackOverflow answers, trying to figure out WTF was going on. Finally a kind soul pointed me to the right solution:

A config/main file specified in the data-main attribute of the require.js script tag is loaded asynchronously.

To be fair, RequireJS does try to make this reasonably clear here: http://requirejs.org/docs/api.html#data-main – but it's just one little warning in the sea of information you are trying to take in.

This means that when you call require(…) you can't be certain whether the configuration has actually been loaded. When I called require(…) just after the script tag for RequireJS, the configuration had not yet been applied, so RequireJS was using a default configuration.

What RequireJS intends is that you load your modules just after setting the configuration, inside the Config.js file itself. However, this approach only works if every page runs the same scripts.

So far the only way I've found to get this to work has been to remove the data-main attribute and load Config.js via a separate script tag. At that point we can be sure the config has been applied, and we can specify dependencies and use aliases.

TypeScript files not compiling in VS2013 project

I created a C# class library project to hold some product-related code, and wanted the project to emit some JavaScript compiled from a TypeScript file. I added the .ts file and compiled the project, but no .js file was created.

The first thing to check is that the Build Action is set to "TypeScriptCompile", which it was – so why no .js output?

It seems that VS 2013 Update 2, which is supposed to incorporate TypeScript compilation, does not add the required Import element to the project file.

If you add this line

  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" />

just after the line

  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

then the TypeScript compiler will be invoked.

My thanks to my colleague Sri for figuring this one out!

Reported as a bug on Microsoft Connect: https://connect.microsoft.com/VisualStudio/feedback/details/934285/adding-typescript-files-to-vs-2013-update-2-library-does-not-compile-typescript

Using Google Drive API with C# – Part 2

Welcome to Part 2 which covers the authorization process. If you have not yet set up your Google API access, please read part 1 first.

OAuth

OAuth initially seems pretty complicated, but once you get your head around it, it's not that scary, honest!

If you followed the steps in Part 1 you should now have a Client ID and Client Secret, which are the ‘username’ and ‘password’. However, these by themselves are not going to get you access directly.

Hotel OAuth

You can think of OAuth as being a bit like a hotel, where the room is your Google Drive. To get access you need to check in at the hotel and obtain a key.

When you arrive at reception, they check your identity, and once they know who you are, they issue a card key and a PIN number to renew it. This hotel uses electronic card keys and for security reasons they stop working after an hour.

When the card stops working you have to get it re-enabled. There is a machine in the lobby where you can insert your card, enter the PIN and get the card renewed, so you don’t have to go back to the reception desk and ask for access again.

Back to OAuth

In OAuth, 'reception' is the authorisation prompt that appears in the browser the first time you attempt to use a Client ID and Client Secret. This happens when you call GoogleWebAuthorizationBroker.AuthorizeAsync for the first time.

This allows the user to validate the access being requested from your application. If the access is approved, the client receives a TokenResponse object.

The OAuth 'key card' is called an AccessToken, and it will work for an hour after being issued. It's just a string property in the TokenResponse, and it's what is used when you try to access files or resources via the API.

When the AccessToken expires you need to request a new one, and the 'PIN' is a RefreshToken (another property in TokenResponse), which was also issued when the service validated you. You can save the refresh token and re-use it as many times as you need. It won't work without the matching Client ID and Client Secret, but you should still keep it confidential.

With the .NET API this renewal process is automatic – you don’t need to request a new access key if you’ve provided a RefreshToken. If the access is revoked by the Drive’s owner, the RefreshToken will stop working, so you need to handle this situation when you attempt to gain access.
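
As a rough, hedged sketch of what that handling might look like (secrets here is a ClientSecrets object holding the Client ID and Client Secret, and the fallback action is entirely application-specific; neither comes from this post):

// Hedged sketch. A revoked or otherwise invalid RefreshToken surfaces as a
// TokenResponseException (Google.Apis.Auth.OAuth2.Responses), wrapped in an
// AggregateException here because .Result is used on the async call.
try
{
    var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        secrets,                                    // your ClientSecrets (ID + secret)
        new string[] { DriveService.Scope.Drive },
        "user",
        CancellationToken.None).Result;
}
catch (AggregateException ex)
{
    if (ex.InnerException is TokenResponseException)
    {
        // Access has been revoked (or the stored token is invalid):
        // fall back to a fresh interactive authorisation, or alert an operator.
    }
    else
    {
        throw;
    }
}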

Token Storage

The first call to AuthorizeAsync results in the web authorisation screen popping up, but subsequent calls don't, even if you restart the application. How does this work?

The Google .NET client API stores these token responses using an interface called IDataStore. This is an optional parameter of the AuthorizeAsync method; if you don't provide one, a default FileDataStore (on Windows) is used, which stores the TokenResponse in a file under [userfolders]\[yourname]\AppData\Roaming\Drive.Auth.Store.

When you call AuthorizeAsync a second time, the OAuth API uses the key provided to see if there is already a TokenResponse available in the store.

Key, what key? The key is the third parameter of the AuthorizeAsync method, which in most code samples is just “user”.

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
    secrets,
    new string[] { DriveService.Scope.Drive },
    "user",
    CancellationToken.None).Result;

It follows that if you run your drive API application on a different PC, or logged in as a different user, the folder is different and the stored TokenResponse isn’t accessible, so the user will get prompted to authorise again.

Creating Your Own IDataStore

Since Google exposes an interface, you can create your own implementation of IDataStore. My application would only ever use a single Client ID and Secret, but I wanted it to work on the live server without popping up a web browser.

I’d already obtained a TokenResponse by calling the method without a store, and authorised the application in the browser. This generated the TokenResponse in the file system as I just described.

I copied the value of just the RefreshToken, and created a MemoryDataStore that stores the TokenResponses in memory, along with a key value to select them. Here’s the sequence of events:

  1. When my application starts and calls AuthorizeAsync for the first time, I pass in the MemoryDataStore.
  2. The Google API then calls the .GetAsync<T> method in my class, so I hand back a TokenResponse object where I've set only the RefreshToken property.
  3. This prompts the Google OAuth API to go and fetch an AccessToken (no user interaction required) that you can use to access the drive.
  4. The API then calls StoreAsync<T> with the resulting response, and I replace the original token I created with the fully populated one.
  5. This means the API won't keep requesting new AccessTokens for the next hour, as the next call to GetAsync<T> will return the stored token (just like the FileDataStore does).

Note that the TokenResponse we get back has an ExpiresInSeconds value and an Issued date. The OAuth system has auto-renewal (although I've not confirmed this yet), so when your AccessToken expires it gets a new one without you needing to do anything.
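
If you want to check the expiry behaviour yourself, here's a small hedged sketch using the UserCredential returned by AuthorizeAsync (as I understand the 1.8.x library; credential is the variable from the sample above):

// SystemClock lives in Google.Apis.Util; IsExpired compares the token's Issued time
// plus ExpiresInSeconds against the clock.
if (credential.Token.IsExpired(SystemClock.Default))
{
    // Forces a refresh using the stored RefreshToken. Normally you don't need to do
    // this yourself; the library refreshes automatically before making an API call.
    bool refreshed = credential.RefreshTokenAsync(CancellationToken.None).Result;
}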

My code for the MemoryDataStore is as follows:

using Google.Apis.Auth.OAuth2.Responses;
using Google.Apis.Util.Store;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Anvil.Services.FileStorageService.GoogleDrive
{
    /// <summary>
    /// Handles internal token storage, bypassing filesystem
    /// </summary>
    internal class MemoryDataStore : IDataStore
    {
        private Dictionary<string, TokenResponse> _store;

        public MemoryDataStore()
        {
            _store = new Dictionary<string, TokenResponse>();
        }

        public MemoryDataStore(string key, string refreshToken)
        {
            if (string.IsNullOrEmpty(key))
                throw new ArgumentNullException("key");
            if (string.IsNullOrEmpty(refreshToken))
                throw new ArgumentNullException("refreshToken");

            _store = new Dictionary<string, TokenResponse>();

            // add new entry
            StoreAsync<TokenResponse>(key,
                new TokenResponse() { RefreshToken = refreshToken, TokenType = "Bearer" }).Wait();
        }

        /// <summary>
        /// Remove all items
        /// </summary>
        /// <returns></returns>
        public async Task ClearAsync()
        {
            await Task.Run(() =>
            {
                _store.Clear();
            });
        }

        /// <summary>
        /// Remove single entry
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <returns></returns>
        public async Task DeleteAsync<T>(string key)
        {
            await Task.Run(() =>
            {
                // check type
                AssertCorrectType<T>();

                if (_store.ContainsKey(key))
                    _store.Remove(key);
            });
        }

        /// <summary>
        /// Obtain object
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <returns></returns>
        public async Task<T> GetAsync<T>(string key)
        {
            // check type
            AssertCorrectType<T>();

            if (_store.ContainsKey(key))
                return await Task.Run(() => { return (T)(object)_store[key]; });

            // key not found
            return default(T);
        }

        /// <summary>
        /// Add/update value for key/value
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        public Task StoreAsync<T>(string key, T value)
        {
            return Task.Run(() =>
            {
                if (_store.ContainsKey(key))
                    _store[key] = (TokenResponse)(object)value;
                else
                    _store.Add(key, (TokenResponse)(object)value);
            });
        }

        /// <summary>
        /// Validate we can store this type
        /// </summary>
        /// <typeparam name="T"></typeparam>
        private void AssertCorrectType<T>()
        {
            if (typeof(T) != typeof(TokenResponse))
                throw new NotImplementedException(typeof(T).ToString());
        }
    }
}
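
To show how this fits together, here's a hedged usage sketch; the client ID, client secret, refresh token value and application name are placeholders of my own, not values from this post. The store is constructed with the saved refresh token, passed as the optional IDataStore parameter of AuthorizeAsync, and the resulting credential is used to build the Drive service.

// Requires Google.Apis.Auth.OAuth2, Google.Apis.Drive.v2, Google.Apis.Services
// and System.Threading.
string savedRefreshToken = "1/your-saved-refresh-token";        // placeholder value

var secrets = new ClientSecrets
{
    ClientId = "your-client-id.apps.googleusercontent.com",     // placeholder
    ClientSecret = "your-client-secret"                         // placeholder
};

// The key ("user") must match the key passed to AuthorizeAsync below.
var store = new MemoryDataStore("user", savedRefreshToken);

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
    secrets,
    new string[] { DriveService.Scope.Drive },
    "user",
    CancellationToken.None,
    store).Result;

var service = new DriveService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "FileStorageService"                      // any descriptive name
});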

This sample uses the following NuGet package versions:

<package id="Google.Apis" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Auth" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Core" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Drive.v2" version="1.8.1.1270" targetFramework="net45" />
<package id="log4net" version="2.0.3" targetFramework="net45" />
<package id="Microsoft.Bcl" version="1.1.9" targetFramework="net45" />
<package id="Microsoft.Bcl.Async" version="1.0.168" targetFramework="net45" />
<package id="Microsoft.Bcl.Build" version="1.0.14" targetFramework="net45" />
<package id="Microsoft.Net.Http" version="2.2.22" targetFramework="net45" />
<package id="Newtonsoft.Json" version="6.0.3" targetFramework="net45" />
<package id="Zlib.Portable" version="1.9.2" targetFramework="net45" />

Using Google Drive API with C# – Part 1

We had a requirement to store a large volume of user files (for call recordings) as part of a new service. Initially it would be a few gigabytes of MP3 files, but if successful would possibly be into the terabyte range. Although our main database server has some space available, we didn’t want to store these on the database.

Storing them as files on the server was an option, but it would then mean we had to put in place a backup strategy. We would also need to spend a lot of money on new disks, or new NAS devices, etc. It started to look complicated and expensive.

It was then that the little "cloud" lightbulb lit up, and we thought about storing the files on a cloud service instead. I have stuff on Dropbox, LiveDrive, OneDrive (formerly SkyDrive) and Google Drive. However, the recent price drop on Google Drive made it the clear favourite for storing our files: at $120 per year for 1TB of space, it's a no-brainer.

Google API

We'd need the server to list, read and write files directly on the Google Drive, which means using the Google Drive API.

I decided to write this series of articles because I found a lot of the help and examples on the web confusing and in many cases out of date: Google has refactored a lot of the .NET client API, and much of the online sample code (including some in the API documentation) is for older versions in which the namespaces, classes and methods were different.

API Access

To use the Google APIs you need a Google account, a Google drive (which is created by default for each user, and has 30GB of free storage), and API access.

Since you can get a free account with 30GB, you can have one account for development and testing, kept separate from the live account. You may want to use different browsers to set up the live and development/testing accounts: Google is very slick at auto-logging you in and then joining together multiple Google identities. For regular users this is great, but when you're trying to keep the different environments apart it's a problem.

User or Service Account?

When you use the Google Drive on the web you’re using a normal user account. However, you may spot that there is also a service account option.

It seems logical that you might want to use this for a back-end service, but I’d recommend against using this for two reasons:

Creating a Project

I'll assume you've already got a Google Account; if not, you should set one up. I created a new account with its own empty drive so that I could use it in my unit test code.

Before you can start to write code you need to configure your account for API access. You need to create a project in the Developers Console: click Create Project and give it a name and ID. Click the project link once it's created and you should see a welcome screen.

APIs

On the menu on the left we need to select APIs & Auth – this lets you determine which APIs your project is allowed to use.

A number of these will have been preselected, but you can ignore these if you wish. Scroll down to Drive API and Drive SDK. The library will be using the API but it seems to also need the SDK (enlighten me please if this is not the case!) so select both. As I understand it, the SDK is needed if you’re going to create online apps (like Docs or Spreadsheet), rather than accessing a drive itself. The usual legal popups will need to be agreed to.

The two entries will be at the top of the page, with a green ON button and a configuration icon next to each.


Configuration is a bit broken at present: clicking either icon goes to the configuration for the SDK on an older page layout. I don't know whether the SDK configuration is required or not; you could try accessing the API without it.

Credentials

The next step is to set up credentials. These identify your application, and there are different Client ID types depending on the kind of client you want to run.

By default you will have Client IDs set up for Compute Engine and for web applications. To access the drive from non-web code you need a native application Client ID, so click Create New Client ID, select the Installed application type and, for C# and other .NET apps, select Other.

When this has completed you’ll have a new ClientID and a ClientSecret. Think of this as a username and password that you use to access the drive API. You should treat the Client Secret in the same way as a password and not disclose it. You might also want to store it in encrypted form in your application.
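
As a hedged illustration of keeping that pair out of source code (the file name and approach are my own suggestion, not a Google requirement), you can load the JSON file that the console lets you download for the Client ID, or build the object from protected configuration:

// Requires Google.Apis.Auth.OAuth2 and System.IO.
// client_secrets.json is a hypothetical name for the JSON file downloaded from the
// console; keep it out of source control and restrict access to it on the server.
ClientSecrets secrets;
using (var stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read))
{
    secrets = GoogleClientSecrets.Load(stream).Secrets;
}

// Or construct it directly from values held in encrypted/protected configuration:
var secretsFromConfig = new ClientSecrets
{
    ClientId = "your-client-id.apps.googleusercontent.com",     // placeholder
    ClientSecret = "your-client-secret"                         // placeholder
};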

next: Part 2 – Authorising Drive API

Getting Rid of k__BackingField in Serialization

We have an application that uses WebApi to send out results for some queries. This has worked well and outputs nice JSON results courtesy of JSON.NET (thanks to James for that great library!).

Today I ran into a problem: the serialized JSON was corrupted with content that looks like this:

{
    "<Data>k__BackingField" : [
        "item1",
        "item2"
    ],
    "<Totals>k__BackingField" : null,
    "_count" : 2,
    "_pageSize" : 10,
    "_page" : 1,
    "<Sort>k__BackingField" : "Date"
}

My reaction was puzzlement: why on earth would a straightforward class with properties like Data and Count suddenly start spitting out weird JSON like this?

SO to the Rescue?

Obviously the first port of call was a search on StackOverflow

From this article we get some clues: the k__BackingField names are created by automatic properties in C#, and it's the DataContractJsonSerializer that emits them.

But we're not supposed to be using DataContractJsonSerializer; we're using Web API, which uses JSON.NET?

Solution

Turns out the cause is the SerializableAttribute – because I’d added that to the class, the object result from the WebAPI method got passed to DataContractJsonSerializer.

I had not seen this before because most of the results I had output didn’t have this attribute, even though the base class did. I removed this, and bingo, the results were fixed:

{
    "Data" : [
        "item1",
        "item2"
    ],
    "Totals" : null,
    "Count" : 2,
    "PageSize" : 10,
    "Page" : 1,
    "Sort" : "Date"
}
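
For illustration, here's a reconstructed sketch of the kind of class involved; the property names come from the JSON above, but the types and shape are my guesses rather than the actual production code. The auto-properties are the ones that produced the <Name>k__BackingField entries, and the explicitly backed ones produced the _name entries.

// Reconstructed example, not the actual production class.
// Requires System.Collections.Generic for List<string>.
// With [Serializable] applied, the backing fields were serialised as shown earlier;
// removing the attribute lets the Web API JSON formatter use the property names.
public class PagedResult
{
    // Auto-properties: these produced the "<Data>k__BackingField" style names.
    public List<string> Data { get; set; }
    public object Totals { get; set; }
    public string Sort { get; set; }

    // Explicitly backed properties: these produced the "_count" style names.
    private int _count;
    private int _pageSize;
    private int _page;

    public int Count { get { return _count; } set { _count = value; } }
    public int PageSize { get { return _pageSize; } set { _pageSize = value; } }
    public int Page { get { return _page; } set { _page = value; } }
}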

Raspberry Pi Wallboard System

One of the directors wanted to have a wallboard displaying real-time numbers from our new VoIP phone system. We had a web page which could show the stats, so we now had to decide how to get these onto a wall-mounted display in the office.

The display part was easy: there are quite a few 32-inch or larger LCD TVs with HDMI input. The question was what to use to display and update the web page.

At first we considered setting up a PC to do this but even the Intel NUC devices are quite expensive – £250-plus for a device and a disk. They are also much more powerful than we need.

My colleague working on this project is looking at the Google ChromeCast device, but this is designed for media streaming rather than web pages. I decided to explore the Raspberry Pi as an alternative.

Toys!

To be honest, I’d been itching to mess about with a Pi since they came out. But with so much else to do I couldn’t justify the time to myself. This was a good opportunity to experiment with a specific, worthwhile goal in mind.


I chose Pimoroni as my supplier as they had a good selection of starter kits. Our “production” unit would consist of a Pi model B, a case, 8GB SD card, WiFi dongle, 5v USB power adapter and a short HDMI cable.

This comes to £67 including VAT in the UK – a lot less than the NUC option. Add that to a 32” Samsung TV for about £219 including VAT. These are excellent as they also have a powered USB 5V connector – so the Pi runs off the TV power supply.

So a wallboard system for less than £300! A wall mounting bracket is extra – these vary from £10 upward, so we might just break the £300 limit for that.

I got two “production” units and a “Deluxe Raspberry Pi Starter Kit” which includes a simple USB keyboard, mouse, USB hub – this one was to act as our development box.

Configuration and Setup

The SD cards come pre-installed with NOOBS so I selected Raspbian and got to a working desktop. After configuring the WiFi we had network access.

The main requirement was a hands-free boot-to-display operation. Fortunately someone else had done the heavy lifting for this part of the design by configuring their Pi to act as a wall-mounted Outlook calendar. A tip of the hat to Piney for his excellent guide.

Once I had a working development system, I purchased a USB card reader (that supports SD card format – almost all do) for my PC and installed Win32 Disk Imager. I copied the working SD card from the development Pi to an image, and then wrote this image back to the two production SD cards.

Testing

So far I had only run this on my office WiFi with a computer monitor, so I unhooked the development setup and took it home, where the children's 42" TV sat in their den, practically begging me to try it out.


I was impressed that I didn’t need to touch the screen setup for the HDMI to work correctly. I had to reconfigure the WiFi, but once that was done I could plug the Pi directly into the TV’s powered USB connector, and the HDMI cable. 

Installation

Installation on site isn't totally shrink-wrapped unless you know the WiFi configuration in advance. Our production Pis were delivered to my office and our excellent support tech Cristian used a USB mouse and keyboard to configure them with the correct WiFi setup and key. He then mounted the two Pis behind the screens (you could use Velcro tape for this), connected their USB power to the TVs' USB output and the HDMI cable to the TVs' HDMI input.

Operation

The TV is switched on in the morning, which powers up the USB port. The Pi boots up directly into a full-screen display of the web page that shows our call stats.
