Using Google Drive API with C# – Part 2

Welcome to Part 2 which covers the authorization process. If you have not yet set up your Google API access, please read part 1 first.

OAuth

OAuth initially seems pretty complicated, but once you get your head around it, it’s not that scary, honest!

If you followed the steps in Part 1 you should now have a Client ID and Client Secret, which act as a ‘username’ and ‘password’. However, these by themselves won’t get you access directly.

Hotel OAuth

You can think of OAuth as being a bit like a hotel, where the room is your Google Drive. To get access you need to check in at the hotel and obtain a key.

When you arrive at reception, they check your identity, and once they know who you are, they issue a card key and a PIN to renew it. This hotel uses electronic card keys, and for security reasons they stop working after an hour.

When the card stops working you have to get it re-enabled. There is a machine in the lobby where you can insert your card, enter the PIN and get the card renewed, so you don’t have to go back to the reception desk and ask for access again.

Back to OAuth

In OAuth, ‘reception’ is the authorization prompt that appears in the browser when you first attempt to use a Client ID and Client Secret. This happens the first time you call GoogleWebAuthorizationBroker.AuthorizeAsync.

This allows the user to review and approve the access your application is requesting. If access is approved, the client receives a TokenResponse object.

The OAuth ‘key card’ is called an AccessToken, and it works for an hour after being issued. It’s just a string property on the TokenResponse, and it’s what is used when you access files or resources on the API.

When the AccessToken expires you need to request a new one, and the ‘PIN’ is a RefreshToken (another property of the TokenResponse) which was also issued when the service validated you. You can save the refresh token and re-use it as many times as you need. It won’t work without the matching Client ID and Client Secret, but you should still keep it confidential.

With the .NET API this renewal process is automatic – you don’t need to request a new access token yourself if you’ve provided a RefreshToken. If access is revoked by the drive’s owner, the RefreshToken will stop working, so you need to handle this situation when you attempt to gain access.
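If you want to handle that case explicitly, something along these lines works (a sketch: TokenResponseException lives in Google.Apis.Auth.OAuth2.Responses, and because .Result is used here the exception surfaces wrapped in an AggregateException):

try
{
    var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        secrets,
        new[] { DriveService.Scope.Drive },
        "user",
        CancellationToken.None).Result;
}
catch (AggregateException ex)
{
    var tokenError = ex.InnerException as TokenResponseException;
    if (tokenError != null)
    {
        // the refresh token was rejected (e.g. access was revoked):
        // clear the stored token and re-run the interactive authorisation
    }
    else
    {
        throw;
    }
}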

Token Storage

The first time you call AuthorizeAsync, the web authorization screen pops up, but on subsequent calls this doesn’t happen – even if you restart the application. How does this work?

The Google .NET client API persists these token responses using an interface called IDataStore. This is an optional parameter of the AuthorizeAsync method, and if you didn’t provide one, a default FileDataStore (on Windows) will have been used. This stores the TokenResponse in a file in the folder [userfolders]\[yourname]\AppData\Roaming\Drive.Auth.Store.

When you call AuthorizeAsync a second time, the OAuth API uses the key provided to see if there is already a TokenResponse available in the store.

Key, what key? The key is the third parameter of the AuthorizeAsync method, which in most code samples is just “user”.

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
    secrets,
    new string[] { DriveService.Scope.Drive },
    "user",
    CancellationToken.None).Result;

It follows that if you run your Drive API application on a different PC, or log in as a different user, the folder is different and the stored TokenResponse isn’t accessible, so the user will be prompted to authorise again.
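You can also take control of where the tokens go by passing a store explicitly via the optional fifth parameter – a sketch (the folder name here is arbitrary):

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
    secrets,
    new string[] { DriveService.Scope.Drive },
    "user",
    CancellationToken.None,
    new FileDataStore("My.Custom.Auth.Store")).Result;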

Creating Your Own IDataStore

Since Google uses an interface, you can create your own implementation of IDataStore. My application only ever uses a single Client ID and Secret, but I wanted it to work on the live server without popping up a web browser.

I’d already obtained a TokenResponse by calling the method without a store, and authorised the application in the browser. This generated the TokenResponse in the file system as I just described.

I copied just the value of the RefreshToken, and created a MemoryDataStore that stores the TokenResponses in memory, along with a key value to select them. Here’s the sequence of events:

  1. When my application starts and calls AuthorizeAsync the first time, I pass in MemoryDataStore.
  2. The Google API then calls the .GetAsync<T> method in my class, so I hand back a TokenResponse object where I’ve set only the RefreshToken property.
  3. This prompts the Google OAuth API to go and fetch an AccessToken (no user interaction required) that you can use to access the drive.
  4. Then the API calls StoreAsync<T> with the resulting response. I then replace the original token I created with the fully populated one.
  5. This means the API won’t keep making requests for AccessTokens for the next hour, as the next call to GetAsync<T> will return the last token (just like the FileDataStore does).

Note that the TokenResponse we get back has an ExpiresInSeconds value and an Issued date. The OAuth system has auto-renewal (although I’ve not confirmed this yet), so when your AccessToken expires it gets a new one without you needing to do anything.

My code for the MemoryDataStore is as follows:

using Google.Apis.Auth.OAuth2.Responses;
using Google.Apis.Util.Store;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Anvil.Services.FileStorageService.GoogleDrive
{
    /// <summary>
    /// Handles internal token storage, bypassing the filesystem
    /// </summary>
    internal class MemoryDataStore : IDataStore
    {
        private Dictionary<string, TokenResponse> _store;

        public MemoryDataStore()
        {
            _store = new Dictionary<string, TokenResponse>();
        }

        public MemoryDataStore(string key, string refreshToken)
        {
            if (string.IsNullOrEmpty(key))
                throw new ArgumentNullException("key");
            if (string.IsNullOrEmpty(refreshToken))
                throw new ArgumentNullException("refreshToken");

            _store = new Dictionary<string, TokenResponse>();

            // add a new entry seeded with just the refresh token
            StoreAsync<TokenResponse>(key,
                new TokenResponse() { RefreshToken = refreshToken, TokenType = "Bearer" }).Wait();
        }

        /// <summary>
        /// Remove all items
        /// </summary>
        public async Task ClearAsync()
        {
            await Task.Run(() =>
            {
                _store.Clear();
            });
        }

        /// <summary>
        /// Remove a single entry
        /// </summary>
        public async Task DeleteAsync<T>(string key)
        {
            await Task.Run(() =>
            {
                // check type
                AssertCorrectType<T>();

                if (_store.ContainsKey(key))
                    _store.Remove(key);
            });
        }

        /// <summary>
        /// Obtain the stored object for a key
        /// </summary>
        public async Task<T> GetAsync<T>(string key)
        {
            // check type
            AssertCorrectType<T>();

            if (_store.ContainsKey(key))
                return await Task.Run(() => { return (T)(object)_store[key]; });

            // key not found
            return default(T);
        }

        /// <summary>
        /// Add or update the value for a key
        /// </summary>
        public Task StoreAsync<T>(string key, T value)
        {
            return Task.Run(() =>
            {
                if (_store.ContainsKey(key))
                    _store[key] = (TokenResponse)(object)value;
                else
                    _store.Add(key, (TokenResponse)(object)value);
            });
        }

        /// <summary>
        /// Validate that we can store this type
        /// </summary>
        private void AssertCorrectType<T>()
        {
            if (typeof(T) != typeof(TokenResponse))
                throw new NotImplementedException(typeof(T).ToString());
        }
    }
}
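To tie it together, here’s how I’d expect it to be wired up – a sketch, where refreshTokenFromConfig is wherever you’ve stashed the copied RefreshToken value, and the key must match the third parameter of AuthorizeAsync:

var store = new MemoryDataStore("user", refreshTokenFromConfig);

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
    secrets,
    new string[] { DriveService.Scope.Drive },
    "user",
    CancellationToken.None,
    store).Result;

// the credential can then initialise the Drive service as usual
var service = new DriveService(new BaseClientService.Initializer()
{
    HttpClientInitializer = credential,
    ApplicationName = "MyApp"   // hypothetical application name
});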

This sample uses the following NuGet package versions:

<package id="Google.Apis" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Auth" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Core" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Drive.v2" version="1.8.1.1270" targetFramework="net45" />
<package id="log4net" version="2.0.3" targetFramework="net45" />
<package id="Microsoft.Bcl" version="1.1.9" targetFramework="net45" />
<package id="Microsoft.Bcl.Async" version="1.0.168" targetFramework="net45" />
<package id="Microsoft.Bcl.Build" version="1.0.14" targetFramework="net45" />
<package id="Microsoft.Net.Http" version="2.2.22" targetFramework="net45" />
<package id="Newtonsoft.Json" version="6.0.3" targetFramework="net45" />
<package id="Zlib.Portable" version="1.9.2" targetFramework="net45" />

Using Google Drive API with C# – Part 1

We had a requirement to store a large volume of user files (for call recordings) as part of a new service. Initially it would be a few gigabytes of MP3 files, but if successful would possibly be into the terabyte range. Although our main database server has some space available, we didn’t want to store these on the database.

Storing them as files on the server was an option, but it would then mean we had to put in place a backup strategy. We would also need to spend a lot of money on new disks, or new NAS devices, etc. It started to look complicated and expensive.

It was then the little “cloud” lightbulb lit up, and we thought about storing the files on a cloud service instead. I have stuff on Dropbox, LiveDrive, SkyDrive (now OneDrive) and Google Drive. However, the recent price drop on Google Drive made it the clear favourite for storing our files. At $120 per year for 1TB of space, it’s a no-brainer.

Google API

We’d need to have the server listing, reading and writing files directly on the Google Drive, which means using the Google Drive API.

I decided to write this series of articles because I found a lot of the help and examples on the web confusing and in many cases out-of-date: Google has refactored a lot of the .NET client API, and much of the online sample code (including some in the API documentation) is for older versions in which the namespaces, classes and methods were all different.

API Access

To use the Google APIs you need a Google account, a Google Drive (which is created by default for each user, and has 30GB of free storage), and API access.

Since you can get a free account with 30GB, you can have one account for development and testing, kept separate from the live account. You may want to use different browsers to set up the live and development/testing accounts: Google is very slick at auto-logging you in and then joining together multiple Google identities. For regular users this is great, but when you’re trying to keep the environments apart it’s a problem.

User or Service Account?

When you use the Google Drive on the web you’re using a normal user account. However, you may spot that there is also a service account option.

It seems logical that you might want to use this for a back-end service, but I’d recommend against it.

Creating a Project

I’ll assume you’ve already got a Google account; if not, you should set one up. I created a new account with its own empty drive so that I could use it in my unit test code.

Before you can start to write code you need to configure your account for API access. You need to create a project in the developer’s console. Click Create Project and give it a name and ID. Click the project link once it’s created; you should see a welcome screen.

APIs

On the menu on the left, select APIs & Auth – this lets you determine which APIs your project is allowed to use.

A number of these will have been preselected, but you can ignore those if you wish. Scroll down to Drive API and Drive SDK. The library will be using the API, but it seems to also need the SDK (enlighten me please if this is not the case!), so select both. As I understand it, the SDK is needed if you’re going to create online apps (like Docs or Spreadsheet), rather than accessing a drive itself. The usual legal popups will need to be agreed to.

The two entries will be at the top of the page, with a green ON button and a configuration icon next to each.


Configuration is a bit broken at present: clicking either icon goes to the configuration for the SDK on an older page layout. I don’t know if the SDK configuration is required or not – you could try accessing without it.

Credentials

The next step is to set up credentials. These are the identity of your login, and there are different client IDs based on the type of client you want to run.

By default you will have client IDs set up for Compute Engine and for a web application. To access Drive from non-web code you need a native application ID, so click Create New Client ID, select the Installed application type, and for C# and other .NET apps select Other.

When this has completed you’ll have a new Client ID and Client Secret. Think of these as the username and password you use to access the Drive API. You should treat the Client Secret in the same way as a password and not disclose it; you might also want to store it in encrypted form in your application.
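Rather than hard-coding them, one option is to load them at runtime from the client_secrets.json file the console lets you download – a sketch (the file name and location are up to you):

// requires the Google.Apis.Auth.OAuth2 namespace (see Part 2 for packages)
using (var stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read))
{
    var secrets = GoogleClientSecrets.Load(stream).Secrets;
    // pass 'secrets' to GoogleWebAuthorizationBroker.AuthorizeAsync – see Part 2
}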

next: Part 2 – Authorising Drive API

Getting Rid of k__BackingField in Serialization

We have an application that uses WebApi to send out results for some queries. This has worked well and outputs nice JSON results courtesy of JSON.NET (thanks to James for that great library!).

Today I ran into a problem: the serialized JSON was corrupted with content that looks like this:

{
    "<Data>k__BackingField": [
        "item1",
        "item2"
    ],
    "<Totals>k__BackingField": null,
    "_count": 2,
    "_pageSize": 10,
    "_page": 1,
    "<Sort>k__BackingField": "Date"
}

My reaction was puzzlement: why on earth would a straightforward class with properties like Data and Count suddenly start spitting out weird JSON like this?

SO to the Rescue?

Obviously the first port of call was a search on StackOverflow.

From this article we get some clues: the k__BackingField names are created by C#’s automatic properties, and it’s DataContractJsonSerializer that emits them.

But we’re not supposed to be using DataContractJsonSerializer – we’ve got Web API, which uses JSON.NET, haven’t we?

Solution

Turns out the cause was the SerializableAttribute – because I’d added that to the class, the object returned from the Web API method got passed to DataContractJsonSerializer.

I had not seen this before because most of the results I output didn’t have this attribute, even though the base class did. I removed it and, bingo, the results were fixed:

{
    "Data": [
        "item1",
        "item2"
    ],
    "Totals": null,
    "Count": 2,
    "PageSize": 10,
    "Page": 1,
    "Sort": "Date"
}
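For reference, here’s a sketch of the kind of class involved – the member names are inferred from the JSON above, not taken from the real code:

// With [Serializable] present, Web API hands the object to
// DataContractJsonSerializer, which serializes all fields – including the
// compiler-generated backing fields of auto-properties (<Data>k__BackingField).
[Serializable]   // removing this attribute restored the JSON.NET output
public class QueryResult
{
    private int _count;                        // plain private fields appear
    private int _pageSize;                     // under their own names
    private int _page;

    public List<string> Data { get; set; }     // auto-properties: the compiler
    public List<string> Totals { get; set; }   // generates the k__BackingFields
    public string Sort { get; set; }

    public int Count { get { return _count; } set { _count = value; } }
    public int PageSize { get { return _pageSize; } set { _pageSize = value; } }
    public int Page { get { return _page; } set { _page = value; } }
}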

Raspberry Pi Wallboard System

One of the directors wanted to have a wallboard displaying real-time numbers from our new VoIP phone system. We had a web page which could show the stats, so we now had to decide how to get these onto a wall-mounted display in the office.

The display part was easy, there are quite a few 32 inch or larger LCD TV systems with HDMI input. The question was what to use to display and update the web page.

At first we considered setting up a PC to do this but even the Intel NUC devices are quite expensive – £250-plus for a device and a disk. They are also much more powerful than we need.

My colleague working on this project is looking at the Google ChromeCast device, but this is designed for media streaming rather than web pages. I decided to explore the Raspberry Pi as an alternative.

Toys!

To be honest, I’d been itching to mess about with a Pi since they came out. But with so much else to do I couldn’t justify the time to myself. This was a good opportunity to experiment with a specific, worthwhile goal in mind.


I chose Pimoroni as my supplier as they had a good selection of starter kits. Our “production” unit would consist of a Pi model B, a case, 8GB SD card, WiFi dongle, 5v USB power adapter and a short HDMI cable.

This comes to £67 including VAT in the UK – a lot less than the NUC option. Add that to a 32” Samsung TV for about £219 including VAT. These are excellent as they also have a powered 5V USB connector – so the Pi runs off the TV’s power supply.

So a wallboard system for less than £300! A wall mounting bracket is extra – these vary from £10 upward, so we might just break the £300 limit for that.

I got two “production” units and a “Deluxe Raspberry Pi Starter Kit”, which includes a simple USB keyboard, mouse and USB hub – this one was to act as our development box.

Configuration and Setup

The SD cards come pre-installed with NOOBS so I selected Raspbian and got to a working desktop. After configuring the WiFi we had network access.

The main requirement was hands-free boot-to-display operation. Fortunately someone else had done the heavy lifting for this part of the design by configuring their Pi to act as a wall-mounted Outlook calendar. A tip of the hat to Piney for his excellent guide.
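For flavour, the usual recipe at the time was to edit the LXDE autostart file so the desktop boots straight into a full-screen browser – a sketch only (paths, browser and URL vary by Raspbian release; Piney’s guide has the working details):

# /etc/xdg/lxsession/LXDE/autostart (sketch – adjust for your setup)
@xset s off                      # disable the screensaver
@xset -dpms                      # disable display power management
@midori -e Fullscreen -a http://yourserver/wallboard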

Once I had a working development system, I purchased a USB card reader (that supports SD card format – almost all do) for my PC and installed Win32 Disk Imager. I copied the working SD card from the development Pi to an image, and then wrote this image back to the two production SD cards.

Testing

So far I had run this on my office WiFi with a computer monitor, so I unhooked the development setup and took it to my house, where the children’s 42” TV sat in their den, practically begging me to try it out.


I was impressed that I didn’t need to touch the screen setup for the HDMI to work correctly. I had to reconfigure the WiFi, but once that was done I could plug the Pi directly into the TV’s powered USB connector and the HDMI cable.

Installation

Installation at site isn’t totally shrink-wrapped unless you know the WiFi configuration in advance. Our production Pis were delivered to my office, and our excellent support tech Cristian used a USB mouse and keyboard to configure them with the correct WiFi setup and key. He then mounted the two Pis behind the monitors (you could use Velcro tape for this), connected their USB power to the monitor USB output and the HDMI to the TV’s HDMI.

Operation

The TV is switched on in the morning, which powers up the USB port. The Pi boots up directly into a full-screen display of the web page that shows our call stats.


Knockout-Validation 101

As a user of KnockoutJS you quickly realise that the one thing you’re going to need is a way to validate user input. Fortunately the excellent Knockout-Validation library exists to fill that gap.

While this library does have a getting-started page and some documentation, it isn’t particularly clear (at least to me!) from those pages exactly how you can use the library and how it works. So this post summarises my testing and experimentation. For brevity I will refer to the Knockout-Validation library as “KOV”.

Setup

To use KOV you need both the Knockout and KOV scripts loaded in the page (in that order). KOV depends on Knockout and will complain if it’s not present.

Very Basic Introduction

KOV allows you to define validation rules that apply to a Knockout observable property, an observable array, and even Knockout computed properties. You do this by using the extend method on an observable to define one or more rules. For example:

var viewModel = {
    name: ko.observable().extend({ required: true })
}

This is the simplest example. We have specified a rule that the name field is required.

What Actually Happened?

KOV extends our observable to apply this rule, and adds the following sub-properties to name:

Property        Type               Purpose
error           string property    the error message for this field
rules()         observable array   the list of rules for the property
isModified()    observable         set if the value has changed since original load
isValidating()  computed           set if validation has started but not finished (e.g. it may be performing an asynchronous AJAX call)
isValid()       computed           evaluates the rules and returns true if they all pass

The error message is set using the default message for each rule. For example, the required extender has a default message of “This field is required.”. You can override this message by specifying different options when you call .extend():

var viewModel = {
    name: ko.observable().extend({
        required: {
            message: 'Required!',
            params: true
        }
    })
};

Triggering Validation

The validation rules for an observable are triggered when the observable’s value is modified. For most controls this happens when an input loses focus (e.g. a textbox) or a related control is clicked (e.g. a radio button or listbox).

The validation rules are checked, and the first one to fail generates the error message.

Note: isModified(), isValid() and isValidating() are all observable or computed-observable values. That means you can bind to these values just like any other KO observable value. For example, you could bind the css for an input field to the isModified sub-property to change colour when a field is changed.
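For example, a sketch of binding an input’s css to the isModified sub-property (the ‘changed’ class here is a hypothetical style of your own):

<input data-bind="value: name, css: { changed: name.isModified }" />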

Out-of-the-Box Behaviours

In our example above we have just one field, but we get some useful behaviours by default. The first is that the input field will have the validation message appended after it when the user leaves the textbox.

Message Options

For simple cases this may be fine. However, in most cases you will want to control KOV’s behaviour using the configuration options.

You specify these using ko.validation.init(). Note these are global options, so they apply to all validations on a single page.

To turn off the message insertion, set insertMessages to false (it is set to true by default).

If you keep the messages, a CSS class is applied to them when an error is shown. The default class is ‘validationMessage’; you can change it with the errorMessageClass setting.

If you want fine-grained control there is also a messageTemplate setting, where you provide the ID of a text/template to use.

The last option is messagesOnModified, which is set to true by default. This prevents any messages being shown until the user has modified a value. If set to false, all the errors are shown when the form is created, before the user has a chance to do anything. Normally this is bad practice, but there are cases where it might be useful (e.g. returning to a previously edited screen which has errors).
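Pulling those options together, a sketch of a global configuration call (option names as described above – check them against the KOV version you’re using):

ko.validation.init({
    insertMessages: false,                    // don't append messages after inputs
    errorMessageClass: 'validationMessage',   // class applied to shown messages
    messagesOnModified: true                  // wait until the user touches a field
});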

Element Options

Another capability (not enabled by default) is that KOV can apply a CSS class to the invalid input element. This is controlled by the options decorateElement and errorElementClass. Setting decorateElement = true will make KOV add the class to the element’s class list. I use this to bind with a Bootstrap form, to set the entire control-group’s CSS to the error state and make it work in the Bootstrap manner.

However, note the default behaviour is to decorate just the element that failed, whereas Bootstrap forms often have a series of DIVs containing the controls, and you need the error class on the containing group rather than on the input itself to get the standard Bootstrap styling.

Validating the Whole ViewModel

Validating single fields is useful, but most ViewModels have more than one field being validated.

This is where the KOV function validatedObservable is useful. You call ko.validatedObservable(…) and pass in your own view model – the view model has the following members added:

Added                Type                 Purpose
errors()             computed observable  a list of any errors on the current viewmodel
isAnyMessageShown()  function             returns true if any messages are shown
isValid()            function             returns false if any validation errors are set

Important: note that only the errors() method is a Knockout computed observable; isAnyMessageShown and isValid are just plain functions. However, the errors method does not seem to work with .subscribe.
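A minimal sketch of wrapping a view model (the email rule is one of KOV’s built-in extenders, and showAllMessages is available on the errors group in the KOV versions I’ve used):

var viewModel = ko.validatedObservable({
    name: ko.observable().extend({ required: true }),
    email: ko.observable().extend({ required: true, email: true })
});

// later, e.g. in a save handler:
if (!viewModel.isValid()) {
    viewModel.errors.showAllMessages();   // reveal every outstanding message
    return;
}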

Validation Bindings

If you don’t like the validation messages appended after the input, you can create your own message placement point and use the validationMessage binding handler: see Validation-Bindings.

This displays the validation messages as the content of the bound control. It also hides the control when no messages exist, which means it can be bound to a formatted control (e.g. a div with a coloured background, or a popup control) that disappears when there is no error.
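A sketch of a custom placement point (the span and its class are hypothetical – any element will do):

<input data-bind="value: name" />
<span class="myErrorStyle" data-bind="validationMessage: name"></span>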

How KOV Works and Things To Watch

KOV overrides applyBindings and the value binding handler. It adds its own validation binding handler, then calls the original value binding. This means that when an observable is changed, KOV knows about it and can trigger the validation.

One side effect of this is that custom binding handlers will not be validated on-screen, and you need to consider triggering the validation yourself.
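For instance, a custom binding might push the widget’s value into the observable and then mark it as modified so KOV evaluates the rules – a sketch (getValueFromMyWidget and the change event are hypothetical stand-ins for your control’s logic):

ko.bindingHandlers.myWidget = {
    init: function (element, valueAccessor) {
        var observable = valueAccessor();
        element.addEventListener('change', function () {
            observable(getValueFromMyWidget(element));
            if (observable.isModified) {
                observable.isModified(true);   // nudge KOV to run the rules
            }
        });
    }
};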

String.Join

Are you a .NET developer? Got a list of strings you want to join? This is what I and almost every other developer I’ve seen do:

var values = new List<string>();
// add some values...
string csv = string.Join(",", values.ToArray());

Turns out someone on the .NET Framework team thought this was a bit dumb, so they quietly added a new overload in version 4.0 of the framework. This overload takes an IEnumerable<string>, so now you can write:

var values = new List<string>();
// add some values...
string csv = string.Join(",", values);

Just thought I’d blog that in case you’re doing the same thing!

ASP.NET Razor Views in Class Libraries

One of my biggest complaints about WebForms in ASP.NET was that you couldn’t easily put interface code such as .ascx controls into DLLs and then use them in an ASP.NET website.

This meant that virtually all the interface code had to reside in the ASP.NET web project – which kills reuse and modularity. You end up with monolithic ASP.NET web applications containing all the UI code.

It was possible to create .ascx controls in DLLs, but it was a horrible kludge.

Enter the Razor

With the introduction of the Razor view engine we can now create views in class libraries, with Intellisense, compilation and design-time support. However, it isn’t built-in functionality and takes some work to set up. There are two basic approaches, and you need to decide which is appropriate for you.

I’ve worked on this previously with MVC v3 and MVC v4, but I’ve decided to write this post purely from the MVC v5 and Razor v3 perspective; if you’re interested in older versions, write-ups for MVC v3 and v4 exist elsewhere.

Full-Fat MVC Project

The easiest way to do this is to create the library using the ASP.NET Web Application project type and select the MVC template. This creates a web project which can be compiled down to a single pre-compiled DLL, which you can then reference in your main ASP.NET web application.

Indeed if you follow this approach you can use the routes, controllers and views as if they were part of the main web app.

However, if you’re looking to do something like create HTML email templates (as I was), you get a lot of overhead and stuff you don’t need thrown into the project (Controllers, Models, Scripts etc.).

Razor-Thin Class Library

The alternative approach I’ve investigated is to start with a basic C# class library for .NET 4.5 and then add just enough functionality to make Razor views work in the editor. I can then include the Razor content as an embedded resource and combine it with a library like RazorEngine or RazorGenerator to run the view in code and generate the HTML; a sketch of this appears after the steps below.

Edit: as a helpful resource I’ve also created a GitHub project which is a working example, and has a commit for each of the changes made.

The steps required to make this work:

  1. Create a new C# project using the Class Library template, for framework version 4.5.
  2. Using the NuGet package manager, add the following packages:
    • Microsoft.AspNet.Razor
    • Microsoft.AspNet.WebPages
    • Microsoft.Web.Infrastructure
    • Microsoft.AspNet.Mvc

    I also added the .NET framework libraries System.Web.Abstractions (4.0) and System.Data.Linq (4.0).

  3. Open the .csproj file in a text editor, and change (or add) the ProjectTypeGuids setting to the following:

    <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

    If the setting is not present in your project file, add it after the <ProjectGuid> setting. This makes the MVC Razor views appear in the Add New Item dialog box.

  4. Add a web.config file to the project, with the content required to support views. The Visual Studio editor expects to see this file to render Razor views. I’ve attached the one I created at the end of this post. It was based on the web.config that MVC creates in the Views folder of an MVC project, but has been edited to get Intellisense working.

  5. My final step is to save everything, then close and re-open the solution. This ensures the VS IDE is properly aware that the project supports Razor pages and loads the correct Intellisense. It’s possible it might work without this step, but if you have problems, give it a try.

  6. Having done this, Intellisense works for the model and inline code. You can specify @model <yourclass> at the top and get @Model.<someproperty> intellisense. You also have all the Razor code functionality. However, HtmlHelper extension methods such as @Html.Raw(..), although recognised in the editor, won’t work in RazorEngine, for example, as the Html helper isn’t present in its rendering engine by default. Read this article to find out more.
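As an illustration of the “embedded resource” part, here’s a sketch of rendering such a view with RazorEngine (the class, resource name and model are hypothetical, and RazorEngine’s API has changed across versions – this uses the older Razor.Parse style):

using System.IO;
using System.Reflection;
using RazorEngine;   // NuGet: RazorEngine

public static class TemplateRenderer
{
    public static string Render<T>(string resourceName, T model)
    {
        // the .cshtml is built into this DLL as an Embedded Resource
        var assembly = Assembly.GetExecutingAssembly();
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            return Razor.Parse(reader.ReadToEnd(), model);
        }
    }
}

// usage (hypothetical resource name):
// string html = TemplateRenderer.Render("MyLib.Views.WelcomeEmail.cshtml", model);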

web.config file:

<?xml version="1.0"?>

<configuration>
  <configSections>
    <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
      <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
    </sectionGroup>
  </configSections>

  <system.web.webPages.razor>
    <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
    <pages pageBaseType="System.Web.Mvc.WebViewPage">
      <namespaces>
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Mvc.Ajax" />
        <add namespace="System.Web.Mvc.Html" />
        <add namespace="yourDLLhere" />
      </namespaces>
    </pages>
  </system.web.webPages.razor>

  <appSettings>
    <add key="webpages:Version" value="3.0.0.0"/>
    <add key="webpages:Enabled" value="false" />
  </appSettings>

  <system.web>
    <compilation targetFramework="4.5">
      <assemblies>
        <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      </assemblies>
    </compilation>
    <httpRuntime targetFramework="4.5" />
  </system.web>

  <system.webServer>
    <handlers>
      <remove name="BlockViewHandler"/>
      <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
    </handlers>
  </system.webServer>

</configuration>