Thursday, 22 March 2012

WCF - moving from Config to Code, a simple WCF service harness (plus implementing your own Authorization)

Last time I wrote about WCF I was getting up and running with WCF Transport Windows authentication using NetTcpBinding in an Intranet environment. I ended up with a WCF service hosted in a Windows Service which did pretty much what the previous post's title implies.

Since writing that I've taken things on a bit further and I thought it worth recording my approach whilst it's still fresh in my mind. There are 3 things I want to go over:

  1. I've moved away from the standard config driven WCF approach to a more "code-first" style
  2. I've established a basic Windows Service hosted WCF service / client harness which is useful if you're trying to get up and running with a WCF service quickly
  3. I've locked down the WCF authorization to a single Windows account through the use of my own ServiceAuthorizationManager

Moving from Config to Code

So, originally I was doing what all the cool kids are doing and driving the configuration of my WCF service and all its clients through config files. And why not? I'm in good company.

Here's why not: it gets *very* verbose *very* quickly....

Okay - that's not the end of the world. My problem was that I had ~10 Windows Services and 3 Web applications that needed to call into my WCF Service. I didn't want to have to separately tweak 15 or so configs each time I wanted to make one standard change to WCF configuration settings. I wanted everything in one place.

Now there's newer (and probably hipper) ways of achieving this. Here's one possibility I happened upon on StackOverflow that looks perfectly fine.

Well I didn't use a hip new approach - no I went Old School with my old friend the appSettings file attribute. Remember that? It's just a simple way to have all your common appSettings configuration settings in a single file which can be linked to from as many other apps as you like. It's wonderful and I've been using it for a long time now. Unfortunately it's pretty basic in that it's only the appSettings section that can be shared out; no <system.serviceModel> or similar.

But that wasn't really a problem from my perspective. I realised that there were actually very few things that needed to be configurable for my WCF service. Really I wanted a basic WCF harness that could be initialised in code and which implicitly set all the basic configuration with settings that worked (ie defaults, like maximum message size, that were big enough). On top of that I would allow myself to configure just those things that I needed to through the use of my own custom WCF config settings in the shared appSettings.config file.

Once done I massively reduced the size of my configs from frankly gazillions of entries to just these appSettings.config entries which were shared across each of my WCF service clients and by my Windows Service harness:


  <appSettings>
    <add key="WcfBaseAddressForClient" value="net.tcp://localhost:9700/"/>
    <add key="WcfWindowsSecurityApplied" value="true" />
    <!-- Read by Global.WcfOnlyAuthorizedForWcfCredentials below -->
    <add key="WcfOnlyAuthorizedForWcfCredentials" value="true" />
    <add key="WcfCredentialsUserName" value="myUserName" />
    <add key="WcfCredentialsPassword" value="myPassword" />
    <add key="WcfCredentialsDomain" value="myDomain" />
  </appSettings>
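
Each client's own config then needs little more than a pointer to that shared file - the same file attribute you'll see on the Windows Service harness config below (the relative path is just whatever suits your project layout):

  <appSettings file="../Shared/AppSettings.config" />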

And these config settings used only by my Windows Service harness:


  <appSettings file="../Shared/AppSettings.config">
    <add key="WcfBaseAddressForService" value="net.tcp://localhost:9700/"/>
  </appSettings>

Show me your harness

I ended up with quite a nice basic "vanilla" framework that allowed me to quickly set up Windows Service hosted WCF services. The framework also provided me with a simple way to consume these WCF services with a minimum of code and configuration. No muss. No fuss. :-) So pleased with it was I that I thought I'd go through it here much in the manner of a chef baking a cake...

To start with I created myself a Windows Service in Visual Studio which I grandly called "WcfWindowsService". The main service class looked like this:


  public class WcfWindowsService: ServiceBase
  {
    public static string WindowsServiceName = "WCF Windows Service";
    public static string WindowsServiceDescription = "Windows service that hosts a WCF service.";
    
    private static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    public List<ServiceHost> _serviceHosts = null;

    public WcfWindowsService()
    {
      ServiceName = WindowsServiceName;
    }

    public static void Main()
    {
      ServiceBase.Run(new WcfWindowsService());
    }

    /// <summary>
    /// The Windows Service is starting
    /// </summary>
    /// <param name="args"></param>
    protected override void OnStart(string[] args)
    {
      try
      {
        CloseAndClearServiceHosts();

        //Make log4net startup
        XmlConfigurator.Configure();
        _logger.Warn("WCF Windows Service starting...");
        _logger.Info("Global.WcfWindowsSecurityApplied = " + Global.WcfWindowsSecurityApplied.ToString().ToLower());

        if (Global.WcfWindowsSecurityApplied)
        {
          _logger.Info("Global.WcfOnlyAuthorizedForWcfCredentials = " + Global.WcfOnlyAuthorizedForWcfCredentials.ToString().ToLower());

          if (Global.WcfOnlyAuthorizedForWcfCredentials)
          {
            _logger.Info("Global.WcfCredentialsDomain = " + Global.WcfCredentialsDomain);
            _logger.Info("Global.WcfCredentialsUserName = " + Global.WcfCredentialsUserName);
          }
        }

        //Create binding
        var wcfBinding = WcfHelper.CreateBinding(Global.WcfWindowsSecurityApplied);

        // Create a servicehost and endpoints for each service and open each
        _serviceHosts = new List<ServiceHost>();
        _serviceHosts.Add(WcfServiceFactory<IHello>.CreateAndOpenServiceHost(typeof(HelloService), wcfBinding));
        _serviceHosts.Add(WcfServiceFactory<IGoodbye>.CreateAndOpenServiceHost(typeof(GoodbyeService), wcfBinding));

        _logger.Warn("WCF Windows Service started.");
      }
      catch (Exception exc)
      {
        _logger.Error("Problem starting up", exc);

        throw; // rethrow without resetting the original stack trace
      }
    }

    /// <summary>
    /// The Windows Service is stopping
    /// </summary>
    protected override void OnStop()
    {
      CloseAndClearServiceHosts();

      _logger.Warn("WCF Windows Service stopped");
    }

    /// <summary>
    /// Close each service host in the list and then clear the list down
    /// </summary>
    private void CloseAndClearServiceHosts()
    {
      if (_serviceHosts != null)
      {
        foreach (var serviceHost in _serviceHosts)
        {
          CloseAndClearServiceHost(serviceHost);
        }

        _serviceHosts.Clear();
      }
    }

    /// <summary>
    /// Close and clear the passed service host
    /// </summary>
    /// <param name="serviceHost"></param>
    private void CloseAndClearServiceHost(ServiceHost serviceHost)
    {
      if (serviceHost != null)
      {
        _logger.Info(string.Join(", ", serviceHost.BaseAddresses) + " is closing...");

        serviceHost.Close();

        _logger.Info(string.Join(", ", serviceHost.BaseAddresses) + " is closed");
      }
    }
  }

As you've no doubt noticed this makes use of Log4Net for logging purposes (I'll assume you're aware of it). My Windows Service implements such fantastic WCF services as HelloService and GoodbyeService. Each revolutionary in their own little way. To give you a taste of the joie de vivre that these services exemplify take a look at this:


  // Implement the IHello service contract in a service class.
  public class HelloService : WcfServiceAuthorizationManager, IHello
  {
    // Implement the IHello methods.
    public string GreetMe(string thePersonToGreet)
    {
      return "well hello there " + thePersonToGreet;
    }
  }
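
GoodbyeService isn't shown in this post, but you can safely imagine it following exactly the same pattern - something along these lines (the operation name here is purely illustrative):

  // Implement the IGoodbye service contract in a service class
  // (a sketch - assuming GoodbyeService simply mirrors HelloService;
  // the method name is illustrative only)
  public class GoodbyeService : WcfServiceAuthorizationManager, IGoodbye
  {
    public string BidFarewell(string thePersonToLeave)
    {
      return "goodbye then " + thePersonToLeave;
    }
  }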

Exciting! WcfWindowsService also references another class called "Global" which is a helper class - to be honest not much more than a wrapper for my config settings. It looks like this:


  public static class Global
  {
    #region Properties

    // eg "net.tcp://localhost:9700/"
    public static string WcfBaseAddressForService { get { return ConfigurationManager.AppSettings["WcfBaseAddressForService"]; } }

    // eg true
    public static bool WcfWindowsSecurityApplied { get { return bool.Parse(ConfigurationManager.AppSettings["WcfWindowsSecurityApplied"]); } }

    // eg true
    public static bool WcfOnlyAuthorizedForWcfCredentials { get { return bool.Parse(ConfigurationManager.AppSettings["WcfOnlyAuthorizedForWcfCredentials"]); } }

    // eg "myDomain"
    public static string WcfCredentialsDomain { get { return ConfigurationManager.AppSettings["WcfCredentialsDomain"]; } }

    // eg "myUserName"
    public static string WcfCredentialsUserName { get { return ConfigurationManager.AppSettings["WcfCredentialsUserName"]; } }

    // eg "myPassword" - this should *never* be stored unencrypted and is only ever used by clients that are not already running with the approved Windows credentials
    public static string WcfCredentialsPassword { get { return ConfigurationManager.AppSettings["WcfCredentialsPassword"]; } }

    #endregion
  }

WcfWindowsService creates and hosts a HelloService and a GoodbyeService when it starts up. It does this using my handy WcfServiceFactory:


  public class WcfServiceFactory<TInterface>
  {
    private static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    public static ServiceHost CreateAndOpenServiceHost(Type serviceType, NetTcpBinding wcfBinding)
    {
      var serviceHost = new ServiceHost(serviceType, new Uri(Global.WcfBaseAddressForService + ServiceHelper<TInterface>.GetServiceName()));
      serviceHost.AddServiceEndpoint(typeof(TInterface), wcfBinding, "");
      serviceHost.Authorization.ServiceAuthorizationManager = new WcfServiceAuthorizationManager(); // This allows us to control authorisation within WcfServiceAuthorizationManager
      serviceHost.Open();

      _logger.Info(string.Join(", ", serviceHost.BaseAddresses) + " is now listening.");

      return serviceHost;
    }
  }

To do this it also uses my equally handy WcfHelper class:


  public static class WcfHelper
  {
    /// <summary>
    /// Create a NetTcpBinding
    /// </summary>
    /// <param name="useWindowsSecurity"></param>
    /// <returns></returns>
    public static NetTcpBinding CreateBinding(bool useWindowsSecurity)
    {
      var wcfBinding = new NetTcpBinding();
      if (useWindowsSecurity)
      {
        wcfBinding.Security.Mode = SecurityMode.Transport;
        wcfBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
      }
      else
        wcfBinding.Security.Mode = SecurityMode.None;

      wcfBinding.MaxBufferSize = int.MaxValue;
      wcfBinding.MaxReceivedMessageSize = int.MaxValue;
      wcfBinding.ReaderQuotas.MaxArrayLength = int.MaxValue;
      wcfBinding.ReaderQuotas.MaxDepth = int.MaxValue;
      wcfBinding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
      wcfBinding.ReaderQuotas.MaxBytesPerRead = int.MaxValue;

      return wcfBinding;
    }
  }

  /// <summary>
  /// Create a WCF Client for use anywhere (be it Windows Service or ASP.Net web application)
  /// nb Credential fields are optional and only likely to be needed by web applications
  /// </summary>
  /// <typeparam name="TInterface"></typeparam>
  public class WcfClientFactory<TInterface>
  {
    public static TInterface CreateChannel(bool useWindowsSecurity, string wcfBaseAddress, string wcfCredentialsUserName = null, string wcfCredentialsPassword = null, string wcfCredentialsDomain = null)
    {
      //Create the NetTcpBinding used universally (same helper the service host uses)
      var wcfBinding = WcfHelper.CreateBinding(useWindowsSecurity);

      //Get Service name from examining the ServiceNameAttribute decorating the interface
      var serviceName = ServiceHelper<TInterface>.GetServiceName();

      //Create the factory for creating your channel
      var factory = new ChannelFactory<TInterface>(
        wcfBinding,
        new EndpointAddress(wcfBaseAddress + serviceName)
        );

      //if credentials have been supplied then use them
      if (!string.IsNullOrEmpty(wcfCredentialsUserName))
      {
        factory.Credentials.Windows.ClientCredential = new System.Net.NetworkCredential(wcfCredentialsUserName, wcfCredentialsPassword, wcfCredentialsDomain);
      }

      //Create the channel
      var channel = factory.CreateChannel();

      return channel;
    }
  }

Now the above WcfHelper class and its comrade-in-arms the WcfClientFactory don't live in the WcfWindowsService project with the other classes. No. They live in a separate project called the WcfWindowsServiceContracts project with their old mucker the ServiceHelper:


  public class ServiceHelper<T>
  {
    public static string GetServiceName()
    {
      var customAttributes = typeof(T).GetCustomAttributes(false);
      if (customAttributes.Length > 0)
      {
        foreach (var customAttribute in customAttributes)
        {
          if (customAttribute is ServiceNameAttribute)
          {
            return ((ServiceNameAttribute)customAttribute).ServiceName;
          }
        }
      }

      throw new ArgumentException("Interface is missing ServiceNameAttribute");
    }
  }

  [AttributeUsage(AttributeTargets.Interface, AllowMultiple = false)]
  public class ServiceNameAttribute : System.Attribute
  {
    public ServiceNameAttribute(string serviceName)
    {
      this.ServiceName = serviceName;
    }

    public string ServiceName { get; set; }
  }

Now can you guess what the WcfWindowsServiceContracts project might contain? Yes; contracts for your services (oh the excitement)! What might one of these contracts look like I hear you ask... Well, like this:


  [ServiceContract()]
  [ServiceName("HelloService")]
  public interface IHello
  {
    [OperationContract]
    string GreetMe(string thePersonToGreet);
  }
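
With the ServiceName attribute in place, resolving the endpoint name for a contract is a one-liner; this is exactly what WcfServiceFactory and WcfClientFactory are doing when they build their addresses:

  // Reads the [ServiceName] attribute from the decorated contract interface
  var serviceName = ServiceHelper<IHello>.GetServiceName(); // "HelloService"

  // ...which gets appended to the base address,
  // eg "net.tcp://localhost:9700/" + "HelloService"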

The WcfWindowsServiceContracts project is included in *any* WCF client solution that wants to call your WCF services. It is also included in the WCF service solution. It facilitates the calling of services. What you're no doubt wondering is how this might be achieved. Well here's how: it uses our old friend the WcfClientFactory:


  var helloClient = WcfClientFactory<IHello>
    .CreateChannel(
      useWindowsSecurity:     Global.WcfWindowsSecurityApplied,  // eg true
      wcfBaseAddress:         Global.WcfBaseAddressForClient,    // eg "net.tcp://localhost:9700/"
      wcfCredentialsUserName: Global.WcfCredentialsUserName,     // eg "myUserName" - Optional parameter - only passed by web applications that need to impersonate the valid user
      wcfCredentialsPassword: Global.WcfCredentialsPassword,     // eg "myPassword" - Optional parameter - only passed by web applications that need to impersonate the valid user
      wcfCredentialsDomain:   Global.WcfCredentialsDomain        // eg "myDomain" - Optional parameter - only passed by web applications that need to impersonate the valid user
    );
  var greeting = helloClient.GreetMe("John"); //"well hello there John"

See? Simple as simple. The eagle-eyed amongst you will have noticed that the client example above is using "Global" which is essentially a copy of the Global class mentioned above that is part of the WcfWindowsService project.
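
One thing the snippet above glosses over is cleanup. The proxy that ChannelFactory hands back also implements the standard WCF ICommunicationObject plumbing, so when you're finished with it you should close it (or abort it if it has faulted). A minimal sketch:

  // The channel also implements ICommunicationObject, so close it when done
  var comms = (ICommunicationObject)helloClient;
  try
  {
    comms.Close();
  }
  catch (CommunicationException)
  {
    comms.Abort(); // Close failed - abort to release the channel
  }
  catch (TimeoutException)
  {
    comms.Abort();
  }

You could equally bake that into a small helper alongside WcfClientFactory; the factory itself stays unchanged.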

Locking down Authorization to a single Windows account

I can tell you think I've forgotten something. "Tell me about this locking down to the single Windows account / what is this mysterious WcfServiceAuthorizationManager class that all your WCF services inherit from? Don't you fob me off now.... etc"

Well, ensuring that only a single Windows account is authorised (yes dammit the original English spelling) to access our WCF services is achieved by implementing our own ServiceAuthorizationManager class. This implementation is registered against each ServiceHost (that's the Authorization.ServiceAuthorizationManager assignment you saw in WcfServiceFactory above) and the authorisation logic sits in the overridden CheckAccessCore method, which WCF evaluates each time one of our services is called. You'll have noticed that the service classes themselves also inherit from WcfServiceAuthorizationManager, but it's the ServiceHost registration that actually hooks CheckAccessCore into the pipeline.

As you can see from the code below, depending on our configuration, we lock down access to all our WCF services to a specific Windows account. This is far from the only approach that you might want to take to authorisation; it's simply the one that we've been using. However the power of being able to implement your own authorisation in the CheckAccessCore method allows you the flexibility to do pretty much anything you want:


  public class WcfServiceAuthorizationManager : ServiceAuthorizationManager
  {
    protected static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    protected override bool CheckAccessCore(OperationContext operationContext)
    {
      if (Global.WcfWindowsSecurityApplied)
      {
        if ((operationContext.ServiceSecurityContext.IsAnonymous) ||
          (operationContext.ServiceSecurityContext.PrimaryIdentity == null))
        {
          _logger.Error("WcfWindowsSecurityApplied = true but no credentials have been supplied");
          return false;
        }

        if (Global.WcfOnlyAuthorizedForWcfCredentials)
        {
          if (operationContext.ServiceSecurityContext.PrimaryIdentity.Name.ToLower() == Global.WcfCredentialsDomain.ToLower() + "\\" + Global.WcfCredentialsUserName.ToLower())
          {
            _logger.Debug("WcfOnlyAuthorizedForWcfCredentials = true and the valid user (" + operationContext.ServiceSecurityContext.PrimaryIdentity.Name + ") has been supplied and access allowed");
            return true;
          }
          else
          {
            _logger.Error("WcfOnlyAuthorizedForWcfCredentials = true and an invalid user (" + operationContext.ServiceSecurityContext.PrimaryIdentity.Name + ") has been supplied and access denied");
            return false;
          }
        }
        else
        {
          _logger.Debug("WcfOnlyAuthorizedForWcfCredentials = false, credentials were supplied (" + operationContext.ServiceSecurityContext.PrimaryIdentity.Name + ") so access allowed");
          return true;
        }
      }
      else
      {
        _logger.Info("WcfWindowsSecurityApplied = false so we are allowing unfettered access");
        return true;
      }
    }
  }

Phewwww... I know this has ended up as a bit of a brain dump but hopefully people will find it useful. At some point I'll try to put up the above solution on GitHub so people can grab it easily for themselves.

Saturday, 17 March 2012

Using the PubSub / Observer pattern to emulate constructor chaining without cluttering up global scope

Yes the title of this post is *painfully* verbose. Sorry about that.

Couple of questions for you:
  • Have you ever liked the way you can have base classes in C# which can then be inherited and subclassed in a different file / class?
  • Have you ever thought; gosh it'd be nice to do something like that in JavaScript...
  • Have you then looked at JavaScript's prototypical inheritance and thought "right.... I'm sure it's possible but this is going to end up like War and Peace"
  • Have you then subsequently thought "and hold on a minute... even if I did implement this using the prototype and split things between different files / modules wouldn't I have to pollute the global scope to achieve that? And wouldn't that mean that my code was exposed to the vagaries of any other scripts on the page? Hmmm..."
  • Men! Are you skinny? Do bullies kick sand in your face? (Just wanted to see if you were still paying attention...)

The Problem

Well, the above thoughts occurred to me just recently. I had a situation where I was working on an MVC project and needed to build up quite large objects within JavaScript representing various models. The models in question were already implemented on the server side using classes and made extensive use of inheritance because many of the properties were shared between the various models. That is to say we would have models which were implemented through the use of a class inheriting a base class which in turn inherits a further base class. With me? Good.

Perhaps I can make it a little clearer with an example. Here are my 3 classes. First BaseReilly.cs:

    public class BaseReilly
    {
        public string LastName { get; set; }

        public BaseReilly()
        {
            LastName = "Reilly";
        }
    }

Next BoyReilly.cs (which inherits from BaseReilly):

    public class BoyReilly : BaseReilly
    {
        public string Sex { get; set; }

        public BoyReilly()
            : base()
        {
            Sex = "It is a manchild";
        }
    }

And finally JohnReilly.cs (which inherits from BoyReilly which in turn inherits from BaseReilly):

    public class JohnReilly : BoyReilly
    {
        public string FirstName { get; set; }

        public JohnReilly()
            : base()
        {
            FirstName = "John";
        }
    }

Using the above I can create myself my very own "JohnReilly" like so:

var johnReilly = new JohnReilly();

And it will look like this:
[Image: JohnReilly made in C#]

I was looking to implement something similar on the client and within JavaScript. I was keen to ensure code reuse. And my inclination to keep things simple made me wary of making use of the prototype. It is undoubtedly powerful but I don't think even the mighty Crockford would consider it "simple". Also I had reservations about exposing my object to the global scope.

So what to do? I had an idea....

The Big Idea

For a while I've been making explicit use of the Observer pattern in my JavaScript, better known by most as the publish/subscribe (or "PubSub") pattern. There's a million JavaScript libraries that facilitate this and after some experimentation I finally settled on higgins' implementation as it's simple and I saw a JSPerf which demonstrated it as either the fastest or second fastest in class.

Up until now my main use for it had been to facilitate loosely coupled GUI interactions. If I wanted one component on the screen to influence another's behaviour I simply needed to get the first component to publish out the relevant events and the second to subscribe to these self-same events.

One of the handy things about publishing out events this way is that with them you can also include data. This data can be useful when driving the response in the subscribers.

However, it occurred to me that it would be equally possible to pass an object when publishing an event. And the subscribers could enrich that object with data as they saw fit.

Now this struck me as a pretty useful approach. It's not rock solid secure as it's always possible that someone could subscribe to your events and get access to your object as you publish it out. However, that's pretty unlikely to happen accidentally; certainly far less likely than someone else's global object clashing with your global object.

What might this look like in practice?

So this is what it ended up looking like when I turned my 3 classes into JavaScript files / modules. First BaseReilly.js:

$(function () {

    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.LastName = "Reilly";
    });
});

Next BoyReilly.js:

$(function () {

    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.Sex = "It is a manchild";
    });
});

And finally JohnReilly.js:

$(function () {

    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.FirstName = "John";
    });
});

If the above scripts have been included in a page I can create myself my very own "JohnReilly" in JavaScript like so:

    var oJohnReilly = {}; //Empty object
    
    $.publish("PubSub.Inheritance.Emulation", [oJohnReilly]); //Empty object "published" so it can be enriched by subscribers

    console.log(JSON.stringify(oJohnReilly)); //Show me this thing you call "JohnReilly"

And it will look like this:
[Image: JohnReilly made in JavaScript]

And it works. Obviously the example I've given above is somewhat naive - in reality my object properties are driven by GUI components rather than hard-coded. But I hope this illustrates the point.

This technique allows you to simply share functionality between different JavaScript files and so keep your codebase tight. I certainly wouldn't recommend it for all circumstances but when you're doing something as simple as building up an object to be used to pass data around (as I am) then it works very well indeed.

A Final Thought on Script Ordering

A final thing that may be worth mentioning is script ordering. The order in which functions are called is driven by the order in which subscriptions are made.

In my example I was registering the scripts in this order:

        <script src="/Scripts/PubSubInheritanceDemo/BaseReilly.js"></script>
        <script src="/Scripts/PubSubInheritanceDemo/BoyReilly.js"></script>
        <script src="/Scripts/PubSubInheritanceDemo/JohnReilly.js"></script>

So when my event was published out the functions in the above JS files would be called in this order:
  1. BaseReilly.js
  2. BoyReilly.js
  3. JohnReilly.js

If you were so inclined you could use this to emulate inheritance in behaviour. Eg you could set a property in BaseReilly.js which was subsequently overridden in JohnReilly.js or BoyReilly.js if you so desired. I'm not doing that myself but it occurred to me as a possibility.

PS

If you're interested in learning more about JavaScript stabs at inheritance you could do far worse than look at Bob Ince's in-depth StackOverflow answer.

Monday, 12 March 2012

Striving for (JavaScript) Convention

Update

The speed of change makes fools of us all. Since I originally wrote this post all of 3 weeks ago Visual Studio 11 beta has been released and the issues I was seeking to solve have pretty much been resolved by the new innovations found therein. It's nicely detailed in @carlbergenhem's blog post: My Top 5 Visual Studio 11 Designer Improvements for ASP.NET 4.5 Development.

I've left the post in place below but much of what I said (particularly with regard to Hungarian Notation) I've now moved away from. That was originally my intention anyway so that's no bad thing. The one HN artefact that I've held onto is using "$" as a prefix for jQuery objects. I think that still makes sense.

I would have written my first line of JavaScript in probably 2000. It probably looked something like this: alert('hello world').

I know. Classy.

As I've mentioned before it was around 2010 before I took JavaScript in any way seriously. Certainly it was then that I started to actively learn the language. Because up until this point I'd been studiously avoiding writing any JavaScript at all, I'd never really given thought to forms and conventions. When I wrote any JavaScript I just used the same style and approaches as I used in my main development language (of C#). By and large I have been following the .net naming conventions which are ably explained by Pete Brown here.

Over time I have started to move away from this approach. Without a deliberate intention to do so I have found myself adopting a different style for my JavaScript code as compared with anything else I write. I wouldn't go so far as to say I'm completely happy with the style I'm currently using. But I find it more helpful than not and thought it might be worth talking about.

It was really 2 things that started me down the road of "rolling my own" convention: dynamic typing and the lack of safety nets. Let's take each in turn....

1. Dynamic typing
Having grown up (in a development sense) using compiled and strongly-typed languages I was used to the IDE making it pretty clear what was what through friendly tooltips and the like:


[Image: Doesn't Visual Studio just spoil us?]

JavaScript is loosely / dynamically typed (occasionally called "untyped" but let's not go there). This means that the IDE can't easily determine what's what. So no tooltips for you sunshine.

2. The lack of safety nets / running with scissors
Now I've come to love it but what I realised pretty quickly when getting into JavaScript was this: you are running with scissors. If you're not careful and you don't take precautions it can get bloody, quickly.

If I'm writing C# I have a lot of safety nets. Not the least of which is "does it compile"? If I declare an integer and then subsequently try to assign a string value to it, it won't let me. But JavaScript is forgiving. Some would say too forgiving. Let's do something mad:


var iAmANumber = 77;

console.log(iAmANumber); //Logs a number

iAmANumber = "It's a string";

console.log(iAmANumber); //Logs a string

iAmANumber = { 
  description: "I am an object"
};

console.log(iAmANumber); //Logs an object

iAmANumber = function (myVariable) {

  console.log(myVariable);
}

console.log(iAmANumber); //Logs a function
iAmANumber("I am not a number, I am a free man!"); //Calls a function which performs a log

Now if I were to attempt something similar in C#? Fuggedaboudit. But JavaScript? No, I'm romping home free:


[Image: Sir, your tolerance is extreme...]

Now I'm not saying that you should ever do the above, and thinking about it I can't think of a situation where you'd want to (suggestions on a postcard). But the point is it's possible. And because it's possible to do this deliberately, it's doubly possible to do this accidentally. My point is this: it's easy to make bugs in JavaScript.

What Katy Johnny Did Next

I'd started making more and more extensive use of JavaScript. I was beginning to move in the direction of using the single-page application approach (<sideNote>although more in the sense of giving application style complexity to individual pages rather than ensuring that entire applications ended up in a single page</sideNote>).

This meant that whereas in the past I'd had the occasional 2 lines of JavaScript I now had a multitude of functions which were all interacting in response to user input. All these functions would contain a number of different variables. As well as this I was making use of jQuery for both Ajax purposes and to smooth out the DOM inconsistencies between various browsers. This only added to the mix as variables in one of my functions could be any one of the following:
  • a number
  • a string
  • a boolean
  • a date
  • an object
  • an array
  • a function
  • a jQuery object - not strictly a distinct JavaScript type obviously but treated pretty much as one in the sense that it has particular functions / properties etc associated with it

As I started doing this sort of work I made no changes to my coding style. Wherever possible I did *exactly* what I would have been doing in C# in JavaScript. And it worked fine. Until....

Okay there is no "until" as such, it did work fine. But what I found was that I would do a piece of work, check it into source control, get users to test it, release the work into Production and promptly move onto the next thing. However, a little way down the line there would be a request to add a new feature or perhaps a bug was reported and I'd find myself back looking at the code. And, as is often the case, despite the comments I would realise that it wasn't particularly clear why something worked in the way it did. (Happily it's not just me that has this experience; paranoia has led me to ask many a fellow developer and they have confessed to similar)

When it came to bug hunting in particular I found myself cursing the lack of friendly tooltips and the like. Each time I wanted to look at a variable I'd find myself tracking back through the function, looking for the initial use of the variable to determine the type. Then I'd be tracking forward through the function for each subsequent use to ensure that it conformed.

Distressingly, I would find examples of where it looked like I'd forgotten the type of the variable towards the end of a function (for which I can only, regrettably, blame myself). Most commonly I would have a situation like this:


var tableCell = $("#ItIsMostDefinitelyATableCell"); //I jest ;-)

/* ...THERE WOULD BE SOME CODE DOING SOMETHING HERE... */

tableCell.className = "makeMeProminent"; //Oh dear - not good.

You see what happened above? I forgot I had a jQuery object and instead treated it like it was a standard DOM element. Oh dear.

Spinning my own safety net; Hungarian style

After I'd experienced a few of the situations described above I decided that steps needed to be taken to minimise the risk of this. In this case, I decided that "steps" meant Hungarian notation. I know. I bet you're wincing right now.

For those of you that don't remember, HN was pretty much the standard way of coding at one point (although at the point that I started coding professionally it had already started to decline). It was adopted in simpler times long before the modern IDEs that tell you what each variable is became the norm. Back when you couldn't be sure of the types you were dealing with. In short, kind of like my situation with JavaScript right now.

There's not much to it. By and large HN simply means having a lowercase prefix of 1-3 characters on all your variables indicating type. It doesn't solve all your problems. It doesn't guarantee to stop bugs. But because each use of a variable implicitly indicates its type it makes bugs more glaringly obvious. This means when writing code I'm less likely to misuse a variable (eg iNum = "JIKJ") because part of my brain would be bellowing: "that just looks wrong... pay better attention lad!". Likewise, if I'm scanning through some JavaScript and searching for a bug then this can make it more obvious.

Here's some examples of different types of variables declared using the style I have adopted:

var iInteger = 4;
var dDecimal = 10.50;
var sString = "I am a string";
var bBoolean = true;
var dteDate = new Date();
var oObject = {
  description: "I am an object"
};
var aArray = [34, 77];
var fnFunction = function () {
  //Do something
};
var $jQueryObject = $("#ItIsMostDefinitelyATableCell");

Some of you have read this and thought "hold on a minute... JavaScript doesn't have integers / decimals etc". You're quite right. My style is not specifically stating the type of a variable. More it is seeking to provide a guide on how a variable should be used. JavaScript does not have integers. But oftentimes I'll be using a number variable which I will only ever want to treat as an integer. And so I'll name it accordingly.

Spinning a better safety net; DOJO style

I would be the first to say that alternative approaches are available. And here's one I recently happened upon that I rather like the look of: look 2/3rds down at the parameters section of the DOJO styleguide

Essentially they advise specifying parameter types through the use of prefixed comments. See the examples below:

function(/*String*/ foo, /*int*/ bar)...
or

function(/*String?*/ foo, /*int*/ bar, /*String[]?*/ baz)...

I really rather like this approach and I'm thinking about starting to adopt it. It's not possible in Hungarian Notation to be so clear about the purpose of a variable. At least not without starting to adopt all kinds of kooky conventions that take in all the possible permutations of variable types. And if you did that you'd really be defeating yourself anyway as it would simply reduce the clarity of your code and make bugs more likely.

Spinning a better safety net; unit tests

Despite being quite used to writing unit tests for all my server-side code I have not yet fully embraced unit testing on the client. Partly I've been holding back because of the variety of JavaScript testing frameworks available. I wasn't sure which to start with.

But given that it is so easy to introduce bugs into JavaScript I have come to the conclusion that it's better to have some tests in place rather than none. Time to embrace the new.

Conclusion

I've found using Hungarian Notation useful whilst working in JavaScript. Not everyone will feel the same and I think that's fair enough; within reason I think it's generally a good idea to go with what you find useful.

However, I am giving genuine consideration to moving to the DOJO style and moving back to my more standard camel-cased variable names instead of Hungarian Notation. Particularly since I strive to keep my functions short with the view that ideally each should do 1 thing well. Keep it simple etc...

And so in a perfect world the situation of forgetting a variable's purpose shouldn't really arise... I think once I've got up and running with JavaScript unit tests I may make that move. Hungarian Notation may have proved to be just a stop-gap measure until better techniques were employed...

Saturday, 3 March 2012

jQuery Unobtrusive Remote Validation

Just recently I have been particularly needing to make use of remote / server-side validation in my ASP.NET MVC application and found that the unobtrusive way of using this seemed to be rather inadequately documented (of course it's possible that it's well documented and I just didn't find the resources). Anyway I've rambled on much longer than I intended to in this post so here's the TL;DR:

  • You *can* use remote validation driven by unobtrusive data attributes
  • Using remote validation you can supply *multiple* parameters to be evaluated
  • It is possible to block validation and force it to be re-evaluated - although using a slightly hacky method which I document here. For what it's worth I acknowledge up front that this is *not* an ideal solution but it does seem to work. I really hope there is a better solution out there and if anyone knows about it then please get in contact and let me know.

Off we go... So, jQuery unobtrusive validation; clearly the new cool right?

I'd never been particularly happy with the validation that I had traditionally been using with ASP.NET classic. It worked... but it always seemed a little... clunky? I realise that's not the most well expressed concern. For basic scenarios it seemed fine, but I have recollections of going through some pain as soon as I stepped outside of the basic form validation. Certainly when it came to validating custom controls that we had developed it never seemed entirely straightforward to get validation to play nice.

Based on this I was keen to try something new and the opportunity presented itself when we started integrating MVC into our classic WebForms app. (By the way if you didn't know that MVC and WebForms could live together in perfect harmony, well, they can! And a good explanation on how to achieve it is offered by Colin Farr here.)

Jörn Zaefferer came out with the jQuery validation plug-in way back in 2006. And mighty fine it is too. Microsoft (gor' bless 'em) really brought something new to the jQuery validation party when they came out with their unobtrusive JavaScript validation library along with MVC 3. What this library does, in short, is allow jQuery validation to be driven by data-val-* attributes alone as long as the jquery.validate.js and jquery.validate.unobtrusive.js libraries are included in the page (I have assumed you are already including jQuery). I know; powerful stuff!

A good explanation of unobtrusive validation is given by Brad Wilson here.

Anyway, to my point: what about remote validation? That is to say, what about validation which needs to go back to the server to perform the necessary tests? Well I struggled to find decent examples of how to use this. Those that I did find seemed to universally be php examples; not so useful for an ASP.NET user. Also, when I did root out an ASP.NET example there seemed to be a fundamental flaw. Namely, if remote validation hadn't been triggered and completed successfully then the submit could fire anyway. This seems to be down to the asynchronous nature of the test; ie because it is *not* synchronous there is no "block" to the submit. And out of the box with unobtrusive validation there seems no way to make this synchronous. I could of course wire this up manually and simply side-step the restrictions of unobtrusive validation but that wasn't what I wanted.

Your mission John, should you decide to accept it, is this: block the submit until remote validation has completed successfully. As always, should you or any of your I.M. Force be caught or killed, the Secretary will disavow any knowledge of your actions.

So that's what I wanted to do. Make it act like it's synchronous even though it's asynchronous. Bit horrible but I had a deadline to meet and so this is my pragmatic solution. There may be better alternatives but this worked for me.

First of all the HTML:


<form action="/Dummy/ValidationDemo.mvc/SaveUser" 
    id="ValidationForm" method="post">  

  First name: 
  <input data-val="true" data-val-required="First Name required" 
      id="FirstName" name="FirstName" type="text" value="" />

  Last name: 
  <input data-val="true" data-val-required="Last Name required" 
      id="LastName" name="LastName" type="text" value="" />

  User name: 
  <input id="UserName" name="UserName" type="text" value=""
    data-val="true" 
    data-val-required="You must enter a user name before we can validate it remotely"
    data-val-remote="&#39;UserNameInput&#39; is invalid."
    data-val-remote-additionalfields="*.FirstName,*.LastName" 
    data-val-remote-url="/Dummy/ValidationDemo/IsUserNameValid" />

  <input id="SaveMyDataButton" name="SaveMyDataButton" 
      type="button" value="Click to Save" />
</form>

I should mention that on my actual page (a cshtml partial view) the HTML for the inputs is generated by the use of the InputExtensions.TextBoxFor method which is lovely. It takes your model and, using the validation attributes that decorate your model's properties, it generates the relevant jQuery unobtrusive validation data attributes so you don't have to do it manually.

But for the purposes of seeing what's "under the bonnet" I thought it would be more useful to post the raw HTML so it's entirely clear what is being used. Also there doesn't appear to be a good way (that I've yet seen) for automatically generating Remote validation data attributes in the way that I've found works. So I'm manually specifying the data-val-remote-* attributes using the htmlAttributes parameter of the TextBoxFor (using "_" to replace "-" obviously).
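
To make that concrete, the UserName input above could be generated with something along these lines (a sketch, assuming a view model with FirstName / LastName / UserName properties; the data-val / data-val-required attributes come from the model's validation attributes, so only the remote ones are specified by hand via htmlAttributes):

@* Underscores in the anonymous object's property names are converted to dashes by MVC *@
@Html.TextBoxFor(model => model.UserName, new
{
    data_val_remote = "'UserNameInput' is invalid.",
    data_val_remote_additionalfields = "*.FirstName,*.LastName",
    data_val_remote_url = "/Dummy/ValidationDemo/IsUserNameValid"
})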

Next the JavaScript that performs the validation:


$(document).ready(function () {

  var intervalId = null,

  //
  // DECLARE FUNCTION EXPRESSIONS
  //

  //======================================================
  // function that triggers update when remote validation 
  // completes successfully
  //======================================================
  pendingValidationComplete = function () {

    var i, errorList, errorListForUsers;
    var $ValidationForm = $("#ValidationForm");
    if ($ValidationForm.data("validator").pendingRequest === 0) {

      clearInterval(intervalId);

      //Force validation to present to user 
      //(this will *not* retrigger remote validation)
      if ($ValidationForm.valid()) {

        alert("Validation has succeeded - you can now submit");
      }
      else {

        //Validation failed! 
        errorList = $ValidationForm.data("validator").errorList;
        errorListForUsers = [];
        for (i = 0; i < errorList.length; i++) {
          errorListForUsers.push(errorList[i].message);
        }

        alert(errorListForUsers.join("\r\n"));
      }
    }
  },

  //======================================================
  // Trigger validation
  //======================================================
  triggerValidation = function (evt) {

    //Remove cached values where remote validation is concerned
    // so remote validation is retriggered
    $("#UserName").removeData("previousValue");

    //Trigger validation
    $("#ValidationForm").valid();

    //Setup interval which will evaluate validation 
    //(this approach because of remote validation)
    intervalId = setInterval(pendingValidationComplete, 50);
  };

  //
  //ASSIGN EVENT HANDLERS
  //
  $("#SaveMyDataButton").click(triggerValidation);
});

And finally the Controller:


public JsonResult IsUserNameValid(string UserName,
                                  string FirstName, 
                                  string LastName)
{
  var userNameIsUnique = IsUserNameUnique(UserName);
  if (userNameIsUnique)
    return Json(true, JsonRequestBehavior.AllowGet);
  else
    return Json(string.Format(
                  "{0} is already taken I'm afraid {1} {2}",
                  UserName, FirstName, LastName), 
                JsonRequestBehavior.AllowGet);
}

private bool IsUserNameUnique(string potentialUserName)
{
  return false;
}

So what happens here exactly? Well it's like this:

  1. The user enters their first name, last name and desired user name and hits the "Click to Save" button.

  2. This forces validation by first removing any cached validation values stored in previousValue data attribute and then triggering the valid method.

    Disclaimer: I KNOW THIS IS A LITTLE HACKY. I would have expected there would be some way in the API to manually re-force validation. Unless I've missed something there doesn't appear to be. (And the good citizens of Stack Overflow would seem to concur.) I would guess that the underlying assumption is that if nothing has changed on the client then that's all that matters. Clearly that's invalid for our remote example given that a username could be "claimed" at any time; eg in between people first entering their username (when validation should have fired automatically) and actually submitting the form. Anyway - this approach seems to get us round the problem.

  3. When validation takes place the IsUserNameValid action / method on our controller will be called. It's important to note that I have set up a method that takes 3 inputs: UserName, which is supplied by default as the UserName input is the one decorated with the remote validation attributes, as well as the 2 extra inputs of FirstName and LastName. In the example I've given I don't actually need these extra inputs. I'm doing this because I know that I have situations in remote validation where I *need* to supply multiple inputs and so essentially I did it here as a proof of concept.

    The addition of these 2 extra inputs was achieved through the use of the data-val-remote-additionalfields attribute. When searching for documentation about this I found absolutely none. I assume there is some out there - if anyone knows then I'd be very pleased to learn about it. I only learned about it in the end by finding an example of someone using this out in the great wide world and understanding how to use it based on their example.

    To understand how the data-val-remote-additionalfields attribute works you can look at jquery.validate.unobtrusive.js. If you're just looking to get up and running then I found that the following works:
    data-val-remote-additionalfields="*.FirstName,*.LastName"

    You will notice that:
    - Each parameter is supplied in the format *.[InputName] and inputs are delimited by ","'s
    - Name is a required attribute for an input if you wish it to be evaluated with unobtrusive validation. (Completely obvious statement I realise; I'm writing that sentence more for my benefit than yours)
    - Finally, our validation always fails. That's deliberate - I just wanted to be clear on the approach used to get remote unobtrusive validation with extra parameters up and running.

  4. Using setInterval we intend to trigger the pendingValidationComplete function to check if remote validation has completed every 50ms - again I try to avoid setInterval wherever possible but this seems to be the most sensible solution in this case.

  5. When the remote request finally completes (ie when pendingRequest has a value of 0) then we can safely proceed on the basis of our validation results. In the example above I'm simply alerting to the screen based on my results; this is *not* advised for any finished work; I'm just using this mechanism here to demonstrate the principle.

[Image: Validation in action]

Well I've gone on for far too long but I am happy to have an approach that does what I need. It does feel like a slightly hacky solution and I expect that there is a better approach for this that I'm not aware of. As much as anything else I've written this post in the hope that someone who knows this better approach will set me straight. In summary, this works. But if you're aware of a better solution then please do get in contact - I'd love to know!

PS: Just in case you're in the process of initially getting up and running with unobtrusive validation I've listed below a couple of general helpful bits of config etc:

The following setting is essential for Application_Start in Global.asax.cs:


DataAnnotationsModelValidatorProvider.AddImplicitRequiredAttributeForValueTypes = false;
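
In context that's just one extra line in Application_Start - roughly like this, assuming the standard MVC 3 project template:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    // Stop non-nullable value types being implicitly treated as [Required]
    DataAnnotationsModelValidatorProvider.AddImplicitRequiredAttributeForValueTypes = false;
}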

The following settings should be used in your Web.Config:


<appSettings>
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>

My example used the following scripts:


<script src="Scripts/jquery-1.7.1.js"></script>
<script src="Scripts/jquery.validate.js"></script>
<script src="Scripts/jquery.validate.unobtrusive.js"></script>
<script src="Scripts/ValidationDemo.js"></script>