Friday 13 December 2013

NuGet and WebMatrix: How to install a specific version of a package

I've recently been experimenting with WebMatrix. If you haven't heard of it, WebMatrix is Microsoft's "free, lightweight, cloud-connected web development tool". All marketing aside, it's pretty cool. You can whip up a site in next to no time; it has source control, publishing abilities, Intellisense. Much good stuff. And one thing it has that I genuinely hadn't expected: NuGet. Brilliant!

But like any free product there are disadvantages. As a long-time Visual Studio user I've become very used to the power of the NuGet command line. I've been spoiled. You don't have this in WebMatrix. You have a nice UI that looks like this:

Looks great right? However, if you want to install a specific version of a NuGet package... well let's see what happens...

As you're probably aware, jQuery currently exists in 2 branches: the 1.10.x branch, which supports IE 6-8, and the 2.0.x branch, which doesn't. However, there is only one jQuery package on NuGet. Let's click on install and see if we can select a specific version:

Hmmm.... As you can see it's 2.0.3 or bust. We can't select a specific version; we're forced to go with the latest and greatest, which is a problem if you need to support IE 6-8. So the obvious strategy if you're in this particular camp is to forgo NuGet entirely. Go old school. And we could. But let's say we want to keep using NuGet, mindful that a little while down the road we'll be ready to do that upgrade. Can it be done? Let's find out.

NuGet, by hook or by crook

I've created a new site in WebMatrix using the Empty Site template. Looks like this:

Lovely.

Now to get me some jQuery 1.10.2 goodness. To the console, Batman! We've already got the NuGet command line installed (if you haven't, you can get it from here) and so we follow these steps (summarised as console commands after the list):

  • At the C:\ prompt we enter nuget install jQuery -Version 1.10.2 and down comes jQuery 1.10.2.
  • We move C:\jQuery.1.10.2 to C:\Users\me\Documents\My Web Sites\Empty Site\App_Data\packages\jQuery.1.10.2.
  • Then we delete the C:\Users\me\Documents\My Web Sites\Empty Site\App_Data\packages\jQuery.1.10.2\Tools subfolder.
  • We move C:\Users\me\Documents\My Web Sites\Empty Site\App_Data\packages\jQuery.1.10.2\Content\Scripts to C:\Users\me\Documents\My Web Sites\Empty Site\Scripts.
  • And finally we delete the C:\Users\me\Documents\My Web Sites\Empty Site\App_Data\packages\jQuery.1.10.2\Content folder.
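
For reference, the same steps expressed as console commands (one possible transcript; your paths will differ):

rem grab jQuery 1.10.2 from NuGet
cd C:\
nuget install jQuery -Version 1.10.2
rem move the package into the site's App_Data\packages folder
set SITE=C:\Users\me\Documents\My Web Sites\Empty Site
move C:\jQuery.1.10.2 "%SITE%\App_Data\packages\jQuery.1.10.2"
rem delete the Tools subfolder
rd /s /q "%SITE%\App_Data\packages\jQuery.1.10.2\Tools"
rem move the scripts into the site's Scripts folder
move "%SITE%\App_Data\packages\jQuery.1.10.2\Content\Scripts" "%SITE%\Scripts"
rem and delete the now-redundant Content folder
rd /s /q "%SITE%\App_Data\packages\jQuery.1.10.2\Content"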

We hit refresh back in WebMatrix and now we get this:

If we go to NuGet and select updates we'll see that jQuery is now considered "installed" and that an update is available. So, in short, our plan worked - yay!

Now for bonus points

Just to prove that you can upgrade using the WebMatrix tooling following our manual install, let's do it. Click "Update", then "Yes" and finally "I Accept" to the EULA. You'll see we're now on jQuery 2.0.3:

Rounding off

In my example I'm only looking at a simple JavaScript library. But the same principle should apply to any NuGet package as far as I'm aware. Hope that helps!

Wednesday 4 December 2013

Simple fading in and out using CSS transitions and classes

Caveat emptor folks... Let me start off by putting my hands up and saying I am no expert on CSS. And furthermore let me say that this blog post is essentially the distillation of a heady session of googling on the topic of CSS transitions. The credit for the technique detailed here belongs to many others, I'm just documenting it for my own benefit (and for anyone who stumbles upon this).

What do we want to do?

Most web developers have likely reached for jQuery's fadeIn and fadeOut awesomeness at some point. What could be cooler than fading in or out your UI, right?

Behind the scenes of fadeIn and fadeOut JavaScript is doing an awful lot of work to create that animation. And in our modern world we simply don't need to do that work anymore; it's gone native and is covered by CSS transitions.

Added to the "because it's there" reason for using CSS transitions to do fading there is a more important reason; let me quote HTML5 rocks:

"CSS Transitions make style animation trivial for everyone, but they also are a smart performance feature. Because a CSS transition is managed by the browser, the fidelity of its animation can be greatly improved, and in many cases hardware accelerated. Currently WebKit (Chrome, Safari, iOS) have hardware accelerated CSS transforms, but it's coming quickly to other browsers and platforms."

Added to this, if you have mobile users then the usage of native functionality (as opposed to doing it manually in JavaScript) actually saves battery life.

I'm sold - let's do it!

This is the CSS we'll need:


.fader {
    -moz-transition: opacity 0.7s linear;
    -o-transition: opacity 0.7s linear;
    -webkit-transition: opacity 0.7s linear;
    transition: opacity 0.7s linear;
}

.fader.fadedOut {
    opacity: 0;
}

Note we have 2 CSS classes:

  • fader - if this class is applied to an element then when the opacity of that element is changed it will be an animated change. The duration of the transition and the timing function used are customisable - in this case it takes 0.7 seconds and is linear.
  • fadedOut - when used in conjunction with fader this class creates a fading in or fading out effect as it is removed or applied respectively. (This relies upon the default value of opacity being 1.)

Let's see it in action:
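
Something along these lines is all it takes (the button and element ids are illustrative):

$("#showHideButton").click(function () {
    // Adding fadedOut fades the element out; removing it fades it back in
    $("#alertDiv").toggleClass("fadedOut");
});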

It goes without saying that one day in the not too distant future (I hope) we'll be able to leave behind the horrible world of vendor prefixes. Then we'll be down to just the single transition statement. One day...

Now, a warning...

Unfortunately the technique detailed above differs from fadeIn and fadeOut in one important way. When the fadeOut animation completes, fadeOut removes the element from the flow of the DOM using display: none. However, display is not a property that can be animated and so you can't include it in your CSS transition. If removing the element from the flow of the DOM is something you need then you'll need to bear this in mind. If anyone has any suggestions for a nice way to approach this I'd love to hear from you.

A halfway there solution to the display: none problem

Andrew Davey tweeted me the suggestion below:

So I thought I'd give it a go. However, whilst we've a transitionend event to play with we don't have a corresponding transitionstart or transitionbegin. So I tried this:


$("#showHideButton").click(function(){
    var $alertDiv = $("#alertDiv");
    if ($alertDiv.hasClass("fadedOut")) {
        $alertDiv.removeClass("fadedOut").css("display", "");
    }
    else {
        $("#alertDiv").addClass("fadedOut");
    }
})

$(document).on('webkitTransitionEnd transitionend oTransitionEnd', ".fader", 
    function (evnt) {
        var $faded = $(evnt.target);
        if ($faded.hasClass("fadedOut")) {
            $faded.css("display", "none");
        }
});

Essentially, on the transitionend event display: none is applied to the element in question. Groovy. In the absence of a transitionstart or transitionbegin, when removing the fadeOut class I'm first manually clearing out the display: none. Whilst this works in terms of adding it back into the flow of the DOM it takes away all the fadeIn gorgeousness. So it's not quite the fully featured solution you might hope for. But it's a start.

Tuesday 26 November 2013

Rolling your own confirm mechanism using Promises and jQuery UI

It is said that a picture speaks a thousand words. So here are two:

That's right, we're here to talk about the confirm dialog. Or, more specifically, how we can make our own confirm dialog.

JavaScript in the browser has had the window.confirm method for the longest time. This method takes a string as an argument and displays it in the form of a dialog, giving the user the option to click on either an "OK" or a "Cancel" button. If the user clicks "OK" the method returns true, if the user clicks "Cancel" the method returns false.

window.confirm is wonderful in one way - it has a simple API which is easy to grok. But regardless of the browser, window.confirm is always as ugly as sin. Look at the first picture in this blog post; hideous. Or, put more dispassionately, it's not terribly configurable; want to change the button text? You can't. Want to change the styling of the dialog? You can't. You get the picture.

Making confirm 2.0

jQuery UI's dialog has been around for a long time. I've been using it for a long time. But, if you look at the API, you'll see it works in a very different way to window.confirm - basically it's all about the callbacks. My intention was to create a mechanism which allowed me to prompt the user with jQuery UI's tried and tested dialog, but to expose it in a way that embraced the simplicity of the window.confirm API.

How to do this? Promises! To quote Martin Fowler (makes you look smart when you do that):

"In Javascript, promises are objects which represent the pending result of an asynchronous operation. You can use these to schedule further activity after the asynchronous operation has completed by supplying a callback."

When we show our dialog we are in asynchronous land; waiting for the user to click "OK" or "Cancel". When they do, we need to act on their response. So if our custom confirm dialog returns a promise of a boolean (true when the user clicks "OK", false otherwise) then that should be exactly what we need. I'm going to use Q for promises. (Nothing particularly special about Q - it's one of many Promises/A+ compliant implementations available.)

Here's my custom confirm dialog:


/**
  * Show a "confirm" dialog to the user (using jQuery UI's dialog)
  *
  * @param {string} message The message to display to the user
  * @param {string} okButtonText OPTIONAL - The OK button text, defaults to "Yes"
  * @param {string} cancelButtonText OPTIONAL - The Cancel button text, defaults to "No"
  * @param {string} title OPTIONAL - The title of the dialog box, defaults to "Confirm..."
  * @returns {Q.Promise<boolean>} A promise of a boolean value
  */
function confirmDialog(message, okButtonText, cancelButtonText, title) {
    okButtonText = okButtonText || "Yes";
    cancelButtonText = cancelButtonText || "No";
    title = title || "Confirm...";

    var deferred = Q.defer();
    $('<div title="' + title + '">' + message + '</div>').dialog({
        modal: true,
        buttons: [{
            // The OK button
            text: okButtonText,
            click: function () {
                // Resolve the promise as true indicating the user clicked "OK"
                deferred.resolve(true);
                $(this).dialog("close");
            }
        }, {
            // The Cancel button
            text: cancelButtonText,
            click: function () {
                $(this).dialog("close");
            }
        }],
        close: function (event, ui) {
            // Destroy the jQuery UI dialog and remove it from the DOM
            $(this).dialog("destroy").remove();
            
            // If the promise has not yet been resolved (eg the user clicked the close icon) 
            // then resolve the promise as false indicating the user did *not* click "OK"
            if (deferred.promise.isPending()) {
                deferred.resolve(false);
            }
        }
    });

    return deferred.promise;
}

What's happening here? Well first of all, if okButtonText, cancelButtonText or title have false-y values then they are initialised to defaults. Next, we create a deferred object with Q. Then we create our modal dialog using jQuery UI. There are a few things worth noting about this:

  • We're not dependent on the dialog markup being in our HTML from the off. We create a brand new element which gets added to the DOM when the dialog is created. (I draw attention to this as the jQuery UI dialog documentation doesn't mention that you can use this approach - and frankly I prefer it.)
  • The "OK" and "Cancel" buttons are initialised with the string values stored in okButtonText and cancelButtonText. So by default, "Yes" and "No".
  • If the user clicks the "OK" button then the promise is resolved with a value of true.
  • If the dialog closes and the promise has not been resolved then the promise is resolved with a value of false. This covers people clicking on the "Cancel" button as well as closing the dialog through other means.

Finally we return the promise from our deferred object.

Going from window.confirm to confirmDialog

It's very simple to move from using window.confirm to confirmDialog. Take this example:


if (window.confirm("Are you sure?")) {
    // Do something
}

Becomes:


confirmDialog("Are you sure?").then(function(confirmed) {
    if (confirmed) {
        // Do something
    }
});

There's no more to it than that.
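
And since the remaining arguments are all optional you can customise the dialog as much or as little as you like; for example (wording mine):

confirmDialog("Discard your changes?", "Discard", "Keep editing", "Unsaved changes...")
    .then(function (confirmed) {
        if (confirmed) {
            // The user clicked "Discard"
        }
    });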

And finally a demo...

With the JSFiddle below you can create your own custom dialogs and see the result of clicking on either the "OK" or "Cancel" buttons.

Monday 4 November 2013

TypeScript: Don't forget Build Action for Implicit Referencing...

As part of the known breaking changes between 0.9 and 0.9.1 there was this subtle but significant switch:

In Visual Studio, all TypeScript files in a project are considered to be referencing each other

Description: Previously, all TypeScript files in a project had to reference each other explicitly. With 0.9.1, they now implicitly reference all other TypeScript files in the project. For existing projects that fit multiple projects into a single project, these will now have to be separate projects.

Reason: This greatly simplifies using TypeScript in the project context.

Having been initially resistant to this change I recently decided to give it a try. That is to say I started pulling out the /// <reference's from my TypeScript files. However, to my surprise, pulling out these references stopped my TypeScript from compiling and killed my Intellisense. After wrestling with this for a couple of hours I finally filed an issue on the TypeScript CodePlex site. (Because clearly the problem was with TypeScript and not how I was using it, right?)

Wrong!

When I looked through my typing files (*.d.ts) I found that, pretty much without exception, all had a Build Action of "Content" and not "TypeScriptCompile". I went through the project and switched the files over to being "TypeScriptCompile". This resolved the issue and I was then able to pull out the remaining /// <reference comments from the codebase (though I did have to restart Visual Studio to get the Intellisense working).
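
To make that concrete, the Build Action corresponds to the item element name in the .csproj (the path here is just an example):

<!-- Build Action of "Content" - ignored by the TypeScript compiler -->
<Content Include="Scripts\typings\jquery\jquery.d.ts" />

<!-- Build Action of "TypeScriptCompile" - what we actually want -->
<TypeScriptCompile Include="Scripts\typings\jquery\jquery.d.ts" />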

Most, if not all, of the typing files had been pulled in from NuGet and are part of the DefinitelyTyped project on GitHub. Unfortunately, at present, when TypeScript NuGet packages are added they are added without the "TypeScriptCompile" Build Action. I was going to post an issue there and ask if it's possible for NuGet packages to pull in typings files as "TypeScriptCompile" from the off - fortunately a chap called Natan Vivo already has.

So until this issue is resolved it's probably a good idea to check that your TypeScript files are set to the correct Build Action in your project. And every time you upgrade your TypeScript NuGet packages double check that you still have the correct Build Action afterwards (and to get Intellisense working in VS 2012 at least you'll need to close and re-open the solution as well).

Wednesday 30 October 2013

Getting TypeScript Compile-on-Save and Continuous Integration to play nice

Well sort of... Perhaps this post should more accurately be called "How to get CI to ignore your TypeScript whilst Visual Studio still compiles it..."

Once there was Web Essentials

When I first started using TypeScript, I was using it in combination with Web Essentials. Those were happy days. I saved my TS file and Web Essentials would kick off TypeScript compilation. Ah bliss. But the good times couldn't last forever and sure enough when version 3.0 of Web Essentials shipped it pulled support for TypeScript.

This made me, and others, very sad. Essentially we were given the choice between sticking with an old version of Web Essentials (2.9 - the last release before 3.0) and keeping our Compile-on-Save *or* keeping with the latest version of Web Essentials and losing it. And since I understood that newer versions of TypeScript had differences in the compiler flags which slightly broke compatibility with WE 2.9, the latter choice seemed the most sensible...

But there is still Compile on Save hope!

The information was that we need not lose our Compile on Save. We just need to follow the instructions here. Or to quote them:

Then additionally add (or replace if you had an older PreBuild action for TypeScript) the following at the end of your project file to include TypeScript compilation in your project.

...

For C#-style projects (.csproj):


  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <TypeScriptTarget>ES5</TypeScriptTarget>
    <TypeScriptIncludeComments>true</TypeScriptIncludeComments>
    <TypeScriptSourceMap>true</TypeScriptSourceMap>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <TypeScriptTarget>ES5</TypeScriptTarget>
    <TypeScriptIncludeComments>false</TypeScriptIncludeComments>
    <TypeScriptSourceMap>false</TypeScriptSourceMap>
  </PropertyGroup>
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" />

I followed these instructions (well, I had to tweak the Import Project location) and I was in business again. But when I came to check my code into TFS I came unstuck. The automated build kicked off and then, in short order, kicked me:


C:\Builds\1\MyApp\MyApp Continuous Integration\src\MyApp\MyApp.csproj (1520): The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\TypeScript\Microsoft.TypeScript.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

That's right, TypeScript wasn't installed on the build server. And since TypeScript was now part of the build process my builds were now failing. Ouch.

So what now?

I did a little digging and found this issue report on the TypeScript CodePlex site. To quote the issue, it seemed there were 2 possible solutions to get continuous integration and TypeScript playing nice:

  1. Install TypeScript on the build server
  2. Copy the required files for Microsoft.TypeScript.targets to a different source-controlled folder and change the path references in the csproj file to this folder.

#1 wasn't an option for us - we couldn't install on the build server. And covering both #1 and #2, I wasn't particularly inclined to have TypeScript compilation kicking off on the build server since I was wary of reported problems with memory leaks etc. in the TS compiler. I may feel differently later when TS is no longer in Alpha and has stabilised but it didn't seem like the right time.

A solution

So, to sum up, what I wanted was to be able to compile TypeScript in Visual Studio on my machine, and indeed in VS on the machine of anyone else working on the project. But I *didn't* want TypeScript compilation to be part of the build process on the server.

The solution in the end was pretty simple - I replaced the .csproj changes with the code below:


  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <TypeScriptTarget>ES5</TypeScriptTarget>
    <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
    <TypeScriptSourceMap>false</TypeScriptSourceMap>
    <TypeScriptModuleKind>AMD</TypeScriptModuleKind>
    <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <TypeScriptTarget>ES5</TypeScriptTarget>
    <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
    <TypeScriptSourceMap>false</TypeScriptSourceMap>
    <TypeScriptModuleKind>AMD</TypeScriptModuleKind>
    <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
  </PropertyGroup>
  <Import Project="$(VSToolsPath)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(VSToolsPath)\TypeScript\Microsoft.TypeScript.targets')" />

What this does is enable TypeScript compilation *only* if TypeScript is installed. So when I'm busy developing with Visual Studio on my machine with the plugin installed I can compile TypeScript. But when I check in the TypeScript compilation is *not* performed on the build server. This is because TypeScript is not installed on the build server and we are only compiling if it is installed. (Just to completely labour the point.)

Final thoughts

I do consider this an interim solution. As I mentioned earlier, when TypeScript has stabilised I think I'd like TS compilation to be part of the build process. Like with any other code I think compiling on check-in to catch bugs early is an excellent idea. But I think I'll wait until there's some clearer guidance on the topic from the TypeScript team before I take this step.

Friday 4 October 2013

Migrating from jquery.validate.unobtrusive.js to jQuery.Validation.Unobtrusive.Native

So, you're looking at jQuery.Validation.Unobtrusive.Native. You're thinking to yourself "Yeah, I'd really like to use the native unobtrusive support in jQuery Validation. But I've already got this app which is using jquery.validate.unobtrusive.js - actually how easy is switching over?" Well I'm here to tell you that it's pretty straightforward - here's a walkthrough of how it might be done.

I need something to migrate

So let's File > New Project ourselves a new MVC 4 application using the Internet Application template. I've picked this template as I know it ships with account registration / login screens in place which make use of jquery.validate.unobtrusive.js. To demo this just run the project, click the "Log in" link and then click the "Log in" button - you should see something like this:

What you've just witnessed is jquery.validate.unobtrusive.js doing its thing. Both the UserName and Password properties on the LoginModel are decorated with the Required data annotation which, in the above scenario, causes the validation to be triggered on the client thanks to MVC rendering data attributes in the HTML which jquery.validate.unobtrusive.js picks up on. The question is, how can we take the log in screen above and migrate it across to using jQuery.Validation.Unobtrusive.Native?

Hit me up NuGet!

Time to dive into NuGet and install jQuery.Validation.Unobtrusive.Native. We'll install the MVC 4 version using this command:

PM> Install-Package jQuery.Validation.Unobtrusive.Native.MVC4

What has this done to my project? Well, 2 things:

  1. It's upgraded jQuery Validation (jquery.validate.js) from v1.10.0 (the version that is currently part of the MVC 4 template) to v1.11.1 (the latest and greatest jQuery Validation as of the time of writing)
  2. It's added a reference to the jQuery.Validation.Unobtrusive.Native.MVC4 assembly, like so:

In case you were wondering, doing this hasn't broken the existing jquery.validate.unobtrusive.js - if you head back to the Log in screen you'll still see the same behaviour as before.

Migrating...

We need to switch our TextBox and Password helpers over to using jQuery.Validation.Unobtrusive.Native, which we achieve by simply passing a second argument of true to useNativeUnobtrusiveAttributes. So we go from this:


// ...
@Html.TextBoxFor(m => m.UserName)
// ...
@Html.PasswordFor(m => m.Password)
// ...

To this:


// ...
@Html.TextBoxFor(m => m.UserName, true)
// ...
@Html.PasswordFor(m => m.Password, true)
// ...

With these minor tweaks in place the natively supported jQuery Validation data attributes will be rendered into the textbox / password elements instead of the jquery.validate.unobtrusive.js ones.
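
To illustrate, the username textbox goes from something like the first input below to the second (exact messages depend on your model metadata):

<!-- before: attributes for jquery.validate.unobtrusive.js -->
<input data-val="true" data-val-required="The User name field is required." name="UserName" type="text" />

<!-- after: attributes jQuery Validation understands natively -->
<input data-rule-required="true" data-msg-required="The User name field is required." name="UserName" type="text" />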

Next let's do the JavaScript. If you take a look at the bottom of the Login.cshtml view you'll see this:


@section Scripts {
    @Scripts.Render("~/bundles/jqueryval")
}

Which renders the following scripts:


<script src="/Scripts/jquery.unobtrusive-ajax.js"></script>
<script src="/Scripts/jquery.validate.js"></script>
<script src="/Scripts/jquery.validate.unobtrusive.js"></script>

In our brave new world we're only going to need jquery.validate.js - so let's create ourselves a new bundle in BundleConfig.cs which only contains that single file:


bundles.Add(new ScriptBundle("~/bundles/jqueryvalnative")
    .Include("~/Scripts/jquery.validate.js"));

To finish off our migrated screen we need to do 2 things. First we need to switch over the Login.cshtml view to only render the jquery.validate.js script (in the form of our new bundle). Secondly, the other thing that jquery.validate.unobtrusive.js did was to trigger validation for the current form. So we need to do that ourselves now. So our finished Scripts section looks like this:


@section Scripts {
    @Scripts.Render("~/bundles/jqueryvalnative")
    <script>
        $("form").validate();
    </script>
}

Which renders the following script:


<script src="/Scripts/jquery.validate.js"></script>
<script>
    $("form").validate();
</script>

And, pretty much, that's it. If you run the app now and go to the Log in screen and try to log in without credentials you'll get this:

Which is functionally exactly the same as before. The eagle-eyed will notice some styling differences but that's all it comes down to really; style. And if you were so inclined you could easily style this up as you liked using CSS and the options you can pass to jQuery Validation (in fact a quick rummage through jquery.validate.unobtrusive.js should give you everything you need).
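
A sketch of the sort of options involved (the class name here is the one the MVC scripts use; tweak to taste):

$("form").validate({
    // the class applied to invalid elements and error messages
    errorClass: "input-validation-error",
    // the element used to render an error message
    errorElement: "span",
    errorPlacement: function (error, element) {
        // place the message directly after the offending element
        error.insertAfter(element);
    }
});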

Rounding off

Before I sign off I'd like to illustrate how little we've had to change the code to start using jQuery.Validation.Unobtrusive.Native. Just take a look at this code comparison:

As you see, it takes very little effort to migrate from one approach to the other. And it's *your* choice. If you want to have one screen that uses jQuery.Validation.Unobtrusive.Native and one screen that uses jquery.validate.unobtrusive.js then you can! Including jQuery.Validation.Unobtrusive.Native in your project gives you the option to use it. It doesn't force you to; you can do so as you need to and when you want to. It's down to you.

Saturday 17 August 2013

Using Bootstrap Tooltips to display jQuery Validation error messages

I love jQuery Validation. I was recently putting together a screen which had a lot of different bits of validation going on. And the default jQuery Validation approach of displaying the validation messages next to the element being validated wasn't working for me. That is to say, because of the number of elements on the form, the appearance of validation messages was really making a mess of the presentation. So what to do?

Tooltips to the rescue!

I was chatting to Marc Talary about this and he had the bright idea of using tooltips to display the error messages. Tooltips would allow the existing presentation of the form to remain as is whilst still displaying the messages to the users. Brilliant idea!

After a certain amount of fiddling I came up with a fairly solid mechanism for getting jQuery Validation to display error messages as tooltips which I'll share here. It's worth saying that for the application that Marc and I were working on we already had jQuery UI in place and so we decided to use the jQuery UI tooltip. This example will use the Bootstrap tooltip instead. As much as anything else this demonstrates that you could swap out the tooltip mechanism here with any of your choosing.

Beautiful isn't it? Now look at the source:
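
The heart of the technique is jQuery Validation's showErrors option combined with Bootstrap's tooltip; a sketch (the form selector and tooltip placement are illustrative):

$("form").validate({
    showErrors: function (errorMap, errorList) {
        // Remove tooltips from elements which have just become valid
        $.each(this.successList, function (index, element) {
            $(element).tooltip("destroy");
        });
        // Attach and show a tooltip holding the message for each invalid element
        $.each(errorList, function (index, error) {
            $(error.element)
                .tooltip("destroy") // reset any tooltip already in place
                .tooltip({ title: error.message, placement: "right", trigger: "manual" })
                .tooltip("show");
        });
    }
});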

All the magic is in the JavaScript, specifically the showErrors function that's passed as an option to jQuery Validation. Enjoy!

Thursday 8 August 2013

Announcing jQuery Validation Unobtrusive Native...

I've been busy working on an open source project called jQuery Validation Unobtrusive Native. To see it in action take a look here.

A Little Background

I noticed a little while ago that jQuery Validation was now providing native support for validation driven by HTML 5 data attributes. As you may be aware, Microsoft shipped jquery.validate.unobtrusive.js back with MVC 3. (I have written about it before.) It provided a way to apply data model validations to the client side using a combination of jQuery Validation and HTML 5 data attributes.

The principle of this was and is fantastic. But since that time the jQuery Validation project has implemented its own support for driving validation unobtrusively (shipping with jQuery Validation 1.11.0). I've been looking at a way to directly use the native support instead of jquery.validate.unobtrusive.js.

So... What is jQuery Validation Unobtrusive Native?

jQuery Validation Unobtrusive Native is a collection of ASP.Net MVC HTML helper extensions. These make use of jQuery Validation's native support for validation driven by HTML 5 data attributes. The advantages of the native support over jquery.validate.unobtrusive.js are:

  • Dynamically created form elements are parsed automatically. jquery.validate.unobtrusive.js does not support this whilst jQuery Validation does. Take a look at a demo using Knockout.
  • jquery.validate.unobtrusive.js restricts how you use jQuery Validation. If you want to use showErrors or something similar then you may find that you need to go native (or at least you may find that significantly easier than working with the jquery.validate.unobtrusive.js defaults)...
  • Send less code to your browser, make your browser do less work and even get a (marginal) performance benefit.

This project intends to be a bridge between MVC's inbuilt support for driving validation from data attributes and jQuery Validation's native support for the same. This is achieved by hooking into the MVC data attribute creation mechanism and using it to generate the data attributes natively supported by jQuery Validation.

Future Plans

So far the basic set of the HtmlHelpers and their associated unobtrusive mappings have been implemented. If any have been missed then let me know. As time goes by I intend to:

  • fill in any missing gaps there may be
  • maintain MVC 3, 4 (and when the time comes 5+) versions of this on NuGet
  • not all data annotations generate client data attributes - where it seems sensible I may look to implement some of these (eg the MinLengthAttribute annotation could be mapped to minlength validation...)
  • get the unit test coverage to a good level

    and finally (and perhaps most importantly)
  • create some really useful demos and documentation.

Help is appreciated so feel free to pitch in! You can find the project on GitHub here...

Saturday 6 July 2013

How I'm Using Cassette part 3:
Cassette and TypeScript Integration

The modern web is JavaScript. There's no two ways about it. HTML 5 has new CSS, new HTML but the most important aspect of it from an application development point of view is JavaScript. It's the engine. Without it HTML 5 wouldn't be the exciting application platform that it is. Half the posts on Hacker News would vanish.

It's easy to break a JavaScript application. One false keypress and you can mysteriously turn a fully functioning app into toast. And not know why. There are tools you can use to help yourself - JSHint / JSLint - but whilst these make error detection a little easier it remains very easy to shoot yourself in the foot with JavaScript. Because of this I've come to really rather love TypeScript. If you didn't already know, TypeScript can be summed up as JavaScript with optional static typing. It's a superset of JavaScript - JavaScript with go-faster stripes. When run through the compiler, TypeScript is transpiled into JavaScript. And importantly, if you have bugs in your code, the compiler should catch them at this point and let you know.

Now very few of us are working on greenfield applications. Most of us have existing applications to maintain and support. Happily, TypeScript fits very well with this purely because TypeScript is a superset of JavaScript. That is to say: all JavaScript is valid TypeScript in the same way that all CSS is valid LESS. This means that you can take an existing .js file, rename it to have a .ts suffix, run the TypeScript compiler over it and out will pop your JavaScript file just as it was before. You're then free to enrich your TypeScript file with the relevant type annotations at your own pace. Increasing the robustness of your codebase is a choice left to you.

The project I am working on has recently started to incorporate TypeScript. It's an ASP.Net MVC 4 application which makes use of Knockout. The reason we started to incorporate TypeScript is because certain parts of the app, particularly the Knockout parts, were becoming more complex. This complexity wasn't really an issue when we were writing the relevant JavaScript. However, when it came to refactoring and handing files from one team member to another we realised it was very easy to introduce bugs into the codebase, particularly around the JavaScript. Hence TypeScript.

Cassette and TypeScript

Enough of the pre-amble. The project was making use of Cassette for serving up its CSS and JavaScript. Because Cassette rocks. One of the reasons we use it is that we're making extensive use of Cassette's ability to serve scripts in dependency order. So if we were to move to using TypeScript it was important that TypeScript and Cassette would play well together.

I'm happy to report that Cassette and TypeScript do work well together, but there are a few things that you need to get up and running. Or, to be a little clearer, if you want to make use of Cassette's in-file Asset Referencing then you'll need to follow these steps. If you don't need Asset Referencing then you'll be fine using Cassette with TypeScript-generated JavaScript as is, *provided* you ensure the TypeScript compiler is not preserving comments in the generated JavaScript.

The Fly in the Ointment: Asset References

TypeScript is designed to allow you to break up your application into modules. However, the referencing mechanism which allows you to reference one TypeScript file / module from another is exactly the same as the existing Visual Studio XML reference comments mechanism that was originally introduced to drive JavaScript Intellisense in Visual Studio. To quote the TypeScript spec:

  • A comment of the form /// <reference path="…"/> adds a dependency on the source file specified in the path argument. The path is resolved relative to the directory of the containing source file.
  • An external import declaration that specifies a relative external module name (section 11.2.1) resolves the name relative to the directory of the containing source file. If a source file with the resulting path and file extension ‘.ts’ exists, that file is added as a dependency. Otherwise, if a source file with the resulting path and file extension ‘.d.ts’ exists, that file is added as a dependency.

The problem is that Cassette *also* supports Visual Studio XML reference comments to drive Asset References. The upshot of this is that Cassette will parse the /// <reference path="*.ts"/>s and will attempt to serve up the TypeScript files in the browser... Calamity!

Pulling the Fly from the Ointment

Again I'm going to take the demo from last time (the References branch of my CassetteDemo project) and build on top of it. First of all, we need to update the Cassette package. This is because to get Cassette working with TypeScript you need to be running at least Cassette 2.1. So let's let NuGet do its thing:

Update-Package Cassette.Aspnet

And whilst we're at it let's grab the jQuery TypeScript typings - we'll need them later:

Install-Package jquery.TypeScript.DefinitelyTyped

Now we need to add a couple of classes to the project. First of all this:
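
A sketch of the shape of it (the exact member to override may vary with your Cassette version):

public class ParseJavaScriptNotTypeScriptReferences : ParseJavaScriptReferences
{
    protected override bool ShouldParseAsset(IAsset asset)
    {
        // TypeScript files keep their references for the compiler's benefit;
        // Cassette shouldn't try to serve them to the browser
        return !asset.Path.EndsWith(".ts", StringComparison.OrdinalIgnoreCase);
    }
}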

Which subclasses ParseJavaScriptReferences and ensures TypeScript files are excluded when JavaScript references are being parsed. And to make sure that Cassette makes use of ParseJavaScriptNotTypeScriptReferences in place of ParseJavaScriptReferences we need this:
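
Again as a sketch, using Cassette's bundle pipeline modifier mechanism (the class name is mine):

public class InsertParseJavaScriptNotTypeScriptReferences : IBundlePipelineModifier<ScriptBundle>
{
    public IBundlePipeline<ScriptBundle> Modify(IBundlePipeline<ScriptBundle> pipeline)
    {
        // Swap Cassette's standard reference parser for our TypeScript-aware one
        var index = pipeline.IndexOf<ParseJavaScriptReferences>();
        pipeline[index] = new ParseJavaScriptNotTypeScriptReferences();
        return pipeline;
    }
}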

Now we're in a position to use TypeScript with Cassette. To demonstrate this let's take Index.js and rename it to Index.ts. And now it's TypeScript. However before it can compile it needs to know what jQuery is - so we drag in the jQuery typings from DefinitelyTyped. And now it can compile from this:

To this: (Please note that I get the TypeScript compiler to preserve my comments in order that I can continue to use Cassette's Asset Referencing.)

As you can see the output JavaScript has both the TypeScript and the Cassette references in place. However thanks to ParseJavaScriptNotTypeScriptReferences those TypeScript references will be ignored by Cassette.

So that's it - we're home free. Before I finish off I'd like to say thanks to Cassette's Andrew Davey who set me on the right path when trying to work out how to do this. A thousand thank yous Andrew!

And finally, again as last time you can see what I've done in this post by just looking at the repository on GitHub. The changes I made are on the TypeScript branch of that particular repository.

Wednesday 26 June 2013

jQuery Validate - Native Unobtrusive Validation Support!

Did you know that jQuery Validate natively supports the use of HTML 5 data attributes to drive validation unobtrusively? Neither did I - I haven't seen any documentation for it. However, I was reading the jQuery Validate test suite and that's what I spotted being used in some of the tests.

I was quite keen to give it a try as I've found the Microsoft-produced unobtrusive extensions both fantastic and frustrating in nearly equal measure. Fantastic because they work and they're integrated nicely with MVC. Frustrating, because they don't allow you to do all the things that jQuery Validate in the raw does.

So when I realised that there was native alternative available I was delighted. Enough with the fine words - what we want is a demo:

Not particularly exciting? Not noticeably different to any other jQuery Validate demo you've ever seen? Fair enough. Now look at the source:
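
The gist of the markup is something like this (field name and messages are my own):

<form id="demoForm">
    <input type="text" name="username"
           data-rule-required="true"
           data-rule-minlength="2"
           data-msg-required="Please enter a username"
           data-msg-minlength="Your username must be at least 2 characters" />
    <input type="submit" value="Submit" />
</form>
<script>
    // No jquery.validate.unobtrusive.js in sight - raw jQuery Validate
    $("#demoForm").validate();
</script>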

Do you see what I see? Data attributes (both data-rule-* and data-msg-*s) being used to drive the validation unobtrusively! And if you look at the JavaScript files referenced you will see *no sign* of jquery.validate.unobtrusive.js - this is all raw jQuery Validate. Nothing else.

Why is this useful?

First of all, I'm of the opinion that it makes intuitive sense to have the validation information relevant to various DOM elements stored directly with those DOM elements. There will be occasions where you may not want to use this approach but, in the main, I think it's very sensible. It saves you bouncing back and forth between your HTML and your JavaScript and it means when you read the HTML you know there and then what validation applies to your form.

I think this particularly applies when it comes to adding elements to the DOM dynamically. If I use data attributes to drive my validation and I dynamically add elements then jQuery Validate will parse the validation rules for me. I won't have to subsequently apply validation to those new elements once they've been added to the DOM. 1 step instead of 2. It makes for simpler code and that's always a win.

Wrapping up

For myself I'm in the early stages of experimenting with this but I thought it might be good to get something out there to show how this works. If anyone knows of any official documentation for this please do let me know - I'd love to have a read of it. Maybe it's been out there all along and it's just my Googling powers are inadequate.

Update 09/08/2013

If you're using ASP.Net MVC 3+ and this post has been of interest to you then you might want to take a look at this.

Thursday 6 June 2013

How I'm Using Cassette part 2:
Get Cassette to Serve Scripts in Dependency Order

Last time I wrote about Cassette I was talking about how to generally get up and running. How to use Cassette within an ASP.Net MVC project. What I want to write about now is (in my eyes) the most useful feature of Cassette by a country mile. This is Cassette's ability to ensure scripts are served in dependency order.

Why does this matter?

You might well ask. If we go back 10 years or so then really this wasn't a problem. No-one was doing a great deal with JavaScript. And if they did anything it tended to be code snippets in amongst the HTML; nothing adventurous. But unless you've had your head in the sand for the last 3 years you will have clearly noticed that JavaScript is in rude health and being used for all kinds of things you'd never have imagined. In fact some would have it that it's the assembly language of the web.

For my part, I've been doing more and more with JavaScript. And as I do more and more with it I seek to modularise my code (like any good developer would), breaking it up into discrete areas of functionality. I aim to only serve up the JavaScript that I need on a given page. And that would be all well and good but for one of the language's shortcomings: modules. JavaScript doesn't yet have a good module loading story to tell. (Apparently one's coming in ECMAScript 6.) (I don't want to get diverted into this topic as it's a big area. But if you're interested then you can read up a little on different approaches being used here. The ongoing contest between RequireJS and CommonJS frankly makes me want to keep my distance for now.)

It Depends

Back to my point: JavaScript's native handling of script dependencies is non-existent. It's real "here be dragons" territory. If you serve up, for example, Slave.js that depends on things set up in Master.js before you've actually served up Master.js, well, it's not a delightful debugging experience. The errors tend to be obscure and it's not always obvious what the correct ordering should be.

Naturally this creates something of a headache around my own JavaScript modules. A certain amount of jiggery-pokery is required to ensure that scripts are served in the correct order so that they run as expected. And as your application becomes more complicated / modular, the number of problems around this area increases exponentially. It's really tedious. I don't want to be thinking about managing that as I'm developing - I want to be focused on solving the problem at hand.

In short, what I want to do is reference a script file somewhere in my server-side pipeline. I could be in a view, a layout, a controller, a partial view, a HTML helper... - I just want to know that that script is going to turn up at the client in the right place in the HTML so it works. Always. And I don't want to have to think about it any further than that.

Enter Cassette, riding a white horse

And this is where Cassette takes the pain away. To quote the documentation:

"Some assets must be included in a page before others. For example, your code may use jQuery, so the jQuery script must be included first. Cassette will sort all assets based on references they declare."

Just the ticket!

Declaring References Server-Side

What does this look like in reality? Let's build on what I did last time to demonstrate how I make use of Asset References to ensure my scripts turn up in the order I require.

In my _Layout.cshtml file I'm going to remove the following reference from the head of the file:

Bundles.Reference("~/bundles/core");

I'm pulling this out of my layout page because its presence means that every page MVC serves up is also serving up jQuery and jQuery UI (which is what ~/bundles/core is). If a page doesn't actually make use of jQuery and / or jQuery UI then there's no point in doing this.

"But wait!", I hear you cry, "Haven't you just caused a bug with your reckless action? I distinctly recall that the Login.cshtml page has the following code in place:"

Bundles.Reference("~/bundles/validate");

"And now with your foolhardy, nay, reckless attitude to the ~/bundles/core bundle you've broken your Login screen. How can jQuery Validation be expected to work if there's no jQuery there to extend?"

Well, I understand your concerns but really you needn't worry - Cassette's got my back. Look closely at the code below:
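
The relevant bundle looks something like this (file names illustrative):

bundles.Add<ScriptBundle>("~/bundles/validate",
    new[]
    {
        "~/Scripts/jquery.validate.js",
        "~/Scripts/jquery.validate.unobtrusive.js"
    },
    bundle => bundle.AddReference("~/bundles/core")); // the line that matters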

See it? The ~/bundles/validate bundle declares a reference to the ~/bundles/core bundle. The upshot of this is that if you tell Cassette to reference ~/bundles/validate it will ensure that before it renders that bundle it first renders any bundles that bundle depends on (in this case the ~/bundles/core bundle).

This is a very simple demonstration of the feature but I can't overstate just how useful I find this.

Declaring References in your JavaScript itself

And the good news doesn't stop there. Let's say you don't want to maintain your references in a separate file. You'd rather declare references inside your JavaScript files themselves. Well - you can. Cassette caters for this through the usage of Asset References.

Let's demo this. First of all add the following file at this location in the project: ~/Scripts/Views/Home/Index.js
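
In outline it contains something like this (the element id and message are illustrative):

// @reference ~/bundles/core

$(function () {
    var $body = $("#body");
    $body.fadeOut("slow", function () {
        $body.html("<h2>Index.js was served up by Cassette!</h2>").fadeIn("slow");
    });
});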

The eagle-eyed amongst you will have noticed

  1. I'm mirroring the MVC folder structure inside the Scripts directory. (There's nothing special about that by the way - it's just a file structure I've come to find useful. It's very easy to find the script associated with a View if the scripts share the same organisational approach as the Views.).
  2. The purpose of the script is very simple: it fades out the main body of the screen, re-writes the HTML in that tag and then fades back in. Its purpose is just to do something that is obvious to the user - so they can see the evidence of JavaScript executing.
  3. Lastly and most importantly, do you notice that // @reference ~/bundles/core is the first line of the file? This is our script reference. It's this that Cassette will be reading to pick up references.

To make sure Cassette is picking up our brand new file let's take a look at CassetteConfiguration.cs and uncomment the line of code below:

bundles.AddPerIndividualFile("~/Scripts/Views");

With this in place Cassette will render out a bundle for each script in the Views subdirectory. Let's see if it works. Add the following reference to our new JavaScript file in ~/Views/Home/Index.cshtml:

Bundles.Reference("~/Scripts/Views/Home/Index.js");

If you browse to the home page of the application this is what you should now see:

What this means is, Index.js was served up by Cassette. And more importantly before Index.js was served the referenced ~/bundles/core was served too.

Avoiding the Gotcha

There is a gotcha which I've discovered whilst using Cassette's Asset References. Strictly speaking it's a Visual Studio gotcha rather than a Cassette gotcha. It concerns Cassette's support for Visual Studio XML style reference comments. In the example above I could have written this:

/// <reference path="~/bundles/core" />

Instead of this:

// @reference ~/bundles/core

It would fulfil exactly the same purpose and would work identically. But there's a problem. Using Visual Studio XML style reference comments to refer to Cassette bundles appears to trash Visual Studio's JavaScript Intellisense. You'll lose the Intellisense that's driven by ~/Scripts/_references.js in VS 2012. So if you value your Intellisense (and I do) my advice is to stick to using the standard Cassette reference style instead.

Go Forth and Reference

There is also support in Cassette for CSS referencing (as well as other types of referencing relating to LESS and even CoffeeScript). I haven't made use of CSS referencing myself as, in stark contrast to my JS, my CSS is generally one bundle of styles which I'm happy to be rendered on each page. But it's nice to know the option is there if I wanted it.

Finally, as last time you can see what I've done in this post by just looking at the repository on GitHub. The changes I made are on the References branch of that particular repository.

Saturday 4 May 2013

How I'm Using Cassette part 1:
Getting Up and Running

Backing into the light

For a while now, I've been seeking a bulletproof way to handle the following scenarios... all at the same time in the context of an ASP.Net MVC application:

  1. How to serve full-fat JavaScript in debug mode and minified in release mode
  2. When debugging, ensure that the full-fat JS being served is definitely the latest version; and *not* from the cache. (The time I've wasted due to 304's...)
  3. How to add JavaScript assets that need to be served up from any point in an ASP.Net MVC application (including views, layouts, partial views... even controllers if so desired) whilst preventing duplicate scripts from being served.
  4. How to ensure that JavaScript files are served up last to any web page to ensure a speedy feel to users (don't want JS blocking rendering).
  5. And last but certainly not least, the need to load JavaScript files in dependency order. If myView.js depends on jQuery then clearly jQuery-latest.js needs to be served before myView.js.

Now the best, most comprehensive and solid-looking solution to this problem has for some time seemed to me to be Andrew Davey's Cassette. This addresses all my issues in one way or another, as well as bringing in a raft of other features (support for CoffeeScript etc).

However, up until now I've slightly shied away from using Cassette as I was under the impression it had a large number of dependencies. That doesn't appear to be the case at all. I also had some vague notion that I could quite simply build my own solution to these problems making use of Microsoft's Web Optimization, which nicely handles my #1 problem above. However, looking again at the documentation, Cassette was promising to handle scenarios #1 - #5 without breaking sweat. How could I ignore that? I figured I should do the sensible thing and take another look at it. And, lo and behold, when I started evaluating it again it seemed to be just what I needed.

With the minimum of fuss I was able to get an ASP.Net MVC 4 solution up and running, integrated with Cassette, which dealt with all my scenarios very nicely indeed. I thought it might be good to write this up over a short series of posts and share what my finished code looks like. If you follow the steps I go through below it'll get you started using Cassette. Or you could skip to the end of this post and look at the repo on GitHub. Here we go...

Adding Cassette to a raw MVC 4 project

Fire up Visual Studio and create a new MVC 4 project (I used the internet template to have some content in place).

Go to the Package Manager Console and key in "Install-Package Cassette.Aspnet". Cassette will install itself.

Now you've got Cassette in place you may as well pull out usage of Web Optimization as you're not going to need it any more. Be ruthless: delete App_Start/BundleConfig.cs and delete the line of code that references it in Global.asax.cs. If you take the time to run the app now you'll see you've miraculously lost your CSS and your JavaScript. The code referencing it is still in place but there's nothing for it to serve up. Don't worry about that - we're going to come back and Cassette-ify things later on...

You'll also notice you now have a CassetteConfiguration.cs file in your project. Open it. Replace the contents with this (I've just commented out the default code and implemented my own CSS and Script bundles based on what is available in the default template of an MVC 4 app):
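
In outline it looks something like this (the file names are those of the MVC 4 template and may differ in your project):

public class CassetteBundleConfiguration : IConfiguration<BundleCollection>
{
    public void Configure(BundleCollection bundles)
    {
        // Our site CSS and the jQuery UI CSS together
        bundles.Add<StylesheetBundle>("~/bundles/css", new[]
        {
            "~/Content/site.css",
            "~/Content/themes/base/jquery.ui.core.css",
            "~/Content/themes/base/jquery.ui.theme.css"
            // ...and the rest of the jQuery UI CSS files
        });

        // Scripts to be served in the head tag - Modernizr
        bundles.Add<ScriptBundle>("~/bundles/head",
            new[] { "~/Scripts/modernizr-2.5.3.js" },
            bundle => bundle.PageLocation = "head");

        // Scripts served on every page - jQuery and jQuery UI
        bundles.Add<ScriptBundle>("~/bundles/core", new[]
        {
            "~/Scripts/jquery-1.7.1.js",
            "~/Scripts/jquery-ui-1.8.20.js"
        });

        // The validation scripts (dependent on the core scripts)
        bundles.Add<ScriptBundle>("~/bundles/validate", new[]
        {
            "~/Scripts/jquery.validate.js",
            "~/Scripts/jquery.validate.unobtrusive.js"
        });

        // bundles.AddPerIndividualFile("~/Scripts/Views");
    }
}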

In the script above I've created 4 bundles: 1 stylesheet bundle and 3 JavaScript bundles. Each of these is roughly equivalent to the Web Optimization bundles that are part of the MVC 4 template:

  • ~/bundles/css - Our site CSS. This includes both our own CSS and the jQuery UI CSS as well; the rough equivalent of the Web Optimization bundles ~/Content/css and ~/Content/themes/base/css brought together.
  • ~/bundles/head - What scripts we want served in the head tag - Modernizr basically. Do note the setting of the PageLocation property; the purpose of this will become apparent later. This is the direct equivalent of the Web Optimization bundle ~/bundles/modernizr.
  • ~/bundles/core - The scripts we want served on every page. For this example project I've picked jQuery and jQuery UI. This is the rough equivalent of the Web Optimization bundles ~/bundles/jquery and ~/bundles/jqueryui brought together.
  • ~/bundles/validate - The validation scripts (that are dependent on the core scripts). This is the rough equivalent of the Web Optimization bundle ~/bundles/jqueryval.

At this point we've set up Cassette in our project - although we're not making use of it yet. If you want to double check that everything is working properly then you can fire up your project and browse to "Cassette.axd" in the root. You should see something a bit like this:

How Web Optimization and Cassette Differ

If you're more familiar with the workings of Web Optimization than Cassette then it's probably worth taking a moment to appreciate an important distinction between the slightly different ways each works.

Web Optimization

  1. Create bundles as desired.
  2. Serve up bundles and / or straight JavaScript files as you like within your MVC views / partial views / layouts.

Cassette

  1. Create bundles for *all* JavaScript files you wish to serve up. You may wish to create some bundles which consist of a number of JavaScript files pushed together. But for each individual file you wish to serve you also need to create an individual bundle. (Failure to do so may mean you fall prey to the "Cannot find an asset bundle containing the path "~/Scripts/somePath.js"." error.)
  2. Reference bundles and / or individual JavaScript files in their individual bundles as you like within your MVC views / partial views / layouts / controllers / HTML helpers... the list goes on!
  3. Render the referenced scripts to the page (typically just before the closing body tag)

Making use of our Bundles

Now we've created our bundles let's get the project serving up CSS and JavaScript using Cassette. First the layout file. Take the _Layout.cshtml file from this:

To this:
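
In outline, the Cassette-ified layout looks like this (heavily trimmed; the Bundles helper lives in Cassette.Views):

@{
    Bundles.Reference("~/bundles/css");
    Bundles.Reference("~/bundles/head");
    Bundles.Reference("~/bundles/core");
}
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    @Bundles.RenderStylesheets()
    @Bundles.RenderScripts("head")
</head>
<body>
    @RenderBody()
    @Bundles.RenderScripts()
</body>
</html>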

And now let's take one of the views, Login.cshtml and take it from this:

To this:

So now you should be up and running with Cassette. If you want the code behind this then take a look - I've put it on GitHub here.

Friday 26 April 2013

A navigation animation (for your users' delectation)

The Vexation

The current application I'm working on lives within an iframe. A side effect of that is that my users no longer get the visual feedback that they're used to as they navigate around the site. By "visual feedback" what I mean are the little visual tics that are displayed in the browser when you're in the process of navigating from one screen to the next. Basically, these:

When an application is nested in an iframe it seems that these visual tics aren't propagated up to the top frame of the browser as the user navigates around. Clicking on links results in a short lag whilst nothing appears to be happening and then, BANG!, a new page is rendered. This is not a great user experience. There's nothing to indicate that the link has been clicked on and that the browser is doing something. Well, not in Internet Explorer at least - Chrome (my browser of choice) does appear to give that feedback. But that's really by the by; the people using my app will be using the corporate browser, IE, so I need to think about them.

Now I'm fully aware that this is more in the region of nice-to-have rather than absolute necessity. That said, my experience is that when users think an application isn't responding fast enough their action point is usually "click it again, and maybe once more for luck". To prevent this from happening, I wanted to give the users back some kind of steer when they were in the process of navigation, iframe or no iframe.

The Agreeable Resolution

To that end, I've come up with something that I feel does the job, and does it well. I've taken a CSS animation courtesy of the good folk at CSS Load and embedded it in the layout of my application. This animation is hidden from view until the user navigates to another page. At that point, the CSS animation appears in the header of the screen and remains in place until the new screen is rendered. This is what it looks like:

How's that work then guv?

You're no doubt dazzled by the glory of it all. How was it accomplished? Well, it was actually a great deal easier than you might think. First of all we have the html:
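
In outline it's just the CSS Load animation markup inside a hidden wrapper:

<div id="navigationAnimation" style="display: none">
    <!-- the animation markup from CSS Load sits in here -->
</div>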

Apart from the outer div tag (#navigationAnimation) all of this is the HTML taken from CSS Load. If you wanted to use a different navigation animation you could easily replace the inner HTML with something else instead. Next up is the CSS, again courtesy of CSS Load (and it's this that turns this simple HTML into sumptuous animated goodness):

And finally we have the JavaScript which is responsible for showing the animation when the user starts navigating:
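
At its heart it's no more than this (a minimal sketch; the IE 9 fallback discussed in the PPS is left out here):

    // When navigation begins, reveal the previously hidden animation.
    // It stays visible until the new page renders.
    $(window).on("beforeunload", function () {
        $("#navigationAnimation").show();
    });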

It's helped along with a little jQuery here but this could easily be accomplished with vanilla JS if you fancied. The approach works by hooking into the beforeunload event that fires when "the window, the document and its resources are about to be unloaded". There's a little bit more to the functionality in the JavaScript above which I go into in the PPS below; essentially that covers backwards compatibility with earlier versions of IE.

I've coded this up in a manner that lends itself to re-use. I can imagine that you might also want to make use of the navigation animation if, for example, you had an expensive AJAX operation on a page and you didn't want the users to despair. So the navigation animation could become a kind of generic "I am doing something" animation instead - I leave it to your discretion.

Oh, and a final PS

I had initially planned to use an old school animated GIF instead of a CSS animation. The thing that stopped me taking this course of action is that, to quote an answer on Stack Overflow, "IE assumes that the clicking of a link heralds a new navigation where the current page contents will be replaced. As part of the process for preparing for that it halts the code that animates the GIFs." So I needed an animation that stayed animated. And lo, there were CSS animations...

Better make that a PPS - catering for IE 9 and earlier

I spoke a touch too soon when I expounded on how CSS animations were going to get me out of a hole. Unfortunately, and to my lasting regret, they aren't supported in IE 9. And yes, at least for now that is what the users have. To get round this I've delved a little bit further and discovered a frankly hacky way to make animated gifs stay animated after beforeunload has fired. It works by rendering an animated gif to the screen when beforeunload is fired. Why this works I couldn't say - but if you're interested to research more then take a look at this answer on Stack Overflow. In my case I've found an animated gif on AjaxLoad which looks pretty similar to the CSS animation:

This is now saved away as navigationAnimation.gif in the application. The JavaScript uses Modernizr to detect if CSS animations are in play. If they're not then the animated gif is rendered to the screen in place of the CSS animation HTML. Ugly, but it seems to work well; I think this will work on IE 6 - 9. The CSS animations will work on IE 10+.
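
A sketch of how the detection might look (Modernizr.cssanimations is the genuine Modernizr feature flag; the rest is illustrative):

    $(window).on("beforeunload", function () {
        if (Modernizr.cssanimations) {
            // Modern browsers: show the CSS animation
            $("#navigationAnimation").show();
        } else {
            // IE 9 and earlier: render an animated gif *after* beforeunload
            // has fired, which (hackily) stops IE halting the gif's animation
            $("#navigationAnimation")
                .html('<img src="navigationAnimation.gif" alt="Loading..." />')
                .show();
        }
    });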

Wednesday 17 April 2013

IE 10 Install Torches JavaScript Debugging in Visual Studio 2012 Through Auto Update (Probably)

OK the title of this post is a little verbose. I've just wasted a morning of my life trying to discover what happened to my ability to debug JavaScript in Visual Studio 2012. If you don't want to experience the same pain then read on...

The Symptoms

  1. I'm not hitting my JavaScript breakpoints when I hit F5 in Visual Studio.
  2. Script Documents is missing from the Solution Explorer when I'm debugging in Visual Studio.

The Cure

In the end, after a great deal of frustration, I happened upon this answer on Stack Overflow. It set me in the right direction.

In my "Browse With..." drop down in Visual Studio I was *not* seeing this:

I was seeing exactly the same list, but with TWO instances of Internet Explorer in it instead of one. Odd, I know.

I fixed this up by selecting Google Chrome as my target instead of IE, running it and then setting it back to IE. And interestingly, when I went to set it back to IE there was only one instance of Internet Explorer in the list again.

The Probable Cause

My machine was auto updated from IE 9 to IE 10 just the other day. I *think* my JavaScript debugging issue appeared at the same time. This would explain to me why I had two instances of "Internet Explorer" in my list. Not certain but I'd say the evidence is fairly compelling.

Painful, Microsoft. Painful.

Tuesday 9 April 2013

Making IE 10's clear field (X) button and jQuery UI autocomplete play nice

This morning when I logged on I was surprised to discover IE 10 had been installed onto my machine. I hadn't taken any action to trigger this myself and so I’m assuming that this was part of the general Windows Update mechanism. I know Microsoft had planned to push IE 10 out through this mechanism.

I was a little surprised that my work desktop had been upgraded without any notice. And I was initially rather concerned given that most of my users have IE 9 and now I didn't have a test harness on my development machine any more. (I've generally found that having your users' main browser on your own machine is a good idea.) However, I wasn't too concerned as I didn't think it would make much of a difference to my development experience. I say that because IE 10, as far as I understand, is basically IE 9 + more advanced CSS 3 and extra HTML 5 features. The rendering of my existing content developed for the IE 9 target should look pixel for pixel identical in IE 10. That's the theory anyway.

However, I have found one exception to this rule already. IE 10 provides clear field buttons in text boxes that look like this:

Unhappily I found these were clashing with our jQuery UI auto complete loading gif – looking like this:

I know; ugly isn't it? Happily I was able to resolve this with a CSS hack fix which looks like this:
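
The fix hinges on IE 10's ::-ms-clear pseudo-element, which represents the clear field button. Something like this, assuming jQuery UI's standard ui-autocomplete-loading class:

    /* Hide IE 10's clear field (X) button, but only while
       jQuery UI has flagged the textbox as loading */
    .ui-autocomplete-loading::-ms-clear {
        display: none;
    }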

And now the jQuery UI autocomplete looks like we expect during the loading phase:

But happily when the autocomplete is not in the loading phase we still have access to the IE 10 clear field button. This works because the CSS selector above only applies to the ui-autocomplete-loading class (which is only applied to the textbox when the loading is taking place). So we still get to use this:

Which is nice.

Monday 1 April 2013

Death to compatibility mode

For just over 10 years my bread and butter has been the development and maintenance of line of business apps. More particularly, web apps built on the Microsoft stack of love (© Scott Hanselman). These sorts of apps are typically accessed via the company intranet and since "bring your own device" is still a relatively new innovation these apps are invariably built for everyone's favourite browser: Internet Explorer. As we all know, enterprises are generally not that speedy when it comes to upgrades. So we're basically talking IE 9 at best, but more often than not, IE 8.

Now, unlike many people, I don't regard IE as a work of evil. I spent a fair number of years working for an organization which had IE 6 as the only installed browser on company desktops. (In fact, this was still the case as late as 2012!) Now, because JavaScript is so marvellously flexible I was still able to do a great deal with the help of a number of shivs / shims.

But rendering and CSS - well that's another matter. Because here we're at the mercy of "compatibility mode". Perhaps a quick history lesson is in order. What is this "compatibility mode" of which you speak?

A Brief History

Well it all started when Microsoft released IE 8. To quote them:

A fundamental problem discussed during each and every Internet Explorer release is balancing new features and functionality with site compatibility for the existing Web. On the one hand, new features and functionality push the Web forward. On the other hand, the Web is a large expanse; requiring every legacy page to support the "latest and greatest" browser version immediately at product launch just isn't feasible. Internet Explorer 8 addresses this challenge by introducing compatibility modes which gives a way to introduce new features and stricter compliance to standards while enabling it to be backward compliant.
- excerpted from understanding compatibility modes in Internet Explorer 8.

There's the rub

Sounds fair enough? Of course it does. Microsoft have generally bent over backwards to facilitate backwards compatibility. Quite right too - good business sense and all that. However, one of the choices made around backwards compatibility I've come to regard as somewhat irksome. Later down in the article you'll find this doozy: (emphasis mine)

"for Intranet pages, 7 (IE 7 Standards) rendering mode is used by default and can be changed."

For whatever reason, this decision was not particularly well promoted. As a result, a fair number of devs I've encountered have little or no knowledge of compatibility mode. Certainly it came as a surprise to me. Here was I, developing away on my desktop. I'd fire up the app hosted on my machine and test on my local install of IE 8. All would look new and shiny (well non-anchor tags would have :hover support). Happy and content, I'd push to our test system and browse to it. Wait, what's happened? Where's the new style rendering? What's up with my CSS? This is a bug right?

Obviously I know now it's not a bug, it's a "feature". And I have learned how to get round the intranet default of compatibility mode through cunning deployment of meta tags and custom HTTP headers. Recently compatibility mode has come to bite me for the second time (in this case I was building for IE 9 and was left wondering where all my rounded corners had vanished to when I deployed...).

For my own sanity I thought it might be good to document the various ways that exist to solve this particular problem. Just to clarify terms, "solve" in this context means "force IE to render in the most standards compliant / like other browsers fashion it can muster". You can use compatibility mode to do more than just that and if you're interested in more about this then I recommend this Stack Overflow answer.

Solution 1: Custom HTTP Header through web.config

If you're running IIS7 or greater then, for my money, this is the simplest and most pain free solution. All you need do is include the following snippet in your web.config file:
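
That is, the X-UA-Compatible header registered under the standard system.webServer/customHeaders section:

    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <add name="X-UA-Compatible" value="IE=edge" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>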

This will make IIS serve up the above custom response HTTP header with each page.

Solution 2: Custom HTTP Header the hard way

Maybe you're running IIS 6 and so making a change to the web.config won't make a difference. That's fine, you can still get the same behaviour by going to the HTTP headers tab in IIS (see below) and adding the X-UA-Compatible: IE=edge header by hand.

Or, if you don't have access to IIS (don't laugh - it happens) you can fall back to doing this in code like this:
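
A sketch of one way to do it (in Global.asax.cs, per the suggestion below):

    // In Global.asax.cs
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Serve the header with every response, just as IIS would
        HttpContext.Current.Response.AddHeader("X-UA-Compatible", "IE=edge");
    }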

Obviously there's a whole raft of ways you could get this in; using Application_BeginRequest in Global.asax.cs would probably be as good an approach as any.

Solution 3: Meta Tags are go!

The final approach uses meta tags. And, in my experience, it is the most quirky approach - it doesn't always seem to work. First up, what do we do? Well, in each page served we include the following meta tag:
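
That tag being the standard X-UA-Compatible one:

    <meta http-equiv="X-UA-Compatible" content="IE=edge" />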

Having crawled over the WWW equivalent of broken glass I now know why this *sometimes* doesn't work. (And credit where it's due the answer came from here.) It's all down to the positioning of the meta tag:

The X-UA-compatible header is not case sensitive; however, it must appear in the Web page's header (the HEAD section) before all other elements, except for the title element and other meta elements.
- excerpted from specifying legacy document modes

That's right, get your meta tag in the wrong place and things won't work. And you won't know why. Lovely. But get it right and it's all gravy. This remains the most unsatisfactory approach in my book though.

And for bonus points: IFRAMEs!

Before I finish off I thought it worth sharing a little known feature of IFRAMEs. If a page is running in compatibility mode and it contains an IFRAME then the page loaded in that IFRAME will also run in compatibility mode. No ifs, no buts.

In the case that I encountered this behaviour, the application was being hosted in an IFRAME inside SharePoint. Because of the way our SharePoint was configured it ended up that the only real game in town for us was the meta tags approach - which happily worked once we'd correctly placed our meta tag.

Again, it's lamentable that this behaviour isn't better documented - hopefully the act of writing this here will mean that it becomes a little better known. There's probably a good reason for this behaviour, though frankly I don't know what it is. If anyone does, I'd be interested.

That's it

Armed with the above I hope you have less compatibility mode pain than I have. The following blog entry is worth a read by the way:

http://blogs.msdn.com/b/ie/archive/2009/02/16/just-the-facts-recap-of-compatibility-view.aspx

Finally, I have an open question about compatibility mode. I think (but I don't know) that even in compatibility mode IE runs using the same JavaScript engine. However I suspect it has a different DOM to play with. If anyone knows a little more about this and wants to let me know that'd be fantastic.

Monday 11 March 2013

DecimalModelBinder for nullable Decimals

My memory appears to be a sieve. Twice in the last year I've forgotten that MVC's model binding doesn't handle regionalised numbers terribly well. Each time I've thought "hmmmm.... best Google that" and lo and behold come upon this post on the issue by the fantastic Phil Haack:

http://haacked.com/archive/2011/03/19/fixing-binding-to-decimals.aspx

This post has got me 90% of the way there, the last 10% being me tweaking it so the model binder can handle nullable decimals as well.

In the expectation that I may forget this again, I thought I'd note down my tweaks now and hopefully save myself some time when I'm next looking at this...
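
A sketch of the tweak follows. It's Phil Haack's DecimalModelBinder with an early exit added so that an empty value binds to null (which is exactly what a decimal? property wants) rather than choking; the names and structure follow his post, but treat this as illustrative rather than the exact original:

    using System;
    using System.Globalization;
    using System.Web.Mvc;

    public class DecimalModelBinder : IModelBinder
    {
        public object BindModel(ControllerContext controllerContext,
                                ModelBindingContext bindingContext)
        {
            var valueResult = bindingContext.ValueProvider
                .GetValue(bindingContext.ModelName);

            // The tweak: no value supplied means bind to null
            if (valueResult == null || string.IsNullOrEmpty(valueResult.AttemptedValue))
                return null;

            var modelState = new ModelState { Value = valueResult };
            object actualValue = null;
            try
            {
                // Convert.ToDecimal copes with regionalised values
                // that the DefaultModelBinder chokes on
                actualValue = Convert.ToDecimal(valueResult.AttemptedValue,
                                                CultureInfo.CurrentCulture);
            }
            catch (FormatException e)
            {
                modelState.Errors.Add(e);
            }

            bindingContext.ModelState.Add(bindingContext.ModelName, modelState);
            return actualValue;
        }
    }

And it needs registering for both decimal and decimal? (in Application_Start):

    ModelBinders.Binders.Add(typeof(decimal), new DecimalModelBinder());
    ModelBinders.Binders.Add(typeof(decimal?), new DecimalModelBinder());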

And now a question...

Why hasn't MVC got an out-of-the-box model binder that does this anyway? In Phil Haack's original post it looks like they were considering putting this into MVC itself at some point:

"... In that case, the DefaultModelBinder chokes on the value. This is unfortunate because jQuery Validate allows that value just fine. I’ll talk to the rest of my team about whether we should fix this in the next version of ASP.NET MVC, but for now it’s good to know there’s a workaround..."

If anyone knows the reason this never made it into core I'd love to know. Maybe there's a good reason?

Sunday 3 March 2013

Unit testing ModelState

  • Me: "It can't be done"
  • Him: "Yes it can"
  • Me: "No it can't"
  • Him: "Yes it can, I've just done it"
  • Me: "Ooooh! Show me ..."

The above conversation (or one much like it) took place between my colleague Marc Talary and myself a couple of weeks ago. It was one of those faintly embarrassing situations where you state your case with absolute certainty only to subsequently discover that you were *completely* wrong. Ah arrogance, thy name is Reilly...

The disputed situation in this case was ModelState validation in ASP.Net MVC. How can you unit test a model's validation driven by DataAnnotations? If at all. Well it can be done, and here's how.

Simple scenario

Let's start with a simple model:
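
Something like this hypothetical model will do for our purposes (the real one doesn't matter; it's the DataAnnotations that count):

    using System.ComponentModel.DataAnnotations;

    public class UserModel
    {
        [Required]
        public string Name { get; set; }

        [Range(18, 120)]
        public int Age { get; set; }
    }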

And let's have a controller which makes use of that model:
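
Again illustratively; the important part is the ModelState.IsValid check guarding the happy path:

    using System.Web.Mvc;

    public class UserController : Controller
    {
        [HttpPost]
        public ActionResult Edit(UserModel model)
        {
            if (ModelState.IsValid)
            {
                // Save the changes, etc.
                return RedirectToAction("Index");
            }
            return View(model);
        }
    }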

When I was first looking at unit testing this I was slightly baffled by the behaviour I witnessed. I took an invalid model (where the properties set on the model were violating the model's validation DataAnnotations):
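
For instance, using the hypothetical UserModel from above:

    // Name violates [Required]; Age violates [Range(18, 120)]
    var model = new UserModel { Name = "", Age = 12 };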

I passed the invalid model to the Edit controller action inside a unit test. My expectation was that the ModelState.IsValid code path would *not* be followed as this was *not* a valid model. So ModelState.IsValid should evaluate to false, right? Wrong!

Contrary to my expectation the validity of ModelState is not evaluated on the fly inside the controller. Rather it is determined during the model binding that takes place *before* the actual controller action method is called. Call the action method directly in a unit test and no model binding happens, so no errors are ever added to ModelState. And that completely explains why, during my unit test with an invalid model, we find we're following the ModelState.IsValid code path.

Back to the dispute

Back when the conversation that opened this blog post took place, I was slightly missing Marc's point. I thought he was saying we should be testing the ModelState.IsValid == false code path. And given that ModelState is determined before we reach the controller, my view was that the only way to achieve this was through making use of ModelState.AddModelError in our unit test (you can read a good explanation of that here). And indeed we were already testing for this; we were surfacing errors via a JsonResult and so had a test in place to ensure that ModelState errors were transformed in the manner we would expect.

However, Marc's point was actually that we should have unit tests that enforced our design. That is to say, if we'd decided a certain property on a model was mandatory we should have a test that checked that this was indeed the case. If someone came along later and removed the Required data annotation then we wanted that test to fail.

It's worth saying, we didn't want a unit test to ensure that ASP.Net MVC worked as expected. Rather, where we had used DataAnnotations against our models to drive validation, we wanted to ensure the validation didn't disappear further down the track. Just to be clear: we wanted to test our code, not Microsoft's.

Now I get to learn something

When I grasped Marc's point I thought that the only way to write these tests would be to make use of reflection. And whilst we could certainly do that I wasn't entirely happy with that as a solution. To my mind it was kind of testing "at one remove", if you see what I mean. What I really wanted was to see that MVC was surfacing validations in the manner I might have hoped. And you can!

... Drum roll... Ladies and gents, may I present Marc's ModelStateTestController:
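
What follows is a sketch of the idea rather than Marc's verbatim code: a test-only controller that surfaces MVC's protected TryValidateModel, with a Moq-mocked HttpContext because TryValidateModel needs a ControllerContext to be in place. (Here it returns the ModelState for the asserts; the exact shape of the original may differ.)

    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;
    using Moq;

    public class ModelStateTestController : Controller
    {
        public ModelStateTestController()
        {
            // TryValidateModel blows up without a ControllerContext,
            // so fake the HttpContext it hangs off
            var httpContext = new Mock<HttpContextBase>();
            ControllerContext =
                new ControllerContext(httpContext.Object, new RouteData(), this);
        }

        public ModelStateDictionary TestTryValidateModel(object model)
        {
            // Runs the same validation MVC performs during model binding
            TryValidateModel(model);
            return ModelState;
        }
    }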

This class is, as you can see, incredibly simple. It is a controller, it inherits from System.Web.Mvc.Controller and establishes a mock context in the constructor using Moq. This controller exposes a single method: TestTryValidateModel. This method internally determines the controller's ModelState given the supplied object by calling off to MVC's (protected) TryValidateModel method (TryValidateModel evaluates ModelState).

This simple class allows us to test the validations on a model in a simple fashion that stays close to the way our models will actually be used in the wild. It's pragmatic and it's useful.

An example

Let me wrap up with an example unit test. The test below makes use of the ModelStateTestController to check the application of the DataAnnotations on our model:
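
Something along these lines (MSTest flavoured, and using the hypothetical UserModel from earlier; any test framework would do):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class UserModelValidationTests
    {
        [TestMethod]
        public void UserModel_WithMissingName_FailsValidation()
        {
            // Arrange - a model violating the [Required] annotation
            var model = new UserModel { Name = "", Age = 30 };
            var controller = new ModelStateTestController();

            // Act
            var modelState = controller.TestTryValidateModel(model);

            // Assert - if someone later removes [Required] from Name,
            // this test fails and the design is enforced
            Assert.IsFalse(modelState.IsValid);
            Assert.IsTrue(modelState["Name"].Errors.Count > 0);
        }
    }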

Wrapping up

In a way I think it's a shame that TryValidateModel is a protected method. If it weren't it would be simplicity itself to write a unit test which tested the ModelState directly in the context of the action method. It would be possible to get round this by establishing a base controller class, from which all our controllers would inherit, implementing the TestTryValidateModel method from above. On the other hand maybe it's good to have clarity about the difference between testing model validations and testing controller actions. Something to ponder...