Saturday, 28 April 2012

Beg, Steal or Borrow a Decent JavaScript DateTime Converter

I've so named this blog post because it shamelessly borrows from the fine work of others: Sebastian Markbåge and Nathan Vonnahme.

Sebastian wrote a blog post documenting a good solution to the ASP.NET JavaScriptSerializer DateTime problem at the tail end of last year. However, his solution didn't get me 100% of the way there because I needed to support IE 8, which led me to use Nathan Vonnahme's ISO 8601 JavaScript Date parser.

I thought it was worth documenting this, hence this post, but just so I'm clear: the hard work here was done by Sebastian Markbåge and Nathan Vonnahme, not me. Consider me just a curator in this case. The original blog posts that I am drawing upon can be found here:
  1. http://blog.calyptus.eu/seb/2011/12/custom-datetime-json-serialization/

    and here:

  2. http://n8v.enteuxis.org/2010/12/parsing-iso-8601-dates-in-javascript/

DateTime, JSON, JavaScript Dates...

Like many, I've long been frustrated with the quirky DateTime serialisation employed by the System.Web.Script.Serialization.JavaScriptSerializer class. When serialising DateTimes so they can be JSON.parsed on the client, this serialiser uses the following approach: (from MSDN)

Date object, represented in JSON as "\/Date(number of ticks)\/". The number of ticks is a positive or negative long value that indicates the number of ticks (milliseconds) that have elapsed since midnight 01 January, 1970 UTC.

Now this is not particularly helpful in my opinion because it's not human readable (at least not to this human; perhaps to Jon Skeet...). When consuming your data from web services / PageMethods using jQuery.ajax, you are landed with the extra task of converting what were DateTimes on the server from Microsoft's string Date format (e.g. "\/Date(1293840000000)\/") into actual JavaScript Dates.
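For illustration, here's the sort of helper you end up writing (the function name here is mine, purely illustrative):

// Convert Microsoft's "\/Date(ticks)\/" format into a JavaScript Date
// by extracting the milliseconds-since-epoch value.
function msDateToJsDate(msDate) {
    var ticks = parseInt(msDate.replace(/\/Date\((-?\d+)\)\//, "$1"), 10);
    return new Date(ticks);
}

var orderDate = msDateToJsDate("\/Date(1293840000000)\/");
// Sat Jan 1 00:00:00 UTC 2011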

It's also unhelpful because it diverges from the approach to DateTime / Date serialisation used by native JSON serialisers:
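Run this in a browser console and you get ISO 8601 strings:

var json = JSON.stringify({ date: new Date(Date.UTC(2011, 0, 1)) });
// json is '{"date":"2011-01-01T00:00:00.000Z"}' - an ISO 8601 string

var parsed = JSON.parse(json);
// parsed.date is still a string, not a Date - see the aside below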


Just as an aside, it's worth emphasising that one of the limitations of JSON is that JSON.parsing a JSON.stringified date will *not* return you a JavaScript Date but rather an ISO 8601 date string, which will need to be subsequently converted into a Date. That's not JSON's fault - it's essentially down to the absence of a Date literal within JavaScript.

Making JavaScriptSerializer behave more JSON'y

Anyway, I didn't think there was anything I could really do about this in an ASP.NET classic / WebForms world because, to my knowledge, it is not possible to swap out the serialiser that is used. JavaScriptSerializer is the only game in town. (Though I am optimistic about the future, given the announcement, which I first picked up on Rick Strahl's blog, that Json.NET was to be adopted as the default JSON serializer for ASP.NET Web API; Json.NET has out-of-the-box ISO 8601 support. I digress...)

Because it can make debugging a much more straightforward process, I place a lot of value on being able to read the network traffic that web apps generate. It's much easier to drop into Fiddler / Firebug / Chrome dev tools etc. and watch what's happening there and then, rather than having to manually process the data separately before you can understand it. I think this is nicely aligned with the KISS principle. For that reason I've generally been converting DateTimes to ISO 8601 strings on the server before returning them to the client. A bit of extra overhead, but generally worth it for the gains in clarity in my opinion.

So I was surprised and delighted when I happened upon Sebastian Markbåge's blog post which provided a DateTime JavaScriptConverter that could be plugged into the JavaScriptSerializer. You can see the code below (or on Sebastian's original post with a good explanation of how it works):
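(The code below is a sketch of the shape of Sebastian's converter rather than a verbatim copy; his post has the genuine article.) The central trick: Serialize on a JavaScriptConverter can only return a dictionary, so a CustomString class inheriting from Uri - a type JavaScriptSerializer writes out as a plain string - is used to smuggle the raw ISO 8601 string into the output:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Web.Script.Serialization;

public class DateTimeJsonConverter : JavaScriptConverter
{
    public override IEnumerable<Type> SupportedTypes
    {
        get { yield return typeof(DateTime); }
    }

    public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
    {
        // Serialise using the round-trip ("O") format specifier,
        // e.g. "2011-01-01T00:00:00.0000000Z"
        return new CustomString(((DateTime)obj).ToUniversalTime().ToString("O"));
    }

    public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
    {
        // Incoming dates arrive as plain ISO 8601 strings rather than
        // dictionaries, so this is never called for DateTimes.
        return null;
    }

    // Inherits from Uri because JavaScriptSerializer emits a Uri as a plain
    // string; implements IDictionary<string, object> purely so it can be
    // returned from Serialize. None of the dictionary members are ever called.
    private class CustomString : Uri, IDictionary<string, object>
    {
        public CustomString(string value) : base(value, UriKind.Relative) { }

        public void Add(string key, object value) { throw new NotSupportedException(); }
        public bool ContainsKey(string key) { throw new NotSupportedException(); }
        public ICollection<string> Keys { get { throw new NotSupportedException(); } }
        public bool Remove(string key) { throw new NotSupportedException(); }
        public bool TryGetValue(string key, out object value) { throw new NotSupportedException(); }
        public ICollection<object> Values { get { throw new NotSupportedException(); } }
        public object this[string key]
        {
            get { throw new NotSupportedException(); }
            set { throw new NotSupportedException(); }
        }
        public void Add(KeyValuePair<string, object> item) { throw new NotSupportedException(); }
        public void Clear() { throw new NotSupportedException(); }
        public bool Contains(KeyValuePair<string, object> item) { throw new NotSupportedException(); }
        public void CopyTo(KeyValuePair<string, object>[] array, int arrayIndex) { throw new NotSupportedException(); }
        public int Count { get { throw new NotSupportedException(); } }
        public bool IsReadOnly { get { throw new NotSupportedException(); } }
        public bool Remove(KeyValuePair<string, object> item) { throw new NotSupportedException(); }
        public IEnumerator<KeyValuePair<string, object>> GetEnumerator() { throw new NotSupportedException(); }
        IEnumerator IEnumerable.GetEnumerator() { throw new NotSupportedException(); }
    }
}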


Using this converter meant that a DateTime that previously would have been serialised as "\/Date(1293840000000)\/" would now be serialised as "2011-01-01T00:00:00.0000000Z" instead. This is entirely agreeable because
  1. it's entirely clear what a "2011-01-01T00:00:00.0000000Z" style date represents and
  2. this is more in line with native browser JSON implementations and <statingTheObvious>consistency is a good thing.</statingTheObvious>

Getting your web services to use the ISO 8601 DateTime Converter

Sebastian alluded in his post to a web.config setting that could be used to get web services / page methods etc. using his custom DateTime serialiser. This is it:
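The converter is registered with the serializer that ASP.NET AJAX uses under the covers; note that the type and assembly names below are placeholders for your own:

<configuration>
  <system.web.extensions>
    <scripting>
      <webServices>
        <jsonSerialization>
          <converters>
            <!-- Type and assembly names are illustrative -->
            <add name="DateTimeJsonConverter"
                 type="MyApp.Serialization.DateTimeJsonConverter, MyApp" />
          </converters>
        </jsonSerialization>
      </webServices>
    </scripting>
  </system.web.extensions>
</configuration>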


With this in place your web services / page methods will happily serialise / deserialise ISO style date strings to your heart's content.

What, no ISO 8601 date string Date constructor?

As I mentioned earlier, Sebastian's solution didn't get me 100% of the way there. There was still a fly in the ointment in the form of IE 8. Unfortunately IE 8 doesn't have a JavaScript Date constructor that accepts ISO 8601 date strings. This led me to Nathan Vonnahme's ISO 8601 JavaScript Date parser, the code of which is below (or see his original post here):
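(The sketch below captures the spirit of Nathan's parser rather than reproducing it verbatim; his original copes with more of ISO 8601's optional parts, so defer to his post for production use.)

function parseISO8601Date(s) {
    // Capture date, time, optional fractional seconds and optional timezone
    var re = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.(\d+))?(Z|[+-]\d{2}:?\d{2})?$/;
    var m = s.match(re);
    if (!m) {
        return null;
    }
    // Fractional seconds of any length (e.g. ".0000000") become milliseconds
    var ms = m[7] ? Math.round(parseFloat("0." + m[7]) * 1000) : 0;
    // JavaScript Date months are zero-based, hence the -1
    var time = Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6], ms);
    if (m[8] && m[8] !== "Z") {
        // A +hh:mm zone is ahead of UTC, so subtract the offset (and vice versa)
        var sign = m[8].charAt(0) === "+" ? -1 : 1;
        var digits = m[8].replace(":", "");
        time += sign * (+digits.substr(1, 2) * 60 + +digits.substr(3, 2)) * 60000;
    }
    return new Date(time);
}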


With this in place I could parse ISO 8601 Dates just like anyone else. Great stuff. parseISO8601Date("2011-01-01T00:00:00.0000000Z") would give me a JavaScript Date of Sat Jan 1 00:00:00 UTC 2011.

Obviously, in the fullness of time the parseISO8601Date solution should no longer be necessary because ECMAScript 5 specifies an ISO 8601 date string constructor. However, in the interim Nathan's solution is a lifesaver.

Thanks again both to Sebastian Markbåge and Nathan Vonnahme who have both generously allowed me use their work as the basis for this post.

PS And it would have worked if it wasn't for that pesky IE 9...

Subsequent to writing this post I thought I'd check that IE 9 had implemented a JavaScript Date constructor that would process an ISO 8601 date string like this: new Date("2011-01-01T00:00:00.0000000Z"). It hasn't. Take a look:
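In console terms, this is the behaviour I saw:

// In the IE 9 console:
new Date("2011-01-01T00:00:00.0000000Z"); // Invalid Date in IE 9
new Date("2011-01-01T00:00:00.000Z");     // works: Sat Jan 1 00:00:00 UTC 2011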


This is slightly galling, as the above code works dandy in Firefox and Chrome. As you can see, you can get the IE 9 Date constructor to play nice by trimming off the final four "0"s from the string. Frustrating. Obviously we can still use Nathan's solution, but it's a shame that we can't use the native support.

Based on what I've read here I think it would be possible to amend Sebastian's serializer to fall in line with IE 9's pedantry by changing this:


return new CustomString(((DateTime)obj).ToUniversalTime()
.ToString("O")
);


To this:


return new CustomString(((DateTime)obj).ToUniversalTime()
.ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffzzz")
);


I've held off from doing this myself as I rather like Sebastian's idea of being able to use Microsoft's Round-trip ("O", "o") Format Specifier. And it seems perverse that we should have to move away from using Microsoft's Round-trip Format Specifier purely because of (Microsoft's) IE! But it's a possibility to consider and so I put it out there. I would hope that MS will improve their JavaScript Date constructor in IE 10. A missed opportunity if they don't, I think.

PPS Just when you thought it was over... IE 9 was right!

Sebastian got in contact after I first published this post and generously pointed out that, contrary to my expectation, IE 9 technically has the correct implementation. According to the ECMAScript standard the Date constructor should not allow more than millisecond precision. In this case, Chrome and Firefox are being less strict - not more correct.

On reflection this does rather make sense, as JSON.stringify(new Date()) never results in an ISO date string with ten-millionth-of-a-second precision. Sebastian has himself stopped using Microsoft's Round-trip ("O", "o") Format Specifier in favour of this format string:


return new CustomString(((DateTime)obj).ToUniversalTime()
.ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
);


This results in date strings that comply perfectly with the ECMAScript spec. I suspect I'll switch to using this also now. Though I'll probably leave the first part of the post intact as I think the background remains interesting.

Thanks again Sebastian!

Monday, 23 April 2012

JSHint - Customising your hurt feelings

As I've started making greater use of JavaScript to give a richer GUI experience, the amount of JS in my ASP.NET apps has unsurprisingly ballooned. If I'm honest, I hadn't given much consideration to the code quality of my JavaScript in the past. However, if I was going to make increasing use of it (and given the way the web is going at the moment I'd say that's a given) I didn't think this was a tenable position to maintain.

A friend of mine works for Coverity, a company that provides tools for analysing code quality. I understand, from conversations with him, that their tools provide static analysis for compiled languages such as C++ / C# / Java. I was looking for something similar for JavaScript.

Like many, I have read and loved Douglas Crockford's "JavaScript: The Good Parts"; it is by some margin the most useful and interesting software-related book I have read. So I was aware that Crockford had come up with his own JavaScript code quality tool called JSLint. JSLint is quite striking when you first encounter it:


It's the "Warning! JSLint will hurt your feelings." that grabs you. And it's not wrong. I've copied and pasted code that I've written into JSLint and then gasped at the reams of errors JSLint would produce. I subsequently tried JSLint-ing various well known JS libraries (jQuery etc) and saw that JSLint considered they were thoroughly problematic as well. This made me feel slightly better.

It was when I started examining some of the "errors" JSLint reported that I took exception. Yes, I took exception to exceptions! (I'm *very* pleased with that!) Here are a few of the errors generated by JSLint when inspecting jquery-1.7.2.js:
  • Problem at line 16 character 10: Expected exactly one space between 'function' and '('.
  • Problem at line 25 character 1: Expected 'var' at column 13, not column 1.
  • Problem at line 31 character 5: Unexpected dangling '_' in '_jQuery'.

JSLint is, much like its creator, quite opinionated. Which is no bad thing. Many of Crockford's opinions are clearly worth their salt. It's just that I didn't want all of them enforced upon me. As you can see above, most of these "problems" are essentially complaints about a differing style rather than bugs or potential issues. Now there are options in JSLint that you can turn on and off, which looked quite promising. But before I got to investigating them I heard about JSHint, the brainchild of Anton Kovalyov and Paul Irish. In their own words:

JSHint is a fork of JSLint, the tool written and maintained by Douglas Crockford.

The project originally started as an effort to make a more configurable version of JSLint—one that doesn't enforce one particular coding style on its users—but then transformed into a separate static analysis tool with its own goals and ideals.


This sounded right up my alley! So I thought I'd repeat my jQuery test. Here's a sample of what JSHint threw back at me, with its default settings in place:
  • Line 230: return num == null ? Expected '===' and instead saw '=='.
  • Line 352: if ( (options = arguments[ i ]) != null ) { Expected '!==' and instead saw '!='.
  • Line 354: for ( name in options ) { The body of a for in should be wrapped in an if statement to filter unwanted properties from the prototype.

These were much more the sort of "issues" I was interested in. Plus it seemed there was plenty of scope to tweak my options. Excellent.
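By way of example, JSHint options can be set inline with a comment at the top of a file, which makes per-file tweaking straightforward (the particular options picked here are just for illustration):

/*jshint eqeqeq:true, forin:true, curly:true */

// With eqeqeq on, JSHint insists on === / !== comparisons; with forin on,
// for-in bodies must filter out inherited properties, as below.
function countOwnProperties(obj) {
    var count = 0;
    for (var name in obj) {
        if (obj.hasOwnProperty(name)) {
            count += 1;
        }
    }
    return count;
}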

This was good. The icing on my cake would have been a plug-in for Visual Studio which would allow me to evaluate my JS files from within my IDE. Happily the world seems to be full of developers doing good turns for one another. I discovered an extension for VS called JSLint for Visual Studio 2010:


This was an extension that provided either JSLint *or* JSHint evaluation as you preferred from within Visual Studio. Fantastic! With this extension in play you could add JavaScript static code analysis to your compilation process and so learn of all the issues in your code at the same time, whether they lay in C# or JS or [insert language here].

You could control how JS problems were reported: as warnings, errors etc. You could straightforwardly exclude files from evaluation (essential if you're reliant on a number of 3rd party JS libraries which you are not responsible for maintaining). You could cater for predefined variables, allowing for the likes of jQuery or Dojo. You could evaluate a single file in your solution simply by right-clicking it and hitting the "JS Lint" option in the context menu. And it was simplicity itself to activate and deactivate the JSHint / JSLint extension as required.

For a more exhaustive round up of the options available I advise taking a look here: http://jslint4vs2010.codeplex.com. I would heartily recommend using JSHint if you're looking to improve your JS code quality. I'm grateful to Crockford for making JSHint possible by first writing JSLint. For my part though I think JSHint is the more pragmatic and useful tool and likely to be the one I stick with.

For interest (and frankly for the sheer entertainment value of Crockford's crotchetiness) it's definitely worth having a read up on how JSHint came to pass.

Monday, 16 April 2012

A Simple Technique for Initialising Properties with Internal Setters for Unit Testing

I was recently working with my colleagues on refactoring a legacy application. We didn't have an immense amount of time available for this but the plan was to try and improve what was there as much as possible. In its initial state the application had no unit tests in place at all and so the plan was to refactor the code base in such a way as to make testing it a realistic proposition. To that end the domain layer was being heavily adjusted and the GUI was being migrated from WebForms to MVC 3. The intention was to build up a pretty solid collection of unit tests. However, as we were working on this we realised we had a problem with properties on our models with internal setters...

Background

The entities of the project in question used an approach which would store pertinent bits of normalised data for read-only purposes in related entities. I've re-read that sentence and realise it's as clear as mud. Here is an example to clarify:



public class Person
{
  public int Id { get; set; }
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public string Address { get; set; }
  public DateTime DateOfBirth { get; set; }
  /* Other fascinating properties... */
}

public class Order
{
  public int Id { get; set; }
  public string ProductOrdered { get; set; }
  public int OrderedById { get; set; }
  public string OrderedByFirstName { get; internal set; }
  public string OrderedByLastName { get; internal set; }
}

In the example above you have 2 types of entity: Person and Order. The Order entity makes use of the Id, FirstName and LastName properties of the Person entity in the properties OrderedById, OrderedByFirstName and OrderedByLastName. For persistence (i.e. saving to the database) purposes the only necessary Person property is the OrderedById identity. OrderedByFirstName and OrderedByLastName are just "nice to haves" - essentially present to make implementing the GUI more straightforward.

To express this behaviour / intention in the object model, the setters for OrderedByFirstName and OrderedByLastName are marked as internal. The implication is that these properties can only be set within the current assembly - or any explicitly associated "friend" assemblies. In practice this meant that internally set properties were only populated when an object was read in from the database. It wasn't possible to set these properties in other assemblies, which meant less code was written (a good thing) - after all, why set a property when you don't need to?

Background explanation over. It may still be a little unclear but I hope you get the gist.

What's our problem?

I was writing unit tests for the controllers in our main web application and was having problems with my arrangements. I was mocking the database calls in my controllers much in the manner that you might expect:


  // Arrange
  var orderDb = new Mock<IOrderDb>();
  orderDb
    .Setup(x => x.GetOrder(It.IsAny<int>()))
    .Returns(new Order {
      Id = 123,
      ProductOrdered = "Packet of coffee",
      OrderedById = 987456,
      OrderedByFirstName = "John",
      OrderedByLastName = "Reilly"
    });

All looks fine, doesn't it? It's not. Because OrderedByFirstName and OrderedByLastName have internal setters, we are unable to initialise them from within the context of our test project. So what to do?

We toyed with 3 approaches and since each has merits I thought it worth going through each of them:

  1. To the MOQumentation, Batman! (http://code.google.com/p/moq/wiki/QuickStart) Looking at the Moq documentation, it states the following:

    Mocking internal types of another project: add the following assembly attributes (typically to the AssemblyInfo.cs) to the project containing the internal types:

    
    // This assembly is the default dynamic assembly generated by Castle DynamicProxy,
    // used by Moq. Paste in a single line.
    [assembly:InternalsVisibleTo("DynamicProxyGenAssembly2,PublicKey=0024000004800000940000000602000000240000525341310004000001000100c547cac37abd99c8db225ef2f6c8a3602f3b3606cc9891605d02baa56104f4cfc0734aa39b93bf7852f7d9266654753cc297e7d2edfe0bac1cdcf9f717241550e0a7b191195b7667bb4f64bcb8e2121380fd1d9d46ad2d92d2d15605093924cceaf74c4861eff62abf69b9291ed0a340e113be11e6a7d3113e92484cf7045cc7")]
    [assembly: InternalsVisibleTo("The.NameSpace.Of.Your.Unit.Test")] //I'd hope it was shorter than that...
    
    

    This looked to be exactly what we needed and in most situations it would make sense to go with this. Unfortunately for us there was a gotcha. Certain core shared parts of our application platform were GAC'd. A requirement for GAC-ing an assembly is that it is signed.

    The upshot of this was that if we wanted to use the InternalsVisibleTo approach then we would need to sign our web application test project. We weren't particularly averse to that and initially did so without much thought. It was then we remembered that every assembly referenced by a signed assembly must also be signed. We didn't really want to sign our main web application purely for testing purposes. We could have, and if there weren't viable alternatives we might well have. But it just seemed like the wrong reason to be taking that decision. Like using a sledgehammer to crack a nut.

  2. The next approach we took was using mock objects. Instead of using our objects straight we would mock them as below:

    
          //Create mock and set internal properties
          var orderMock = new Mock<Order>();
          orderMock.SetupGet(x => x.OrderedByFirstName).Returns("John");
          orderMock.SetupGet(x => x.OrderedByLastName).Returns("Reilly");
    
          //Set up standard properties
          orderMock.SetupAllProperties();
          var orderStub = orderMock.Object;
          orderStub.Id = 123;
          orderStub.ProductOrdered = "Packet of coffee";
          orderStub.OrderedById = 987456;
    
    

    Now this approach worked fine but had a couple of snags:

    • As you can see it's pretty verbose and much less clear to read than it was previously.
    • It required that we add the virtual keyword to all our internally set properties like so:

      
      public class Order
      {
        // ....
        public virtual string OrderedByFirstName { get; internal set; }
        public virtual string OrderedByLastName { get; internal set; }
        // ...
      }
      
      
    • Our standard constructor already initialised the value of our internally set properties. So adding virtual to the internally set properties generated ReSharper warnings aplenty about virtual properties being initialised in the constructor. Fair enough.

    Because of the snags it still felt like we were in nutcracking territory...

  3. ... and this took us to the approach that we ended up adopting: a special mocking constructor for each class we wanted to test, for example:
    
    /// <summary>
    /// Mocking constructor used to initialise internal properties
    /// </summary>
        public Order(string orderedByFirstName = null, string orderedByLastName = null)
          : this()
        {
          OrderedByFirstName = orderedByFirstName;
          OrderedByLastName = orderedByLastName;
        }
    
    

    Thanks to the ever lovely Named and Optional Arguments feature of C#, combined with Object Initializers, it was possible to write quite expressive, succinct code using this approach; for example:

    
          var order = new Order(
            orderedByFirstName: "John",
            orderedByLastName: "Reilly"
          )
          {
            Id = 123,
            ProductOrdered = "Packet of coffee",
            OrderedById = 987456
          };
    
    

    Here we're calling the mocking constructor to set the internally set properties and subsequently initialising the other properties using the object initialiser mechanism.

    Implementing these custom constructors wasn't a massive piece of work and so we ended up settling on this technique for initialising internal properties.

Thursday, 5 April 2012

Making PDFs from HTML in C# using WKHTMLtoPDF

Update 03/01/2013

I've written a subsequent post which builds on the work of this original post. The new post exposes this functionality via a WCF service and can be found here.

Making PDFs from HTML

I wanted to talk about an approach I've discovered for making PDFs directly from HTML. I realise that in these wild and crazy days of PDF.js and the like, techniques like this must seem very old hat. That said, this technique works and, more importantly, it solved a problem I was faced with without forcing the users to move to the "newest hottest version of X". Much as many of us would love to solve problems that way, alas many corporations move slower than that and in the meantime we still have to deliver - we still have to meet requirements. Rather than just say "I did this" I thought I'd record how I got to this point in the first place. I don't know about you, but I find the reasoning behind why different technical decisions get made quite an interesting topic...

For some time I've been developing / supporting an application which is used in an intranet environment where the company mandated browser is still IE 6. It was a requirement that there be "print" functionality in this application. As is well known (even by Microsoft themselves) the print functionality in IE 6 was never fantastic. But the requirement for usable printouts remained.

The developers working on the system before me decided to leverage Crystal Reports (remember that?). Essentially there was a reporting component to the application at the time which created custom reports using Crystal and rendered them to the user in the form of PDFs (which have been eminently printable for as long as I care to remember). One of the developers working on the system realised that it would be perfectly possible to create some "reports" within Crystal which were really "print to PDF" screens for the app.

It worked well and this solution stayed in place for a very long time. However, some years down the line Crystal Reports was discarded as the reporting mechanism for the app. But we were unable to decommission Crystal entirely because we still needed it for printing.

I'd never really liked the Crystal solution for a number of reasons:

  1. We needed custom stored procs to drive the Crystal print screens; these procs were near duplicates of the main app procs. This duplication of effort never felt right.
  2. We had to switch IDEs whenever we were maintaining our print screens. And the Crystal IDE is not a joy to use.
  3. Perhaps most importantly, for certain users we needed to hide bits of information from the print. The version of Crystal we were using did not make the dynamic customisation of our print screens a straightforward proposition. (In its defence, we weren't really using it for what it was designed for.) As a result the developers before me had ended up creating various versions of each print screen revealing different levels of information. As you can imagine, this meant that the effort involved in making changes to the print screens had increased exponentially.

It occurred to me that if we could find some way of generating our own PDF reports without using Crystal, that would be a step forward. It was shortly after this that I happened upon WKHTMLtoPDF. This is an open source project which describes itself as a "Simple shell utility to convert html to pdf using the webkit rendering engine, and qt." I tested it out on various websites and it worked. It wasn't by any stretch of the imagination a perfect HTML to PDF tool but the quality it produced greatly outstripped the presentation then in place via Crystal.

This was just the ticket. Using WKHTMLtoPDF I could have simple web pages in the application which could be piped into WKHTMLtoPDF to make a PDF as needed. It could be dynamic - because ASP.NET is dynamic. We wouldn't need to write and maintain custom stored procs anymore. And happily we would no longer need to use Crystal.

Before we could rid ourselves of Crystal though, I needed a way that I could generate these PDFs on the fly within the website. For this I ended up writing a simple wrapper class for WKHTMLtoPDF which could be used to invoke it on the fly. In fact a good portion of this was derived from various contributions on a post on StackOverflow. It ended up looking like this:
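(The class below is a condensed sketch of the wrapper's shape rather than the original line-for-line; the default installation path and the error handling are illustrative.)

using System;
using System.Diagnostics;
using System.IO;
using System.Web;

public static class PdfGenerator
{
    /// <summary>
    /// Renders one or more URLs to a single PDF by shelling out to wkhtmltopdf,
    /// i.e. the equivalent of: wkhtmltopdf.exe http://news.bbc.co.uk output.pdf
    /// </summary>
    /// <returns>The application-relative URL of the generated PDF</returns>
    public static string HtmlToPdf(string pdfOutputLocation,
        string outputFilenamePrefix,
        string[] urls,
        // Assumes the default installation location if no path is supplied
        string wkhtmltopdfPath = @"C:\Program Files (x86)\wkhtmltopdf")
    {
        var outputFolder = HttpContext.Current.Server.MapPath(pdfOutputLocation);
        var outputFilename = outputFilenamePrefix + "_" +
            DateTime.Now.ToString("yyyyMMddHHmmss") + ".pdf";

        // wkhtmltopdf takes 1..n source URLs followed by the output file
        var arguments = string.Join(" ", urls) +
            " \"" + Path.Combine(outputFolder, outputFilename) + "\"";

        var startInfo = new ProcessStartInfo(
            Path.Combine(wkhtmltopdfPath, "wkhtmltopdf.exe"), arguments)
        {
            UseShellExecute = false,
            CreateNoWindow = true,
            WorkingDirectory = wkhtmltopdfPath
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
            if (!File.Exists(Path.Combine(outputFolder, outputFilename)))
                throw new InvalidOperationException(
                    "wkhtmltopdf did not generate: " + outputFilename);
        }

        return pdfOutputLocation.TrimEnd('/') + "/" + outputFilename;
    }
}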

With this wrapper I could pass in URLs and extract out PDFs. Here's a couple of examples of me doing just that:


    //Create PDF from a single URL
    var pdfUrl = PdfGenerator.HtmlToPdf(pdfOutputLocation: "~/PDFs/",
        outputFilenamePrefix: "GeneratedPDF",
        urls: new string[] { "http://news.bbc.co.uk" });

    //Create PDF from multiple URLs
    var pdfUrlMultiple = PdfGenerator.HtmlToPdf(pdfOutputLocation: "~/PDFs/",
        outputFilenamePrefix: "GeneratedPDF",
        urls: new string[] { "http://www.google.co.uk", "http://news.bbc.co.uk" });

As you can see from the second example above it's possible to pipe a number of URLs into the wrapper all to be rendered to a single PDF. Most of the time this was surplus to our requirements but it's good to know it's possible. Take a look at the BBC website PDF generated by the first example:

Pretty good, no? As you can see it's not perfect - the titles are a bit squashed - but I deliberately picked a more complicated page to show what WKHTMLtoPDF is capable of. The print screens I had in mind to build would be significantly simpler than this.

Once this was in place I was able to scrap the Crystal solution. It was replaced with a couple of "print to PDF" ASPXs in the main web app which would be customised when rendering to hide the relevant bits of data from the user. These ASPXs would be piped into the HtmlToPdf method as needed and the user would then be redirected to the resulting PDF. If for some reason the PDF failed to render, the user would see the straight "print to PDF" ASPX - just not as a PDF, if you see what I mean. I should say that it was pretty rare for a PDF not to render, but this was my failsafe.

This new solution had a number of upsides from our perspective:

  • Development maintenance time (and consequently cost for our customers) for print screens was significantly reduced. This was due to the print screens being part of the main web app. This meant they shared styling etc with all the other web screens and the dynamic nature of ASP.NET made customising a screen on the fly simplicity itself.
  • We were now able to regionalise our print screens for the users in the same way as we did with our main web app. This just wasn't realistic with the Crystal solution because of the amount of work involved.
  • I guess this is kind of a DRY solution :-)

You can easily make use of the above approach yourself. All you need do is download and install WKHTMLtoPDF on your machine. I advise using version 0.9.9 as the later release candidates appear slightly buggy at present.

A couple of gotchas:

  1. Make sure that you pass the correct installation path to the HtmlToPdf method if you installed wkhtmltopdf anywhere other than the default location. You'll see that the class assumes the default if a path isn't passed.
  2. Ensure that Read and Execute rights are granted to the wkhtmltopdf folder for the relevant process
  3. Ensure that Write rights are granted for the location you want to create your PDFs for the relevant process

In our situation we are invoking this directly in our web application on demand. I have no idea how this would scale - perhaps not well. This is not really an issue for us as our user base is fairly small and this functionality isn't called excessively. I think if this was used much more than it is, I'd be tempted to hive off this functionality into a separate app. But it works just dandy for now.