Tuesday 13 November 2012

Getting up to speed with Bloomberg's Open API...

A good portion of any dev's life is usually spent playing with APIs. If you need to integrate some other system into the one you're working on (and it's rare to come upon a situation where this doesn't happen at some point) then it's API time.

Some APIs are well documented and nice to use. Some aren't. I recently spent a goodly period of time investigating Bloomberg's Open API and it was a slightly painful experience. So much so that I thought it best to write up my own experiences in the hope of saving others some time and a bit of pain.

Also, as I investigated the Bloomberg Open API I found myself coming up with my own little mini-C#-API. (It's generally a sure sign you've found an API you don't love if you end up writing your own wrapper.) This mini API did the heavy lifting for me and just handed back nicely structured data to deal with. I have included this wrapper here as well.

Research

The initial plan was to, through code, extract Libor and Euribor rates from Bloomberg. I had access to a Bloomberg terminal and I had access to the internet - what could stop me? After digging around for a little while I found some useful resources that could be accessed from the Bloomberg terminal:

  1. Typing “WAPI<GO>” into Bloomberg led me to the Bloomberg API documentation.
  2. Typing “DOCS 2055451<GO>” into Bloomberg (I know - it's a bit cryptic) provided me with samples of how to use the Bloomberg API in VBA.

WAPI - pretty, no?

To go with this I found some useful documentation of the Bloomberg Open API here and I found the .NET Bloomberg Open API itself here.

Hello World?

The first goal when getting up to speed with an API is getting it to do something. Anything. Just stick a fork into it and see if it croaks. Sticking a fork into Open API was achieved by taking the 30-odd example apps included in the Bloomberg Open API and running each in turn on the Bloomberg box until I had my "he's alive!!" moment. (I did find it surprising that not all of the examples worked - I don't know if there's a good reason for this...)

However, when I tried to write my own C# console application to interrogate the Open API it wasn't quite the plain sailing I'd hoped for. I'd write something that looked correct and compiled successfully, deploy it onto the Bloomberg terminal and then watch it die a sad death whenever I tried to fire it off.

I generally find the fastest way to get up and running with an API is to debug it. To make calls to the API and then examine, field by field and method by method, what is actually there. This wasn't really an option with my console app though. I was using a shared Bloomberg terminal with very limited access. No Visual Studio on the box and no remote debugging enabled.

It was then that I had something of a eureka moment. I realised that the code in the VBA samples I'd downloaded from Bloomberg looked quite similar to the C# code samples that shipped with Open API. Hmmmm.... Shortly after this I found myself sat at the Bloomberg machine debugging the Bloomberg API using the VBA IDE in Excel. (For the record, these debugging tools aren't too bad at all - they're nowhere near as slick as their VS counterparts but they do the job.) This was my Rosetta Stone - I could take what I'd learned from the VBA samples and translate that into equivalent C# / .NET code (bearing in mind what I'd learned from debugging in Excel and in fact sometimes bringing along the VBA comments themselves if they provided some useful insight).

He's the Bloomberg, I'm the Wrapper

So I'm off and romping... I have something that works. Hallelujah! Now that that hurdle had been crossed I found myself examining the actual Bloomberg API code itself. It functioned just fine but it did a couple of things that I wasn't too keen on:

  1. The Bloomberg API came with custom data types. I didn't want to use these unless it was absolutely necessary - I just wanted to stick to the standard .NET types. This way if I needed to hand data on to another application I wouldn't be making each of these applications dependent on the Bloomberg Open API.
  2. To get the data out of the Bloomberg API there was an awful lot of boilerplate. Code which handled the possibilities of very large responses that might be split into several packages. Code which walked the element tree returned from Bloomberg parsing out the data. It wasn't a beacon of simplicity.

I wanted an API that I could simply invoke with security codes and required fields, and which would hand me back nicely structured data in return. Since, as I've already mentioned, I didn't want to introduce unnecessary dependencies, nested Dictionaries seemed a good fit. I came up with a simple C# Console project / application which had a reference to the Bloomberg Open API. It contained the following class; essentially my wrapper for Open API operations (please note this is deliberately a very "bare-bones" implementation):
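Here's a bare-bones sketch of the shape of that wrapper. To be clear about what's assumed: the class and method names are my own, it targets the Desktop API defaults (localhost, port 8194), it uses the Bloomberglp.Blpapi types that ship with the .NET Open API and all error handling has been stripped out:

using System.Collections.Generic;
using Bloomberglp.Blpapi;

public static class BloombergTickerPuller
{
    // Returns a dictionary keyed by security code; each value is itself a
    // dictionary of field name -> field value (as plain .NET objects)
    public static Dictionary<string, Dictionary<string, object>> GetLatestData(
        IList<string> securities, IList<string> fields)
    {
        var results = new Dictionary<string, Dictionary<string, object>>();

        var session = new Session(new SessionOptions { ServerHost = "localhost", ServerPort = 8194 });
        session.Start();
        session.OpenService("//blp/refdata");

        Request request = session.GetService("//blp/refdata").CreateRequest("ReferenceDataRequest");
        foreach (var security in securities)
            request.GetElement("securities").AppendValue(security);
        foreach (var field in fields)
            request.GetElement("fields").AppendValue(field);
        session.SendRequest(request, null);

        // A large response may arrive as a series of PARTIAL_RESPONSE events
        // followed by a final RESPONSE - hence the loop (this is the
        // boilerplate complained about above)
        bool done = false;
        while (!done)
        {
            Event eventObj = session.NextEvent();
            if (eventObj.Type != Event.EventType.PARTIAL_RESPONSE &&
                eventObj.Type != Event.EventType.RESPONSE)
                continue;

            foreach (Message message in eventObj)
            {
                // Walk the element tree: securityData is an array with one
                // entry per requested security
                Element securityDataArray = message.GetElement("securityData");
                for (int i = 0; i < securityDataArray.NumValues; i++)
                {
                    Element securityData = securityDataArray.GetValueAsElement(i);
                    string security = securityData.GetElementAsString("security");

                    var fieldsForSecurity = new Dictionary<string, object>();
                    Element fieldData = securityData.GetElement("fieldData");
                    for (int j = 0; j < fieldData.NumElements; j++)
                    {
                        Element field = fieldData.GetElement(j);
                        fieldsForSecurity[field.Name.ToString()] = field.GetValue();
                    }
                    results[security] = fieldsForSecurity;
                }
            }
            done = (eventObj.Type == Event.EventType.RESPONSE);
        }

        session.Stop();
        return results;
    }
}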

The project also contained this class which demonstrates how I made use of my wrapper:
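Usage boiled down to something like this (the tickers and fields here are purely illustrative):

using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        var securities = new List<string> { "US0003M Index", "EUR003M Index" };
        var fields = new List<string> { "PX_LAST", "NAME" };

        var data = BloombergTickerPuller.GetLatestData(securities, fields);

        // Walk the nested dictionaries: security -> (field -> value)
        foreach (var security in data)
        {
            Console.WriteLine(security.Key);
            foreach (var field in security.Value)
                Console.WriteLine("\t{0}: {1}", field.Key, field.Value);
        }
    }
}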

And here's what the output looked like:

This covered my bases. It was simple, it was easy to consume and it didn't require any custom types. My mini-API is only really catering for my own needs (unsurprisingly). However, there's lots more to the Bloomberg Open API and I may end up taking this further in the future if I encounter use cases that my current API doesn't cover.

Update (07/12/2012)

Finally, a PS. I found in the Open API FAQs that "Testing any of that functionality currently requires a valid Bloomberg Desktop API (DAPI), Server API (SAPI) or Managed B-Pipe subscription. Bloomberg is planning on releasing a stand-alone simulator which will not require a subscription." There isn't any word yet on this stand-alone simulator. I emailed Bloomberg at open-tech@bloomberg.net to ask about this. They kindly replied that "Unfortunately it is not yet available. We understand that this makes testing API applications somewhat impractical, so we're continuing to work on this tool." Fingers crossed for something we can test soon!

Note to self (because I keep forgetting)

If you're looking to investigate what data is available about a security in Bloomberg it's worth typing “FLDS<GO>” into Bloomberg. This is the Bloomberg Fields Finder. Likewise, if you're trying to find a security you could try typing “SECF<GO>” into Bloomberg as this is the Security Finder.

Friday 2 November 2012

XSD/XML Schema Generator + Xsd.exe:
Taking the pain out of manual XML

Is it 2003 again?!?

I've just discovered Xsd.exe. It's not new. Or shiny. And in fact it's been around since .NET 1.1. Truth be told, I've been aware of it for years but up until now I've not had need of it. Now that I've investigated it a bit I've found that it, combined with the XSD/XML Schema Generator, can make for a nice tool to add to the utility belt.

Granted XML has long since stopped being sexy. But if you need it, as I did recently, then this is for you.

To the XML Batman!

Now XML is nothing new to me (or I imagine anyone who's been developing within the last 10 years). But most of the time when I use XML I'm barely aware that it's going on - by and large it's XML doing the heavy lifting underneath my web services. But the glory of this situation is, I never have to think about it. It just works. All I have to deal with are nice strongly typed objects which makes writing robust code a doddle.

I recently came upon a situation where I was working with XML in the raw; that is to say strings. I was going to be supplied with strings of XML which would represent various objects. It would be my job to take the supplied XML, extract out the data I needed and proceed accordingly.

We Don't Need No Validation...

I lied!

In order to write something reliable I needed to be able to validate that the supplied XML was as I expected. So, XSD time. If you're familiar with XML then you're probably equally familiar with XSD which, to quote Wikipedia, "can be used to express a set of rules to which an XML document must conform in order to be considered 'valid'".

Now I've written my fair share of XSDs over the years and I've generally found it a slightly tedious exercise. So I was delighted to discover an online tool to simplify the task. It's called the XSD/XML Schema Generator. What this marvellous tool does is allow you to enter an example of your XML which it then uses to reverse engineer an XSD.

Here's an example. I plugged in this:
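A made-up contact record; it ties in with the Contact.xsd / Contact.cs file names used later:

<contact>
  <name>John Smith</name>
  <phone>01234 567890</phone>
  <email>john.smith@example.com</email>
</contact>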

And pulled out this:
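Something along these lines (regenerated by hand here, so treat it as indicative rather than gospel):

<?xml version="1.0" encoding="utf-8"?>
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="contact">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string" />
        <xs:element name="phone" type="xs:string" />
        <xs:element name="email" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>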

Fantastic! It doesn't matter if the tool gets something slightly wrong; you can tweak the generated XSD to your heart's content. This is great because it does the hard work for you, allowing you to step back, mop your brow and then heartily approve the results. This tool is a labour saving device. Put simply, it's a dishwasher.

Tools of the Trade

How to get to the actual data? I was initially planning to break out the XDocument, plug in my XSD and use the Validate method. Which would do the job just dandy.

However I resisted. As much as I like LINQ to XML, I turned to Xsd.exe instead. As I've mentioned, this tool is as old as the hills. But there's gold in them thar hills, listen: "The XML Schema Definition (Xsd.exe) tool generates XML schema or common language runtime classes from XDR, XML, and XSD files, or from classes in a runtime assembly."

Excited? Thought not. But what this means is we can hurl our XSD at this tool and it will toss back a nicely formatted C# class for me to use. Good stuff! So how's it done? Well MSDN is roughly as informative as it ever is (which is to say, not terribly) but fortunately there's not a great deal to it. You fire up the Visual Studio Command Prompt (and I advise doing this in Administrator mode to escape permissions pain). Then you enter a command to generate your class. Here's an example using the Contact.xsd file we generated earlier:

xsd.exe "C:\Contact.xsd" /classes /out:"C:\" /namespace:"MyNameSpace"

Generation looks like this:


Never let it be said that the command line lacks visual flair...

And you're left with the lovely Contact.cs class:
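Which, trimmed right down, has this sort of shape (the real output carries a generated-code header, a raft of extra attributes and explicit backing fields rather than auto-properties):

namespace MyNameSpace
{
    [System.SerializableAttribute()]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "", IsNullable = false)]
    public partial class contact
    {
        public string name { get; set; }
        public string phone { get; set; }
        public string email { get; set; }
    }
}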

Justify Your Actions

But why is this good stuff? Indeed why is this more interesting than the newer, and hence obviously cooler, LINQ to XML? Well for my money it's the following reasons that are important:

  1. Intellisense - I have always loved this. Call me lazy but I think intellisense frees up the mind to think about what problem you're actually trying to solve. Xsd.exe's generated classes give me that; I don't need to hold the whole data structure in my head as I code.
  2. Terse code - I'm passionate about less code. I think that a noble aim in software development is to write as little code as possible in order to achieve your aims. I say this as generally I have found that writing a minimal amount of code expresses the intention of the code in a far clearer fashion. In service of that aim Xsd.exe's generated classes allow me to write less code than would be required with LINQ to XML.
  3. To quote Scott Hanselman "successful compilation is just the first unit test". That it is but it's a doozy. If I'm making changes to the code and I've been using LINQ to XML I'm not going to see the benefits of strong typing that I would with Xsd.exe's generated classes. I like learning if I've broken the build sooner rather than later; strong typing gives me that safety net.

Serialization / Deserialization Helper

As you read this you're no doubt thinking "but wait he's shown us how to create XSDs from XML and classes from XSDs but how do we take XML and turn it into objects? And how do we turn those objects back into XML?"

See how I read your mind just there? It's a gift. Well, I've written a little static helper class for the very purpose:
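It's nothing more than a thin wrapper around XmlSerializer; a sketch:

using System.IO;
using System.Xml.Serialization;

public static class SerializationHelper
{
    // Serialize an object out to a string of XML
    public static string ToXML<T>(T obj)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, obj);
            return writer.ToString();
        }
    }

    // Deserialize a string of XML back into an object
    public static T ToObject<T>(string xml)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var reader = new StringReader(xml))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}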

And here's an example of how to use it:
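Round-tripping one of our contacts, for instance (variable names illustrative):

// suppliedXml is the raw string of XML we've been handed
contact myContact = SerializationHelper.ToObject<contact>(suppliedXml);

// ...work with the nice strongly typed object, then back to XML:
string xml = SerializationHelper.ToXML(myContact);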

I was tempted to name my methods in tribute to Crockford's JSON (namely ToXML becoming stringify and ToObject becoming parse). Maybe later.

And that's us done. Whilst it's no doubt unfashionable I think that this is a very useful approach indeed and I commend it to the interweb!

Update - using Xsd.exe to generate XSD from XML

I was chatting to a friend about this blog post and he mentioned that you can actually use Xsd.exe to generate XSD files from XML as well. He's quite right - this feature does exist. To go back to our example from earlier we can execute the following command:

xsd.exe "C:\Contact.xml" /out:"C:\"

And this will generate the following file:
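Abridged, what comes out is something along these lines - note the msdata attributes and the DataSet-style wrapper element:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema id="NewDataSet" xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xs:element name="contact">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string" minOccurs="0" />
        <xs:element name="phone" type="xs:string" minOccurs="0" />
        <xs:element name="email" type="xs:string" minOccurs="0" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="contact" />
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>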

However, the XSD generated above is very much a "Microsoft XSD"; it's an XSD which features MS properties and so on. It's fine but I think that generally I prefer my XSDs to be as vanilla as possible. To that end I'm likely to stick to using the XSD/XML Schema Generator as it doesn't appear to be possible to get Xsd.exe to generate "vanilla XSD".

Thanks to Ajay for bringing it to my attention though.

Monday 22 October 2012

MVC 3 meet Dictionary

Documenting a JsonValueProviderFactory Gotcha

About a year ago I was involved in the migration of an ASP.NET WebForms application over to MVC 3. We'd been doing a lot of AJAX-y / Single Page Application-y things in the project and had come to the conclusion that MVC might be a slightly better fit since we intended to continue down this path.

During the migration we encountered a bug in MVC 3 concerning Dictionary deserialization. This bug has subsequently tripped me up a few more times as I failed to remember the nature of the problem correctly. So I've written the issue up here as an aid to my own lamentable memory.

Before I begin I should say that the problem has been resolved in MVC 4. However given that I imagine many MVC 3 projects will not upgrade instantly there's probably some value in documenting the issue (and how to work around it). By the way, you can see my initial plea for assistance in this StackOverflow question.

The Problem

The problem is that deserialization of Dictionary objects does not behave in the expected and desired fashion. When you fire off a dictionary it arrives at your endpoint as the enormously unhelpful null. To see this for yourself you can try using this JavaScript:
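Something like the following (the URL and names are illustrative):

var dictionary = { "key1": "value1", "key2": "value2" };

$.ajax({
    url: "/Home/SendDictionary",
    type: "POST",
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify({ dictionary: dictionary })
});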

With this C#:
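A sketch of the matching action (the name lines up with the JavaScript above):

[HttpPost]
public ActionResult SendDictionary(Dictionary<string, string> dictionary)
{
    // dictionary arrives as null - hello, bug
    return new EmptyResult();
}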

You get a null dictionary.


Alas, and indeed, alack...

After a long time googling around on the topic I eventually discovered, much to my surprise, that I was actually tripping over a bug in MVC 3. It was filed by Darin Dimitrov of Stack Overflow fame and I found details about it filed as an official bug here. To quote Darin:

"The System.Web.Mvc.JsonValueProviderFactory introduced in ASP.NET MVC 3 enables action methods to send and receive JSON-formatted text and to model-bind the JSON text to parameters of action methods. Unfortunately it doesn't work with dictionaries"

The Workaround

My colleague found a workaround for the issue here. There are 2 parts to this:

  1. Dictionaries in JavaScript are simple JavaScript Object Literals. In order to work around this issue it is necessary to JSON.stringify our Dictionary / JOL before sending it to the endpoint. This is done so that a string can be picked up at the endpoint.
  2. The signature of your action is switched over from a Dictionary reference to a string reference. Deserialization is then manually performed back from the string to a Dictionary within the Action itself.

I've adapted my example from earlier to demonstrate this; first the JavaScript:
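The same sketch as before, but note the extra JSON.stringify wrapped around the dictionary itself:

var dictionary = { "key1": "value1", "key2": "value2" };

$.ajax({
    url: "/Home/SendDictionary",
    type: "POST",
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify({ dictionary: JSON.stringify(dictionary) })
});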

Then the C#:
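The action now takes a string and deserializes it manually:

[HttpPost]
public ActionResult SendDictionary(string dictionary)
{
    var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
    var deserialized = serializer.Deserialize<Dictionary<string, string>>(dictionary);

    // deserialized now contains key1 / key2 as hoped
    return new EmptyResult();
}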

And now we're able to get a dictionary:


That's more like it!

Summary and a PS

So that's it; a little unglamorous but it works. I'm slightly surprised that this wasn't picked up before MVC 3 was released but at least it's been fixed for MVC 4. I look forward to this blog post being irrelevant and out of date ☺.

For what it's worth, in my example above we're using the trusty old System.Web.Script.Serialization.JavaScriptSerializer to perform deserialization. My preference is actually to use JSON.NET's implementation but for the sake of simplicity I went with .NET's built-in one here. To be honest, either is fine to my knowledge.

Friday 5 October 2012

Using Web Optimization with MVC 3

A while ago I wrote about optimally serving up JavaScript in web applications. I mentioned that Microsoft had come up with a NuGet package called Microsoft ASP.NET Web Optimization which could help with that by minifying and bundling CSS and JavaScript. At the time I was wondering if I would be able to use this package with pre-existing MVC 3 projects (given that the package had been released together with MVC 4). Happily it turns out you can. But it's not quite as straightforward as I might have liked so I've documented how to get going with this here...

Getting the Basics in Place

To keep it simple I'm going to go through taking a "vanilla" MVC 3 app and enhancing it to work with Web Optimization. To start, follow these basic steps:

  1. Open Visual Studio (bet you didn't see that coming!)
  2. Create a new MVC 3 application (I called mine "WebOptimizationWithMvc3" to demonstrate my imaginative flair). It doesn't really matter which sort of MVC 3 project you create - I chose an Intranet application but really that's by the by.
  3. Update pre-existing NuGet packages
  4. At the NuGet console type: "Install-Package Microsoft.AspNet.Web.Optimization"

Whilst the NuGet package adds the necessary references to your MVC 3 project it doesn't add the corresponding namespaces to the web.configs. To fix this manually add the following child XML element to the <namespaces> element in your root and Views web.config files:

<add namespace="System.Web.Optimization" />

This gives you access to Scripts and Styles in your views without needing the fully qualified namespace. For reasons best known to Microsoft I had to close down and restart Visual Studio before intellisense started working. You may need to do likewise.

Next up we want to get some JavaScript / CSS bundles in place. To do this, create a folder in the root of your project called "App_Start". There's nothing magical about this to my knowledge; this is just a convention that's been adopted to store all the bits of startup in one place and avoid clutterage. (I think this grew out of Nuget; see David Ebbo talking about this here.) Inside your new folder you should add a new class called BundleConfig.cs which looks like this:
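Something like the following will do - this is essentially the BundleConfig from the MVC 4 template, abridged (I've trimmed the jQuery UI theme CSS list for brevity):

using System.Web.Optimization;

namespace WebOptimizationWithMvc3
{
    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                        "~/Scripts/jquery-{version}.js"));

            bundles.Add(new ScriptBundle("~/bundles/jqueryui").Include(
                        "~/Scripts/jquery-ui-{version}.js"));

            bundles.Add(new ScriptBundle("~/bundles/jqueryval").Include(
                        "~/Scripts/jquery.unobtrusive*",
                        "~/Scripts/jquery.validate*"));

            bundles.Add(new ScriptBundle("~/bundles/modernizr").Include(
                        "~/Scripts/modernizr-*"));

            bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));

            bundles.Add(new StyleBundle("~/Content/themes/base/css").Include(
                        "~/Content/themes/base/jquery.ui.core.css",
                        "~/Content/themes/base/jquery.ui.theme.css"
                        /* ...and the rest of the jQuery UI theme CSS... */));
        }
    }
}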

The above is what you get when you create a new MVC 4 project (as it includes Web Optimization out of the box). All it does is create some JavaScript and CSS bundles relating to jQuery, jQuery UI, jQuery Validate, Modernizr and the standard site CSS. Nothing radical here but this example should give you an idea of how bundling can be configured and used. To make use of BundleConfig.cs you should modify your Global.asax.cs so it looks like this:
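All that's needed is one extra line in Application_Start (plus the using):

using System.Web.Optimization;

// ...inside MvcApplication...
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    // Wire up the bundles declared in App_Start/BundleConfig.cs
    BundleConfig.RegisterBundles(BundleTable.Bundles);
}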

Once you've done this you're ready to start using Web Optimization in your MVC 3 application.

Switching over _Layout.cshtml to use Web Optimization

With a "vanilla" MVC 3 app the only use of CSS and JavaScript files is found in _Layout.cshtml. To switch over to using Web Optimization you should replace the existing _Layout.cshtml with this: (you'll see that the few differences that there are between the 2 are solely around the replacement of link / script tags with references to Scripts and Styles instead)
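Abridged to the parts that matter, it's along these lines:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>@ViewBag.Title</title>
    @Styles.Render("~/Content/css", "~/Content/themes/base/css")
    @Scripts.Render("~/bundles/modernizr")
</head>
<body>
    @RenderBody()
    @Scripts.Render("~/bundles/jquery", "~/bundles/jqueryui", "~/bundles/jqueryval")
</body>
</html>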

Do note that in the above Scripts.Render call we're rendering out 3 bundles; jQuery, jQuery UI and jQuery Validate. We're not using any of these in _Layout.cshtml but rendering these (and their associated link tags) gives us a chance to demonstrate that everything is working as expected.

In your root web.config file make sure that the following tag is in place: <compilation debug="true" targetFramework="4.0">. Then, when you run, the generated HTML should look something like this:
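One tag per file, along these lines (version numbers depend on the packages you have installed; these are illustrative):

<link href="/Content/site.css" rel="stylesheet"/>
<link href="/Content/themes/base/jquery.ui.core.css" rel="stylesheet"/>
<!-- ...one link tag per theme css file... -->
<script src="/Scripts/modernizr-2.6.2.js"></script>
<script src="/Scripts/jquery-1.8.2.js"></script>
<script src="/Scripts/jquery-ui-1.8.24.js"></script>
<script src="/Scripts/jquery.unobtrusive-ajax.js"></script>
<script src="/Scripts/jquery.validate.js"></script>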

This demonstrates that when the application has debug set to true you see the full scripts / links being rendered out as you would hope (to make your debugging less painful).

Now go back to your root web.config file and change the debug tag to false: <compilation debug="false" targetFramework="4.0">. This time when you run, the generated HTML should look something like this:
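Each Render call now collapses to a single bundle URL with a cache-busting hash in the querystring (hashes abbreviated / illustrative here):

<link href="/Content/css?v=QiE3pFC..." rel="stylesheet"/>
<link href="/Content/themes/base/css?v=dHnjdlZ..." rel="stylesheet"/>
<script src="/bundles/modernizr?v=jmdBhqk..."></script>
<script src="/bundles/jquery?v=1KiEkod..."></script>
<script src="/bundles/jqueryui?v=8GPI3Tq..."></script>
<script src="/bundles/jqueryval?v=5vxkBqN..."></script>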

This time you can see that in non-debug mode (ie how it would run in Production) minified bundles of scripts and css files are being served up instead of the raw files. And that's it; done.

Wednesday 3 October 2012

Unit Testing and Entity Framework: The Filth and the Fury

Just recently I've noticed that there appears to be something of a controversy around Unit Testing and Entity Framework. I first came across it as I was Googling around for useful posts on using MOQ in conjunction with EF. I've started to notice the topic more and more and as I have mixed feelings on the subject (that is to say I don't have a settled opinion) I thought I'd write about this and see if I came to any kind of conclusion...

The Setup

It started as I was working on a new project. We were using ASP.NET MVC 3 and Entity Framework with DbContext as our persistence layer. Rather than crowbarring the tests in afterwards the intention was to write tests to support the ongoing development. Not quite test driven development but certainly test supported development. (Let's not get into the internecine conflict as to whether this is black belt testable code or not - it isn't but he who pays the piper etc.) Oh and we were planning to use MOQ as our mocking library.

It was the first time I'd used DbContext rather than ObjectContext and so I thought I'd do a little research on how people were using DbContext with regards to testability. I had expected to find that there was some kind of consensus and an advised way forwards. I didn't get that at all. Instead I found a number of conflicting opinions.

Using the Repository / Unit of Work Patterns

One thread of advice that came out was that people advised using the Repository / Unit of Work patterns as wrappers when it came to making testable code. This is kind of interesting in itself as to the best of my understanding ObjectSet / ObjectContext and DbSet / DbContext are both in themselves implementations of the Repository / Unit of Work patterns. So the advice was to build a Repository / Unit of Work pattern to wrap an existing Repository / Unit of Work pattern.

Not as mad as it sounds. The reason for the extra abstraction is that ObjectContext / DbContext in the raw are not MOQ-able.

Or maybe I'm wrong, maybe you can MOQ DbContext?

No you can't. Well, that's not true. You can and it's documented here but there's a "but". You need to be using Entity Framework's Code First approach; actually coding up your DbContext yourself. Before I'd got on board the project had already begun and we were already some way down the road of using the Database First approach. So this didn't seem to be a go-er really.

The best article I found on testability and Entity Framework was this one by K. Scott Allen which essentially detailed how you could implement the Repository / Unit of Work patterns on top of ObjectSet / ObjectContext. In the end I adapted this to do the same thing sat on top of DbSet / DbContext instead.

With this in place I had me my testable code. I was quite happy with this as it seemed quite intelligible. My new approach looked similar to the existing DbSet / DbContext code and so there wasn't a great deal of re-writing to do. Sorted, right?

Here come the nagging doubts...

I did wonder, given that I found a number of articles about applying the Repository / Unit of Work patterns on top of ObjectSet / ObjectContext that there didn't seem to be many examples to do the same for DbSet / DbContext. (I did find a few examples of this but none that felt satisfactory to me for a variety of reasons.) This puzzled me.

I also started to notice that a one-man war was being waged against the approach I was using by Ladislav Mrnka. Here are a couple of examples of his crusade:

Ladislav is quite strongly of the opinion that wrapping DbSet / DbContext (and I presume ObjectSet / ObjectContext too) in a further Repository / Unit of Work is an antipattern. To quote him: "The reason why I don’t like it is leaky abstraction in Linq-to-entities queries ... In your test you have Linq-to-Objects which is superset of Linq-to-entities and only subset of queries written in L2O is translatable to L2E". It's worth looking at Jon Skeet's explanation of "leaky abstractions" which he did for TekPub.

As much as I didn't want to admit it - I have come to the conclusion Ladislav probably has a point for a number of reasons:

1. Just because it compiles and passes unit tests don't imagine that means it works...

Unfortunately, a LINQ query that looks right, compiles and has passing unit tests written for it doesn't necessarily work. You can take a query that fails when executed against Entity Framework and come up with test data that will pass that unit test. As Ladislav rightly points out: LINQ-to-Objects != LINQ-to-Entities.
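To make that concrete, here's a contrived sketch (Customer is a hypothetical entity and IsInteresting a bog-standard C# method):

using System.Collections.Generic;
using System.Linq;

public static class LeakyAbstractionExample
{
    // An ordinary method - LINQ to Objects will happily invoke it but
    // LINQ to Entities has no way of translating it into SQL
    public static bool IsInteresting(string name)
    {
        return name != null && name.EndsWith(" Ltd");
    }

    public static List<Customer> GetInterestingCustomers(IQueryable<Customer> customers)
    {
        // Against a List<Customer>.AsQueryable() in a unit test this passes;
        // against a real EF DbSet it throws NotSupportedException at runtime
        return customers.Where(c => IsInteresting(c.Name)).ToList();
    }
}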

So in this case unit tests of this sort don't provide you with any security. What you need are integration tests. Tests that run against an instance of the database and demonstrate that LINQ will actually translate queries / operations into valid SQL.

2. Complex queries

You can write some pretty complex LINQ queries if you want. This is made particularly easy if you're using comprehension syntax. Whilst these queries may be simple to write it can be uphill work to generate test data to satisfy this. So much so that at times it can feel you've made a rod for your own back using this approach.

3. Lazy Loading

By default Entity Framework employs lazy loading. This is a useful approach which reduces the amount of data that is transported. But sometimes you need to switch it off and specify up front, through the use of Include statements, that you require a particular related entity. This again doesn't lend itself to testing particularly well.

Where does this leave us?

Having considered all of the above for a while and tried out various different approaches I think I'm coming to the conclusion that Ladislav is probably right. Implementing the Repository / Unit of Work patterns on top of ObjectSet / ObjectContext or DbSet / DbContext doesn't seem a worthwhile effort in the end.

So what's a better idea? I think that in the name of simplicity you might as well have a simple class which wraps all of your Entity Framework code. This class could implement an interface and hence be straightforwardly MOQ-able (or alternatively all methods could be virtual and you could forego the interface). Along with this you should have integration tests in place which test the execution of the actual Entity Framework code against a test database.

Now I should say this approach is not necessarily my final opinion. It seems sensible and practical. I think it is likely to simplify the tests that are written around a project. It will certainly be more reliable than just having unit tests in place.

In terms of the project I'm working on at the moment we're kind of doing this in a halfway house sense. That is to say, we're still using our Repository / Unit of Work wrappers for DbSet / DbContext but where things move away from simple operations we're adding extra methods to our Unit of Work class or Repository classes which wrap this functionality and then testing it using our integration tests.

I'm open to the possibility that my opinion may be modified further. And I'd be very interested to know what other people think on the subject.

Update

It turns out that I'm not alone in thinking about this issue and indeed others have expressed this rather better than me - take a look at Jimmy Bogard's post for an example: http://lostechies.com/jimmybogard/2012/09/20/limiting-your-abstractions/.

Update 2

I've also recently watched the following Pluralsight course by Julie Lerman: http://pluralsight.com/training/Courses/TableOfContents/efarchitecture#efarchitecture-m3-archrepo. In this course Julie talks about different implementations of the Repository and Unit of Work patterns in conjunction with Entity Framework. Julie is in favour of using this approach but in this module she elaborates on different "flavours" of these patterns that you might want to use for different reasons (bounded contexts / reference contexts etc). She makes a compelling case and helpfully she is open enough to say that this is a point of contention in the community. At the end of watching this I think I felt happy that our "halfway house" approach seems to fit and seems to work. More than anything else Julie made clear that there isn't one definitively "true" approach. Rather many different but similar approaches for achieving the same goal. Good stuff Julie!

Monday 24 September 2012

Giving OData to CRM 4.0

Just recently I was tasked with seeing if we could provide a way to access our Dynamics CRM instance via OData. My initial investigations made it seem like there was nothing for me to do; CRM 2011 provides OData support out of the box. Small problem. We were running CRM 4.0.

It could well have ended there apart from the fact that Microsoft makes it astonishingly easy to create your own OData service using WCF Data Services. Because it's so straightforward I was able to get an OData solution for CRM 4.0 up and running with very little heavy lifting at all. Want to know how it's done?

LINQ to CRM

To start with you're going to need the CRM SDK 4.0. This contains a "vanilla" LINQ to CRM client which is used in each of the example applications that can be found in microsoft.xrm\samples. We want this client (or something very like it) to use as the basis for our OData service.

In order to get a LINQ to CRM provider that caters for your own customised CRM instance you need to use the crmsvcutil utility from the CRM SDK (found in the microsoft.xrm\tools\ directory). Detailed instructions on how to use this can be found in this Word document: microsoft.xrm\advanced_developer_extensions_-_developers_guide.docx. Extra information around the topic can be found using these links:

You should end up with custom generated data context classes which look not dissimilar to the classes you may already have in place for Entity Framework and the like. With your Xrm.DataContext in hand (a subclass of Microsoft.Xrm.Client.Data.Services.CrmDataContext) you'll be ready to move forwards.

Make me an OData Service

As I said, Microsoft makes it fantastically easy to get an OData service up and running. In this example an entity context model is created from the Northwind database and then exposed as an OData service. To create my CRM OData service I followed a similar process. But rather than creating an entity context model using a database I plugged in the Xrm.DataContext instance of CRM that we created a moment ago. These are the steps I followed to make my service:

  1. Create a new ASP.NET Web Application called "CrmOData" (in case it's relevant I was using Visual Studio 2010 to do this).
  2. Remove all ASPXs / JavaScript / CSS files etc leaving you with an essentially empty project.
  3. Add references to the following DLLs that come with the SDK:
    • microsoft.crm.sdk.dll
    • microsoft.crm.sdktypeproxy.dll
    • microsoft.crm.sdktypeproxy.xmlserializers.dll
    • microsoft.xrm.client.dll
    • microsoft.xrm.portal.dll
    • microsoft.xrm.portal.files.dll
  4. Add the <microsoft.xrm.client> config section to your web.config (not forgetting the associated Xrm connection string)
  5. Add this new file below to the root of the project:
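Here's a pared-down sketch of that file's code-behind, assuming the generated context is called Xrm.DataContext as above; my GetEntityById helper and the output caching mentioned below are omitted:

using System.Data.Services;
using System.Data.Services.Common;

namespace CrmOData
{
    public class Crm : DataService<Xrm.DataContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Expose every entity set, read-only
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        protected override void HandleException(HandleExceptionArgs args)
        {
            // The logging referred to below lived in an override like this one
            base.HandleException(args);
        }
    }
}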

And that's it - done. When you run this web application you will find an OData service exposed at http://localhost:12345/Crm.svc. You could have it even simpler if you wanted - you could pull out the logging that's in place and leave only the InitializeService there. That's all you need. (The GetEntityById method is a helper method of my own for identifying the GUIDs of CRM.)

You may have noticed that I have made use of caching for my OData service following the steps I found here. Again you may or may not want to use this.

Now, a warning...

Okay - not so much a warning as a limitation. Whilst most aspects of the OData service work as you would hope there is no support for the $select operator. I had a frustrating time trying to discover why and then came upon this explanation:

"$select statements are not supported. This problem is being discussed here http://social.msdn.microsoft.com/Forums/en/adodotnetdataservices/thread/366086ee-dcef-496a-ad15-f461788ae678 and is caused by the fact that CrmDataContext implements the IExpandProvider interface which in turn causes the DataService to lose support for $select projections"

You can also see here for the original post discussing this.

Finishing off

In the example I set out here I used the version of WCF Data Services that shipped with Visual Studio 2010. WCF Data Services now ships separately from the .NET Framework and you can pick up the latest and greatest from Nuget. I understand that you could easily switch over to using the latest versions but since I didn't see any feature that I needed on this occasion I haven't.

I hope you find this useful.

Thursday 6 September 2012

Globalize and jQuery Validation

Update 05/10/2015

If you're after a version of this that works with Globalize 1.x then take a look here.

Update 27/08/2013

To make it easier for people to use the approach detailed in this post I have created a repository for jquery.validate.globalize.js on GitHub here.

This is also available as a nuget package here.

To see a good demo take a look here.

Background

I've written before about a great little library called Globalize which makes locale specific number / date formatting simple within JavaScript. And I've just stumbled upon an old post written by Scott Hanselman about the business of Globalisation / Internationalisation / Localisation within ASP.NET. It's a great post and I recommend reading it (I'm using many of the approaches he discusses).

jQuery Global is dead... Long live Globalize!

However, there's one tweak I would make to Scott's suggestions and that's to use Globalize in place of the jQuery Global plugin. The jQuery Global plugin has now effectively been reborn as Globalize (with no dependency on jQuery). As far as I can tell jQuery Global is now disappearing from the web - certainly the link in Scott's post is dead now at least. I've ripped off (sorry, "been inspired by") the "Globalized jQuery Unobtrusive Validation" section of Scott's article and made jquery.validate.globalize.js.

And for what it's worth jquery.validate.globalize.js applies equally to standard jQuery Validation as well as to jQuery Unobtrusive Validation. I say that as the above JavaScript is effectively a monkey patch to the number / date / range / min / max methods of jQuery.validate.js which forces these methods to use Globalize's parsing support instead.

Here's the JavaScript:
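The canonical version lives in the GitHub repository linked in the updates above; in essence it's a sketch like this (assuming Globalize 0.x):

(function ($, Globalize) {

    // Initialise Globalize to the culture named in <html lang="...">
    Globalize.culture($("html").attr("lang"));

    // Swap jQuery Validate's parsing for Globalize's culture-aware parsing
    $.extend($.validator.methods, {
        number: function (value, element) {
            return this.optional(element) || !isNaN(Globalize.parseFloat(value));
        },
        date: function (value, element) {
            return this.optional(element) || Globalize.parseDate(value) !== null;
        },
        range: function (value, element, param) {
            var val = Globalize.parseFloat(value);
            return this.optional(element) || (val >= param[0] && val <= param[1]);
        },
        min: function (value, element, param) {
            return this.optional(element) || Globalize.parseFloat(value) >= param;
        },
        max: function (value, element, param) {
            return this.optional(element) || Globalize.parseFloat(value) <= param;
        }
    });

}(jQuery, Globalize));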

The above script does 2 things. Firstly it monkey patches jquery.validate.js to make use of Globalize.js number and date parsing in place of the defaults. Secondly it initialises Globalize to the relevant culture, driven by the html lang attribute. So if the html tag looked like this:


<html lang="de-DE">
...
</html>

Then Globalize would be initialised with the "de-DE" culture assuming that culture was available and had been served up to the client. (By the way, the Globalize initialisation logic has only been placed in the code above to demonstrate that Globalize needs to be initialised to the culture. It's more likely that this initialisation step would sit elsewhere in a "proper" app.)

Wait, where's html lang getting set?

In Scott's article he created a MetaAcceptLanguage helper to generate a META tag like this: <meta name="accept-language" content="en-GB" /> which he used to drive Globalize's specified culture.

Rather than generating a meta tag I've chosen to use the lang attribute of the html tag to specify the culture. I've chosen to do this as it's more in line with the W3C spec. But it should be noted this is just a different way of achieving exactly the same end.

So how's it getting set? Well, it's no great shakes; in my _Layout.cshtml file my html tag looks like this:


<html lang="@System.Globalization.CultureInfo.CurrentUICulture.Name">

And in my web.config I have the following setting in place:


<configuration>
  <system.web>
    <globalization culture="auto" uiCulture="auto" />
    <!--- Other stuff.... -->
  </system.web>
</configuration>

With both of these set this means I get <html lang="de-DE"> or <html lang="en-GB"> etc. depending on a users culture.

Serving up the right Globalize culture files

In order that I send the correct Globalize culture file to the client I've come up with this static class which provides the user with the relevant culture URL (falling back to the en-GB culture if it can't find one based on your culture):
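A sketch of it; the property names line up with the script references below:

using System.Globalization;
using System.Web;

public static class GlobalizeUrls
{
    public static string Globalize
    {
        get { return "~/Scripts/globalize.js"; }
    }

    public static string GlobalizeCulture
    {
        get
        {
            // Look for a culture file matching the current UI culture...
            string culture = CultureInfo.CurrentUICulture.Name;
            string url = string.Format("~/scripts/globalize/globalize.culture.{0}.js", culture);

            // ...falling back to en-GB if we haven't got one on disk
            if (!System.IO.File.Exists(HttpContext.Current.Server.MapPath(url)))
                url = "~/scripts/globalize/globalize.culture.en-GB.js";

            return url;
        }
    }
}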

Putting it all together

To make use of all of this together you'll need to have the html lang attribute set as described earlier and some scripts output in your layout page like this:


<script src="@Url.Content("~/Scripts/jquery.js")" type="text/javascript"></script>
<script src="@Url.Content(GlobalizeUrls.Globalize)" type="text/javascript"></script>
<script src="@Url.Content(GlobalizeUrls.GlobalizeCulture)" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.js")" type="text/javascript"></script>
<script src="@Url.Content("~/scripts/jquery.validate.globalize.js")" type="text/javascript"></script>

@* Only serve the following script if you need it: *@
<script src="@Url.Content("~/scripts/jquery.validate.unobtrusive.js")" type="text/javascript"></script>

Which will render something like this:


<script src="/Scripts/jquery.js" type="text/javascript"></script>
<script src="/Scripts/globalize.js" type="text/javascript"></script>
<script src="/scripts/globalize/globalize.culture.en-GB.js" type="text/javascript"></script>
<script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
<script src="/Scripts/jquery.validate.globalize.js" type="text/javascript"></script>
<script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>

This will load up jQuery, Globalize, your Globalize culture, jQuery Validate, jQuery Validate's unobtrusive extensions (which you don't need if you're not using them) and the jQuery Validate Globalize script which will set up culture aware validation.

Finally, and just to re-iterate, it's highly worthwhile to give Scott Hanselman's original article a look. Almost all the ideas in here were taken wholesale from him!

Friday 24 August 2012

How to attribute encode a PartialView in MVC (Razor)

This post is plagiarism. But I'm plagiarising myself so I don't feel too bad.

I posted a question on StackOverflow recently asking if there was a simple way to attribute encode a PartialView in Razor / ASP.NET MVC. I ended up answering my own question and since I thought it was a useful solution it might be worth sharing.

The Question

In the project I was working on I was using PartialViews to store the HTML that would be rendered in a tooltip in my ASP.NET MVC application. (In case you're curious I was using the jQuery Tools library for my tooltip effect.)

I had thought that Razor, clever beast that it is, would automatically attribute encode anything sat between quotes in my HTML. Unfortunately this doesn't appear to be the case. In the short term I was able to work around this by using single quotation marks to encapsulate my PartialView's HTML. See below for an example:


<div class="tooltip" 
     title='@Html.Partial("_MyTooltipInAPartial")'>
    Some content
</div>

Now this worked just fine but I was aware that if any PartialView needed to use single quotation marks I would have a problem. Let's say for a moment that _MyTooltipInAPartial.cshtml contained this:


<span style="color:green">fjkdsjf'lksdjdlks</span>

Well when I used my handy little single quote workaround, the following would result:


<div class="tooltip"
     title='<span style="color:green">fjkdsjf'lksdjdlks</span>'>
    Some content
</div>

Which although it doesn't show up so well in the code sample above is definite "does not compute, does not compute, does not compute *LOUD EXPLOSION*" territory.

The Answer

This took me back to my original intent which was to encapsulate the HTML in double quotes like this:


<div class="tooltip" 
     title="@Html.Partial("_MyTooltipInAPartial")">
    Some content
</div>

Though with the example discussed above we clearly had a problem whether we used single or double quotes. What to do?

Well the answer wasn't too complicated. After a little pondering I ended up scratching my own itch by writing an HTML helper method called PartialAttributeEncoded which made use of HttpUtility.HtmlAttributeEncode to HTML attribute encode a PartialView.

Here's the code:
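In essence it renders the partial to a string in the normal way and then attribute encodes the result (sketched here; the real thing could do with argument checking):

using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class PartialExtensions
{
    public static IHtmlString PartialAttributeEncoded(this HtmlHelper htmlHelper, string partialViewName)
    {
        // Render the partial view as normal...
        MvcHtmlString partial = htmlHelper.Partial(partialViewName);

        // ...then attribute encode it, wrapping in HtmlString so Razor
        // doesn't encode it a second time on the way out
        return new HtmlString(HttpUtility.HtmlAttributeEncode(partial.ToHtmlString()));
    }
}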

Using the above helper is simplicity itself:


<div class="tooltip" 
     title="@Html.PartialAttributeEncoded("_MyTooltipInAPartial")">
    Some content
</div>

And, given the example I've been going through, it would provide you with this output:


<div class="tooltip"
     title="&lt;span style=&quot;color:green&quot;>fjkdsjf&#39;lksdjdlks&lt;/span>">
    Some content
</div>

Now the HTML in the title attribute above might be an unreadable mess - but it's the unreadable mess you need. That's what the HTML we've been discussing looks like when it's been encoded.

Final thoughts

I was surprised that Razor didn't handle this out of the box. I wonder if this is something that will come along with a later version? It's worth saying that I experienced this issue when working on an MVC 3 application. It's possible that this issue may actually have been solved with MVC 4 already; I haven't had chance to check yet though.

Thursday 16 August 2012

ClosedXML - the real SDK for Excel

Simplicity appeals to me. It always has. Something that is simple is straightforward to comprehend and is consequently easy to use. It's clarity.

Open XML

So imagine my joy when I first encountered Open XML. In Microsoft's own words:

ECMA Office Open XML ("Open XML") is an international, open standard for word-processing documents, presentations, and spreadsheets that can be freely implemented by multiple applications on multiple platforms.

What does that actually mean? Well, from my perspective in the work I was doing I needed to be able to programmatically interact with Excel documents from C#. I needed to be able to create spreadsheets, to use existing template spreadsheets which I could populate dynamically in code. I needed to do Excel. And according to Microsoft, the Open XML SDK was how I did this.

What can I say about it? Open XML works. The API functions. You can use this to achieve your aims; and I did (initially). However, there's a but and it's this: it became quickly apparent just how hard Open XML makes you work to achieve relatively simple goals. Things that ought to be, in my head, a doddle require reams and reams of obscure code. Sadly, I feel that Open XML is probably the most frustrating API that I have yet encountered (and I've coded against the old school Lotus Notes API).

ClosedXML - Open XML's DbContext

As I've intimated I found Open XML to be enormously frustrating. I'd regularly find myself thinking I'd achieved my goal. I may have written War and Peace code-wise but it compiled, it looked right - the end was in sight. More fool me. I'd run, sit back and watch my Excel doc get created / updated / whatever. Then I'd open it and be presented with some obscure error about a corrupt file. Not great.

It was as I was Googling around looking for answers to my problem that I discovered an open source project on CodePlex called ClosedXML. I wasn't alone in my frustrations with Open XML - there were many of us sharing the same opinion. And some fantastic person had stepped into the breach to save us! In ClosedXML's own words:

ClosedXML makes it easier for developers to create Excel 2007/2010 files. It provides a nice object oriented way to manipulate the files (similar to VBA) without dealing with the hassles of XML Documents. It can be used by any .NET language like C# and Visual Basic (VB).

Hallelujah!!!

The way it works (as far as I understand) is that ClosedXML sits on top of Open XML and exposes a really straightforward API for you to interact with. I haven't looked into the guts of it but my guess is that it internally uses Open XML to achieve this (as to use ClosedXML you must reference DocumentFormat.OpenXml.dll).
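To give a flavour of just how straightforward it is, ClosedXML's own "hello world" goes something like this:

using ClosedXML.Excel;

var workbook = new XLWorkbook();
var worksheet = workbook.Worksheets.Add("Sample Sheet");
worksheet.Cell("A1").Value = "Hello World!";
workbook.SaveAs("HelloWorld.xlsx");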

I've found myself thinking of ClosedXML's relationship to Open XML in the same way as I think about the relationship of Entity Framework's DbContext to ObjectContext. They do the same thing but the former in both cases offers a better API. They make achieving the same goals *much* easier. (Although in fairness to the EF team I should say that ObjectContext was not particularly problematic to use; DbContext just made life even easier.)

Support - This is how it should be done!

Shortly after I started using ClosedXML I was asked if we could use it to perform a certain task. I tested. We couldn't.

When I discovered this I raised a ticket against the project asking if the functionality was likely to be added at any point. I honestly didn't expect to hear back any time soon and was mentally working out ways to get round the issue for now.

To my surprise, within 5 hours MDeLeon, the developer behind ClosedXML, had released a patch to the source code! By any stretch of the imagination that is fast! As it happened there were a few bugs that needed ironing out and over the course of the next 3 working days MDeLeon performed a number of fixes and quickly left me in the position of having a version of ClosedXML which allowed me to achieve my goal.

So this blog post exists in part to point anyone who is battling Open XML to ClosedXML. It's brilliant, well documented and I'd advise anyone to use it. You won't be disappointed. And in part I wanted to say thanks and well done to MDeLeon who quite made my week! Thank you!

http://closedxml.codeplex.com/

Monday 6 August 2012

jQuery Unobtrusive Validation (+ associated gotchas)

I was recently working on a project which had client side validation manually set up which essentially duplicated the same logic on the server. Like many things this had started out small and grown and grown until it became arduous and tedious to maintain.

Time to break out the unobtrusive jQuery validation.

If you’re not aware of this, as part of MVC 3 Microsoft leveraged the pre-existing jQuery Validate library and introduced an “unobtrusive” extension to this which allows the library to be driven by HTML 5 data attributes. I have mentioned this lovely extension before but I haven't been using it for the last 6 months or so. And coming back to it I realised that I had forgotten a few of the details / quirks.

First up, "where do these HTML 5 data attributes come from?" I hear you cry. Why from the Validation attributes that live in System.ComponentModel.DataAnnotations.

Let me illustrate. This decoration:


  [Required(),
   Range(0.01, Double.MaxValue, ErrorMessage = "A positive value is required for Price"),
   Display(Name = "My Price")]
  public double Price { get; set; }

specifies that the Price field on the model is required, that it requires a positive numeric value and that its official name is “My Price”. As a result of this decoration, when you use syntax like this in your view:


  @Html.LabelFor(x => x.Price)
  @Html.TextBoxFor(x => x.Price, new { id = "itsMyPrice", type = "number" })

You end up with this HTML:
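Something like the following (reconstructed by hand, so treat the exact messages as illustrative; the range maximum is Double.MaxValue written out in full):

<label for="itsMyPrice">My Price</label>
<input data-val="true"
       data-val-number="The field My Price must be a number."
       data-val-range="A positive value is required for Price"
       data-val-range-max="1.7976931348623157E+308"
       data-val-range-min="0.01"
       data-val-required="The My Price field is required."
       id="itsMyPrice" name="Price" type="number" value="" />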


  
  

As you can see, MVC has done the hard work of translating these data annotations into HTML 5 data attributes so you don't have to. With this in place you can apply your validation in 1 place (the model) and 1 place only. This dramatically reduces the code you need to write. It also reduces duplication and therefore reduces the likelihood of mistakes.

To validate a form it’s as simple as this:


  $("form").validate();

Or if you wanted to validate a single element:


  $("form").validate().element("elementSelector")

Or if you wanted to prevent default form submission until validation was passed:


  $("form").submit(function (event) {

    var isValid = $(this).validate().valid();

    return isValid; //True will allow submission, false will not
        
  });

See what I mean? Simple!

If you want to read up on this further I recommend these links:
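One last associated gotcha before I go: in MVC 3 the drop down list helpers don't emit the unobtrusive validation attributes at all (the bug report is linked in the comments below). The workaround, which I picked up from the ASP.NET forums, is the following set of SelectListFor extension methods which merge the validation attributes in by hand: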


    /// <summary>
    /// MVC HtmlHelper extension methods - html element extensions
    /// These are drop down list extensions that work round a bug in MVC 3: http://aspnet.codeplex.com/workitem/7629
    /// These workarounds were taken from here: http://forums.asp.net/t/1649193.aspx/1/10
    /// </summary>
    public static class DropDownListExtensions
    {
        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
        public static MvcHtmlString SelectListFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> selectList)
        {
            return SelectListFor(htmlHelper, expression, selectList, null /* optionLabel */, null /* htmlAttributes */);
        }


        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
        public static MvcHtmlString SelectListFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> selectList, object htmlAttributes)
        {
            return SelectListFor(htmlHelper, expression, selectList, null /* optionLabel */, new RouteValueDictionary(htmlAttributes));
        }


        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
        public static MvcHtmlString SelectListFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> selectList, IDictionary<string, object> htmlAttributes)
        {
            return SelectListFor(htmlHelper, expression, selectList, null /* optionLabel */, htmlAttributes);
        }


        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
        public static MvcHtmlString SelectListFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> selectList, string optionLabel)
        {
            return SelectListFor(htmlHelper, expression, selectList, optionLabel, null /* htmlAttributes */);
        }


        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
        public static MvcHtmlString SelectListFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> selectList, string optionLabel, object htmlAttributes)
        {
            return SelectListFor(htmlHelper, expression, selectList, optionLabel, new RouteValueDictionary(htmlAttributes));
        }


        [SuppressMessage("Microsoft.Design", "CA1011:ConsiderPassingBaseTypesAsParameters", Justification = "Users cannot use anonymous methods with the LambdaExpression type")]
        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
        public static MvcHtmlString SelectListFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> selectList, string optionLabel, IDictionary<string, object> htmlAttributes)
        {
            if (expression == null)
            {
                throw new ArgumentNullException("expression");
            }


            ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);


            IDictionary<string, object> validationAttributes = htmlHelper
                .GetUnobtrusiveValidationAttributes(ExpressionHelper.GetExpressionText(expression), metadata);


            if (htmlAttributes == null)
                htmlAttributes = validationAttributes;
            else
                htmlAttributes = htmlAttributes.Concat(validationAttributes).ToDictionary(k => k.Key, v => v.Value);


            return SelectExtensions.DropDownListFor(htmlHelper, expression, selectList, optionLabel, htmlAttributes);
        }
    }

Monday 16 July 2012

Rendering Partial View to a String

Well done that man!

Every now and then I'm thinking to myself "wouldn't it be nice if you could do x..." And then I discover that someone else has thought the self same thoughts and better yet they have the answer! I had this situation recently and discovered the wonderful Kevin Craft had been there, done that and made the T-shirt. Here's his blog:

http://craftycodeblog.com/2010/05/15/asp-net-mvc-render-partial-view-to-string/

I wanted to talk about how this simple post provided me with an elegant solution to something I've found niggling and unsatisfactory for a while now...

How it helped

Just last week I was thinking about Partial Views. Some background. I'm working on an ASP.NET MVC 3 project which provides users with a nice web interface to manage the workflow surrounding certain types of financial asset. The user is presented with a web page which shows a kind of grid to the user. As the user hovers over a row they are presented with a context menu which allows them to perform certain workflow actions. If they perform an action then that row will need to be updated to reflect this.

Back in the day this would have been achieved by doing a full postback to the server. At the server the action would be taken, the persistent storage updated and then the whole page would be served up to the user again with the relevant row of HTML updated but everything else staying as is.

Now there's nothing wrong with this approach as such. I mean it works just fine. But in my case I knew that only a single row of HTML was going to change and so I was loath to re-render the whole page. It seemed a waste to get so much data back from the server when only a marginal amount was due to change. And also I didn't want the user to experience the screen refresh flash. Looks ugly.

In the past I've had a solution to this problem which from a UI perspective is good but from a development perspective slightly unsatisfactory. I would have my page call a controller method (via jQuery.ajax) to perform the action. This controller would return a JsonResult indicating success or failure and any data necessary to update the screen. Then in the success function I would manually update the HTML on the screen using the data provided.

Now this solution works but there's a problem. Can you tell what it is yet? It's not very DRY. I'm repeating myself. When the page is initially rendered I have a View which renders (in this example) all the relevant HTML for the screen *including* the HTML for my rows of data. And likewise I have my JavaScript method for updating the screen too. So with this solution I have duplicated my GUI logic. If I update 1, I need to update the other. It's not a massive hardship but it is, as I say, unsatisfactory.

I was recently thinking that it would be nice if I could refactor my row HTML into a Partial View which I could then use in 2 places:
  1. In my standard View as I iterated through each element for display

    and

  2. Nested inside a JsonResult...
The wonderful thing about approach 2 is that it allows me to massively simplify my success callback to this:

$("myRowSelector")
    .empty()
    .html(data.RowHTML); //Where RowHTML is the property that 
                         //contains my stringified PartialView

and if I later make changes to the Partial View these changes will not require me to make any changes to my JavaScript at all. Brilliant! And entirely satisfactory.

On the grounds that someone else might have had the same idea I did a little googling around. Sure enough I discovered Kevin Craft's post which was just the ticket. It does exactly what I'd hoped.
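For completeness, here's a sketch of the controller end of things along the lines of Kevin's post (these methods live inside the controller; "_Row" and updatedRowModel are hypothetical names):

protected string RenderPartialViewToString(string viewName, object model)
{
    ViewData.Model = model;
    using (var sw = new System.IO.StringWriter())
    {
        // Find the partial and render it into the StringWriter
        ViewEngineResult viewResult = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
        var viewContext = new ViewContext(ControllerContext, viewResult.View, ViewData, TempData, sw);
        viewResult.View.Render(viewContext, sw);
        return sw.GetStringBuilder().ToString();
    }
}

[HttpPost]
public ActionResult PerformWorkflowAction(/* ... */)
{
    // ...perform the action, update persistent storage...

    return Json(new
    {
        Success = true,
        RowHTML = RenderPartialViewToString("_Row", updatedRowModel)
    });
}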

Besides being nice and DRY, this approach has a number of other advantages as well:
  • Given it's a Partial View, the Visual Studio IDE provides a nice experience when coding it up with regards to intellisense / highlighting etc. Not something available when you're hand-coding a string which contains the HTML you'd like passed back...
  • A wonderful debug experience. You can debug a Partial View being rendered to a string in just the same way as if the ASP.NET MVC framework were serving it up. I could have lived without this but it's fantastic to have it available.
  • It's possible to nest *multiple* Partial Views within your JsonResult. THIS IS WONDERFUL!!! This means that if several parts of your screen need to be updated (perhaps the row and a status panel as well) then as long as both are refactored into a Partial View you can generate them on the fly and pass them back - see the sketch below.
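So the success handler just grows a line per Partial View - something like this (again, hypothetical property names; each holds a Partial View rendered to a string on the server):

success: function (data) {
    $("#myRowSelector").html(data.RowHTML);       //The updated row
    $("#statusPanel").html(data.StatusPanelHTML); //The updated status panel
}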
Excellent stuff!

Sunday 1 July 2012

Optimally Serving Up JavaScript

I have occasionally done some server-side JavaScript with Rhino and Node.js but this is the exception rather than the rule. Like most folk at the moment, I write almost all of my JavaScript in a web context.

Over time I've come to adopt a roughly standard approach to how I structure my JavaScript; both the JavaScript itself and how it is placed / rendered in an HTML document. I wanted to write about the approach I'm using. Partly just to document the approach but also because I often find writing about something crystallises my feelings on the subject in one way or another. I think that most of what I'm doing is sensible and rational but maybe as I write about this I'll come to some firmer conclusions about my direction of travel.

What are you up to?

Before I get started it's probably worth mentioning the sort of web development I'm generally called to do (as this has obviously influenced my decisions).

Most of my work tends to be on web applications used internally within a company. That is to say, web applications accessible on a Company intranet. Consequently, the user base for my applications tends to be smaller than the Amazons and Googles of this world. It almost invariably sits on the ASP.NET stack in some way. Either classic WebForms or MVC.

"Render first. JS second."

I took 2 things away from Steve Souders' article:

  1. Async script loading is better than synchronous script loading
  2. Get your screen rendered and *then* execute your JavaScript

I'm not doing any async script loading as yet, although I am thinking of giving it a try at some point. In terms of choosing a loader I'll probably give RequireJS first crack of the whip (purely as it looks like most people are tending in its direction and that can't be without reason).
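For what it's worth, the sort of thing I'd be trying looks like this (a minimal RequireJS sketch - the script path is hypothetical):

require.config({
    paths: {
        "jquery": "/Scripts/jquery-1.7.2.min" //Hypothetical path, no .js extension
    }
});

//jQuery is fetched asynchronously so rendering isn't blocked
require(["jquery"], function ($) {
    $(function () {
        //Page initialisation goes here
    });
});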

However - it seems that the concept of async script loading is in some conflict with one of the other tenets of web wisdom: script bundling. Script bundling, if you're not already aware, is the idea that you should combine all your scripts into a single file and then serve just that. This avoids multiple HTTP requests as each script loads in. Async script loading is obviously okay with multiple HTTP requests, presumably because of the asynchronous non-blocking pattern of loading. So. 2 different ideas. And there's further movement on this front right now as Microsoft are baking script bundling into .NET 4.5.

Rather than divide myself between these 2 horses I have at the moment tried to follow the "JS second" part of this advice in my own (perhaps slightly old fashioned) way...

I want to serve you...

I have been making sure that scripts are the last thing served to the screen by using a customised version of Michael J. Ryan's HtmlHelper. This lovely helper allows you to add script references as required from a number of different sources (layout page, view, partial view etc - even the controller if you so desire). It makes it simple to control the ordering of scripts by allowing you to set a priority for each script which determines the render order.

Then as a final step before rendering the </body> tag the scripts can be rendered in one block. By this point the web page is rendered visually and a marginal amount of blocking is, in my view, acceptable.

If anyone is curious - my own version of Michael's helper adds a few go-faster stripes: a caching suffix and the ability to specify dependencies using script references rather than the numeric priority mechanism.

Minification - I want to serve you less...

Another tweak I made to the script helper means that, depending on whether it's a debug or a release build, either the full (debug) or the minified (production) versions of common JS files will be included, where available. This means in a production environment the users get minified JS files and so faster loading. And in a development environment we get the full JS files, which makes debugging more straightforward.

What I haven't started doing is minifying my own JS files as yet. I know I'm being somewhat inconsistent here by sometimes serving minified files and sometimes not. I'm not proud. Part of my rationale for this is that since most of my users use my apps on a daily basis they will for the most part be using cached JS files. Obviously there'll be slightly slower load times the first time they go to a page but nothing that significant, I hope.

I have thought of starting to do my own minification as a build step but have held off for now. Again, this is something being baked into .NET 4.5, which is another reason why I haven't gone a different way in the meantime.

Update

It now looks like these Microsoft optimisations have become this Nuget package. It's early days (well, it was released on 15th August 2012 and I'm writing this on the 16th) but it looks not to be tied to MVC 4 or .NET 4.5, in which case I could use it in my current MVC 3 projects. I hope so...

By the way there's a nice rundown of how to use this by K. Scott Allen of Pluralsight. It's fantastic. Recommended.

Update 2

Having done a little asking around I now understand that this *can* be used with MVC 3 / .NET 4.0. Excellent!

One rather nice alternative script serving mechanism I've seen (but not yet used) is Andrew Davey's Cassette which I mean to take for a test drive soon. This looks fantastic (and is available as a Nuget package - 10 points!).

CDNs (they want to serve you)

I've never professionally made use of CDNs at all. There are clearly good reasons why you should but most of those good reasons relate to public facing web apps.

As I've said, the applications I tend to work on sit behind firewalls and it's not always guaranteed what my users can see of the grand old world of web beyond. (Indeed what they can see sometimes changes on an hour by hour basis...) Combined with that, because my apps are only accessible by a select few I don't face the pressure to reduce load on the server that public web apps can face.

So while CDNs are clearly a good thing, I don't use them at present. And that's unlikely to change in the short term.

TL;DR

  1. I don't use CDNs - they're clearly useful but they don't suit my particular needs
  2. I serve each JavaScript file individually just before the closing </body> tag. I don't bundle.
  3. I don't minify my own scripts (though clearly it wouldn't be hard) but I do serve the minified versions of 3rd party libraries (eg jQuery) in a Production environment.
  4. I don't use async script loaders at present. I may in future; we shall see.

I expect some of the above may change (well, possibly not point #1) but this general approach is working well for me at present.

I haven't touched at all on how I'm structuring my JavaScript code itself. Perhaps next time.

Monday 4 June 2012

Reasons to be Cheerful (why now is a good time to be a dev)

I've been working as a developer in some way, shape or form for just over 10 years now. And it occurred to me the other day that I can't think of a better time to be a software developer than right now. This year was better than last year. Last year was better than the year before. This is a happily recurring theme.

So why? Well I guess there are a whole host of reasons; this is my effort to identify just some of them...

Google and the World Wide Web (other search providers are available)

When I first started out as a humble Delphi developer back in 1999, learning was not the straightforward proposition it is today. If you want to know how to do something these days a good place to start is firing up your browser and putting your question into Google. If I were to ask the question "how do I use AJAX" of a search engine 10 years ago and again now, I would see very different things.

On the left the past, on the right the present. Do try not to let the presence of W3Schools in the search results detract... And also best ignore that the term AJAX wasn't coined until 2005...

What I'm getting at is that finding out information these days can be done really quickly. Excellent search engines are now the norm. Back when I started out this was not the case and you were essentially reliant on what had been written down in books and the kindliness of more experienced developers. Google (and others like them) have done us a great service. They've made it easier to learn.

Blogs / Screencasts / Training websites

Something else that has made it easier to learn is the rise and rise of blogs, screencasts and training websites. Over the last 5 years the internet has been filling up with people writing blogs talking about tools, techniques and approaches they are using. When you're searching for advice on how to do something you can pretty much guarantee these days that some good soul will have written about it already. The most generous devs out there have gone a step further, producing screencasts of themselves coding and sharing them with the world *for free*. See an example from the ever awesome Rebecca Murphey below:



Similarly, there are now a number of commercially available screencasts which make it really easy to ramp up and learn. There's TekPub, there's Pluralsight (who have massively improved my commute with their mobile app by the way). All of these help tug the curtain away from the software development Wizard of Oz. All this is a very wonderful thing indeed!

Podcasts

If you're a Boogie Down Productions fan then you may be aware of the concept of Edutainment. That is to say, the bridge that can exist between entertainment and education. This is what I've found podcasts to be. I listen to a lot. Hanselminutes. Herding Code. JavaScript Jabber. The JavaScript Show. Yet Another Podcast. There's more.

There's something wonderful about listening to other developers who are passionate about what they are doing. Interested in their work. Enthusiastic about their projects. It's infectious. It makes you want to grab a keyboard and start trying things out. I can't imagine I'm the only dev that feels this way.

And of course I couldn't fail to mention my favourite podcast: This Developer's Life. Put together by Scott Hanselman and Rob Conery (I love these guys by the way), and inspired by This American Life, this show tells some of the stories experienced by developers. It gives an insight into what it's like to be a developer. This podcast is more entertaining than educational but it's absolutely *fantastic*.

JavaScript (and HTML and CSS too)

All of the above have eased the learning path of developers and made it easier to keep in touch with the latest and greatest happenings in the dev world. Along with this there has, in my opinion, also been something of a unifying of purpose in the developer community of late. I attribute this to JavaScript, HTML and CSS.

Back when I started out it seemed much more the case that developers were split into different tribes. There was the Delphi tribe, the Visual Basic tribe, the C++ tribe, the Java tribe (very much the "hip young gunslingers" tribe back then - I guess these days it'd be the Node.JS guys) as well as many others. And each tribe more or less seemed to keep themselves to themselves. This wasn't malicious as far as I could tell; that just seemed to be the way it was.

But shortly after I started out the idea of the web application took off in a major way. I was involved in this coming from the position of being an early adopter of ASP.NET (which I'd used, and loved, since it was first in beta). Many other web application technologies were available: JSP, PHP, Perl and the like. But what they all had in common was this: they all pumped out HTML and CSS to the user. Suddenly all these developers from subtly different backgrounds were all targeting the same technology for their GUI.

This unifying effect has been *massively* reinforced by JavaScript. Whereas HTML is a markup language, JavaScript is a programming language. And more by accident than grand design JavaScript has kind of become the VM of the web. Given the rise and rise of the rich web client (driven onwards and upwards by the popularity of AJAX, Backbone.JS etc) this has meant that devs of all creeds and colours have been forced to pitch a tent on the same patch of dirt. Pretty much all of the tribes now have an embassy in JavaScript land.

So there are all these devs out there who are used to working with different server-side technologies from each other. But when it comes to the client, we are all sharing the common language of JavaScript. To a certain extent we're all creating data services that just pump out JSON to the client. Through forums like StackOverflow devs of all the tribes are helping each other with web client "stuff". They're all interacting in ways that they probably wouldn't otherwise if the web client environment was as diverse as the server-side environment...

The Browser Wars Begin Again

Didn't things seem a little dull around 2003/2004? IE 6 had come out 3 years previously and had vanquished all comers. Microsoft was really the only game in town browser-wise. Things had stopped changing; it seemed like browsers were "done". You know, good enough and there was no need to take things any further.

Then came Firefox. This lone browser appeared as an alternative to the might of IE. I must admit the thing that first attracted me to Firefox was the fact it had tabs. I mean technically I knew Firefox was more secure than IE but honestly it was the tabs that attracted me in the first place. (This may offer some insight as to why so many people still smoke...)

And somehow Firefox managed to jolt Microsoft out of its inertia on the web. Microsoft started caring about IE again. (Not enough until quite recently in my book but you've got to start somewhere.) I'm a firm believer that change for its own sake can often be a good thing. Change makes you think about why you do what you do and wonder if there might be better approaches that could be used instead. And these changes kind of feed into...

...HTML 5!

That's right, HTML 5, which is all about change. It's taking HTML as we know and love it and bolting on new stuff. New elements (canvas), new styling (CSS 3), new JavaScript APIs, faster JavaScript engines, support for ECMAScript 5. The list goes on...

And all this new stuff is exciting, whizzy, fun to play with. That which wasn't possible yesterday is possible now. Playing with new toys is half the fun of being a dev. There's a lot of new toys about right now.

The Feeling of Possibilities

This is what it comes down to I think. It's so easy to learn these days and there's so much to learn about.

Right now lots of things are happening above and beyond what I've mentioned above. Open source has come of age and gone mainstream. Github is with us. Google are making contentious forays into new languages with Dart and Native Client. Microsoft aren't remotely evil empire-like these days; they've made .NET like a Swiss army knife. You can even run Node.js on IIS now! SignalR, Websockets, Coffeescript, JS.Next, Backbone.JS, Entity Framework, LINQ, the mobile web, ASP.NET MVC, Razor, Knockout.JS, the cloud, Windows Azure...

So much is happening right now. People are making things. It's a very interesting time to be a dev. There are many reasons to be cheerful.

Wednesday 30 May 2012

Dad Didn't Buy Any Games

Inspired by Hanselman's post on how he got started in programming I thought I'd share my own tale of how it all began...

I grew up in the 80's just outside London. For those of you of a different vintage let me paint a picture. These were the days when "Personal Computers", as they were then styled, were taking the world by storm. Every house would be equipped with either a ZX Spectrum, a Commodore 64 or an Amstrad CPC. These were 8 bit computers which were generally plugged into the family television and spent a good portion of their time loading games like Target: Renegade from an audio cassette.

But not in our house; we didn't have a computer. I remember mournfully pedalling home from friends' houses on a number of occasions, glum as I compared my lot with theirs. Whereas my friends would be spending their evenings gleefully battering their keyboards as they thrashed the life out of various end-of-level bosses I was reduced to *wasting* my time reading. That's right Enid Blyton - you were second best in my head.

Then one happy day (and it may have been a Christmas present although I'm not certain) our family became the proud possessors of an Amstrad CPC 6128:

Glory be! I was going to play so many games! I would have such larks! My evenings would be filled with pixelated keyboard related destruction! Hallelujah!!

But I was wrong. I had reckoned without my father. For reasons that I've never really got to the bottom of Dad had invested in the computer but not in the games. Whilst I was firmly of the opinion that these 2 went together like Lennon and McCartney, he was having none of it. "You can write your own, son" he intoned and handed over a manual which contained listings for games:


It wasn't this - but it wasn't much different

And that's where it first began really. I would spend my evenings typing the Locomotive Basic listings for computer games into the family computer. Each time I started I would be filled with great hopes for what might result. Each time I tended to be rewarded with something that looked a bit like this:


Frankly I prefer Double Dragon....

I'm not sure that it's possible to learn to program by osmosis but if it is I'm definitely a viable test case. I didn't become an expert Locomotive Basic programmer (was there ever such a thing?) but I did undoubtedly begin my understanding of software.... Thanks Dad!

Monday 7 May 2012

Globalize.js - number and date localisation made easy

I wanted to write about a JavaScript library which seems to have had very little attention so far. And that surprises me as it's

  1. Brilliant!
  2. Solves a common problem that faces many app developers who work in the wonderful world of web; myself included

The library is called Globalize.js and can be found on GitHub here. Globalize.js is a simple JavaScript library that allows you to format and parse numbers and dates in culture specific fashion.

Why does this matter?

Because different countries and cultures do dates and numbers in different ways. Christmas Day this year in England will be 25/12/2012 (dd/MM/yyyy). But to American eyes this should be 12/25/2012 (M/d/yyyy). And to German eyes 25.12.2012 (dd.MM.yyyy). Likewise, if I were to express numerically the value of "one thousand exactly - to 2 decimal places", as a UK citizen I would do it like so: 1,000.00. But if I were French I'd express it like this: 1.000,00. You see my point?

Why does this matter to me?

For a number of years I've been working on applications that are used globally; from London to Frankfurt to Shanghai to New York to Singapore and many other locations besides. The requirement has always been to serve up localised dates and numbers so users' experience of the system is more natural. Since our applications are all ASP.NET we've never really had a problem server-side. Microsoft have blessed us with all the goodness of System.Globalization which covers hundreds of different cultures and localisations. It makes it frankly easy:


using System.Globalization;

//Produces: "06.05.2012"
new DateTime(2012,5,6).ToString("d", new CultureInfo("de-DE")); 

//Produces: "45,56"
45.56M.ToString("n", new CultureInfo("fr-FR")); 

The problem has always been client-side. If you need to localise dates and numbers on the client what do you do?

JavaScript Date / Number Localisation - the Status Quo

Well to be frank - it's a bit rubbish really. What's on offer natively at present basically amounts to the toLocaleString family of methods.
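By way of illustration (the outputs in the comments are indicative only - what you actually get varies by browser and by the machine's regional settings):

var d = new Date(2012, 4, 6);

d.toLocaleDateString();    //e.g. "06 May 2012" on a UK-configured machine
d.toLocaleTimeString();    //e.g. "00:00:00"
(1000.5).toLocaleString(); //e.g. "1,000.5" - and no say over decimal places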

This is better than nothing - but not by much. There's no real control or flexibility here. If you don't like the native localisation format or you want something slightly different then tough. This is all you've got to play with.

For the longest time this didn't matter too much. Up until relatively recently the world of web was far more about the thin client and the fat server. It would be quite standard to have all HTML generated on the server. And, as we've seen, .NET (and many other back end environments as well) gives you all the flexibility you might desire given this approach.

But the times they are a-changing. And given the ongoing explosion of HTML 5 the rich client is very definitely with us. So we need tools.

Microsoft doing *good things*

Hands up who remembers when Microsoft first shipped its ASP.NET AJAX library back in 2007?

Well, a small part of this was the extensions ASP.NET AJAX added to JavaScript's native Date and Number objects. These extensions allowed the localisation of Dates and Numbers to the current UI culture and the subsequent parsing of those strings back into Dates / Numbers. They pretty much gave JavaScript the functionality that the server already had in System.Globalization (not quite like-for-like but near enough the mark).
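From memory, using them looked roughly like this (a sketch only - it assumes the Microsoft Ajax scripts are on the page and the UI culture has been shipped to the client via the ScriptManager's EnableScriptGlobalization property):

var d = new Date(2012, 4, 6);

d.localeFormat("d");            //"06.05.2012" given a de-DE UI culture
Number.parseLocale("4.576,34"); //4576.34 given a de-DE UI culture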

I'm not aware of a great fuss ever being made about this - a fact I find surprising since one would imagine this is a common need. There's good documentation about this on MSDN if you go looking.

When our team became aware of this we started to make use of it in our web applications. I imagine we weren't alone...

Microsoft doing *even better things* (Scott Gu to the rescue!)

I started to think about this again when MVC reared its lovely head.

Like many, I found I preferred the separation of concerns / testability etc that MVC allowed. As such, our team was planning to, over time, migrate our ASP.NET WebForms applications over to MVC. However, before we could even begin to do this we had a problem. Our JavaScript localisation was dependent on the ScriptManager. The ScriptManager is very much a WebForms construct.

What to do? To the users it wouldn't be acceptable to remove the localisation functionality from the web apps. The architecture of an application is, to a certain extent, meaningless from the user's perspective - they're only interested in what directly impacts them. That makes sense, even if it was a problem for us.

Fortunately the Great Gu had it in hand. Lo and behold, this post appeared on the jQuery forum and the following post appeared on Guthrie's blog:

http://weblogs.asp.net/scottgu/archive/2010/06/10/jquery-globalization-plugin-from-microsoft.aspx

Yes that's right. Microsoft were giving back to the jQuery community by contributing a jQuery globalisation plug-in. They'd basically taken the work done with ASP.NET AJAX Date / Number extensions, jQuery-plug-in-ified it and put it out there. Fantastic!

Using this we could localise / globalise dates and numbers whether we were working in WebForms or in MVC. Or anything else for that matter. If we were suddenly seized with a desire to re-write our apps in PHP we'd *still* be able to use Globalize.js on the client to handle our regionalisation of dates and numbers.

History takes a funny course...

Now for my part I would have expected this announcement to be followed in short order by dancing in the streets and widespread adoption. Surprisingly, not so. All went quiet on the globalisation front for some time and then out of the blue the following comment by Richard D. Worth (he of jQuery UI fame) appeared on the jQuery forum:

http://blog.jquery.com/2011/04/16/official-plugins-a-change-in-the-roadmap/#comment-527484

The long and short of which was:

  • The jQuery UI team were now taking care of (the re-named) Globalize.js library as the grid control they were developing had a need for some of Globalize.js's goodness. Consequently a home for Globalize.js appeared on the jQuery UI website: http://wiki.jqueryui.com/Globalize
  • The source of Globalize.js moved to this location on GitHub: https://github.com/jquery/globalize/
  • Perhaps most significantly, the jQuery globalisation plug-in as developed by Microsoft had now been made a standalone JavaScript library. This was clearly brilliant news for Node.js developers as they would now be able to take advantage of this and perform localisation / globalisation server-side - they wouldn't need to have jQuery along for the ride. Also, this would presumably be good news for users of other client-side JavaScript libraries like Dojo / YUI etc.

Globalize.js clearly has a rosy future in front of it. Using the new Globalize.js library is still simplicity itself. Here are some examples of localising dates / numbers using the German culture:


<script 
  src="/Scripts/Globalize/globalize.js" 
  type="text/javascript"></script>
<script 
  src="/Scripts/Globalize/cultures/globalize.culture.de-DE.js" 
  type="text/javascript"></script>

Globalize.culture("de-DE");

//"2012-05-06" - ISO 8601 format
Globalize.format(new Date(2012,4,6), "yyyy-MM-dd");

//"06.05.2012" - standard German short date format of dd.MM.yyyy
Globalize.format(new Date(2012,4,6), Globalize.culture().calendar.patterns.d);   

//"4.576,3" - a number rendered to 1 decimal place
Globalize.format(4576.34, "n1");
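And since parsing is the mirror image of formatting, here's how that looks (still with the German culture set above):

//4576.34 - parsed using the current (German) culture
Globalize.parseFloat("4.576,34");

//A Date of 6 May 2012 - parsed against the German short date pattern
Globalize.parseDate("06.05.2012", "dd.MM.yyyy");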

Stick a fork in it - it's done

The entry for Globalize.js on the jQuery UI site reads as follows:

"version: 0.1.0a1 (not a jQuery UI version number, as this is a standalone utility)
status: in development (part of Grid project)"

I held back from making use of the library for some time, deterred by the "in development" status. However, I had a bit of dialogue with one of the jQuery UI team (I forget exactly who) who advised that the API was unlikely to change further and that the codebase was actually pretty stable. Our team did some testing of Globalize.js and found this very much to be the case. Everything worked just as we expected and hoped. We're now using Globalize.js in a production environment with no problems reported; it's been doing a grand job.

In my opinion, Number / Date localisation on the client is ready for primetime right now - it works! Unfortunately, because Globalize.js has been officially linked in with the jQuery UI grid project it seems unlikely that this will officially ship until the grid does. Looking at the jQuery UI roadmap the grid is currently slated to release with jQuery UI 2.1. There isn't yet a release date for jQuery UI 1.9 and so it could be a long time before the grid actually sees the light of day.

I'm hoping that the jQuery UI team will be persuaded to "officially" release Globalize.js long before the grid actually ships. Obviously people can use Globalize.js as is right now (as we are) but it seems a shame that many others will be missing out on using this excellent functionality, deterred by the "in development" status. Either way, the campaign to release Globalize.js officially starts here!

The Future?

There are plans to bake globalisation right into JavaScript natively with the ECMAScript Internationalization API. There's a good post on the topic here. And here's a couple of historical links worth reading too:

http://norbertlindenberg.com/2012/02/ecmascript-internationalization-api/
http://wiki.ecmascript.org/doku.php?id=globalization:specification_drafts