Microsoft WCF gets serious about HTTP

Chances are, if you have read any of my blog posts before, you will be aware of my appreciation for a certain HttpClient library that appeared in the WCF REST Starter Kit a few years ago.  After a very long incubation period, and some uncertainty about its future, I am really excited that an official .NET 4 version is now available for download.

When I started the series of articles on the HttpClient library, I had planned to write a piece on how HttpClient can be used on the server.  I never did get around to it, and I’m kind of glad I didn’t spend any time on it, because now I have something much better to write about: Microsoft.ServiceModel.Http.

PDC 2010

In Glenn’s talk at PDC today he announced an evolution of WCF’s approach to dealing with the web.  This new set of libraries is built on the foundations of WCF, but with a different perspective: don’t abstract the application protocol, embrace it.  If there was a mantra to this project, it was “let’s do HTTP right, the way RFC2616 says it should be done.”  At the core, the WCF engine is still powering the bytes across the wire because, for all the things I have cursed WCF for in the past, performance and reliability have never been issues for me.  It is one solid communication library.

But what about all the other HTTP offerings?

I am sure many people are going to ask: what about System.ServiceModel.Web, isn’t that WCF’s solution to HTTP and REST?  What about WCF Data Services? OData?  Why WCF at all, why not just stick with ASP.NET MVC? It does HTTP pretty darn well.

All of these solutions address a certain problem space.  OData makes it really quick to expose raw data.  ASP.NET MVC is great for delivering HTML and JavaScript.  System.ServiceModel.Web provides web services without the friction of SOAP.

However, to an extent each one of these solutions is a technology silo, and each comes with its own set of limitations and quirks relating to its underlying technology.  And if you believe System.ServiceModel.Web is sufficient to easily build RESTful systems, there are hundreds of people waiting for your wisdom here.

Old friends and new friends

This new WCF HTTP stack provides building blocks that can be used to address all of these problem spaces, either as a direct replacement or as a complementary solution.  In the first code drop on CodePlex there is a prototype project called Microsoft.ServiceModel.WebHttp that emulates the behaviour of System.ServiceModel.Web.  Other prototypes have been worked on that provide an experience closer to the way OpenRasta works.  This new infrastructure also plays very nicely with ASP.NET, as you can see in the ContactManager sample.  ASP.NET ServiceRoutes can be used to host WCF services so that ASP.NET and WCF can work seamlessly side by side.
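To give a feel for the routing side, registering a WCF service on an ASP.NET route looks roughly like this.  The route prefix and service type below are placeholders of my own invention, and the stock WebServiceHostFactory stands in for whatever factory a given sample plugs in:

```csharp
// Global.asax.cs of an ASP.NET application.
// ServiceRoute (System.ServiceModel.Activation) lets ASP.NET routing
// dispatch matching URLs to a WCF service host factory.
using System;
using System.ServiceModel.Activation;
using System.Web.Routing;

// Placeholder service type for illustration only.
public class ContactService { }

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Every URL under /contacts is handed to the WCF host for
        // ContactService; everything else stays with ASP.NET.
        RouteTable.Routes.Add(new ServiceRoute(
            "contacts",
            new WebServiceHostFactory(),
            typeof(ContactService)));
    }
}
```

Because the route table owns the URL space, ASP.NET MVC pages and WCF endpoints can live side by side in one application.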

Although there is currently no sample showing it, the new stuff can also be self-hosted.  I am currently putting together a sample to do just that.
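In the meantime, the hosting model follows classic WCF, so a self-host with today's System.ServiceModel.Web bits looks roughly like this (the service, operation, and port below are made up for illustration):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

// A minimal hypothetical service; the contract is the class itself.
[ServiceContract]
public class GreetingService
{
    [OperationContract]
    [WebGet(UriTemplate = "greet/{name}")]
    public string Greet(string name) { return "Hello, " + name; }
}

class Program
{
    static void Main()
    {
        // Host the service in a plain console process; no IIS required.
        var host = new WebServiceHost(typeof(GreetingService),
                                      new Uri("http://localhost:8080/"));
        host.Open();
        Console.WriteLine("Listening on http://localhost:8080/greet/{name}");
        Console.ReadLine();
        host.Close();
    }
}
```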

RFC2616 all the way

One of the best things about the HttpClient library was the strong types that helped me when making HTTP requests.  All of the headers have strong types that deal with the string-parsing details, and because the types are consistent with RFC2616, learning them is easy if you already know HTTP.  In fact, it has been quite surprising how much I have learned about HTTP from the types themselves.  Microsoft.ServiceModel.Http actually has a dependency on the Microsoft.Http client library and re-uses all of those strong types: you use the same HttpResponseMessage and HttpRequestMessage on the server as you do on the client.
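From memory of the Starter Kit bits, basic client usage looks something like the sketch below; the endpoint URL is a placeholder and exact member names may have shifted between the Starter Kit and the new drop:

```csharp
using System;
using Microsoft.Http;  // the client library that Microsoft.ServiceModel.Http re-uses

class Program
{
    static void Main()
    {
        // http://example.org/ is a placeholder endpoint for illustration.
        using (var client = new HttpClient("http://example.org/"))
        {
            HttpResponseMessage response = client.Get("contacts");
            response.EnsureStatusIsSuccessful();

            // Headers come back as parsed, strongly typed values rather
            // than raw strings you have to split apart yourself.
            Console.WriteLine(response.Headers.ContentType);
            Console.WriteLine(response.Content.ReadAsString());
        }
    }
}
```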

Where’s the magic?

One thing I hated about WCF was the magic.  Sprinkle some attributes on your classes, scatter some XML elements in config files, and fire up your service.  When a request comes in, magically it executes your operation.  This inevitably leads to questions like: how do I implement logging across all endpoints?  How do I handle exceptions?  And hundreds of others on the intricacies and limitations of how WCF deals with parameters passed to operations.

So how does Microsoft.ServiceModel.Http attempt to address these cross-cutting concerns?  With a pipeline, of course.  To be more specific, a request and a response pipeline for every HTTP method / URI template combination.  However, the pipeline is not just there for WCF to do its thing; it is also available to the application developer to add in whatever functionality they want.  So you can make your own magic happen!

The basic idea is that you can add processors into the request and response pipelines, and each processor will be given an opportunity to contribute to the end result in some way.  Each processor is required to declare what it wants as input parameters and what it will output.  The output of a processor is accumulated in the context of the request and can be consumed by later processors.
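To make the idea concrete, here is a toy pipeline of my own, deliberately *not* the real Microsoft.ServiceModel.Http types: each processor reads named values from a shared context and contributes new ones for processors further down the chain.

```csharp
using System;
using System.Collections.Generic;

// Toy processor abstraction for illustration only.
interface IProcessor
{
    void Execute(IDictionary<string, object> context);
}

// A framework-style processor: pulls the raw URI out of the context
// and contributes an "id" value for later processors to consume.
class UriParsingProcessor : IProcessor
{
    public void Execute(IDictionary<string, object> context)
    {
        var uri = (string)context["Uri"];               // e.g. "/contacts/5"
        context["id"] = uri.Substring(uri.LastIndexOf('/') + 1);
    }
}

// An application-style processor: a cross-cutting concern like logging.
class LoggingProcessor : IProcessor
{
    public void Execute(IDictionary<string, object> context)
    {
        Console.WriteLine("Handling {0}", context["Uri"]);
    }
}

class Pipeline
{
    private readonly List<IProcessor> processors = new List<IProcessor>();
    public void Add(IProcessor p) { processors.Add(p); }

    // Run each processor in order; outputs accumulate in the context.
    public IDictionary<string, object> Run(string uri)
    {
        var context = new Dictionary<string, object> { { "Uri", uri } };
        foreach (var p in processors)
            p.Execute(context);
        return context;
    }
}

class Program
{
    static void Main()
    {
        var pipeline = new Pipeline();
        pipeline.Add(new LoggingProcessor());      // your magic
        pipeline.Add(new UriParsingProcessor());   // the framework's magic
        var result = pipeline.Run("/contacts/5");
        Console.WriteLine(result["id"]);           // prints "5"
    }
}
```

The real thing is richer (typed inputs and outputs, separate request and response pipelines per operation), but the accumulation-in-context idea is the same.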

As one example, consider the System.ServiceModel.WebHttp prototype that is used by the ContactManager sample.  It automatically sets up a UriTemplateProcessor and an XmlProcessor on the request pipeline, and a ResponseEntityBodyProcessor and an XmlProcessor on the response side.  The UriTemplateProcessor is responsible for using the UriTemplate to parse parameters out of the incoming URI.  The XmlProcessor is responsible for serializing and deserializing objects to XML.  The ResponseEntityBodyProcessor is there to set the Content-Type header of the response.
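The URI matching itself is presumably the standard UriTemplate class from System.ServiceModel.Web doing the work, which you can try on its own:

```csharp
using System;  // UriTemplate lives in the System namespace
               // (System.ServiceModel.Web assembly)

class Program
{
    static void Main()
    {
        // Match an incoming URI against a template and pull out the
        // bound variables, which is exactly what a processor like
        // UriTemplateProcessor hands on to your operation's parameters.
        var template = new UriTemplate("contacts/{id}");
        var baseAddress = new Uri("http://localhost/");
        var request = new Uri("http://localhost/contacts/5");

        UriTemplateMatch match = template.Match(baseAddress, request);
        Console.WriteLine(match.BoundVariables["id"]);  // prints "5"
    }
}
```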

With just a few processors sitting on top of the HTTP stack, it is possible to emulate what System.ServiceModel.Web does.  This is going to be very important for backwards compatibility, for those who have already invested in these tools but would like to gain the flexibility of the new stuff.

Pipeline processors are going to enable us to do all sorts of cool stuff, and I expect to be writing more articles on my adventures in building processors to support the REST projects I am working on.

It’s the same, but different

For people who have seen WSGI or Rack, this idea of a pipeline may sound familiar.  However, from what I understand, it is different in a couple of significant ways.  First, processors are very explicit about what they accept as input parameters and what output parameters they contribute.  Second, WCF HTTP does not route all requests through the same pipeline: when the service is initialized, a configuration sequence identifies all of the operations that exist and which processors will be used when accessing each specific endpoint.

Initially I was horrified by the idea of hundreds of these pipelines being built at startup, but the more familiar I become with the inner workings of WCF, the more this appears to be a common trade-off: do as much work as possible during initialization to reduce the work required while processing a request.  Considering the number of times a service is started, as compared to the number of times a request is processed, I think I can live with a slower startup time and a bit of additional memory overhead.  This approach also makes it much less costly to apply a processor to only a subset of URIs, as the filtering is done just once up front.

These are just building blocks

The bits on the new WCF CodePlex site are really just the beginning.  They lay a foundation on top of which we can build really effective REST frameworks.  I intentionally used frameworks, plural, because there are many ways to implement RESTful systems, and while I think opinionated frameworks are a good idea, not everyone shares the same opinions :-).

At last a reality that is not painful

So, in essence, my experience with both Microsoft.ServiceModel.Http and Microsoft.Http has led me to the opinion that this is a great starting point for building distributed applications using HTTP.  The benefit of it being based on WCF is that WCF is proven technology that plays nicely with other protocols, so when HTTP is not the right solution, you can use the same infrastructure.  And finally, we have a solution that begins the process of unifying the variety of different web technology stacks around a solid web specification, with an extensibility model that is actually comprehensible.

What next?

You will quickly discover that the documentation on the CodePlex site is sparse.  I think one reason for this is that the code has been changing so much over the last few months that trying to write documentation would have been futile.  However, moving forward I expect that to change pretty rapidly.  You will also notice that the docs and samples are currently heavily focused on IIS/ASP.NET-hosted solutions.  Do not let that deceive you, self-hosting still rulez!  I hope to contribute towards fixing that disparity.