Web APIs: Don’t be a victim of your success.

It seems everyone these days wants an API for their web application.  Getting a primitive Web API up and running can be deceptively simple. However, what many people don’t realize is that with the success of an API can come a variety of problems.


As developers, we are used to dealing with unforeseen problems: we just fix them and redeploy. APIs, however, are different beasts, because the applications that consume the API are often written by external developers, not by you. Making significant changes is likely to impact clients, and breaking clients not only annoys the client developers but also the users of those clients. It’s just not good for business.

This article discusses approaches you can take upfront that will minimize the chances of breaking clients as your API evolves to meet the crushing demands of success 🙂 .

The goals

Before we discuss the problems and solutions of API development, it would seem prudent to identify why we are creating the API in the first place. Here are my assumptions as to why people want APIs for their web applications:

  • Reach more people with less effort.  By enabling third parties to write apps you can get your application on more devices without doing the work yourself.
  • Enable scripted scenarios.  Sometimes the best way to extract data is to write long-running “spider-like” processes that crawl data looking for answers.
  • Facilitate mash-ups. 
  • Enable developers to use data in unforeseen ways.

The problems

If the API becomes popular, it is likely going to need to handle a very large number of hits. Back in April 2010, Twitter revealed that 75% of its traffic came via its API. The API needs the same scalable qualities that the web site itself has.

One of the stated goals is that we want to allow developers to use our data in unforeseen ways. By definition, if you don’t know what your users are going to do with your API, how can you capacity plan? And more curiously, how can you design the API for an unknown set of use cases?

Once people are writing clients for your API, how can you add more features, manage expensive features, and discourage people from making wasteful requests? If you look at many of the Twitter client applications with an HTTP tracing tool, you will see that a significant number of unnecessary requests are being made. Twitter attempts to encourage client developers to be efficient by imposing an API rate limit, but that has limited effectiveness.

Using Twitter as another interesting example, they recently posted a message to the developer community discouraging people from developing new clients.  One of the stated reasons was that users were getting an inconsistent experience from the client applications.  Is it possible to design an API in such a way that client applications behave in a similar way, even though they have been developed on different platforms/devices by completely different teams?

The current approach

If you look at most APIs, developers seem to have taken their database schema or object model and created a CRUD-style interface to access those objects. The thinking appears to be that a CRUD interface is sufficiently generic that any client will be able to do whatever it wants with the data. In some respects this is true, but it is an approach that is full of problems. It requires the client application to have intimate knowledge of the structure of your domain to do even the simplest operation. Also, because the interface is so primitive, more complex operations can be very expensive to perform, and you as the API provider are paying for that waste.

One solution that API producers are trying is allowing arbitrary queries against the API. Frameworks like OData enable these types of queries out of the box using a sophisticated query-string syntax. I understand the reasoning behind it, but imagine approaching your favourite DBA and asking if he minds opening up his database to let any developer run any SQL query against it. It is a solution that will work right up until the point where your API becomes successful, and then you will have a big performance problem. StackOverflow avoided the problem by creating the StackExchange Data Explorer, which allows arbitrary queries to be run against a static data dump. Their API offering seems to have gained little traction.

The current contingency plan for handling the scaling of successful APIs is to get VC money and then throw more hardware at the problem: more memcached servers, more nginx servers, all to try to keep up with the complex demands of third-party apps that are forced to use a primitive, generic API that requires far too many round trips to do anything significant.

When I reviewed the goals of an API earlier, I suggested that one benefit of an API is that you can offload some of the work of supporting multiple platforms onto third-party developers. Unfortunately, the current approach to APIs has burdened API providers with a more onerous task. It is assumed that API providers will provide client libraries in a variety of different languages, supporting different response formats, with the intent of simplifying the lives of client developers. I have seen cases where the API provider has dozens of different client libraries that they are required to maintain.

Unfortunately, providing client libraries just moves the point of coupling from the flexible HTTP interface to a tightly coupled client library interface. It can be beneficial to provide some kind of library that helps with parsing responses, especially if custom media types are used, but the examples I see regularly severely limit the ability of the API to evolve.

A better approach

The first critical thing to do when designing a web API is to identify the most common usage scenarios. Then design a way for consumers to access that information efficiently, both for you and for them.

Encourage API consumers to traverse the API in the way you feel is best. Don’t create multiple different ways to access the same thing just in case somebody might need them. Designing for serendipitous reuse does not mean you should attempt to plan for every usage scenario; it means people will use your API in ways you did not expect.

What you do next is listen, measure and evolve. The key is to start small and plan for your API to evolve based on feedback and measured metrics. If consumers are constantly using a path through your resources that takes four round trips, and you can add an extra link to the root resource that lets them do it in one, they will be happy and you will have less load. If users are pounding a particular resource hundreds of times per second, then tweak the caching parameters.

Web API building is a classic case of the Agile versus Big Design Up Front approaches.  Currently people are building a complete V1 API and throwing it over the wall and then 6-12 months later throwing a V2 over the wall and hoping their clients will upgrade.  Trying to control and constrain change is very difficult to do.  Instead of trying to limit change, you need to accept that change will be necessary and use a methodology that accepts change as the norm and enables it to happen with minimum negative impact.

Listening to feedback and responding quickly is critical. Users are going to ask, “Can you also include this piece of data?” If you can do it with minimal impact on your load, then do it, and do it quickly. A limited initial API will be forgiven by consumers if they feel you are prepared to add new things as required.

If it is going to be more expensive to provide some requested information, then don’t include it directly in a response but give a link to it.  Those consumers who want it can request it, those who don’t will not pay the price.  If you decide at a later time that you can’t afford to provide that data, you can always remove it.  Client developers should be told that they should build their apps to gracefully handle the removal of links from responses.
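As a sketch of this idea, here is a hypothetical order resource (the field names, link relation and URLs are made up for illustration) that links to expensive shipping history rather than embedding it, along with a helper that degrades gracefully when the server later removes the link:

```python
import json

# Hypothetical order resource: the expensive shipping history is linked,
# not embedded, so only clients that want it pay the cost of fetching it.
response_body = json.dumps({
    "id": 42,
    "status": "shipped",
    "links": [
        {"rel": "shipping-history", "href": "/api/orders/42/history"}
    ]
})

def find_link(resource, rel):
    """Return the href for a link relation, or None if the server
    has removed it -- clients should degrade gracefully, not crash."""
    for link in resource.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

order = json.loads(response_body)
history_url = find_link(order, "shipping-history")
if history_url is not None:
    print("History available at", history_url)  # fetch only if needed
else:
    print("History not offered; hide that part of the UI")
```

The important design choice is that absence of the link is a normal, expected state, not an error.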

It is important that consumers of your API understand this evolving approach you are taking to building the API, and that they take some responsibility for ensuring that you can continue to develop your API without breaking their clients. This requires that clients use the links you provide instead of constructing them manually. It requires them to react to the responses that are returned instead of presuming certain information will be returned.


Step 1 : Build an API home page

Your API consumers should start at a single URL, e.g. http://awesomecompany.com/api, and that should be the only URL your consumers hard-code into their application.

Step 2 : Fill the home page with links to your top level resources

The home page should contain a list of links and URI templates for all the places the consumer can go next. API consumers shouldn’t just use a Web API; they should surf the API.
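To make this concrete, here is one possible shape for such a home page (the structure, link relations and URLs are purely illustrative, and JSON is used here for brevity even though the same idea applies to XML), with a tiny helper that looks links up by meaning rather than by hard-coded URL:

```python
import json

# A hypothetical API home page: every link carries a "rel" so clients
# can find what they need by link relation rather than by fixed URL.
home = json.loads("""{
  "links": [
    {"rel": "search",   "href-template": "/api/search{?q}"},
    {"rel": "products", "href": "/api/products"},
    {"rel": "orders",   "href": "/api/orders"}
  ]
}""")

def link_by_rel(document, rel):
    """Find the first link with the given link relation."""
    return next(l for l in document["links"] if l["rel"] == rel)

print(link_by_rel(home, "products")["href"])  # /api/products
```

Because clients navigate by "rel", the server is free to move resources to new URLs without breaking anyone.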

Each link needs to be identified in some way.  How depends to a certain extent on the media type you use, but in the XML world, the “rel” attribute otherwise know as a  link relation is becoming the standard way to convey meaning to a link between two resources.

Step 3: Link the rest of your resources together in the way that makes the most sense to you.

Not only should your top-level document link to available resources; each resource should also link to related resources. The intent is to build a web of links between your resources that allows the client developer to surf your entire API.

Step 4:  Decide on the lifetime for every resource.

Consider how often every resource changes. In the case of a home page, you might decide to allow it to last for 24 hours before becoming stale. Other resources you may only want to consider current for a few seconds. Caching is an incredibly powerful tool that gives you a great deal of control over the scaling of your API. Caching can happen on the client and it can also happen on the server, and responses should be tuned to take advantage of both.

Step 5: Setup the infrastructure to measure how the API is being used.

Without this infrastructure in place early, you will not be able to react to unforeseen usage patterns and ensure that clients are getting the data they want with the minimum amount of effort from your API.
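The measurement itself can start very simply. This is a toy sketch, not a production metrics pipeline: a decorator that counts hits per method and path, which is enough to spot hot resources and wasteful request patterns.

```python
from collections import Counter

# Count hits per (method, path) so hot resources stand out.
hits = Counter()

def measured(handler):
    """Wrap a request handler to record every call before handling it."""
    def wrapper(method, path):
        hits[(method, path)] += 1
        return handler(method, path)
    return wrapper

@measured
def handle(method, path):
    # A stand-in for real request handling.
    return "200 OK"

for _ in range(3):
    handle("GET", "/api/products")
handle("GET", "/api/orders")

print(hits.most_common(1))  # [(('GET', '/api/products'), 3)]
```

In a real system you would push these counts into your logging or metrics stack, but the principle is the same: measure per resource, from day one.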

Step 6:  Setup some kind of feedback mechanism for your users.

Discussion forums, UserVoice, wikis: there is a wealth of ways you can accept feedback. The critical thing is to ensure your users are aware that you are listening and acting upon that feedback.

Step 7:  Teach your consumers how to use your API without creating unnecessary coupling.

Provide sample code that shows users how to “surf your API”.  Explain why they should not construct URLs.  Show them how to resolve your URI templates and describe the parameters you use.  Provide the specifications for any custom media types.  Describe the link relations that you use.
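Such sample code could be as small as the following sketch: a toy URI template resolver (handling only plain {name} substitution; real clients should use a full RFC 6570 implementation) and a client that starts from a hypothetical home document and follows a link by its relation instead of constructing the URL by hand.

```python
import re

def expand(template, **params):
    """Toy URI template expansion: substitute {name} placeholders.
    Only simple substitution is handled, for illustration."""
    return re.sub(r"\{(\w+)\}", lambda m: str(params[m.group(1)]), template)

# Surfing the API: start from the one hard-coded entry point's home
# document (shape and rel names are hypothetical), then follow links.
home = {"links": [{"rel": "user", "href-template": "/api/users/{id}"}]}

template = next(l for l in home["links"] if l["rel"] == "user")["href-template"]
print(expand(template, id=7))  # /api/users/7
```

The point of publishing code like this is to teach the habit: resolve templates and follow links, never concatenate URL strings.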

Show me!

In the coming weeks I will be showing a sample API that follows the recommendations I have made here. It has been built using the newly released Microsoft Web API library. This library is important because it faithfully exposes HTTP as an application protocol, allowing us to take full advantage of HTTP to build APIs that scale the way the web was intended to. The library is just an add-on to the .NET 4 framework that can easily be installed as a NuGet package. No service pack required!

Also, I will be building client code that uses my RESTAgent library which builds on top of the HttpClient that is in Microsoft’s Web API library.

While you are waiting for me to write those posts, can I suggest you take a look at some of these great resources relating to web API design:

A RESTful Hypermedia API in Three Easy Steps – http://www.amundsen.com/blog/archives/1041

Hal: a hypermedia media type – http://restafari.blogspot.com/2010/10/evolving-hal.html

and there are lots more interesting videos here : http://code.google.com/p/implementing-rest/wiki/Video
