Sorry, but the Flickr API isn't REST

After SemanticCamp, Rey and I popped in to see Paul and everyone at Osmosoft towers. A good few interesting conversations ensued, including one about the difference between mashups and integration. All good fun basically. Simon had an interesting take on the topic too.

What has all this to do with the topic of this post? Well, Simon says:

When compared to small RESTful APIs like flickr and twitter…

Now I’m not really picking on Simon here, more that I’ve been meaning to write something on this topic for a while and this proved a good catalyst for a little rant.

The flickr API is pretty darn cool. But it’s not RESTful in pretty much any way you want to shake a stick at it. It’s a well designed RPC (remote procedure call) API. Even flickr get this wrong. They have a page which confuses the issue and makes REST out to be nothing more than a response format based on XML. Bad flickr.

flickr states that:

The REST Endpoint URL is http://api.flickr.com/services/rest/

This turns out to be completely against RESTful principles. Let’s try and explain. You can define most of what you want to do with an API using nouns, verbs and content types. REST is based around limiting the set of verbs to those available in HTTP; for instance GET, POST, PUT and DELETE. For any given application or API you’ll also specify a set of content types (representations); for instance HTML, JSON or XML. All you get to play around with are the nouns, which in the case of the web are our URLs.

In a typical RPC style API you have one URL (like flickr does) which acts as the endpoint to which all calls are made. You then define a number of functions which you can call. Let’s look at a simple book example.

<code>getBook()
deleteBook()
createBook()
editBook()</code>

The RESTful way of designing this might look a little bit more like this:

<code>GET     /books/{book-id}
DELETE  /books/{book-id}
POST    /books
PUT     /books/{book-id}</code>

We mentioned content types or representations earlier. Let’s say that instead of the default response format we want a JSON representation of a given book; we might do something like the following.

<code>GET /books/{book-id}.json</code>

The advantages of using URLs as the API nouns in this way include more than just sane URLs for the site or application in question. The web is based around many of these architectural principles and that has scaled pretty well. The idea is that well designed RESTful applications get the same advantages. For me one of the real benefits of RESTful APIs is the simplicity they bring to documentation. I already know the available verbs; all I need to know is the set of resource URLs and I can probably use curl to work out the REST (sorry, bad pun).
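
To illustrate, armed with just the resource URLs from the book example above, a few curl commands are enough to exercise the whole API (the domain is made up, obviously):

<code>curl http://example.com/books/1
curl http://example.com/books/1.json
curl -X PUT -d "title=New Title" http://example.com/books/1
curl -X DELETE http://example.com/books/1</code>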

This misunderstanding is pretty common. Even Wikipedia appreciates there is a problem:

The difference between the uses of the term “REST” causes some confusion in technical discussions.

This isn’t just pedantry on my part, well not quite. I’d recommend anyone involved in designing and architecting web sites read RESTful Web Services as well as Roy Fielding’s Architectural Styles and the Design of Network-based Software Architectures. But if you just want to get the general principles and don’t fancy wading through technical documents then you have to read How I Explained REST to My Wife, hilariously written by Ryan Tomayko.

And remember, just because an API makes use of HTTP doesn’t make it RESTful. I’m looking at you twitter.

Invited to Join WaSP

I had a great time at SemanticCamp over the weekend which was not too much of a surprise. What was a surprise was getting back home to find out that I’ve been invited to join the Web Standards Project (WaSP) on the Education Task Force. I even have my own page.

Thanks for inviting me guys. Anyone who has had the misfortune of having me present during a discussion of the current state of web education knows it’s one of my favourite subjects. Hopefully I can make myself useful around the place and help with getting a few things done. More on this when I know more but in the meantime feel free to pester me endlessly if you have a particular axe to grind.

Example of the Yahoo! Live API

Yahoo! Live launched recently along with a nice RESTful API. I’ve spoken before about the beauty of REST being in lowering the barrier to hacking and when I wanted a quick feature for Live it was simplicity itself to put together.

A few friends are using it far too much it seems: Ben has 7.6 hours and Si has already clocked up 15 hours. But for the most part I keep missing their no-doubt highly entertaining antics. One thing I feel Live misses is a favourite users or previously viewed channels list. Basically I want to see which of my friends who use the service are broadcasting right now. Something like:

Yahoo! Live Online

The API request we’re interested in is the /channel/PERMALINK method. This lets us get information about whether the user is broadcasting at the moment.

<code><?php
$api = 'http://api.live.yahoo.com/api/v1/channel';
$friends = array(
  'garethr',
  'benward',
  'sijobling'
);
// Build up an array of friend => live/offline status
$statuses = array();
foreach ($friends as $friend) {
  $response = simplexml_load_file($api . '/' . $friend);
  $name = $response->name;
  // If the response includes a broadcast element the channel is live
  if ($response->broadcast) {
    $status = 'live';
  } else {
    $status = 'offline';
  }
  $statuses["$name"] = $status;
}
// Render the statuses as an unordered list of links
function displaylist($array) {
  $output = '';
  if (count($array) > 0) {
    $output .= '<ul>';
    foreach ($array as $key => $value) {
      $output .= "<li class=\"$value\">";
      $output .= "<a href=\"http://live.yahoo.com/$key\">";
      $output .= "$key</a>";
      $output .= "<span>$value</span></li>";
    }
    $output .= '</ul>';
  }
  return $output;
}
echo displaylist($statuses);
?></code>

I’ll add a few more people to my list when I discover other people using the service. If you have an account leave a comment. I’ve added a touch of JavaScript as well so as to avoid having to reload the page manually. This way I can loiter on my little aggregator until someone I know starts broadcasting and head over to Live for whatever Si has been spending 15 hours doing.
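
That touch of JavaScript needn’t be anything clever; a minimal sketch would be:

<code>// Reload the page every 60 seconds; each reload re-runs this script
setTimeout(function () {
  window.location.reload();
}, 60000);</code>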

Continuous Integration for Front End Developers

Most software developers, especially those with a grounding in Agile development methodologies, will probably be familiar with Continuous Integration:

Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.

The emphasis above is mine, purely as it’s at the heart of what I’m going to ramble on about. A little closer to home, Ryan King just posted about a new site: inursite. The premise is simple: enter a few of your sites and inursite will visit them once a day and run a markup validation service over each page. You then get a feed of the pass or failure status. It’s simple but brilliant. For example, I have this very site added to the service. If I put some invalid markup in this post, tomorrow morning I’ll get an item in my feedreader telling me of my mistake. I’ll get that every day until I fix the problem.

This green/red (pass/fail) approach to simple tests is what I find most powerful about continuous integration systems like CruiseControl. Ryan asked over on his site in one of the comments what I’d like to see, so let’s see:

  • Has all the CSS been compressed using something like CSSTidy?
  • Has all the JavaScript been compressed using something like JSMin?
  • Does all the JavaScript pass the harsh taskmaster that is JSLint?
  • Is my markup a little bloated? Maybe I could set a maximum size for the markup and get a fail if I go over that.
  • Ditto CSS file size.
  • Ditto JavaScript.
  • Ditto images.
  • If pages have associated feeds, then validate them as well according to the relevant specification (probably RSS or Atom).
  • How many HTTP requests does it take to load the whole page, including all the relevant assets? I’d like to be able to set a maximum number.
  • How many hosts are required to load the whole page? I’d like to be able to set a maximum number and get a fail if I go over that.
  • Is the page gzipped on the server? (See the sketch after this list.)
  • And just to keep this topical, does the page have either the IE8 meta element or the associated HTTP header set to a particular value?
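
To give a flavour of how simple some of these checks could be to write, here’s a minimal PHP sketch (not part of inursite; the URL and the size budget are made up) covering two of them: gzip support and markup size.

<code><?php
// Minimal sketch: check gzip support and markup size for one page.
// The URL and the 50KB budget are arbitrary examples.
$url = 'http://example.com/';
$maxBytes = 50 * 1024;

// Ask the server for a gzipped response
$context = stream_context_create(array(
  'http' => array('header' => "Accept-Encoding: gzip\r\n")
));
$body = file_get_contents($url, false, $context);

// $http_response_header is populated by the HTTP stream wrapper
$gzipped = false;
foreach ($http_response_header as $header) {
  if (stripos($header, 'Content-Encoding: gzip') === 0) {
    $gzipped = true;
  }
}

echo $gzipped ? "PASS: gzip enabled\n" : "FAIL: no gzip\n";
// Note: if the response was gzipped this measures transferred bytes
echo (strlen($body) <= $maxBytes)
  ? "PASS: markup within size budget\n"
  : "FAIL: markup over size budget\n";
?></code>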

Lots of this is front-end best practice, some coming from the Yahoo! Exceptional Performance work. It’s something I’ve touched on before too. Can anyone else think of other things you’d like to see when you’re working away crafting markup and CSS? Once you have all these tests running you could display them in widgets, gadgets, Twitter, Firefox extensions, large displays, mobile devices, the works.

Now that sounds like an awful lot of stuff for one person (or even for one application) but I have something else in mind. If inursite allowed you to hook up external webservices which accept a URL as an argument, along with any service-specific parameters, and return true or false then, in theory, anyone could add their own custom checks to it. This becomes particularly useful for larger teams that are likely to have internal quality tools already. On top of all that I’d probably pay for a service like this that let me run it on demand (rather than once per day) - or maybe even better, pay for a downloadable version (à la Mint) I can install locally.
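
To illustrate the webservice idea, a check service needn’t be much more than the following hypothetical endpoint; the parameter names are my own invention:

<code><?php
// Hypothetical check service: accepts a url parameter plus a
// service-specific max_size parameter and returns true or false.
$url = $_GET['url'];
$maxBytes = (int) $_GET['max_size'];
$body = file_get_contents($url);
echo (strlen($body) <= $maxBytes) ? 'true' : 'false';
?></code>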

As you can probably tell, I think the general idea of continuous integration for front-end web developers is one whose time has come. It’s simply part of our discipline growing up and becoming more professional. Whether Ryan looks to extend his fantastic, simple service in this direction or not, I hope something will come along that does all this and more. I might even work on it myself - but then I always say that!

Simple deployment with SVN and Phing

Another approach to deploying web apps is to use Phing. Phing is at heart a PHP clone of Ant, another common build and deployment tool. The main advantage of using Phing, at least if you’re already using PHP, is close integration with other PHP specific tools (PHPDocumentor, PHPLint and PHPUnit to name a few) and ease of install.

Speaking of installation, Phing has its own PEAR channel. I still think PEAR is great in so many different ways - it always makes me frown when I see people cussing at PEAR (especially the installer). You can install Phing as follows.

<code>pear channel-discover pear.phing.info
pear install phing/phing</code>

The Phing documentation is nothing if not comprehensive. Unfortunately, unless you are pretty familiar with Ant or you’re trying to do something complex (and willing to invest the time) the chances are you’ll be a little lost. More, and simpler, examples for common problems would be useful for beginners.
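
In that spirit, about the simplest Phing build file I can think of is a hypothetical hello world with a single echo task:

<code><?xml version="1.0"?>
<project name="hello" default="greet">
  <!-- A single target that just prints a message -->
  <target name="greet">
    <echo msg="Hello from Phing" />
  </target>
</project></code>

Save that as build.xml and running phing from the same directory should print the message.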

I might publish a few little recipes down the line or even a full commented production recipe but for the moment let’s start simple. The following build script is designed to be run on your remote web server and relies on a subversion repository (hopefully you’re using source control; if not that’s another post I’m afraid). It simply exports the specified repository to the specified export directory on the server. Note that you need to set your own SVN details and replace the capitalised property values. The following should be saved as build.xml on your web server, outside the web root.

<code><?xml version="1.0"?>
<project name="build" default="main">

  <property name="exportdir"  value="WEB_ROOT" />
  <property name="svnpath" value="YOUR_SVNPATH" />
  <property name="username" value="YOUR_SVN_USERNAME" />
  <property name="password" value="YOUR_SVN_PASSWORD" />
  <property name="repo" value="YOUR_SVN_REPO" />

  <target name="main" depends="svnexport"/>

  <target name="svnexport">
    <svnexport
       svnpath="${svnpath}"
       username="${username}"
       password="${password}"
       nocache="true"
       repositoryurl="${repo}"
       todir="${exportdir}"/>
  </target>
</project></code>

Let’s break some of that down. Property definitions allow you to specify variables for use in your build script. In this example we only have one task specified so maybe we don’t need the extra abstraction but the moment we start reusing scripts or adding more tasks it’s a good idea.

<code><property name="exportdir"  value="./web" /></code>

The other major point of interest is the task itself. Here we make use of the properties we have already specified to run a subversion export. The svnexport task is built in to Phing.

<code><target name="svnexport">
  <svnexport
     svnpath="${svnpath}"
     username="${username}"
     password="${password}"
     nocache="true"
     repositoryurl="${repo}"
     todir="${exportdir}"/>
</target></code>

Also of note is the setting of a default task, main, as a property of the project element, and specifying that the main task depends on another task, svnexport. Again, we could avoid that at this stage but the moment we add another few tasks we’ll want more control over execution order.

Phing should be run from the directory containing the build script. You can run the command without any arguments, in which case the default task will be run (in our case main), or you can pass an argument to run a specific task. For our simple build script the two following commands do the same thing:

<code>phing</code>

<code>phing svnexport</code>

Given the above build script simply runs a subversion export command you might be wondering what use it is. The answer is not much, at this stage. You do get a simple build script which can be stored in your source control system and used by the whole team. The real advantage is where you might go from here. If everyone in your team deploys in the same way (ie. using the build script) then anyone can add extra tasks and everyone gets the benefit. Simple examples might be generating code documentation, running unit tests (and not running the export if the tests fail) and creating a zip file of the deployed source for backup purposes.
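
As a sketch of that last idea, the backup could be another target chained off main using Phing’s built-in zip task (the target and file names here are my own invention):

<code><target name="main" depends="svnexport,backup"/>

<target name="backup">
  <!-- Zip up the freshly exported source for backup -->
  <zip destfile="backup.zip">
    <fileset dir="${exportdir}" />
  </zip>
</target></code>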

Build scripts can get complicated quickly, but if you start out small and add tasks as you need and understand them, you should be able to raise the quality of your application and avoid easy-to-make mistakes.

Questions, Pointers and Reality

Along with an awful lot of noise, the whole X-UA-COMPATIBLE IE8 issue is also bringing out some well rounded and thought through arguments from people I admire.

I still disagree with Jeremy Keith on the default issue but agree with everything else. Jeremy says:

Let me make it perfectly clear: I understand the need for version targeting. But the onus must be on the publisher to enable it.

At the moment it comes down to a sense of stubborn realism on my part. All the people who know about the issue and the need for the new header (that would be me, Jeremy, probably you and the whole web standards developer crowd) are probably also in a position to add it fairly easily. All those that don’t know about the header don’t have to know about it. Ever. I would prefer if this wasn’t the case but alas I think it might be.

Mike Davies has one of the most well argued points out there. Throwing into the pot the issue of content not served via a web server raises even more issues that make changing the default behaviour difficult. I think Mike sums this up pretty well:

Every other browser has either started from scratch from the ground up (Firefox and Opera), or created a browser after the browser-wars (Safari). As such, all three have benefited from not having to support Microsoft’s excess baggage. And Microsoft being dragged to a complete stop because of it. This is Microsoft’s browser-war victory biting them in the ass.

Whether we like it or not this issue is about Microsoft, and I (maybe naively) believe Microsoft would like nothing better than to get back up from the mat and innovate in the browser space. But they can’t because of the ball and chain.

Who hasn’t written at least some code that, in hindsight, turned into a horrible maintenance nightmare? You might dream of jumping in and rewriting it from scratch but if that code has already been shipped to a client then what is that client going to do? Turning around and saying actually, what we gave you originally was pretty rubbish isn’t going to go down well - even when it’s the right thing to do.

Mike is right in that the IE team are actually going to be shipping two browsers. That has got to be a massive issue for them - certainly a bigger issue than me adding a new header to my site or a meta element to my pages. But if it lets the second of those browsers support CSS3, HTML5, XHTML, Canvas and anything else we dream of then I’m all for it. We all know the first of those browsers just won’t be up to the task.

One particularly thorny problem I wasn’t aware of was the issues surrounding accessibility. Bruce Lawson has some of the details but, in short, by locking a large number of sites into the ways of IE7 we essentially freeze the (poor) state of accessibility on the web. Patrick Lauke also raises a related issue in the comments regarding further alienating screen reader and assistive technology developers. These are points I want addressing by Microsoft, and ones which, without resolution, would make me change my mind on this whole issue of defaults.

So, the jury is still out in my mind. I hope it is in the minds of those working closer to the heart of this problem at Microsoft. Like Molly I’m glad this issue has been raised now, rather than simply revealed at launch. I just hope the reason for that was to get the sort of high quality feedback we’re seeing in some areas of the community in order to test the waters and come to a consensus.

Who loses out to X-UA-Compatible?

If you work on the web you’ll probably have already seen or noted the existence of the latest issue of A List Apart.

My Twitter feed positively exploded with negative feedback on this issue but I’ve only just got round to reading (and re-reading) the various interesting articles and (some half thought through) comments. I thought I’d have a think through who this affects.

What we’re discussing is either adding a meta element to your documents or alternatively sending the relevant HTTP header. The code for the former looks like:

<code><meta http-equiv="X-UA-Compatible" content="IE=8" /></code>
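
The alternative is the raw HTTP header; on Apache with mod_headers enabled, for instance, something like this should do it:

<code>Header set X-UA-Compatible "IE=8"</code>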

IE Users

Users of Internet Explorer get a bum deal here I think. Their browser is going to get bigger - potentially a lot bigger. With all those rendering engines rolled into one it’s also likely to need lots of memory. How this affects the mobile version of IE is anyone’s guess. Memory and file size are even more important there.

Other Browser Vendors

I have a feeling they will all go “meh” and move along as if nothing happened. Certainly in my experience upgrades to Firefox, Safari and Opera (including recent bleeding edge versions) haven’t broken any of my bits of the web, or the bits I frequent.

Web Standards Savvy Designers and Developers

A really cool feature for us designers and developers has gone unmentioned I feel. We’ve been asking Microsoft for one application in which we can test all their browsers. IE8 will be that browser. Think about it. By changing the contents of the meta element (or the header) we can trigger the rendering engine to use IE6, IE7 or IE8.

I’m a big fan of the whole progressive enhancement approach: using new browser features that not everyone supports (generated content, CSS3 selectors, etc.) in a way that doesn’t affect older browsers. If I were to use a feature not yet supported by IE7 but expected in IE8 in this way, I would expect it to work in IE8. If I include an IE7 X-UA-Compatible declaration this won’t happen. Which is why I probably won’t include one. The problem is, if I don’t include it at all I still have the same problem. I’ll come back to the solution (and the problem with the solution) in a moment.

Other Web Designers and Developers

Not everyone is using web standards, and even fewer get progressive enhancement. Most of the web is broken and we are not going to fix this. Virtually no one thinks XHTML2 is a good idea because it starts with this fact and says “let’s start again, but get it right this time”. The fact that all those websites (and, more importantly for Microsoft, the users of those websites) standardise on IE7 is fine by me. IE7 isn’t that bad, remember.

So, all in all I don’t see a major problem for me, or other savvy web designers and developers. I have one issue: namely the strong language around the use of edge.

This option, though strongly discouraged, will cause a site to target the latest IE browser versions as they release.

What this is describing is how things work now. It’s also how I want to work - using progressive enhancement to make my websites better when people have browsers that support them. I’ll be adding the relevant edge headers to my sites by default as I get a moment to tinker.
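
For reference, the edge declaration is just the same meta element with a different content value:

<code><meta http-equiv="X-UA-Compatible" content="IE=edge" /></code>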

In my view this should allow the web to move forward faster with newer features hitting IE sooner, leaving a ghetto populated by IE users who don’t upgrade quickly and designers and developers who don’t want to be professional about their craft (or tools that aim for the lowest common denominator). Yes, I’ll need to add a header to all of my sites in the future but in return I get the latest version of IE that lets me test all previous versions of IE.

A few interesting upcoming events

After a break over Christmas the web world is back in full swing with a few upcoming events.

BarCamp Scotland - February 2nd

I only just found out about this one but it looks like a few people from Refresh Newcastle and Refresh Edinburgh are going to make it along. The short notice does mean I have to think of something to talk about sharpish though.

Think and a Drink - February 7th

Bit of shameless self promotion here. I’m speaking at the monthly Codeworks get together on something to do with the Web. Also speaking is Paul Downey from BT and another regular BarCamp attendee. If you’re around Newcastle try and get along (it’s a membership event but let me know if you want to go along and I’ll speak with the organisers).

Semantic Camp - February 16th - 17th

A whole BarCamp style event brought to you by tommorris. This looks likely to be a particularly interesting event I think.

Future of Web Design - April 17th - 18th

The Future of… rolls into town again, this time with a focus on web design. I’d be tempted to go along to this solely for the Photoshop tennis doubles match with Andy Clarke commentating. The rest of the content looks pretty good too.

Thinking Digital - May 21st - 23rd

Another event I’m involved in, this time on the board along with Ian Forrester from The BBC and Mike Butcher from Techcrunch. This one is a little different to most of the events I go along to - being less web centric and more about all sorts of interesting ideas. With the likes of Tara Hunt, Sean Phelan (MultiMap), Matt Locke and Dan Lyons (a.k.a. Fake Steve Jobs) speaking it should be good fun. Oh, and did I mention it’s in Newcastle?

And look, only two of those are in London!

It’s already looking like 2008 will be even more jam packed than last year. I also have a sneaking feeling that there will be lots more events Up North this year than last. It will be interesting to see how this works out. Will more people come along that might not have travelled down to London? Will the event-happy northerners still go to London as well? Will anyone from London travel north of Watford? Time will tell.

On Process and Design

I consider myself something of a hybrid, flirting with both the design and development side of things. I also like to think of myself as something of a student of the discipline; reading and consuming as much knowledge as possible on the process as well as the practice. Something I took to pondering while reading My Job Went to India was why the majority of theory books tend to come from the software side.

It appears the same when it comes to discussions of process, and in turn innovation within process. There is lots of talk around Agile processes and practices amongst software developers, but I don’t see lots of designers discussing the how. User Centred Design is maybe a slight exception, but again it seems to be the software people and information architects, as opposed to those who consider themselves designers, who talk about this. I’m just wondering whether others have found this too?

The Design Council published a pretty good resource About Design but I haven’t seen much other real literature on a modern design process. Communicating Design is a fantastic book on design documentation, but doesn’t frame this in terms of a discussion of process. If anyone knows of any books or blogs out there on the subject I’d love to know about them. I’d love to be proved wrong and end up with a good reading list.

My Job Went to India (which is excellent by the way) contains 52 hints and tips for your career as a programmer. In reality most of these ideas apply to any creative discipline; only the examples and analogies need changing. You could probably argue the same for books like The Pragmatic Programmer, Practices of an Agile Developer and Peopleware. I’m also wondering whether things like The Mythical Man-Month apply, or if other truths of the design practice exist.

The importance of design in any discipline is becoming increasingly well recognised. But where will the next generation of managers learn about how to manage designers and the design process? One of the problems with front end web development and design work has been the difficulty of learning the craft. Is this the same for managing the web design process?

How to deploy PHP sites with the Pake build tool

So, in case you hadn’t guessed, project automation is the new black. I’ve been getting back into some development work recently with Is it Birthday? and getjobsin and trying to automate as much of the repetitive and boring work as possible.

I’m not sure that many people outside large or particularly organised teams realise that large web sites are not generally deployed by someone with an FTP client and crossed fingers. This sort of effort, along with other repetitive tasks like running tests or generating documentation, can be automated. This is a win-win for everyone. It’s more reliable (scripts don’t forget to restart a service before they go home), quicker, and removes the need for having one person in charge of deployments.

Even where people know about this build process idea they might not use it for their projects, probably for similar reasons why not everyone uses source control. I think the reason is probably that most web designers and developers (even particularly geeky ones) think this software engineering stuff is maybe a step too far. There is also something of a barrier to entry: knowing where to look and how to get started without reading lots of documentation (often filled with XML examples) and trial and error. Also, project automation is apparently not sexy?

Anyway, if you’ve been working with Rails you will have come across Rake, which is a build tool used to automate various tasks. Well the nice symfony people have written a PHP version called Pake for use as part of the framework. It’s used for all the command line action, from running tests to clearing the cache and automating deployments. Pake is however a separate tool that you can use in your own projects, whatever framework or hand rolled codebase you are using.

Pake can be downloaded using PEAR on the command line:

<code>pear upgrade PEAR
pear channel-discover pear.symfony-project.com
pear install symfony/pake</code>

The documentation for Pake is pretty much non-existent as far as I can tell, but it is a really handy tool so worth a little effort. The best source of knowledge is to look through the default Pake tasks that are provided as part of symfony. One of my favourites, which we’ll look at now, allows for incremental deployments via Rsync over SSH. I’ve been using this with non-symfony projects too.

Rsync is a command line tool for synchronising two file structures. The Rsync command that does most of the heavy lifting for the deployment looks like the following. Note I’ve used {} to denote placeholders in the following examples.

<code>rsync --progress -azC --force \
--delete -e ssh ./ {user}@{host}:{dir}</code>

The sync task I’m using is straight from symfony, but the licence allows for distribution so here is an example zip of all the files needed. You’ll need these to follow along as I haven’t printed the full source code of the pakefile here.

First a little configuration. In the properties file we define an environment, staging in this case, and specify a host, port, user and the full path on the remote server. You can of course specify multiple environments in this file; we’ll see how to use them shortly.

<code>[staging]
  host={host}
  port=22
  user={user}
  dir={dir}</code>

You can also include an rsync_exclude.txt or an rsync_include.txt file. This gives you control over the files being synced when you run the Pake task. The following example is a good starting point; it stops you pushing those pesky .DS_Store files that OS X creates to your web server, as well as avoiding subversion metadata and the configuration files for the Pake task.

<code>.svn
.DS_Store
/config/properties.ini
/config/rsync_exclude.txt
/config/rsync_include.txt
/config/rsync.txt</code>

We can now run the following command, from the directory containing the pakefile.php script, using Pake. The first example will do a dry run, showing you what will happen. You’ll be prompted for your SSH password as part of the command unless you’re using keys for authentication.

<code>pake sync staging</code>

When you’re happy you can run the sync command with the go option which will instruct Rsync to do its thing.

<code>pake sync staging go</code>

Pake has a handy flag to find out what tasks are available.

<code>pake -T</code>

This should give you a list of tasks and a brief description, useful to find out what you can do if you’ve been away from a project for a while.

This is a pretty simple example but one I’m already finding useful. Rsync is but one way of deploying apps but with Pake it has the advantage of being simple and, in lots of situations, good enough. It’s certainly better than a manual deployment process. It would be simple enough to build a simple logging system into the task so you have a record of all deployments: when they happened and who did them, for instance.
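
As a sketch of that logging idea using pake’s task registration (the task name and log file are my own invention, and it assumes the sync task from the example files):

<code><?php
// Hypothetical pakefile fragment: a wrapper task that depends on
// the existing sync task and records each deployment in a log.
pake_desc('deploy via sync and log the deployment');
pake_task('logged_sync', 'sync'); // sync runs first as a dependency

function run_logged_sync($task, $args)
{
  $line = date('Y-m-d H:i:s') . ' ' . get_current_user() . "\n";
  file_put_contents('deployments.log', $line, FILE_APPEND);
}
?></code>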

If that has whetted your appetite then there are other deployment tools you might want to look into; Capistrano (Ruby), Ant (Java), Maven (Java) and Phing (PHP) spring to mind. If anyone knows of a Python equivalent that would be useful too. I’m also using Phing for a few tasks on projects at the moment, mainly for some nifty Subversion tasks (and you can use Phing with Pake as symfony does), but that will have to wait until another post.

So, what are people’s experiences of build tools? Any good pointers? Or maybe reasons why you don’t use them in your projects?