Twitter has had substantial downtime over the last several days, and this has prompted no end of commentary and analysis. Ruby on Rails was initially blamed for the problems a year ago, then exonerated, then blamed again (and exonerated again). But blaming the hammer for improperly driving a screw is not very illuminating; blaming a screwdriver for how it drives a nail even less so, and although using a hammer and screwdriver combination to drive a large number of finishing nails probably isn't the best solution, until a better machine is invented you wouldn't necessarily know that.
The reason Twitter is having difficulties is that it truly is a novel application. The rules are deceptively simple on the surface, but the emergent complexity is profound, especially as you start to build a massive database of users (which Twitter certainly is now doing). Consider the many-to-many relationships embodied in the way people follow one another, coupled with the different options for which sorts of tweets you want to see, the different ways of interfacing – the website, instant messaging, text messaging, a raft of third-party applications (Twhirl, gTwitter, FriendFeed, et cetera, etc, &c, ...) – and the ability to track specific terms.
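To make the shape of that complexity a little more concrete, here is a minimal sketch of the follow graph and the per-user options layered onto it. It is in Python rather than Twitter's actual Ruby code, and every name in it (Account, follow, channels, tracked_terms) is an illustrative assumption, not Twitter's real data model.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Illustrative only: one node in the follow graph, not Twitter's schema."""
    name: str
    following: set[str] = field(default_factory=set)    # accounts this one follows
    followers: set[str] = field(default_factory=set)    # accounts following this one
    channels: set[str] = field(default_factory=set)     # "web", "sms", "im", "api", ...
    tracked_terms: set[str] = field(default_factory=set)

def follow(a: Account, b: Account) -> None:
    # A many-to-many edge: both sides have to be kept in sync.
    a.following.add(b.name)
    b.followers.add(a.name)
```

Each extra interface or preference multiplies the states a single follow edge can be in, which is where the emergent complexity comes from.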
All of this adds up to an extremely complex system that gets exponentially harder to manage as the user base grows. The telephone system's switching rules are simple by comparison: one-to-one connections that connect, persist for a short time, and go away, leaving nothing but possibly a billing record (and definitely an entry in an NSA database). A tweet goes onto the user's own list, their friends' lists, possibly the lists of friends-of-friends, and the list of anyone who is tracking a term it contains; it gets pushed out via SMS, instant messenger and the API; AND it persists forever. If the user then decides to delete it or make it private, it has to be removed from all of those lists. Simple, huh? Oh yeah, and it has to do all of that in realtime.
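As a rough, hedged sketch of why that "simple, huh?" is sarcastic, here is one way the fan-out and the delete path could look in a toy in-memory version. The store and function names (timelines, followers, tracked_terms, deliver, retract) are hypothetical stand-ins, and delivery over SMS, instant messaging and the API is reduced to a comment; nothing here claims to be how Twitter actually implements it.

```python
from collections import defaultdict

# Hypothetical in-memory stores; the real system obviously sits on databases,
# caches and message queues rather than Python dicts.
timelines = defaultdict(list)       # user -> ordered list of tweet ids
followers = defaultdict(set)        # user -> accounts following that user
tracked_terms = defaultdict(set)    # user -> terms that user is tracking

def deliver(tweet_id: str, text: str, author: str) -> None:
    """Fan a new tweet out to every list it belongs on."""
    timelines[author].append(tweet_id)                 # the author's own list
    for f in followers[author]:                        # every follower's list
        timelines[f].append(tweet_id)
    for user, terms in tracked_terms.items():          # anyone tracking a term
        if any(term in text.lower() for term in terms):
            if tweet_id not in timelines[user]:        # avoid double delivery
                timelines[user].append(tweet_id)
    # ...plus pushes over SMS, instant messaging and the API,
    # and a permanent write to the archive.

def retract(tweet_id: str) -> None:
    """Deleting or privatising a tweet means touching every list it reached."""
    for timeline in timelines.values():
        if tweet_id in timeline:
            timeline.remove(tweet_id)
```

Even in this toy version, a tweet from a well-followed account touches a great many lists at write time, and a deletion has to visit every list it reached – and the real system has to do the equivalent in realtime, across every interface at once.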
Twitter is built on Ruby on Rails, a framework extracted from a simple project management application. Obviously a framework born of a simple project management application wasn't designed to robustly handle the kind of complex operations outlined above. It turns out nothing was, which is why Twitter has no easy solutions at hand. Its scaling difficulties would likely have arisen on any existing platform: not even airline reservation or telephone switching systems handle such a flood of interrelated and interdependent traffic coming from so many different sources – traffic that doubles in the space of two months.
Evan Williams and company invented something new, and they shouldn't be blamed for not initially understanding the true potential and nature of the beast. Although it isn't profitable, it continues to attract investors; anything with this kind of growth and engagement is interesting to businesspeople. NTT invested for a reason, and it's not just because it is popular (and profitable) in Japan. This is an example of how next-generation communication is working: modern switching rules, attention-based networking – a step beyond instant messaging, a step beyond SMS and a step sideways from the phone system. The right tools for the job probably don't exist yet; maybe Erlang is a step in the right direction.
Lastly, I don't blame the Twitter staff for doing experiments on the site during the day. They live in the United States and there's no reason they should have to stay up all night. Besides, we should face the sobering conclusion that Japan's market and the rest of Asia might be more important to Twitter than the depressed, aging, and troubled North American market. From that standpoint, the US is a cheap, talented labour pool crafting clever mercantile goods to send to Asia in exchange for hard currency. Oh, how the worm turns.