Showing posts with label mashup.

13 Jun 2009

Vancouver's Open Data, Open Standards, Open Source and the Vancouver Public Library

Vancouver has adopted a policy of Open Data, Open Standards, and Open Source, and I'm really excited about it. David Ascher presented on the topic at Open Web Vancouver 2009 and pointed out that if we don't engage with the city and use this data, it will go nowhere.

The Vancouver Public Library is one of my favourite places. I love libraries and I love books, but the library here in Vancouver is a really special place for me. So I've been thinking about ways the library could share data that would let me build applications to make the library more interesting and more valuable to the people of the city.

Here's some data I'd like to have:
  • Books on order

    I'd like to know what new books are currently on order, but not available. I want a preview of coming attractions.

  • Most unpopular books

    What doesn't get checked out? What's likely to get sold in the next round of disposal, ahem, book sale?

  • Most popular books

    What's everybody reading?

  • Top 100 sites for library patrons

    What are the most popular sites browsed from the library? I'd like to be able to contrast this with the most popular sites according to Alexa. That should help tell the library what sorts of services patrons need.

These are all things I could mash up into interesting applications, such as a unified view of new popular books on Amazon, showing which ones are in the library or popular in the local community. A rough sketch of that idea follows.
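
Here's a minimal sketch of what such a mashup might look like, assuming two hypothetical JSON feeds: a list of popular new books (say, gathered from Amazon) and the library's catalogue. The URLs and field names below are made up for illustration, since no such endpoints exist yet.

# A minimal mashup sketch. Both feeds, their URLs, and the "title" field
# are hypothetical -- the whole point of the post is that this data
# isn't published yet.
import json
from urllib.request import urlopen

def fetch_titles(url):
    """Fetch a JSON list of book records and return the set of titles."""
    with urlopen(url) as response:
        return {record["title"] for record in json.load(response)}

popular_on_amazon = fetch_titles("http://example.org/amazon-popular-books.json")
in_the_library = fetch_titles("http://vpl.example.org/api/catalogue.json")

print("Popular new books already in the library:")
for title in sorted(popular_on_amazon & in_the_library):
    print("  " + title)

print("Popular new books the library might want to order:")
for title in sorted(popular_on_amazon - in_the_library):
    print("  " + title)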

28 Oct 2008

Semantic web startup Twine hard to get wrapped up in

Twine is [yet another] site that offers recommendations for web pages, stories, and information based on things that you've read. I've seen demos that are amazing, that pull together disparate threads of data in new and surprising ways. It is powered by some sort of fantastic semantic juju that allows it to create recommendations and connections that simpler probabilistic analyses cannot. Sounds good, right?

The problem is that it is just too. damned. much. work. You start with nothing, and have to enter your links, from scratch, one at a time. You don't get any immediate satisfaction. Unlike FriendFeed or SocialMedian, it doesn't just figure stuff out based on your other activity elsewhere on the web. It doesn't even attempt to figure out what you already like. So all of the heavy lifting is left up to the user, and there's no immediate payoff. The new user is left wondering just what the hell this site is supposed to do for them.

So although it probably has good technology, so far it's a failure. If they don't realize that people aren't suddenly going to start posting everything into their little walled garden on the promise of a payoff, maybe, someday, they'll be left behind by sites that give new users a great experience right out of the gate. Sites like Facebook and FriendFeed can add this semantic hooey to their own products at their leisure. Sometimes technology really doesn't matter.

13 Apr 2008

Speech synthesis on Ubuntu

Text-to-speech (TTS) has been around for a couple of decades, and it keeps getting better. There are a bunch of really fun untapped applications for it, combining RSS, filters (like Pipes), podcasts, telephony, and hidden speakers.

Under Linux there is a nice package available called Festival. To get started, grab an appropriate voice package, such as:
  • festvox-hi-nsk (Hindi male)
  • festvox-kallpc16k (American English male)
  • festvox-rablpc16k (British English male)
  • festvox-mr-nsk (Marathi male)
  • festvox-suopuhe-lj (Finnish female)
  • festvox-suopuhe-mv (Finnish male)
  • festvox-te-nsk (Telugu male)
Too bad you can't get a female speaker except in Finnish. (I had never heard of the Indian languages Marathi and Telugu, and I consider myself a language buff... sigh.)

The results are pretty good. Here's how to use it from the command line:
text2wave text-file.txt -o audio-file.wav
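
And here's a small sketch of the RSS-to-speech idea mentioned above: read the latest headlines from a feed aloud by calling text2wave and playing the result. It assumes Festival (with a voice such as festvox-kallpc16k), ALSA's aplay, and the third-party feedparser module are installed; the feed URL is just a placeholder.

# Read the latest headlines from an RSS feed aloud via Festival.
import subprocess
import tempfile

import feedparser  # third-party module; the feed URL below is a placeholder

feed = feedparser.parse("http://example.com/news/rss")
headlines = ". ".join(entry.title for entry in feed.entries[:5])

# Write the headlines to a temporary text file for text2wave.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as text_file:
    text_file.write(headlines)
    text_path = text_file.name

wav_path = text_path.replace(".txt", ".wav")
subprocess.run(["text2wave", text_path, "-o", wav_path], check=True)
subprocess.run(["aplay", wav_path], check=True)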

For extra fun, use pidgin-festival to turn incoming instant messages into speech (use festival-gaim if you haven't made the jump to Hardy Heron yet).

24 Sept 2007

Why write HTML for AIR?

I was trying to figure out why anyone would want to write standards-based web pages specifically for Adobe AIR. I'm not talking about Flash/Flex or ActionScript development; I'm talking about using regular HTML, CSS, and JavaScript to build a website, but then shoehorning it into the Adobe Flash runtime. Why would anyone want to do this?

The stated reason is to ensure that you are providing a consistent experience across browsers and versions: the WORA (write once, run anywhere) promise again, but for real this time. That way you can provide the same crap user experience to all users, disabling their scroll wheel and all that great stuff that Flash provides so "richly". Gag.

No, what this is about is locking down web content: keeping people from right-clicking images and saving them, safeguarding the sacred goodies. But first and foremost: disabling Adblock to make sure the punters see the ads. Keeping the geeks from personalizing your pages with Greasemonkey, and preventing client-side mashups. As collateral damage, keeping the visually impaired or blind from using their accessibility tools. But most of all, maintaining control. Control at any cost!

It's the gospel of control that Adobe has always preached, now applied to a whole other set of technologies: the ones that formed a generation of software, enabled freedom of choice and diversity of platforms, and spurred the development of the web. Adobe's other gospel of richness means two things: your page looks rich (shiny, high in fat), and Adobe builds a monopoly and a position of control that it can monetize someday. And that would make Adobe rich.

13 Feb 2007