This is Layar

@ubistudio project: mobile AR layer for 2010 01SJ biennial

Wednesday, September 15th, 2010

One of the goals of the @ubistudio is to actually do projects with new media technologies, not just talk about them. In that spirit, we made a mobile augmented reality experience for the 2010 01SJ Biennial that takes place this weekend, Sept 16-19, 2010.

It’s a fairly simple layer, developed on the Layar AR browser and featuring basic points of interest (POIs) for many of the public artworks and venues of the 01SJ festival. Here’s a screenshot of our layer in action on an iPhone 3GS:

[Screenshot: the 01SJ layer running in Layar on an iPhone 3GS (01sj-shot-land2)]

Among the many artworks featured are “Play Me I’m Yours”, ~20 street pianos created by artist Luke Jerram; Poetics of Dis-communication by Patrick Manning; and ZOROP by Ken Eklund and Annette Mees. You’ll also find the major venues and outdoor performances, to say nothing of the stops where you can catch the ZOROP Mexican Party Bus!
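
For anyone curious what sits behind a layer like this: roughly speaking, Layar’s platform calls a developer-provided web endpoint with the user’s position and expects a JSON list of nearby POIs, which it then draws in the camera view. The toy Python sketch below shows the general shape of such a response; the field names follow the 2010-era getPOIs format as I remember it, and the coordinates are placeholders rather than our real 01SJ data.

    import json

    # Illustrative POIs only: placeholder coordinates, not the real 01SJ data.
    POIS = [
        {"id": "1", "title": "Play Me I'm Yours (street piano)",
         "lat": 37.3337, "lon": -121.8907},
        {"id": "2", "title": "ZOROP Mexican Party Bus stop",
         "lat": 37.3301, "lon": -121.8863},
    ]

    def get_pois(layer_name):
        """Build a getPOIs-style JSON response for the requested layer."""
        hotspots = []
        for poi in POIS:
            hotspots.append({
                "id": poi["id"],
                "title": poi["title"],
                # Layar of this era expected integer microdegrees, as I recall.
                "lat": int(poi["lat"] * 1e6),
                "lon": int(poi["lon"] * 1e6),
            })
        return json.dumps({
            "layer": layer_name,
            "hotspots": hotspots,
            "errorCode": 0,
            "errorString": "ok",
        })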

We submitted the layer to the Layar developer program, and it was approved earlier this week. If you’re at 01SJ and have a newer iPhone or Android phone, please check it out and let us know how you like it. You’ll need to download the free Layar app for your phone if you don’t have it already. Then just search for “01SJ” and you should be able to find it easily. All of the interesting points are in the downtown San Jose area, so if you’re not in that area you won’t see much ;-) If you have questions or feedback, ping us on Twitter: @ubistudio or just get involved by coming to our next Ubiquitous Media Studio meetup.

a few remarks about augmented reality and layar

Wednesday, June 24th, 2009

I genuinely enjoyed the demo videos from last week’s launch of the Layar AR browser platform. The team has made a nice looking app with some interesting features, and I’m excited about the prospects of an iPhone 3GS version and of course some local Silicon Valley layarage.

At a technical level, I was reminded of my Cooltown colleagues’ Websign project, which had very similar core functionality: a mobile device with integrated GPS and magnetometer, plus a set of web services and a markup language for binding web resources (URLs) to locations with control parameters (see also: Websigns: Hyperlinking Physical Locations to the Web in IEEE Computer, August 2001). It was a sweet prototype system, but it never made it out of the lab because there was no practical device with a digital compass until the G1 arrived. Now that we have location and direction support in production platforms, I’m pretty sure this concept will take off. Watch out for the patents in this area, though; I think there was closely related prior art that even predated our work.
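
To make that concrete, here is a toy Python sketch of the core mechanism (the names, thresholds, and coordinates are illustrative assumptions, not Websign’s actual markup or code): each entry binds a URL to a location with a couple of control parameters, and the client uses its GPS fix and compass heading to decide which entries it is currently pointing at.

    import math

    # Each entry binds a URL to a location, with a visibility range and an
    # angular arc as control parameters (illustrative values only).
    websigns = [
        {"url": "http://example.org/cafe",   "lat": 37.3337, "lon": -121.8907,
         "range_m": 120, "arc_deg": 30},
        {"url": "http://example.org/museum", "lat": 37.3330, "lon": -121.8894,
         "range_m": 200, "arc_deg": 40},
    ]

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (m) and initial bearing (degrees) from 1 to 2."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        dist = 2 * r * math.asin(math.sqrt(a))
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        return dist, (math.degrees(math.atan2(y, x)) + 360) % 360

    def visible_signs(user_lat, user_lon, heading_deg):
        """Return (distance, url) for the signs the phone is pointing at."""
        hits = []
        for s in websigns:
            dist, bearing = distance_and_bearing(user_lat, user_lon, s["lat"], s["lon"])
            off_axis = abs((bearing - heading_deg + 180) % 360 - 180)
            if dist <= s["range_m"] and off_axis <= s["arc_deg"] / 2:
                hits.append((dist, s["url"]))
        return sorted(hits)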

Anyway I looked carefully at all the demos from Layar and the various online coverage, and wondered about a few things:

  • Layar’s graphical overlay of points of interest appears to be derived entirely from the user’s location and the direction the phone is pointed; there is no attempt to do real-time registration of the AR graphics with objects in the camera image, which is the kind of AR that currently requires markers or a super-duper 3D point cloud like Earthmine (a rough sketch of this location-plus-heading geometry follows this list). That’s fine for many applications, and it is definitely an advantage for hyperlinks bound to locations that are out of the user’s line of sight (behind a nearby building, for example). Given this, I don’t understand why Layar uses the camera at all. The interaction model seems wrong; rather than using Layar as a viewfinder held vertically in my line of sight, I want to use it like a compass: held horizontally like a map, with the phone pointed toward my direction of interest. This is most obvious in the Engadget video, where they are sitting in a room and the links from across town are overlaid on images of the bookshelves ;-) Also, it seems a bit unwieldy and socially awkward to walk down the street holding the phone out in front of you. Just my $0.02 there.
  • How will Layar handle the navigation problem of large numbers of active items? The concept of separate “layars” obviously helps, but in a densely augmented location you might have hundreds or even thousands of different layers. Yes, this is a hard UI/UX problem, but it’s a problem we would love to have: too much geowebby goodness to sort through. I suppose it will require some nicely intuitive search/filtering capability in the browser, maybe with hints from your personal history and intent profile.
  • Will Layar enable participatory geoweb media creation? I’d be surprised if they don’t plan to do this, and I hope it comes quickly. There will be plenty of official corporate and institutional voices in the geoweb, but a vibrant and creative ecosystem will only emerge from public participation in the commons. That will demand another layer of media literacy, which will take time and experimentation to develop. I say the sooner we get started, the better.
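
Here is the geometry I have in mind for that first point, as a toy Python sketch (the screen width, field of view, and linear angle-to-pixel mapping are illustrative assumptions, not Layar’s actual code): given a POI’s compass bearing from the user, computed as in the earlier sketch, and the phone’s heading, the marker’s horizontal screen position follows directly, with no image analysis at all.

    # Illustrative constants: a landscape iPhone 3GS screen and a rough guess
    # at the camera's horizontal field of view.
    SCREEN_W_PX = 480
    CAMERA_HFOV_DEG = 50.0

    def screen_x(poi_bearing_deg, heading_deg):
        """Horizontal pixel position of a POI marker, or None if out of view."""
        # Signed angle between the phone's heading and the POI, in -180..180.
        off = (poi_bearing_deg - heading_deg + 180) % 360 - 180
        if abs(off) > CAMERA_HFOV_DEG / 2:
            return None  # behind you or off to the side
        # Simple linear angle-to-pixel mapping; fine for a floating marker.
        return SCREEN_W_PX * (0.5 + off / CAMERA_HFOV_DEG)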

In any case, good luck to the Layar team!