Archive for the ‘ubistudio’ Category

experiments in historical augmented reality

Sunday, November 14th, 2010

In collaboration with Adriano @Farano, I’ve been experimenting with creating historical experiences in augmented reality. Adriano’s on a Knight Fellowship at Stanford, and he’s seeking to push the boundaries of journalism using AR; my focus is developing new approaches to experience design for blended physical/digital storytelling, so our interests turn out to be nicely complementary. This is also perfectly aligned with the goals of @ubistudio, to explore ubiquitous media and the world-as-platform through hands-on learning and doing.

Adriano’s post about our first playtesting session, Rapid prototyping in Stanford’s Main Quad, included this image:

Arches on the Quad 1905

The photo was taken from the interior of the Quad looking toward the Oval and Palm Drive, and you can see that it aligns reasonably well with the real scene. Notably, the 1905 picture reveals a large arch in the background that no longer stands today. We later found out this was Memorial Arch, which was severely damaged in the great 1906 earthquake and subsequently demolished.

In our second playtesting session, we continued to experiment with historical images of the Quad, using Layar, Hoppala and my iPhone 3GS as our testbed. Photos were courtesy of the Stanford Archives. This view is from the Quad’s front entrance near the Oval, looking back into the Quad. Here you can see the aforementioned Memorial Arch in 1906, now showing heavy damage from the earthquake. The short square structure on the right in the present-day view is actually the right base of the arch, now capped with Stanford’s signature red tile roof.

Memorial Arch after the 1906 earthquake

In this screencap, Arches on the Quad 1905 is shown as the currently selected POI, even though the photo displayed belongs to a different POI.

One of the more famous images from post-earthquake Stanford is this one, the statue of Louis Agassiz embedded in a walkway:

Statue of Louis Agassiz 1906

Although the image is scaled a bit too large to see the background well, you can make out that we are in front of Jordan Hall; the white statue mounted above the archway on the left is in fact the same one that is shown in the 1906 photo, nearly undamaged and restored to its original perch.

Finally, we have this pairing of Memorial Church in 2010 with its 1906 image. In the photo, you can see the huge bell tower that once crowned Mem Chu; it too was destroyed in the earthquake.

Memorial Church 1906

Each of these images conveys some idea of the potential we see in using AR to tell engaging stories about the world. The similarities and differences seen over the distance of a century are striking, and begin to approach what Reid et al. defined as “magic moments” of connection between the virtual and the real [Magic moments in situated mediascapes, pdf]. However, there are many problematic aspects of today’s mobile AR experience that impose significant barriers to reaching those compelling moments. And so, the experiments continue…

augmented reality 4 poets

Thursday, October 21st, 2010

Earlier this month I attended THATCamp Bay Area, a 2-day head-on collision of scholars and practitioners in the humanities with a range of folks from the tech world. It was quite refreshing and challenging to (attempt to) wrap my mind around linguistics, environmental history, experimental poetics and art curation, just to name a few of the disciplines that were represented. Interestingly, I also discovered unexpected hidden connections that led back to the EVOKE Summit and forward to @ubistudio; more about these later perhaps.

My contribution to the fray was a session named “Augmented Reality 4 Poets”, a hands-on workshop on creating basic mobile AR using the Layar platform and Hoppala CMS service, no programming required. It worked out pretty well, and I wanted to share the materials here. I’ll likely reprise some of this in a session at ARDevCamp in December, and possibly at other future events. Anyway, here’s the tutorial. I’ve kept it simple on purpose — both Layar and Hoppala have additional capabilities you should take the time to explore. Also, you’ll see that for THATCamp I made the shared @ubistudio accounts available, but if you want to go through this on your own, you will need to sign up for a Layar developer account and a Hoppala login (it’s easy).

Mobile Augmented Reality for Non-Programmers
A Simple Tutorial for Layar and Hoppala

1. What you need to create your first mobile AR layer:

* A smartphone that supports the Layar AR browser. This means an iPhone 3GS or 4, or an equivalent Android device that has built-in GPS and compass.
* The Layar app, downloaded onto your device from the appropriate app store.
* A computer with web access.
* A developer account with Layar and a login at Hoppala. For this tutorial, you will use our shared ubistudio account. Later, you can request your own at http://site.layar.com/create/start-now/

2. Get connected:

The ubistudio credentials we will be using today are: [redacted]

You should use these credentials to sign in at 3 places:

* The Hoppala website: http://augmentation.hoppala.eu
* The Layar developer website: http://layar.com/publishing
* The Layar app on your device

Because these are shared credentials, you will see other people’s layers in these environments [only true for the shared tutorial account]. PLEASE DON’T TOUCH ;-) There is no undo or undelete!

3. Get started:

Log into Hoppala. You should see the Dashboard, a simple list of layers with Titles, Names and POI URLs.

Hoppala dashboard

At the bottom right of the page, click “Add layer service” to create a new layer. A new line will be added to the list, with “Untitled” and “noname”. On the far right of that line, click the pencil icon and give your layer a new title and name. The name needs to be lowercase alphanumeric. Click the Save button.

Next, click on the name of your new layer. You should see a Google Map. Navigate to our location and zoom in.

Hoppala map view

To add a point of interest (POI), click “Add augment” at bottom right of the page. This will add a basic POI in the center of the map. You can drag it to the location you want.

To customize your new POI, click on it and a popup will launch. The popup has 5 tabs, and we’ll mostly care about the first 3. Each tab is a form we will use to enter data about the POI.

Hoppala POI menu (click for larger view)

GENERAL

* The Title and Description fields can be whatever you want; the Footnote field is not editable.
* Image is the picture displayed in the POI’s information panel in the mobile app view. You can use one of the images already loaded, or upload your own from your computer.
* POI Icons are what show up in the AR view for basic POIs. Choose ‘default’, and select a Custom Icon from the drop-down list. You can also upload your own.
* BE SURE TO CLICK THE SAVE BUTTON and wait for the confirmation.

MODEL

* For basic POIs, don’t worry about this.
* For images or 3D models, select the appropriate Type.
* Use the pulldown menus to select a preloaded image or model. You can also upload your own. 3D models need to be in a custom Layar l3d format.
* BE SURE TO CLICK THE SAVE BUTTON and wait for the confirmation.

ACTIONS

* In the Layar browser, POIs can trigger actions. These can include going to a website, playing audio or video, sending an email or text, and making a call (a rough sketch of how all of this ends up in the POI data appears at the end of this section).
* If you make changes, BE SURE TO CLICK THE SAVE BUTTON and wait for the confirmation.

You can add more POIs, or move on to testing the layer.
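
Behind the scenes, everything you enter in these tabs is served to the Layar client as JSON from your layer’s POI URL. Hoppala generates and hosts this for you, so you never have to write it by hand, but it can help to know roughly what it looks like. The sketch below is a hypothetical, simplified example only; field names and units vary between Layar API versions, so check the Layar documentation rather than treating this as authoritative:

    import json

    # Hypothetical sketch of one hotspot in a Layar-style getPOIs response.
    hotspot = {
        "id": "memorial-arch-1906",            # unique id for the POI
        "title": "Memorial Arch 1906",         # GENERAL tab: Title
        "line2": "Damaged in the earthquake",  # GENERAL tab: description text
        "lat": 37427400,                       # early Layar APIs used integer microdegrees;
        "lon": -122169700,                     #   later versions use decimal degrees
        "imageURL": "http://example.org/arch-1906.jpg",  # GENERAL tab: Image (placeholder URL)
        "actions": [                           # ACTIONS tab entries
            {"label": "Read the story", "uri": "http://example.org/arch-story"},
        ],
    }

    response = {
        "layer": "yourlayername",   # must match the layer name you registered
        "errorCode": 0,
        "errorString": "ok",
        "hotspots": [hotspot],
    }

    print(json.dumps(response, indent=2))
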

4. Testing your layer:

Log into the Layar developer site. You will see a table of existing layers.

Layar Developer Site (click for larger view)

To add your new layer, click the “Create a layer” button. You will see a popup form.

* Layer name must be exactly the same as the name you chose in Hoppala.
* Title can be a friendly name of your choosing.
* Layer type should be whichever type you have made.
* API endpoint URL is the URL for your layer, copied from the Hoppala dashboard (the long ugly one). You can sanity-check this URL as shown a bit further below.
* Short description is just some text.

Click Create layer and you should be done!

(There are more editing options, but you can safely ignore them for now).
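
Before reaching for your phone, you can optionally sanity-check the POI URL from your computer. When the Layar app opens your layer, the Layar servers call your endpoint with parameters such as the layer name, the user’s latitude and longitude, and a search radius; you can mimic that request yourself. The sketch below is a hypothetical example using Python and the requests library; the URL is a placeholder, the parameter names reflect the Layar API of this era, and Hoppala’s endpoint may be more forgiving than Layar’s own servers about missing parameters:

    import requests  # third-party HTTP library: pip install requests

    # Paste the long ugly POI URL from the Hoppala dashboard here (placeholder below).
    ENDPOINT = "http://augmentation.hoppala.eu/REPLACE-WITH-YOUR-POI-URL"

    # Roughly the parameters the Layar servers send when fetching your POIs.
    params = {
        "layerName": "yourlayername",
        "lat": 37.4274,      # decimal degrees near your POIs
        "lon": -122.1697,
        "radius": 1500,      # metres
    }

    resp = requests.get(ENDPOINT, params=params)
    data = resp.json()
    print("error:", data.get("errorCode"), data.get("errorString"))
    for poi in data.get("hotspots", []):
        print("POI:", poi.get("title"))

If the response comes back with an errorCode of 0 and lists your POIs, the layer is wired up end to end.
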

Start up the Layar app on your mobile device. Be sure you are logged in to the developer account, or you will not see your unpublished test layer. Select “YOURS”, and then “TEST”. You should see several test layers, including your own [different versions of the Layar app may put the TEST listing in different places, so you may need to poke around a bit]. Select your layer and LAUNCH it. Now look for your POIs and see if they came out looking the way you expected.

Congratulations, you are now an AR author!

@ubistudio project: mobile AR layer for 2010 01SJ biennial

Wednesday, September 15th, 2010

One of the goals of the @ubistudio is to actually do projects with new media technologies, not just talk about them. In that spirit, we made a mobile augmented reality experience for the 2010 01SJ Biennial that takes place this weekend, Sept 16-19, 2010.

It’s a fairly simple layer, developed on the Layar AR browser and featuring basic points of interest (POIs) for many of the public artworks and venues of the 01SJ festival. Here’s a screenshot of our layer in action on an iPhone 3GS:

The 01SJ layer in Layar on an iPhone 3GS

Among the many artworks featured are “Play Me I’m Yours”, ~20 street pianos created by artist Luke Jerram; Poetics of Dis-communication by Patrick Manning; and ZOROP by Ken Eklund and Annette Mees. You’ll also find the major venues and outdoor performances, to say nothing of the stops where you can catch the ZOROP Mexican Party Bus!

We submitted the layer to the Layar developer program, and it was approved earlier this week. If you’re at 01SJ and have a newer iPhone or Android phone, please check it out and let us know how you like it. You’ll need to download the free Layar app for your phone if you don’t have it already. Then just search for “01SJ” and you should be able to find it easily. All of the interesting points are in the downtown San Jose area, so if you’re not in that area you won’t see much ;-) If you have questions or feedback, ping us on Twitter: @ubistudio or just get involved by coming to our next Ubiquitous Media Studio meetup.

@ubistudio: Introducing the Ubiquitous Media Studio

Tuesday, July 13th, 2010

As promised during my talk at ARE2010, I’m launching a new project called the Ubiquitous Media Studio, a.k.a. @ubistudio. The idea is to gather an open network of technologists, artists, experience designers, social scientists and other interested folks, to explore the question “If the world is our platform, then what is our creative medium?” I’m provisionally calling this notion “ubiquitous media”, building on initial research I did in this area several years back. The idea is also very much inspired and influenced by my friends at the most excellent Pervasive Media Studio in Bristol, England, who you should know as well.
So what is ubiquitous media? I don’t know exactly, thus the exploration. But it seems to me that its outlines can be sensed in the choppy confluence of ubicomp, social networks, augmented reality, physical computing, personal sensing, transmedia and urban systems. It’s like that ancient parable of the blind monks trying to describe an elephant; the parts all feel very weird and different, and we’re trying to catch a glimpse of the future in its entirety. When you look through an AR magic lens, ubiquitous media is in there. When your kid went crazy over the Pokemon and Yu-Gi-Oh story-game universes, it was in there too. When you snap your Nike+ sensor into your running shoe, you’re soaking in it. When you go on a soundwalk or play a mediascape, there’s more than a bit of ubiquitous media in the experience.

The blind monks and the elephant

Anyway, we are going to investigate this, with the goals of learning new creative tools and applying them in creative projects. And “we” includes you. If you’re in the Bay Area and you think you might be interested, just jump right in! We’re having a little get-together in Palo Alto:

@ubistudio: Ubiquitous Media Studio #1
Thursday July 22, 2010 5:30-8:30PM
Venue: The Institute for the Future
Details & RSVP: http://meetup.com/ubistudio

I hope you’ll join us. You can also stay connected through @ubistudio on Twitter, and a soon-to-be-more-than-a-placeholder website at ubistudio.org.

Beyond Augmented Reality: Ubiquitous Media

Saturday, June 19th, 2010

Here are the slides I presented during my talk at ARE2010, the first Augmented Reality Event on June 3, 2010 in Santa Clara. Many thanks to all who attended, asked questions and gave feedback. For interested Bay Area folks, I will be organizing some face-to-face gatherings of the Ubiquitous Media Studio to explore the ideas raised here. The first one will be in July; follow @ubistudio on Twitter for further details.

what is ubiquitous media?

Friday, June 26th, 2009

In the 2003 short paper “Creating and Experiencing Ubimedia”, members of my research group sketched a new conceptual model for interconnected media experiences in a ubiquitous computing environment. At the time, we observed that media was evolving from single content objects in a single format (e.g., a movie or a book), to collections of related content objects across several formats. This was exemplified by media properties like Pokemon and Star Wars, which manifested as coherent fictional universes of character and story across TV, movies, books, games, physical action figures, clothing and toys, and by American Idol, which harnessed large-scale participatory engagement across TV, phones/text, live concerts and the web. Along the same lines, social scientist Mimi Ito wrote about her study of Japanese media mix culture in “Technologies of the Childhood Imagination: Yugioh, Media Mixes, and Otaku” in 2004, and Henry Jenkins published his notable Convergence Culture in 2006. We know this phenomenon today as cross-media, transmedia, or any of dozens of related terms.

Coming from a ubicomp perspective, our view was that the implicit semantic linkages between media objects would also become explicit connections, through digital and physical hyperlinking. Any single media object would become a connected facet of a larger interlinked media structure that spanned the physical and digital worlds. Further, the creation and experience of these ubimedia structures would take place in the context of a ubiquitous computing technology platform combining fixed, mobile, embedded and cloud computing with a wide range of physical sensing and actuating technologies. So this is the sense in which I use the term ubiquitous media; it is hypermedia that is made for and experienced on a ubicomp platform in the blended physical/digital world.
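
To make the idea a bit more concrete, here is a toy sketch, written for this post rather than taken from the paper, of media objects as explicitly linked facets of one interlinked structure spanning physical and digital things (the objects and links are made-up examples):

    from dataclasses import dataclass, field

    @dataclass
    class MediaObject:
        """One facet of a larger media universe, physical or digital."""
        name: str
        kind: str          # e.g. "movie", "action figure", "web page", "place"
        physical: bool     # True if it exists as a physical artifact
        links: list = field(default_factory=list)

        def link(self, other: "MediaObject") -> None:
            # Make the implicit semantic relationship an explicit, two-way hyperlink.
            self.links.append(other)
            other.links.append(self)

    # Hypothetical example: a toy, a film and a fan page become one connected structure.
    film   = MediaObject("the film", "movie", physical=False)
    figure = MediaObject("action figure", "action figure", physical=True)
    page   = MediaObject("fan page", "web page", physical=False)

    figure.link(film)   # e.g. a tag or marker on the toy resolves to the film
    film.link(page)

    for obj in (film, figure, page):
        print(obj.name, "->", [o.name for o in obj.links])
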

Of course the definitions of ubicomp and transmedia are already quite fuzzy, and the boundaries are constantly expanding as more research and creative development occur. A few examples of ubiquitous media might help demonstrate the range of possibilities:

The Nike+ running system

An interesting commercial application is the Nike+ running system, jointly developed by Nike and Apple. A small wireless pressure sensor installed in a running shoe sends footfall data to the runner’s iPod, which also plays music selected for the workout. The data from the run is later uploaded to an online service for analysis and display. The online service includes social components, game mechanics, and the ability to mash up running data with maps. Nike-sponsored professional athletes endorse Nike-branded music playlists on Apple’s iTunes store. A recent feature extends Nike+ connectivity to specially-designed exercise machines in selected gyms. Nike+ is a simple but elegant example of embodied ubicomp-based media that integrates sensing, networking, mobility, embedded computing, cloud services, and digital representations of people, places and things. Nike+ creates new kinds of experiences for runners, and gives Nike new ways to extend their value proposition, expand their brand footprint, and build customer loyalty. Nike+ has been around since 2006, but with the recent buzz about personal sensing and quantified selves it is receiving renewed attention, including a solid article in the latest Wired.

HP Labs’ mscape mediascapes

A good pre-commercial example is HP Labs’ mscape system for creating and playing a media type called mediascapes. These are interactive experiences that overlay audio, visual and embodied media interactions onto a physical landscape. Elements of the experience are triggered by player actions and sensor readings, especially location-based sensing via GPS. In the current generation, mscape includes authoring tools for creating mediascapes on a standard PC, player software for running the pieces on mobile devices, and a community website for sharing user-created mediascapes. Hundreds of artists and authors are actively using mscape, creating a wide variety of experiences including treasure hunts, biofeedback games, walking tours of cities, historical sites and national parks, educational tools, and artistic pieces. Mscape enables individuals and teams to produce sophisticated, expressive media experiences, and its open innovation model gives HP access to a vibrant and engaged creative community beyond the walls of the laboratory.
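
The core mechanic of a mediascape is easy to sketch in code. The following is a minimal illustration of location-based triggering, not mscape’s actual engine: each region is a circle on the ground, and a piece of audio fires when a GPS fix lands inside it (the coordinates, radii and file names here are made up):

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two lat/lon points."""
        r = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical trigger regions: (lat, lon, radius in metres, audio clip to play)
    regions = [
        (37.4274, -122.1697, 30, "arch_story.mp3"),
        (37.4290, -122.1710, 50, "quad_intro.mp3"),
    ]

    def on_gps_fix(lat, lon):
        """Called on every GPS update; fires any region the player is inside."""
        for rlat, rlon, radius, clip in regions:
            if haversine_m(lat, lon, rlat, rlon) <= radius:
                print("play", clip)   # a real player would start audio playback here

    on_gps_fix(37.42745, -122.16968)   # prints: play arch_story.mp3

Real mediascapes layer much more on top of this, of course: state, timers, biofeedback and other sensors, and authored narrative logic, but proximity triggering is the heart of the experience.
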

These two examples demonstrate an essential point about ubiquitous media: in a ubicomp world, anything – a shoe, a city, your own body – can become a touchpoint for engaging people with media. The potential for new experiences is quite literally everywhere. At the same time, the production of ubiquitous media pushes us out of our comfort zones – asking us to embrace new technologies, new collaborators, new ways of engaging with our customers and our publics, new business ecologies, and new skill sets. It seems there’s a lot to do, so let’s get to it.