hijacking the here and now: my first Ignite talk

March 25th, 2011 | Gene | Comments Off on hijacking the here and now: my first Ignite talk

Excited (and a bit nervous ;-) about doing my first ever Ignite talk, next Monday 3/28 in SF. I’ve got 5 minutes to get through 20 auto-advancing slides on the subject of “Hijacking the Here and Now: Adventures in Augmented Reality.” Here’s what I said in my submission:

Augmented reality is all about webcam marketing gimmicks and filling the world with geotagged logos, right? Nope, wrong. Instead, we’re learning that the natural mode of expression for AR is enabling people to *hack time and space*. In 5 minutes, I’ll show you ~20 solid examples of how artists, journalists, historians and citizen activists are using augmented reality to hijack the here and now.

If you’re in San Francisco on Monday, come by and check it out — there are loads of fun speakers lined up, and it would be great to see some familiar faces! Details here: http://www.facebook.com/event.php?eid=192821184089845


the critical challenge for augmented reality in 2011

January 3rd, 2011 | Gene | 1 Comment

If there’s one thing we in the augmented reality community need to do this year, it is to make actually using AR a better experience than looking at screenshots and videos of it.

If you ask most AR experts what the biggest challenge is for mobile AR, you’ll likely hear about technology issues. “Mobile AR won’t be taken seriously until it can do X,” where X is something like centimeter-accurate location and camera pose determination, continuous image recognition, 3D object tracking, or perhaps making a band of angels dance on the surface of a contact lens.

I love that stuff too. Advancing the technology is great, and it does make for cool screenshots and demo reels. But I guess I have a different view of what’s most important.

AR is a unique medium because it blends digital content with the physical world. It happens in the places where we experience our lives; in cities, villages, countryside and wilderness, with family and friends and strangers around. It happens in the now, day and night, spring and fall. It engages our senses and taps into our emotions, revealing the invisible stories of the world around us. AR is a medium of living human experience.

In AR, we already have an incredible toolbox of capabilities to work with. Mobile is mainstream, we’re all connected, and data is gushing from every spigot. It’s truly an embarrassment of riches.

What the AR community needs most this year is to push the frontiers of creative expression; to engage all of our senses, to embrace narrative and culture and play, to use the world as both a platform and a stage. In 2011, we need to become the best possible storytellers and experience designers for this new physical, digital, experiential medium.

If we can’t do this, none of our technology will matter, except maybe on YouTube.


merry christmas, layar style

December 24th, 2010 | Gene | Comments Off on merry christmas, layar style

layar-xmas-2010

Our Layar augmented reality Christmas card is live at http://m.layar.com/open/xmaslayar on your mobile, where you can throw snowballs at your favorite Layar team member and leave your wishes for us! Have a great holiday!


I’m joining Layar

November 29th, 2010 | Gene | 8 Comments

I have a bit of news: I’m joining Layar, the Dutch mobile AR company, as an Augmented Reality Strategist. \o/ In this new role I’ll be developing the creative ecosystem for the Layar platform, working with artists, developers, agencies, brands and media geeks to push the boundaries of AR experience design. As the first US-based team member, I’ll also be helping establish Layar’s Bay Area presence and growing the North American community of mobile AR enthusiasts. There’s more at the Layar blog.

Signing on with the Layar team feels like a natural evolution to me. I’ve worked at the intersection of digital media and physical reality for more than a decade, and mobile AR is one of the most important world-changing developments in that field. I believe that today’s mobile AR is the leading edge of an emerging new medium of expression and communication, a vision the Layar team shares. The chance to help shape something this important, working with a team of this caliber…well let’s just say it’s going to be fun ;-)

As for Lightning Laboratories, I’ll still be blogging here and sending out the occasional Connected World newsletter. However, the consulting part of the business will be offline for the foreseeable future. My social media practice will move over to TrendJammer, the new social consulting & analytics venture that my friend Risto Haukioja has launched, and for which I’m an advisor. If you’re interested in that sort of thing, there’s lots of fun going on at trendjammer.com and @trendjamr on Twitter.

If you want to get in touch, the usual channels still apply: @genebecker, @ubistudio, gene at lightninglaboratories dot com, etc. For super-double-secret Layar business, I’m now gene at layar dot com. Now let’s get out there, people — we’ve got a new medium to build!


experiments in historical augmented reality

November 14th, 2010 | Gene | 1 Comment

In collaboration with Adriano @Farano, I’ve been experimenting with creating historical experiences in augmented reality. Adriano’s on a Knight Fellowship at Stanford, and he’s seeking to push the boundaries of journalism using AR; my focus is developing new approaches to experience design for blended physical/digital storytelling, so our interests turn out to be nicely complementary. This is also perfectly aligned with the goals of @ubistudio, to explore ubiquitous media and the world-as-platform through hands-on learning and doing.

Adriano’s post about our first playtesting session, Rapid prototyping in Stanford’s Main Quad, included this image:

Arches on the Quad 1905

Taken from the interior of the Quad looking toward the Oval and Palm Drive, you can see that the photo aligns reasonably well with the real scene. Notably, the 1905 picture reveals a large arch in the background, which no longer stands today. We later found out this was Memorial Arch, which was severely damaged in the great 1906 earthquake and subsequently demolished.

In our second playtesting session, we continued to experiment with historical images of the Quad using Layar, Hoppala and my iPhone 3GS as our testbed. Photos were courtesy of the Stanford Archives. This view is from the front entrance to the Quad near the Oval, looking back toward the Quad. Here you can see the aforementioned Memorial Arch in 1906, now showing heavy damage from the earthquake. The short square structure on the right in the present day view is actually the right base of the arch, now capped with Stanford’s signature red tile roof.

Memorial Arch after the 1906 earthquake

In this screencap, Arches on the Quad 1905 is showing as the currently selected POI, even though the photo is part of a different POI.
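Under the hood, each of these overlays is just a photo anchored to a lat/lon point, shown when the viewer is close enough. Here’s a minimal sketch of that idea in Python; the field names, coordinates and radii are hypothetical illustrations, not the actual Layar or Hoppala data formats:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical POI records for the Stanford Quad experiments
pois = [
    {"title": "Memorial Arch, 1906", "lat": 37.42829, "lon": -122.17024,
     "image": "memorial-arch-1906.jpg", "visible_within_m": 250},
    {"title": "Statue of Louis Agassiz, 1906", "lat": 37.42877, "lon": -122.16925,
     "image": "agassiz-1906.jpg", "visible_within_m": 250},
]

def visible_pois(user_lat, user_lon, pois):
    """Return the POIs within their visibility radius, nearest first."""
    hits = []
    for p in pois:
        d = haversine_m(user_lat, user_lon, p["lat"], p["lon"])
        if d <= p["visible_within_m"]:
            hits.append((d, p))
    return [p for _, p in sorted(hits, key=lambda t: t[0])]
```

Standing in the Quad, both photos come into range and the nearest one is offered first; walk far enough away and the list empties. Real mobile AR browsers layer camera-pose estimation and screen-space rendering on top of exactly this kind of distance filter.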

One of the more famous images from post-earthquake Stanford is this one, the statue of Louis Agassiz embedded in a walkway:

Statue of Louis Agassiz 1906

Although the image is scaled a bit too large to see the background well, you can make out that we are in front of Jordan Hall; the white statue mounted above the archway on the left is in fact the same one that is shown in the 1906 photo, nearly undamaged and restored to its original perch.

Finally we have this pairing of Memorial Church in 2010 with its 1906 image. In the photo, you can see the huge bell tower that once crowned Mem Chu; this too was destroyed in the earthquake.

Memorial Church 1906

Each of these images conveys some idea of the potential we see in using AR to tell engaging stories about the world. The similarities and differences seen over the distance of a century are striking, and begin to approach what Reid et al defined as “magic moments” of connection between the virtual and the real [Magic moments in situated mediascapes, pdf]. However, there are many problematic aspects of today’s mobile AR experience that impose significant barriers to reaching those compelling moments. And so, the experiments continue…


augmented reality developers camp 2010

November 10th, 2010 | Gene | Comments Off on augmented reality developers camp 2010

The Bay Area’s 2nd annual (?) Augmented Reality Developers Camp is officially on for Saturday Dec 4th 2010. This year’s event will be held in downtown San Francisco at GAFFTA, the Gray Area Foundation For The Arts. ARDevCamp is an open unconference organized by and for the local community of AR developers, artists and enthusiasts, with the participation and support of leading AR companies including Layar, Metaio, Qualcomm and FXPAL. I’m helping organize again this year, along with @chris23, @metaverseone, @anselm & @mikeliebhold. If you’re in the Bay Area and into augmented reality, you need to be there! More info on the wiki at ardevcamp.org.


toward human-centered mobility

November 4th, 2010 | Gene | Comments Off on toward human-centered mobility

At yesterday’s Facebook press event launching new mobile features, Mark Zuckerberg stirred up a minor tempest in the pundit-o-sphere. When asked about when there would be a Facebook mobile app for the iPad, he responded glibly:

“iPad’s not mobile. Next question.” [laughter] “It’s not mobile, it’s a computer.” (watch the video)

This of course spawned dozens of blog posts and hundreds of tweets, basically saying either “he’s nuts!” or “validates what I’ve said all along.” And in this post-PC connected world, it’s interesting that Zuckerberg sees a distinction between mobiles and computers. But as you might imagine, I have a somewhat different take on this: We need to stop thinking that ‘mobile’ is defined by boxes.

Boxes vs. Humans

The entire mobile industry is built around the idea that boxes — handsets, tablets, netbooks, e-readers and so on — are the defining entity of mobility. Apps run on boxes. Content is formatted and licensed for boxes. Websites are (sometimes) box-aware. Network services are provisioned and paid for on a box-by-box basis. And of course, we happily buy the latest shiny boxes from the box makers, so that we can carry them with us everywhere we go.

And here’s the thing: boxes aren’t mobile. Until we pick them up and take them with us, they just sit there. Mobility is not a fundamental property of devices; mobility is a fundamental property of us. We humans are what’s mobile — we walk, run, drive and fly, moving through space and also through the contexts of our lives. We live in houses and apartments and favelas, we go to offices and shops and cities and the wilderness, and we pass through interstitial spaces like airports and highways and bus stations. Humans are mobile; you know this intuitively.

We move fluidly through the physical and social contexts of our lives, but our boxes are little silos of identity, apps, services and data, and our apps are even smaller silos inside the boxes. Closed, box-centric systems are the dominant model of the mobile industry, and this is only getting worse in the exploding diversity of the embedded, embodied, connected world.

So why doesn’t our technology support human-centered mobility?

One big reason is, it’s a hard problem to solve. Or rather, it’s a collection of hard problems that interact. A platform for human-centered mobility might have the following dimensions:

* Personal identity, credentials, access rights, context store, data store, and services you provide, independent of devices. Something like a mash-up of Facebook profiles, Connect, Groups, and Apps with Skype, Evernote and Dropbox, for example.

* Device & object identities, credentials, rights, relationships, data & services; a Facebook of Things, if you will.

* Place identities, credentials, rights, relationships, data & services. Think of this as a Future Foursquare that provides place-based services to the people and objects within.

* Device & service interaction models, such that devices you are carrying could discover and interact with your other devices/services, other people’s devices/services, and public devices/services in the local environment. For example, your iPod could spontaneously discover and act as a remote controller for your friend’s connected social TV when you are at her house, but your tweets sent via her TV would originate from your own account.

* Physical context models that provide raw sensor data (location, motion, time, temperature, biometrics, physiological data etc) and outputs of sensor fusion algorithms (“Gene’s phone is in his pocket [p=75%] in his car [p=100%] on Hwy 101 [p=95%] in Palo Alto, CA USA [p=98%] at 19.05 UTC [p=97%] on 02010 Nov 4 [p=100%]”).

* Social context models that map individuals and groups based on their relationships and memberships in various communities. Personal, family, friendship, professional and public spheres, is one way to think of this.

Each of these is a complex and difficult problem in its own right, and while we can see signs of progress, it is going to take quite a few years for much of this to reach maturity.
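To make the physical-context dimension concrete, the probabilistic fusion output above (“Gene’s phone is in his pocket [p=75%] in his car [p=100%]…”) can be pictured as a small, device-independent data structure. This is a hedged sketch with invented names, not any existing platform’s API:

```python
from dataclasses import dataclass, field

@dataclass
class ContextAssertion:
    """One fused inference, e.g. ('container', 'pocket', 0.75)."""
    dimension: str
    value: str
    p: float  # confidence reported by the sensor-fusion algorithm, 0..1

@dataclass
class ContextModel:
    """A device-independent bundle of context assertions about one person."""
    subject: str
    assertions: list = field(default_factory=list)

    def assert_ctx(self, dimension, value, p):
        self.assertions.append(ContextAssertion(dimension, value, p))

    def best(self, dimension, min_p=0.5):
        """Most confident assertion for a dimension, above a threshold."""
        candidates = [a for a in self.assertions
                      if a.dimension == dimension and a.p >= min_p]
        return max(candidates, key=lambda a: a.p, default=None)

# The example from the post, as fused assertions about "gene"
ctx = ContextModel("gene")
ctx.assert_ctx("container", "pocket", 0.75)
ctx.assert_ctx("vehicle", "car", 1.00)
ctx.assert_ctx("road", "Hwy 101", 0.95)
ctx.assert_ctx("city", "Palo Alto, CA USA", 0.98)
```

The point of the sketch is the shape of the thing: context lives with the person, not the box, and every claim carries a confidence that downstream services can threshold against, so a place-based service could require, say, p ≥ 0.9 before acting on your location.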

The second big reason is, human-centered mobility is not in the interests of the mobile boxes industry. The box makers are battling for architectural control, for developer mindshare, for unique and distinctive end user experiences. Network operators are fighting for subscribers, for value added services, for regulatory relaxation. Mobile content creators are fighting for their lives. And everyone is fighting for advertising revenue. Interoperability, open standards, and sharing data across devices, apps and services are given lip service, but only just barely. Nice ideas, but not good for business.

So here’s a closing thought for you. Maybe human-centered mobility won’t come from the mobile industry at all. Maybe, despite his kidding about the iPad, it will come from Mark Zuckerberg.  Maybe Facebook will be the mobile platform that transcends boxes and puts humans at the center. Wouldn’t that be interesting?


je mixe ce soir!

November 3rd, 2010 | Gene | Comments Off on je mixe ce soir!

In honor of Facebook’s announcements today about making mobile more social, I’d like to remind you of this visionary portrayal of what it will be like when Facebook is truly mobile. Looks like we’ve got a long way to go.

orelsan-toxic

That’s right AR fans, it’s the Toxic Avenger feat. Orelsan performing last summer’s monster Internet dance hit, N’Importe Comment. So slip on your mindglasses, turn up the bass in your earplants, and prepare to “Like” this French frat-boy fantasy from the future. Watch carefully, because this is a precious, fleeting snapshot of the way our connected culture felt, circa mid-2010. Someday, cyborg anthropologists are going to have a field day with this thing. Je mixe ce soir! [“I’m mixing tonight!”]


Ozzie to MSFT execs: you’re doomed kthxbye

October 26th, 2010 | Gene | Comments Off on Ozzie to MSFT execs: you’re doomed kthxbye

[Photo: Ray Ozzie]

I paraphrase, obviously. But seriously, did you read Ray Ozzie’s Dawn of a New Day? It’s his manifesto for the post-PC era, and a poignant farewell letter to Microsoft executives as he unwinds himself from the company. In Ozzie’s post, frequent readers of this space will recognize what I’ve been calling ‘the new revolution in personal computing’, the rise of a connected world of mobile, embedded and ubiquitous devices, services, sensors & actuators, and contextual transmedia; a physical, social, immersive Internet of People, Places & Things.

“All these new services will be cloud-centric ‘continuous services’ built in a way that we can all rely upon.  As such, cloud computing will become pervasive for developers and IT – a shift that’ll catalyze the transformation of infrastructure, systems & business processes across all major organizations worldwide.  And all these new services will work hand-in-hand with an unimaginably fascinating world of devices-to-come.  Today’s PC’s, phones & pads are just the very beginning; we’ll see decades to come of incredible innovation from which will emerge all sorts of ‘connected companions’ that we’ll wear, we’ll carry, we’ll use on our desks & walls and the environment all around us.  Service-connected devices going far beyond just the ‘screen, keyboard and mouse’:  humanly-natural ‘conscious’ devices that’ll see, recognize, hear & listen to you and what’s around you, that’ll feel your touch and gestures and movement, that’ll detect your proximity to others; that’ll sense your location, direction, altitude, temperature, heartbeat & health.”

– Ray Ozzie, Dawn of a New Day

Frankly, there’s nothing especially surprising about this vision of the future; many of us (including Gates and Ozzie) have been working toward similar ideas for at least 20 years. Former HP Labs head Joel Birnbaum was predicting a world of appliance/utility computing (pdf) in the ’90s. I’m sure that many of these ideas are actively being researched in Microsoft’s own labs.

What I find really interesting is that Ozzie is speaking to (and for) Microsoft, one of the largest companies in tech and also the one company that stands to be most transformed and disrupted by the future he describes. He’s giving them a wake-up call, and letting them know that no matter how disruptive the last 5 years may have seemed to the core Windows and Office franchises, despite the wrenching transition to a web-centric world, the future is here and you ain’t seen nothing yet.

And now at “the dawn of a new day – the sun having now arisen on a world of continuous services and connected devices”, Ray Ozzie is riding off into the sunset. I don’t see how that can be interpreted as a good sign.

(photo credit: WIRED)


toward virtuosity, reflection and a conscious computing experience

October 22nd, 2010 | Gene | 3 Comments

@lindastone published this short post titled The Look & Feel of Conscious Computing, which I found compelling and resonant with thoughts that have been rattling around in my head for a while:

“With a musical instrument, it’s awkward at first. All thumbs. Uncomfortable. Noise. With practice, instrument and musician become as one. Co-creating music. So it will be with personal technology. Now, a prosthetic of mind, it will become a prosthetic of being. A violinist with a violin. Us with our gadgets, embodied, attending as we choose.”

For context, Linda also pointed me toward another of her posts, A new era of post-productivity computing? where she closes with the question

“How do we usher in an era of Conscious Computing? What tools, technologies, and techniques will it take for personal technologies to become prosthetics of our full human potential?”

I’ve wrestled with similar questions in the past:

“In the arts, we speak of a talented and communicative practitioner as a virtuoso. The virtuoso performer combines technical mastery of her medium with a great depth of human expressiveness, communicating with her audience at symbolic, intuitive and emotional levels. Can we imagine a similar kind of virtuosity of communication, applied to domains that are not traditionally considered art? Can we further make this possibility accessible to more people, allowing a richer level of discourse in the walks of everyday life?

“When groups of musicians play together, they establish communication channels among themselves through the give and take of listening and leading. Great ensemble players know how to establish a state of flow, a groove, where the music takes on a vitality and life of its own, greater than the sum of the individual rhythms, pitches and timbres. What are the conditions that make such a group ‘chemistry’ possible? Could we capture that essence and apply it to the work of organizations, the building of communities, the life of families?

“As information technologies increasingly become integral to our activities, the information we use, even to our ways of thinking and perceiving, we must confront some difficult, elusive notions about the relationships between people and their tools. For instance, in what sense can the technology enhance creative, playful thinking — are we having fun? What about beauty, inspiration, spirituality, mystery? These are qualities for which humans have striven over our entire history; shall we subjugate them in the name of efficiency, convenience and immediacy? Do the artifacts we make allow people space for reflection and insight, or merely add to the numbing cacophony of digital voices demanding our attention? Is it strange to ask such questions? Not at all. The economics of information technologies seem to dictate a future where more and more of our lives will be mediated by networks and interfaces and assorted other paraphernalia of progress. We must recognize the importance of such uniquely human concerns and integrate them into our vision, or risk further dehumanization in our already fractured society.”

Okay, that was from 1994, so where are we on this? I have to say, it seems like mainstream computing has advanced very little in these areas. Apple has good intentions, and the iPad actually does a nice job of getting out of the way, letting you interact directly and physically with individually embodied apps. It’s the best of the bunch, but the iPad is no violin, no instrument of human expression. Certainly the current crop of PCs, netbooks and phones are no better.

There are a few non-mainstream computing paradigms that give me hope for a conscious computing experience. The Nike+ running system, my favorite example of embodied ubi-media, creates an inherently physical experience augmented with media and social play. Nike+ doesn’t have a broadly expressive vocabulary, but it does bring your whole body into the equation and closes the feedback loop with contextually suitable music and audio prompts.

At its best, Twitter starts to feel like a global jam session between connected minds. The rapid fire exchange of ideas, the riffing off others’ posts, the flow of a well-curated stream can sometimes feel uniquely expressive. Yes, it is primarily a mental activity and mostly disembodied, but the occasional flashes of genuine group chemistry are wonderfully suggestive of the potential for an interconnected humanity.

For me, the most interesting possibilities arise from games. There are the obvious examples of physical interaction and expression that the Wii and Kinect deliver primarily for action games now, but with time a broader range of immersive and reflective experiences (is Wii Yoga any good?). I’m also thinking of the emerging genre of out-in-the-world games like SF0, SCVNGR and Top Secret Dance Off that send you on creative, social missions involving physicality, play, performance and discovery. Finally there is the next generation of “gameful” games, as proposed by Jane McGonigal:

What is Gameful?

We invented the word gameful! It means to have the spirit, or mindset, of a gamer: someone who is optimistic, curious, motivated, and always up for a tough challenge. It’s like the word “playful” — but gamier! Gameful games are games that have a positive impact on our real lives, or on the real world. They’re games that make us:

  • happier
  • smarter
  • stronger
  • healthier
  • more collaborative
  • more creative
  • better connected to our friends and family
  • more resilient
  • better problem-solvers
  • and better at WHATEVER we love to do when we’re not playing games.

I think the future of expressive, improvisational, conscious computing will be found at the intersection of personal sensing tools like Nike+ and Kinect, collective action tools like Twitter, and the playful engagement of gameful games. It won’t look like computing, and it won’t come in a box. It won’t be dumbed down for ‘ease of use’; it will be flowing experiences designed to make us more complex, capable and creative. It will augment our humanity, as embodied individuals embedded in a physical and social world.
