Polysocial Reality and the Enspirited World

February 28th, 2012 | Gene | Comments Off on Polysocial Reality and the Enspirited World

I’m doing a talk at SxSW in a couple of weeks, with my exceedingly smart friend Sally Applin (@anthropunk). Our topic is “Polysocial Reality and the Enspirited World,” reflecting Sally’s PhD work on PoSR and our related interests in ubicomp, social media and augmented reality. Originally Bruce Sterling was part of our panel, but sadly the SxSW people overruled that as Bruce is already doing another keynote. Apparently nobody gets to speak twice, not even the venerable @bruces. It’s too bad, as he would have injected a great contrapuntal viewpoint. Nonetheless we are going to have a lot of fun, and I hope that a few people will show up to hear us. If you’re in Austin on the 13th, come by and say hello, we would love to see you!

Polysocial Reality and the Enspirited World

The merging of the physical and digital into a blended reality is a profound change to our world that demands examination. In this session we will explore this concept through the lenses of technology, anthropology and cyberculture. We will debate the ideas of PolySocial Reality, which describes our multiple, multiplexed, synchronous and asynchronous data connections; Augmented Reality; and the Enspirited World of people, places and things suffused with digital information energy.



innovation & the corporate immune system

January 19th, 2012 | Gene | Comments Off on innovation & the corporate immune system

There’s an interesting article right now on CNN/Money/Fortune: The Kodak Lie. Author Larry Keeley challenges the common view of Kodak as a textbook example of a former market leader (in film photography) that failed to recognize disruptive innovation (digital photography) until it was too late to recover. Instead, the story is much more complex and nuanced.

Kodak knew all about the impending disruption of digital technology. As many have noted, they own the primary patents on digital photography and built one of the world’s first digital cameras in 1975. As The Economist reported recently, a report that circulated among senior executives in 1979 detailed how the market would shift permanently from film to digital by 2010. This disruption was no surprise.

In fact, Kodak invested in a range of digital media technologies over the next three decades, even as it focused on maintaining and growing its core consumer and professional film businesses. But the new digital businesses were impeded by a dynamic that I like to call the ‘corporate immune system’, in which the healthy, profitable and dominant units of the company are extremely good at maintaining focus, monopolizing annual budgets and controlling the company’s brand, messaging and channels. In the healthy body of the company, small-scale ventures with uncertain potential represent a distraction, and distractions are things to be avoided, ignored or eliminated. When the distraction also threatens to interfere with the established business as in Kodak’s case, the immune response is further amplified.

What’s worse, innovative businesses can easily fall victim to corporate portfolio management practices. When corporate leaders prioritize investments across a large portfolio of businesses, they frequently focus on metrics like revenue potential, growth rate, gross margin and NPV, which tend to favor larger, established units. In order to be seen as relevant, smaller ventures are essentially coerced into juicing their forecasts and committing to over-large targets, despite high levels of market uncertainty. In this scenario the small venture is very likely to underperform its targets, hurting morale and exposing itself to the risk of restructuring or shutdown.

As Keeley points out, Kodak’s decline is a very complicated story that played out over decades. No easy analogy can explain the sequence of events that led to the company’s bankruptcy. But despite considerable foresight, an early technology advantage and tremendous financial, marketing and brand assets, Kodak failed to turn the corner from its traditional position of success to leadership in the digital era. It’s a cautionary tale for market leaders who consider themselves forward-looking and innovative.


another year, another new adventure

January 19th, 2012 | Gene | Comments Off on another year, another new adventure

Next week marks the beginning of the New Year 4710, a Yang Water Dragon year in Chinese astrology. According to traditional practitioners, this will be a year for innovation and a time for new ideas to be born. Yang Water is like a flowing river: things will move, ideas will flow, creativity will abound, economies will boom, and love will blossom.

I’m happy to report that as a card-carrying Rat in good standing, my outlook also appears positive:

The Rat and Dragon have good affiliation with each other; you can expect things to be favourable and smooth. People born in the year of the Rat are energetic beings and they shall enjoy the dynamic pace of 2012. For those who are hardworking and opportunistic, new career opportunities and jobs await. Relationships progress well this year, however, they need to be more sensitive to their partners and stay honest with themselves at the same time.

In other words, as long as I’m hardworking and opportunistic, sensitive and honest, good things should come. Good advice under any stars, right?


AR for Poets Workshop

October 23rd, 2011 | Gene | Comments Off on AR for Poets Workshop

I ran a new version of my AR for Poets workshop yesterday at THATcamp on the Google campus. We had a good group of around 20-25 folks, with lots of very good questions; overall it seemed to be well-received. Unfortunately I encountered massive difficulties with network connectivity and discovered that this site had been hacked, so I was unable to demonstrate as much of the actual AR creation experience as I had planned. We were able to get through a live demo of some location-based layers, a basic demo of creating POIs using BuildAR, and a live demo of Layar Vision using the Occupy George layer, so all was not lost.

I’ve started to keep teaching resources on a page called AR for Poets Workshop Materials. Maybe useful for you?



Housekeeping

October 23rd, 2011 | Gene | Comments Off on Housekeeping

Just a short housekeeping note — I discovered yesterday that this site had been hacked, probably through an SQL injection attack. It was one of those clever hidden spam installations, so I never even knew there was a problem until some of the folks at THATcamp pointed it out to me. I believe I’ve recovered everything and cleaned out all the covert code and backdoors, but if you notice anything awry, please send me email via gene-at-this-domain.

If this ever happens to your wordpress site, I found that this and this were useful sources of information. Hope you never need them though.


Regarding “Strong AR” and “Weak AR”

May 25th, 2011 | Gene | 1 Comment

[Crossposted from the Layar blog.]

At the end of his otherwise lovely keynote at ARE2011, Microsoft’s Blaise Aguera y Arcas proposes a distinction between “strong AR” and “weak AR”. Aguera’s obviously a very talented technologist, but in my opinion he’s done the AR industry a disservice by framing his argument in a narrow, divisive way:

“I’ll leave you with just one or two more thoughts. One is that, consider, there’s been a lot of so called augmented reality on mobile devices over the…past couple of years, but most of it really sucks. And most of it is what I would call weak augmented reality, meaning it’s based on the compass and the GPS and some vague sense of how stuff out there in the world might relate to your device, based on those rather crude sensors. Strong AR is when you, when some little gremlin is actually looking through the viewfinder at what you’re seeing, and it’s saying ah yeah that’s, this is that, that’s that and that’s the other and everything is stable and visual, that’s strong AR. Of course the technical requirements are so much greater than just using the compass and the GPS, but the potential is so much greater as well.”

Aguera’s choice of words invokes the old cognitive / computer science argument about “strong AI” and “weak AI”, first posed by John Searle in the early heyday of 1980s artificial intelligence research [Searle 1980: Minds, Brains and Programs (pdf)]. However, Searle’s formulation was a philosophical statement intended to tease out the distinction between an artificially intelligent system simulating a mind and actually having a mind. Searle’s interest had nothing to do with how impressive the algorithms were, or how much computational power was required to produce AI. Instead, he was focused on the question of whether a computational system could ever achieve consciousness and true understanding, and he believed the concept of strong AI was fundamentally misguided.

In contrast, Aguera’s framing is fueled by technical machismo. He uses strong and weak in the common schoolyard sense, and calls out “so-called augmented reality” that is “vague”, “crude”, and “sucks” in comparison to AR that is based on (gremlins, presumably shorthand for) sophisticated machine vision algorithms backed with terabytes of image data and banks of servers in the cloud. “Strong AR is on the way”, he says, with the unspoken promise that it will save the day from the weak AR we’ve had to endure until now.

OK, I get it. Smart technology people are competitive, they have egos, and they like to toss out some red meat now and then to keep the corporate execs salivating and the funding rolling in. Been there, done that, understand completely. And honestly, I love to see good technical work happen, as it obviously is happening in Blaise’s group (check out minute 17:20 in the video to hear the entire ARE crowd gasp at his demo).

But here’s where I think this kind of thinking goes off the rails. The most impressive technical solution does not equate to the best user experience; locative precision does not equal emotional resonance; smoothly blended desktop flythroughs are not the same as a compelling human experience. I don’t care if your system has centimeter-level camera pose estimation or a 20 meter uncertainty zone; if you’re doing AR from a technology-centered agenda instead of a human-centered motivation, you’re doing it wrong.

Bruce Sterling said it well at ARE2010: “You are the world’s first pure play experience designers.” We are creating experiences for people in the real world, in their real lives, in a time when reality itself is sprouting a new, digital dimension, and we really should try to get it right. That’s a huge opportunity and a humbling responsibility, and personally I’d love to see the creative energies of every person in our industry focused on enabling great human experiences, rather than posturing about who has stronger algorithms and more significant digits. And if you really want to have an argument, let’s make it about “human AR” vs. “machine AR”. I think Searle might like that.


AR photography

April 26th, 2011 | Gene | Comments Off on AR photography

Last week at Where 2.0 and Wherecamp, the air was full of AR augments. Between the locative photos in the Instagram layer, the geotagged tweets in TweepsAround, and the art/protest layer called freespace, there were many highly visual, contextually interesting AR objects being generated, occupying and flowing through the event spaces. These were invisible of course, until viewed through the AR lens. I found myself becoming very aware of this hidden dimension, wondering what new objects might have appeared, what I might encounter if I peered through the looking glass right here, right now. And then I found myself taking pictures in AR, because I was discovering moments that seemed worth capturing and sharing.

Larry and Mark weren’t physically at Where 2.0, but their perceived presence loomed large over the proceedings. Those are clever mashups on the Obey Giant theme as well; what are they trying to say here?

At Wherecamp on the Stanford campus, locative social media were very much in evidence. Here, camp organizer @anselm and AR developer @pmark were spotted in physical/digital space.

The freespace cabal apparently thought the geo community would be receptive to their work, although it seemed some of the messages were aimed at a different audience. The detention of Chinese artist Ai Wei Wei is a charged topic, certainly.

So you’ll note that although these are all screenshots from the AR view in Layar, I’m referring to them as photographs in their own right. It’s a subtle shift, but an interesting one. For me, this new perspective is driven by several factors: the emergence of visually interesting and contextually relevant AR content, the idea that AR objects are vectors for targeted messages, and the new screenshot and share functions which make Layar seem more like a social digital camera app. I’m finding myself actively composing AR photos, and thinking about what content I could create that would make good AR pictures other people would want to take. Oh, and that awkward AR holding-your-phone-up gesture? I’m taking pictures, what could be more natural?

AR photography feels like it might be important. What do you think?



augmented hypersocial media

April 25th, 2011 | Gene | Comments Off on augmented hypersocial media

Christopher and I had this funny exchange the other day. Physical, digital and social worlds interwoven, with many border crossings; I guess this would be an example of what @anthropunk calls “polysocial reality.”

It started when I found @jewelia‘s Instagram pic from the Where 2.0 stage in the new Instagram AR layer in Layar. I took a screenshot:

and shared it on Twitter:

A bit later, I saw my tweet in the TweepsAround layer, and I took a screenshot:

and shared that one to Twitter too:

Then Christopher @endurablegoods got in on the fun:

Of course that was bait, so I snapped a photo in Color:

and shared it on Twitter:

But Christopher was not to be outdone:

And in the end:

We live in interesting times.



hacking space and time

April 19th, 2011 | Gene | Comments Off on hacking space and time

[cross-posted from the Layar blog]

In my recent Ignite talk Hijacking the Here and Now: Adventures in Augmented Reality, I showed examples of how creative people are using AR in ways that modify our perceptions about time and space. Now, Ignite talks are only 5 minutes long and I think this is a big idea that’s worth a deeper look. So here’s my claim: I assert that one of the most natural and important uses of AR as a creative medium is hacking space and time to explore and make sense of the emerging physical+digital world.

When you look at who the true AR enthusiasts are and who is doing the cutting-edge creative work in AR today, it’s artists, activists and digital humanities geeks. Their projects explore and challenge the ideas of ownership and exclusivity of physical space, and the flowing irreversibility of time. They are starting to see AR as the emergence of a new construction of reality, where the physical and digital are no longer distinct but instead are irreversibly blended. Artist Sander Veenhof is attracted to the “infinite dimensions” of AR. Stanford Knight Fellow Adriano Farano sees AR ushering in an era of “multi-layer journalism”. Archivist Rick Prelinger says “History should be like air,” immersive, omnipresent and free. And in their recent paper Augmented Reality and the Museum Experience, Schavemaker et al. write:

In the 21st century the media are going ambient. TV, as Anna McCarthy pointed out in Ambient Television (2001), started this great escape from domesticity via the manifold urban screens and the endless flat screens in shops and public transportation. Currently the Internet is going through a similar phase as GPS technology and our mobile devices offer via the digital highway a move from the purely virtual domain to the ‘real’ world. We can collect our data everywhere we desire, and thus at any given moment transform the world around us into a sort of media hybrid, or ‘augmented reality’. [emphasis mine]

When the team behind PhillyHistory.org augments the city of Philadelphia with nearly 90,000 historical photographs in AR, they are actively modifying our experience of the city’s space and connecting us to moments in time long past. With its ambitious scope and scale, this seems a particularly apt example of transforming the world into a media hybrid.

In their AR piece US/Iraq War Memorial, artists Mark Skwarek and John Craig Freeman transpose the locative datascape of casualties in the Iraq War from Wikileaks onto the northeastern United States, with the location of Baghdad mapped onto the coordinates of Washington DC. In addition to spatial hackery evocative of Situationist psychogeographic play, this work makes a strong political statement about control of information, nationalist perspectives and the cultural abstraction of war.


Now let’s talk about this word, ‘hacking’. Actually, you’ll note that I used the term ‘hijacking’ as well, so let’s include that too. My intent is to evoke the tension of multiple meanings: Hacking in the sense of gaining deep understanding and mastery of a system in order to modify and improve it, and as a visible demonstration of a high degree of proficiency. Also, hacking in the sense of making unauthorized intrusions into a system, including both white hat and black hat variations. I use ‘hijacking’ in the sense of a mock takeover, like the Black Eyed Peas playfully hijacking the myspace.com website for publicity purposes, but also hijacking as an antagonistic, possibly malign, and potentially unlawful attack. In the physical+digital augmented world, I expect we will see a wide variety of hacking and hijacking behaviors, with both positive and negative effects. For example, in Skwarek’s piece with Joseph Hocking, the leak in your hometown, the corporate logo of BP becomes the trigger for an animated re-creation of the iconic broken pipe at the Macondo wellhead, spewing AR oil into your location. It is possible to see this as an inspired spatial hack and a biting social commentary, but I have no doubt BP executives would consider it a hijacking of their brand in the worst way.

In his book Smart Things, ubicomp experience designer Mike Kuniavsky asks us to think of digital media about physical entities as ‘information shadows’; I believe the work of these AR pioneers points us toward a future where digital information is not a subordinate ‘shadow’ of the physical, but rather a first-class element of our experience of the world. Even at this early stage in the development of the underlying technology, AR is a consequential medium of expression that is being used to tell meaningful stories, make critical statements, and explore the new dimensionality of a blended physical+digital world. Something important is happening here, and hacking space and time through AR is how we’re going to understand and make sense of it.


hello world: Mobile AR with Layar & Hoppala

March 28th, 2011 | Gene | 1 Comment

Welcome! This tutorial will show you how to create mobile AR using the Layar platform and Hoppala CMS service, with no programming required. I’ve kept it simple on purpose — both Layar and Hoppala have additional capabilities you should take the time to explore; for the technically inclined, the Layar developer wiki is a good place to start.

Mobile Augmented Reality for Non-Programmers
A Simple Tutorial for Layar and Hoppala

1. What you need to create your first mobile AR layer:

* A smartphone that supports the Layar AR browser. This means an iPhone 3GS or 4, or an equivalent Android device that has built-in GPS and compass. As of March 2011, Symbian S3 and S60 devices should also work, as should the Apple iPad 2.
* The Layar app, downloaded onto your device from the appropriate app store.
* A computer with web access.

2. Get connected:

You’ll need to create a developer account with Layar and an account on the Hoppala Augmentation content management system (CMS). This should only take a few minutes:

* The Hoppala website: http://augmentation.hoppala.eu
* The Layar developer website: http://layar.com/publishing

Once you have your accounts, sign in to both sites and to the Layar app on your device.

3. Get started:

When you log into Hoppala, you should see the Dashboard, a simple list of your layers with Titles, Names and Overlay URLs.

Hoppala dashboard

At the bottom right of the page, click Add Overlay to create a new layer. A new entry will be added to the list, with Untitled, noname and a long, ugly URL. On the far right of that entry line, click the pencil icon to edit and give your layer a new title and name. The name needs to be all lowercase alphanumeric. Click the Save button.
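If you want to check a candidate name before typing it in, the “all lowercase alphanumeric” rule above is easy to encode. This is just a sketch of that one constraint as stated in this tutorial, not an official Hoppala validator; check their site for any additional limits (such as name length):

```python
import re

def valid_layer_name(name: str) -> bool:
    """True if name is all-lowercase alphanumeric, per the rule above."""
    return re.fullmatch(r"[a-z0-9]+", name) is not None
```

For example, `valid_layer_name("helloworld")` passes, while `"Hello World"` and `"hello-world"` do not.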

Next, click on the name of your new layer. This will open a Google Map-based page. Use the map controls or enter your address to navigate to your current location and zoom in.

Hoppala map view

To add a point of interest (POI), click Add augment at bottom right of the page. This will add a basic POI called Untitled in the center of the map. You can drag it to the location you want.

To customize your new POI, click on the red map pin and a popup will open. The popup has 4 tabs, labeled General, Assets, Actions and Location. Each tab is a form we will use to enter data about the POI. For now, don’t worry about the Location tab.

Hoppala POI menu (click for larger view)


General tab:

* The title and description fields can be whatever text you want. The title is limited to 60 characters, and each description line can be 35 characters. Note that long text strings may not display fully on a small device screen. Try typing HELLO WORLD as your title.
* Thumbnail is the picture that is displayed in the POI’s information panel in the mobile app view. You can upload your own thumbnail from your computer by using Choose File and then Add.
* You can ignore the Footnote and Filter value fields for now.
* BE SURE TO CLICK THE SAVE BUTTON and wait for the confirmation.


Assets tab:

* Icons are the small graphics that show up in the AR view for basic POIs. Choose default (you can create custom icons later if you like).
* Assets are 2D images or 3D objects that appear in the AR view. You can upload your own assets using Choose File and then Add. Images can be .jpg or .png; 3D objects must be in Layar’s .l3d format.
* Note that Hoppala supports some non-Layar AR browsers. You can ignore any sections for “junaio” and “Wikitude”.
* BE SURE TO CLICK THE SAVE BUTTON and wait for the confirmation.


Actions tab:

* In the Layar browser, you can have actions triggered from POIs. These can include going to a website, playing an audio or video, sending a tweet, an email or text, and making a phone call.
* Hoppala allows you to include up to 8 actions per POI.
* Actions can appear as buttons for the user to click, or they can be auto-triggered based on the user’s proximity to the POI location.
* Try adding a link to a website. For Label, type Google. Select ‘Website’ in the pulldown menu. Type http://google.com for the URL.
* BE SURE TO CLICK THE SAVE BUTTON and wait for the confirmation.

You can add more POIs, or move on to configuring and testing the layer.
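For the curious: behind the scenes, that long Overlay URL is an API endpoint. When you launch your layer, the Layar app sends your location to it and Hoppala answers with your POIs as JSON. Here’s a rough sketch of what such a response looks like and how you might pull the POI titles out of it; the field names follow the Layar getPOIs developer docs (older API versions used integer microdegrees for lat/lon), and the sample values are made up:

```python
import json

# A response in roughly the shape a Layar getPOIs endpoint returns.
# Field names are based on the Layar developer docs and may differ by
# API version; the layer name and coordinates here are hypothetical.
sample_response = json.dumps({
    "layer": "helloworld",
    "errorCode": 0,
    "errorString": "ok",
    "hotspots": [
        {"id": "poi_1", "title": "HELLO WORLD",
         "lat": 37422000, "lon": -122084000},  # microdegrees
    ],
})

def poi_titles(response_text: str) -> list:
    """Pull the POI titles out of a getPOIs-style JSON response."""
    data = json.loads(response_text)
    return [h.get("title") for h in data.get("hotspots", [])]

print(poi_titles(sample_response))  # -> ['HELLO WORLD']
```

You never need to touch this JSON yourself when using Hoppala, but it is what the Layar app is consuming when you test your layer in step 5.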

4. Configure your layer:

Log into the Layar developer site. At the top right of the page, click My Layers and you will see a table of your existing layers, if any.

Layar Developer Site (click for larger view)

To add your new layer, click the Create a layer button. You will see a popup form.

* Layer name must be exactly the same as the name you chose in Hoppala.
* Title can be a friendly name of your choosing.
* Layer type should match the kind of content you created. If you used a 2D image or 3D model as an asset, select ‘3D and 2D objects in 3D space’.
* API endpoint URL is the URL for your layer, which you can copy from the Hoppala dashboard (the long ugly one).
* Short description is just some text.

Click Create layer and you should be done!

(There are lots more editing options, but you can safely ignore them for now).

5. Test your layer:

Start up the Layar app on your mobile device. Be sure you are logged in to your Layar developer account, or you will not see your unpublished test layer. Select LAYERS, and then TEST. You should see your test layer listed. Note: older versions of the Layar app may put the TEST listing in different places, so you may need to poke around a bit. Select your layer and LAUNCH it. Now look for your POIs and see if they came out looking the way you had expected.

Congratulations, you are now an AR author!