Ten reasons why AR is going to fail

October 12th, 2012 | Gene | Comments Off

Slides from my talk at the OARN 2012 conference on augmented reality.


I’m a Seoul Man

October 11th, 2012 | Gene | Comments Off

Just got back from Sammy Global HQ in Seoul. It was a great trip & the weather’s perfect right now. Here’s a morning shot from the hotel:

An arresting QR code seen in the subway:

And here’s a nighttime street scene, Gangnam style, yo.

Glad to be back on terra recognita, although I have no idea what time zone the rest of me is in right now.


Concert catchup

October 11th, 2012 | Gene | Comments Off

As long-time TCW watchers know, we do love the concert scene. In the last 3 months we’ve had the good fortune to be up close for 3 very different shows:

Red Hot Chili Peppers in Oakland, CA. This was originally slated for April but was postponed to August after Anthony Kiedis broke his foot. Fantastic show, hey oh!

The Tedeschi Trucks Band in Saratoga, CA. They brought a big band and a big noise. Some of Susan’s material, some of Derek’s and some from the togetherness band. TTB is one of my favorites this year, and this show did not disappoint.

Peter Gabriel Back to Front Tour, San Jose, CA. Still one of the most creative artists in the world, even if he does look like a portly elder statesman these days. Fabulous show, as seen from 3rd row center. Bonus tech-relevant item: each musician had a Kinect mounted at their position, used to capture & project new-aesthetic-vibe imagery on the screen behind.


Moggridge was here

September 10th, 2012 | Gene | Comments Off

RoomWizard at IDEO

Photo from May 19, 2005: Came across this old RoomWizard at IDEO. It was designed by my former HP Labs colleagues Bill Sharpe & Ray Crispin at the Appliance Studio, based on the HPL RoomLooker prototype conference room appliance.

I remembered this picture today when I heard the sad news that Bill Moggridge had passed from this earth. Farewell to a legendary designer…


the densuke aesthetic

April 10th, 2012 | Gene | Comments Off

This is:

1. A blog post of an image of a tweet of an instagram of a screenshot of an image of an augmented reality dog from an animated Japanese television show in a simulated gilt picture frame on a simulated wall of a simulated 3D gallery in an augmented reality experience launched in California from a hidden URL printed in UV ink in an advertisement for an augmented reality company in Amsterdam in a comic book designed in London as “an investigation into perception, storytelling and optical experimentation.”

2. Densuke.



Google’s Project Glass is a new model for personal computing

April 6th, 2012 | Gene | 2 Comments

The concept video of Google’s Project Glass has whipped up an Internet frenzy since it was released earlier this week, with breathless coverage (and more than a little skepticism) about the alpha-stage prototype wearable devices. Most of the reporting has focused on the ‘AR glasses’ angle with headlines like “Google Shows Off, Teases Augmented Reality Spectacles”, but I don’t think Project Glass is about augmented reality at all. The way I see it, Glass is actually about creating a new model for personal computing.

Think about it. In the concept video, you see none of the typical AR tropes like 3D animated characters, pop-up object callouts and video-textured building facades. And tellingly, there’s not even a hint of Google’s own Goggles AR/visual search product. Instead, what we see is a heads-up, hands-free, continuous computing experience tightly integrated with the user’s physical and social context. Glass posits a new use model based on a novel hardware platform, new interaction modalities and new design patterns, and it fundamentally alters our relationship to digital information and the physical environment.

This is a much more ambitious idea than AR or visual search. I think we’re looking at Sergey’s answer to Apple’s touch-based model of personal computing. It’s audacious, provocative and it seems nearly impossible that Google could pull it off, which puts it squarely in the realm of things Google most loves to do. Unfortunately in this case I believe they have tipped their hand too soon.

Photo: Thomas Hawk / Flickr

Let’s suspend disbelief for a moment and consider some of the implications of Glass-style computing. There’s a long list of quite difficult engineering, design, cultural and business challenges that Google has to resolve. Of these, I’m particularly interested in the aspects related to experience design:

Continuous computing

The rapid adoption of smartphones is ample evidence that people want to have their digital environment with them constantly. We pull them out in almost any circumstance, we allow people and services to interrupt us frequently and we feed them with a steady stream of photos, check-ins, status updates and digital footprints. An unconsciously wearable heads-up device such as Glass takes the next step, enabling a continuous computing experience interwoven with our physical senses and situation. It’s a model that is very much in the Bush/Engelbart tradition of augmenting human capabilities, but it also has the potential to exacerbate the problematic complexity of interactions as described by polysocial reality.

A continuous computing model needs to be designed in a way that complements human sensing and cognition. Transferring focus of attention between cognitive contexts must be effortless; in the Glass video, the subject shifts his attention between physical and digital environments dozens of times in a few short vignettes. Applications must also respect the unique properties of the human visual system. Foveal interaction must co-exist and not interfere with natural vision. Our peripheral vision is highly sensitive to motion, and frequent graphical activity will be an undesirable distraction. The Glass video presents a simplistic visual model that would likely fail as a continuous interface.

Continuous heads-up computing has the potential to enable useful new capabilities such as large virtual displays, telepresent collaboration, and enhanced multi-screen interactions. It might also be the long-awaited catalyst for adoption of locative and contextual media. I see continuous computing as having enormous potential and demanding deep insight and innovation; it could easily spur a new wave of creativity and economic value.

Heads-up, hands-free interaction

The interaction models and mechanisms for heads-up, hands-free computing will be make-or-break for Glass. Speech recognition, eye tracking and head motion modalities are on display in the concept video, and their accuracy and responsiveness are idealized. The actual state of these technologies is somewhat less than ideal today, although much progress has been made in the last few years. Our non-shiny-happy world of noisy environments, sweaty brows and unreliable network performance will present significant challenges here.

Assuming the baseline I/O technologies can be made to work, Glass will need an interaction language. What are the hands-free equivalents of select, click, scroll, drag, pinch, swipe, copy/paste, show/hide and quit? How does the system differentiate between an interface command and a nod, a word, a glance meant for a friend?


Contextual computing

Physical and social context can add richness to the experience of Glass. But contextual computing is a hard problem, and again the Glass video treats context in a naïve and idealized way. We know from AR that the accuracy of device location and orientation is limited and can vary unpredictably in urban settings, and indoor location is still an unsolved problem. We also know that geographic location (i.e., latitude & longitude) does not translate directly to semantic location (e.g., “in Strand Books”).
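To make that gap concrete, here is a minimal Python sketch of a naive lat/lon-to-place lookup. The venue names and coordinates are hypothetical, and a real system would call a reverse-geocoding service rather than a hard-coded table; the point is that once the GPS error circle covers more than one candidate venue, a geographic fix no longer determines a semantic location:

```python
import math

# Hypothetical venue database; names and coordinates are illustrative only.
VENUES = {
    "Strand Books": (40.7332, -73.9907),
    "Forbidden Planet": (40.7335, -73.9905),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def semantic_location(lat, lon, accuracy_m):
    """Map a geographic fix to a place name, if the fix is unambiguous."""
    candidates = [
        name for name, (vlat, vlon) in VENUES.items()
        if haversine_m(lat, lon, vlat, vlon) <= accuracy_m
    ]
    # With a large error radius, several venues fall inside the circle
    # and no single semantic location can be claimed.
    return candidates[0] if len(candidates) == 1 else None
```

With these two venues about 37 meters apart, a 10-meter fix resolves cleanly while a 50-meter fix (typical in an urban canyon) is ambiguous.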

On the other hand, simple contextual information such as time, velocity of travel, day/night, in/outdoors is available and has not been exploited by most apps. Google’s work in sensor fusion and recognition of text, images, sounds & objects could also be brought to bear on the continuous computing model.
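A sketch of what mining those simple signals could look like; the thresholds are illustrative guesses on my part, not calibrated values:

```python
from datetime import datetime

def simple_context(ts: datetime, speed_mps: float) -> dict:
    """Derive coarse context from two cheap signals: clock time and speed."""
    # Motion class from velocity of travel (thresholds are rough guesses).
    if speed_mps < 0.5:
        motion = "stationary"
    elif speed_mps < 2.5:
        motion = "walking"
    else:
        motion = "vehicle"
    # Day/night from local clock time (ignores season and latitude).
    daypart = "day" if 7 <= ts.hour < 19 else "night"
    return {"motion": motion, "daypart": daypart}
```

Even this crude context could gate notifications or switch display modes, which is more than most apps of the day attempted.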

Continuous capture

With its omnipresent camera, mic and sensors, Glass could be the first viable life recorder, enabling the “life TiVo” and total recall capabilities explored by researchers such as Steve Mann and Gordon Bell. Continuous capture will be a tremendous accelerant for participatory media services, from YouTube and Instagram-style apps to citizen journalism. It will also fuel the already heated discussions about privacy and the implications of mediated interpersonal relationships.

Of course there are many other unanswered questions here. Will Glass be an open development platform or a closed Google garden? What is the software model — are we looking at custom apps? Some kind of HTML5/JS/CSS rendering? Will there be a Glass equivalent to Cocoa Touch? Is it Android under the hood? How much of the hard optical and electrical engineering work has already been done? And of course, would we accept an even more intimate relationship with a company that exists to monetize our every act and intention?

The idea of a heads-up, hands-free, continuous model of personal computing is very interesting, and done well it could be a compelling advance. But even if we allow that Google might have the sophistication and taste required, it feels like there’s a good 3-5+ years of work to be done before Glass could evolve from concept prototype into a credible new computing experience. And that’s why I think Google has tipped their hand far too soon.


from human-centered to thing-centered?

April 5th, 2012 | Gene | Comments Off

That was quick. No sooner do I say:

On further reflection, I suppose there’s some long-term risk that by focusing on human-centricity I’m making a Ptolemaic error. Maybe at some point we’ll realize that information is the central concern of the universe, and we humans are just a particularly narcissistic arrangement of information quanta.

Than I come across Ian Bogost’s new book, Alien Phenomenology:

Humanity has sat at the center of philosophical thinking for too long. The recent advent of environmental philosophy and posthuman studies has widened our scope of inquiry to include ecosystems, animals, and artificial intelligence. Yet the vast majority of the stuff in our universe, and even in our lives, remains beyond serious philosophical concern.
In Alien Phenomenology, or What It’s Like to Be a Thing, Ian Bogost develops an object-oriented ontology that puts things at the center of being—a philosophy in which nothing exists any more or less than anything else, in which humans are elements but not the sole or even primary elements of philosophical interest. And unlike experimental phenomenology or the philosophy of technology, Bogost’s alien phenomenology takes for granted that all beings interact with and perceive one another.

Not sure if this will change anything for me, but I’m open to the journey. Click.


on human-centered personal computing

April 2nd, 2012 | Gene | 1 Comment

In my last post, I wrote:

So if there’s a major transition this industry needs to go through, it’s the shift from a box-centered view of personal computers, to a human-centered view of personal computing.

Folks, it’s 2012. I shouldn’t even have to say this, right? We all know that personal computing has undergone a radical redefinition, and is no longer something that can be delivered on a single device. Instead, personal computing is defined by a person’s path through a complex melange of physical, social and information contexts, mediated by a dynamic collection of portable, fixed and embedded digital devices and connected services.

‘Personal computing’ is what happens when you live your life in the connected world.

‘Human-centered’ means the central concern of personal technology is enabling benefits and value for people in their lives, across that messy, diverse range of devices, services, media and contexts. Note well, this is not the same as ‘customer-centered’ or ‘user-centered’, because customer and user are words that refer to individual products made by individual companies. Humans live in the diverse ecosystem of the connected world.

If you haven’t made the shift from designing PCs, phones and tablets to enabling human-centered personal computing, you’d better get going.



a word about this ‘post-PC’ notion

April 2nd, 2012 | Gene | Comments Off

Lately we have seen a spate of pronouncements from industry executives, analysts & pundits about the so-called ‘post-PC’ era. Now, I completely understand the competitive, editorial and PR value of declaring The Next New Thing. But in using the term ‘post-PC’, these nominal thought leaders are making a faulty generalization and committing a category error in order to serve up a simplistic, attention-getting headline.

The faulty generalization is obvious. We’re no more ‘post-PC’ than we are ‘post-radio’, ‘post-book’, or ‘post-friends’. Are new devices and software platforms displacing some usage of desktop and notebook PCs? Of course. Are some companies going to rise and fall as a result? Sure. But are we witnessing the demise of PCs as a product category? Not even close.

Of more concern to me is the category error inherent in ‘post-PC’ thinking. ‘Post-PC’ is a narrative about boxes: PC-shaped boxes being superseded by phone- and tablet-shaped boxes. It’s understandable, since most PC, phone and tablet companies define and structure themselves around the boxes they produce; analysts count boxes and reviewers write articles about the latest boxes. But people who buy and use these devices don’t want boxes per se; they want to listen to music, play games, connect with friends, find someplace to eat, write some code or get their work done, whenever and wherever and on whatever device it makes the most sense to do so. This ‘post-PC’ notion is disconnected from the real value that people are seeking from their investment in technology products.

So if there’s a major transition this industry needs to go through, it’s the shift from a box-centered view of personal computers, to a human-centered view of personal computing. If I was running a PC company, that’s the technical, operational and cultural transformation I’d be driving in my every waking moment.


in digital anima mundi

March 17th, 2012 | Gene | Comments Off

My SxSW session with Sally Applin, PolySocial Reality and the Enspirited World, seemed to be well received. The group that attended was well-engaged and we had a fertile Q&A discussion. Sally focused her keen anthropological lens on the study of our increasingly complex communications with her model of PolySocial Reality; for more on PoSR see Sally’s site. [Update 3/20: Sally posted her slides on PolySocial Reality]. My bit was about the proximate future of pervasive computing, as seen from a particular viewpoint. These ideas are not especially original here in 02012, but hopefully they can serve as a useful nudge toward awareness, insight and mindful action.

What follows is a somewhat pixelated re-rendering of my part of the talk.

This talk is titled “in digital anima mundi (the digital soul of the world).” As far as I know Latin doesn’t have a direct translation for ‘digital’, so this might not be perfect usage. Suggestions welcomed. Anyway, “the digital soul of the world” is my attempt to put a name to the thing that is emerging, as the Net begins to seep into the very fabric of the physical world. I’m using terms like ‘soul’ and ‘enspirited’ deliberately — not because I want to invoke a sacred or supernatural connection, but rather to stand in sharp contrast to technological formulations like “the Internet of Things”, “smart cities”, “information shadows” and the like.

The image here is from Transcendenz, the brilliant thesis project of Michaël Harboun. Don’t miss it.


The idea of anima mundi, a world soul, has been with us for a long time. Here’s Plato in the 4th century BC.


Fast forward to 1969. This is a wonderful passage from P.K. Dick’s novel Ubik, where the protagonist Joe Chip has a spirited argument with his apartment door. So here’s a vision of a world where physical things are animated with some kind of lifelike force. Think also of the dancing brooms and talking candlesticks from Disney’s animated films.


In 1982, William Gibson coined the term ‘cyberspace’ in his short story Burning Chrome, later elaborated in his novel Neuromancer. Cyberspace was a new kind of destination, a place you went to through the gateway of a console and into the network. We thought about cyberspace in terms of…


Cities of data…


Worlds of Warcraft…


A Second Life.


Around 1988, Mark Weiser and a team of researchers at Xerox PARC invented a new computing paradigm they called ubiquitous computing, or ubicomp. The idea was that computing technologies would become ubiquitous, embedded in the physical world around us. Weiser’s group conceived of and built systems of inch-scale, foot-scale and yard-scale computers; these tabs, pads and boards have come to life in today’s iPods, smartphones, tablets and flat panel displays, in form factor if not entirely in function.


In 1992 Rich Gold, a member of the PARC research team, gave a talk titled Art in the Age of Ubicomp. This sketch from Gold’s talk describes a world of everyday objects enspirited with ubicomp. More talking candlesticks, but with a very specific technological architecture in mind.


Recently, Gibson described things this way: cyberspace has everted. It has turned inside out, and we no longer go “into the network”.


Instead, the network has gone into us. Digital data and services are embedded in the fabric of the physical world.


Digital is emerging as a new dimension of reality, an integral property of the physical world. Length, width, height, time, digital.


Since we only have this world, it’s worth exploring the question of whether this is the kind of world we want to live in.


A good place to begin is with augmented reality, the idea that digital data and services are overlaid on the physical world in context, visible only when you look through the right kind of electronic window. Today that’s smartphones and tablets; at some point that might be through a heads-up display, the long-anticipated AR glasses.


Game designers are populating AR space around us with ghosts and zombies.


Geolocative data are being visualized in AR, like this crime database from SpotCrime.


History is implicit in our world; historical photos and media can make these stories explicit and visible, like this project on the Stanford University quad.


Here’s a 3D reconstruction, a simulation of the Berlin Wall as it ran through the city of Berlin.


Of course AR has been applied to a lot of brand marketing campaigns in the last year or two, like this holiday cups app from Starbucks.


AR is also being adopted by artists and culture jammers, in part as a way to reclaim visual space from the already pervasive brand encroachment we are familiar with.


We also have the Internet of Things, the notion that in just a few years there will be 20, 30, 50 billion devices connected to the Net. Companies like Cisco and Intel see huge commercial opportunities and a wide range of new applications.


An Internet of Things needs hyperlinks, and you can think of RFID tags, QR codes and the like as physical hyperlinks. You “click” on them in some way, and they invoke a nominally relevant digital service.
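In code, a physical hyperlink reduces to an identifier bound to a service endpoint. A hypothetical sketch (the tag IDs and URLs are invented; real deployments typically encode a URL directly in the QR code, or in an NFC NDEF record):

```python
# Hypothetical registry binding physical tag identifiers to digital services.
TAG_REGISTRY = {
    "qr:gallery-42": "https://example.com/exhibits/42",
    "rfid:04A2B9": "https://example.com/transit/topup",
}

def resolve_tag(tag_id):
    """'Clicking' a physical hyperlink: look up the bound digital service.

    Returns the service URL, or None for an unregistered tag.
    """
    return TAG_REGISTRY.get(tag_id)
```

The interesting design questions live in that lookup: who runs the registry, and who gets to rebind a tag after it is printed on a wall.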


RFID and NFC have seen significant uptake in transit and transportation. In London, your Will and Kate commemorative Oyster card is your ticket to ride the Underground. In Japan, your Suica or Pasmo card not only lets you ride the trains, but also purchase items from vending machines and pay for your on-street parking. In California we have FasTrak for our cars, allowing automated payment at toll booths. These systems improve the efficiency of infrastructure services and provide convenience to citizens. However, they are also becoming control points for access to public resources, and vast amounts of data are generated and mined based on the digital footprints we leave behind.


Sensors are key to the IoT. Botanicalls is a product from a few years ago, a communicating moisture sensor for your houseplants. When the soil gets dry, the Botanicall sends you a tweet to let you know your plant is thirsty.


More recently, the EOS Talking Tree is an instrumented tree that has a Facebook page and a Twitter account with more than 4000 followers. That’s way more than me.


This little gadget is the Rymble, billed by its creators as an emotional Internet device. You connect it with your Facebook profile, and it responds to activity by spinning around, playing sounds and flashing lights in nominally meaningful ways. This is interesting; not only are physical things routinely connected to services, but services are sprouting physical manifestations.


This is a MEMS sensor, about 1mm across, an accelerometer & gyroscope that measures motion. If you have a smartphone or tablet, you have these inside to track the tilt, rotation and translation of the device. These chips are showing up in a lot of places.
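For the curious, static tilt can be recovered from a 3-axis accelerometer by reading off the gravity vector. A sketch using the standard formulas; axis conventions and signs vary between parts, so treat those as assumptions:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll in degrees from accelerometer readings
    (any consistent units) while the device is at rest.

    Assumes a common convention: x forward, y right, z down through
    the device; when flat and motionless, gravity appears on z.
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

This only works when the dominant acceleration is gravity; during motion, the gyroscope is fused in (e.g., with a complementary or Kalman filter) to keep the orientation estimate stable.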


Some of you probably have a Fitbit, Nike+ FuelBand or Withings scale. Welcome to the ‘quantified self’ movement. These devices sense your physical activity, your sleep and so on, and feed the data into services and dashboards. They can be useful, fun and motivating, but know also that your physical activities are being tracked, recorded, gamified, shared and monetized.


Insurance companies are now offering sensor modules you can install on your car. They will provide you with metered, pay-as-you-drive insurance, with variable pricing based on the risk of when, where and how safely you drive.


Green Goose wants you to brush your teeth. If you do a good job, you’ll get a nice badge.


How about the Internet of Babies? This is a real product, announced a couple of weeks ago at Mobile World Congress. Sensors inside the onesie detect baby’s motion and moisture content.


Here’s a different wearable concept from Philips Design, the Bubelle Dress that senses your mood and changes colors and light patterns in response.


So physical things, places and people are becoming gateways to services, and services are colonizing the physical world. Microsoft’s Kinect is a great example of a sensor that bridges physical and digital; the image is from a Kinect depth camera stream. This is how robots see us.


If I was a service, I think I’d choose some of these robots for my physical instantiation. You’ve probably seen these — DARPA’s Alpha Dog all-terrain robotic pack horse, DARPA’s robot hummingbird, Google’s self-driving cars. You might not think of cars as robots, but these are pretty much the same kinds of things.


Robots also come in swarms. This is a project called Electronic Countermeasures by Liam Young. A swarm of quadrotor drones forms a dynamic pirate wireless network, bringing connectivity to spaces where the network has failed or been jammed. When the police drones come to shoot them down, they disperse and re-form elsewhere in the city.


A team at Harvard is creating Robobees. This is a flat multilayer design that can be stamped out in volume. It is designed so that the robot bee pops up and folds like origami into the shape at top right. I wonder what kind of service wants to be a swarm of robotic bees?


On a larger scale, IBM wants to build you a smarter city. There are large smart city projects around the globe, being built by companies like IBM, Cisco and Siemens. They view the city as a collection of networks and systems – energy, utilities, transportation, etc. – to be measured, monitored, managed and optimized. Operational efficiency for the city, and convenience for citizens.


But we as individuals don’t experience the city as a stack of infrastructures to be managed. Here’s Italo Calvino in his lovely book Invisible Cities. “Cities, like dreams, are made of desires and fears…the thread of their discourse is secret, their rules absurd.”


Back at ground level in the not-so-smart city of today, displays are proliferating. Everywhere you turn, public screens are beaming messages from storefronts, billboards and elevators.


We’re getting close to the point where inexpensive, flexible plastic electronics and displays will be available. When that happens, every surface will be a potential site for displays.


We’re also seeing cameras becoming pervasive in public places. When you see a surveillance camera, do you think it’s being monitored by a security guard sitting in front of a bank of monitors as seen in so many movies? More likely, what’s behind the camera is a sophisticated computer vision system like this one from Quividi, that is constantly analyzing the scene to determine things like the gender, age and attention of people passing by.


A similar system from Intel called CognoVision is being used in a service called SceneTap, which monitors the activity in local nightclubs to let you know where the hottest spots are at any given moment.


You’ve probably seen something like this. It’s worth remembering that our technologies are all too brittle, and you should expect to see more of this kind of less-than-graceful degradation.


In case the city isn’t big enough, IBM wants to bring us a smarter planet. HP wants to deploy a trillion sensors to create a central nervous system for the earth. “The planet will be instrumented, interconnected and intelligent. People want it.” But do we? Maybe yes, maybe no?


So we come back to the question, what kind of world do you want to live in? Almost everything I’ve talked about is happening today. The world is becoming digitally transformed through technology.


Many of these technologies hold great promise and will add tremendous value to our lives. But digital technology is not neutral — it has inherent affordances and biases that influence what gets built. These technologies are extremely good at concrete, objective tasks: calculating, connecting, distributing and storing, measuring and analyzing, transactions and notifications, control and optimization. So these are often fundamental characteristics of the systems that we see deployed; they reflect the materials from which they are made.


We are bringing the Internet into the physical world. Will the Internet of people, places and things be open like the Net, a commons for the good of all? Or will it be more like a collection of app stores? Will there be the physical equivalents of spam, cookies, click-wrap licensing and contextual advertising? Will Apple, Google, Facebook and Amazon own your pocket, your wallet and your identity?


And what about the abstract, subjective qualities that we value in our lives? Technology does not do empathy well. What about reflection, emotion, trust and nuance? What about beauty, grace and soul? In digital anima mundi?


In conclusion, I’d like to share two quotes. First, something Bruce Sterling said at an AR conference two years ago: “You are the world’s first pure play experience designers.” We are remaking our world, and this is a very different sort of design than we are used to.


What it is, is up to us. Howard first said it more than 25 years ago, and it has never been more true than today.


I want to acknowledge these sources for many of the images herein.