Archive for the ‘the connected world’ Category

from human-centered to thing-centered?

Thursday, April 5th, 2012

That was quick. No sooner do I say:

On further reflection, I suppose there’s some long-term risk that by focusing on human-centricity I’m making a Ptolemaic error. Maybe at some point we’ll realize that information is the central concern of the universe, and we humans are just a particularly narcissistic arrangement of information quanta.

than I come across Ian Bogost’s new book, Alien Phenomenology:

Humanity has sat at the center of philosophical thinking for too long. The recent advent of environmental philosophy and posthuman studies has widened our scope of inquiry to include ecosystems, animals, and artificial intelligence. Yet the vast majority of the stuff in our universe, and even in our lives, remains beyond serious philosophical concern.
In Alien Phenomenology, or What It’s Like to Be a Thing, Ian Bogost develops an object-oriented ontology that puts things at the center of being—a philosophy in which nothing exists any more or less than anything else, in which humans are elements but not the sole or even primary elements of philosophical interest. And unlike experimental phenomenology or the philosophy of technology, Bogost’s alien phenomenology takes for granted that all beings interact with and perceive one another.

Not sure if this will change anything for me, but I’m open to the journey. Click.

on human-centered personal computing

Monday, April 2nd, 2012

In my last post, I wrote:

So if there’s a major transition this industry needs to go through, it’s the shift from a box-centered view of personal computers, to a human-centered view of personal computing.

Folks, it’s 2012. I shouldn’t even have to say this, right? We all know that personal computing has undergone a radical redefinition, and is no longer something that can be delivered on a single device. Instead, personal computing is defined by a person’s path through a complex melange of physical, social and information contexts, mediated by a dynamic collection of portable, fixed and embedded digital devices and connected services.

‘Personal computing’ is what happens when you live your life in the connected world.

‘Human-centered’ means the central concern of personal technology is enabling benefits and value for people in their lives, across that messy, diverse range of devices, services, media and contexts. Note well, this is not the same as ‘customer-centered’ or ‘user-centered’, because customer and user are words that refer to individual products made by individual companies. Humans live in the diverse ecosystem of the connected world.

If you haven’t made the shift from designing PCs, phones and tablets to enabling human-centered personal computing, you’d better get going.

 

a word about this ‘post-PC’ notion

Monday, April 2nd, 2012

Lately we have seen a spate of pronouncements from industry executives, analysts & pundits about the so-called ‘post-PC’ era. Now, I completely understand the competitive, editorial and PR value of declaring The Next New Thing. But in using the term ‘post-PC’, these nominal thought leaders are making a faulty generalization and committing a category error in order to serve up a simplistic, attention-getting headline.

The faulty generalization is obvious. We’re no more ‘post-PC’ than we are ‘post-radio’, ‘post-book’, or ‘post-friends’. Are new devices and software platforms displacing some usage of desktop and notebook PCs? Of course. Are some companies going to rise and fall as a result? Sure, but are we witnessing the demise of PCs as a product category? Not even close.

Of more concern to me is the category error inherent in ‘post-PC’ thinking. ‘Post-PC’ is a narrative about boxes: PC-shaped boxes being superseded by phone- and tablet-shaped boxes. It’s understandable, since most PC, phone and tablet companies define and structure themselves around the boxes they produce; analysts count boxes and reviewers write articles about the latest boxes. But people who buy and use these devices don’t want boxes per se; they want to listen to music, play games, connect with friends, find someplace to eat, write some code or get their work done, whenever and wherever and on whatever device it makes the most sense to do so. This ‘post-PC’ notion is disconnected from the real value that people are seeking from their investment in technology products.

So if there’s a major transition this industry needs to go through, it’s the shift from a box-centered view of personal computers, to a human-centered view of personal computing. If I were running a PC company, that’s the technical, operational and cultural transformation I’d be driving in my every waking moment.

in digital anima mundi

Saturday, March 17th, 2012

My SxSW session with Sally Applin, PolySocial Reality and the Enspirited World, seemed to be well received. The group that attended was well-engaged and we had a fertile Q&A discussion. Sally focused her keen anthropological lens on the study of our increasingly complex communications with her model of PolySocial Reality; for more on PoSR see Sally’s site. [Update 3/20: Sally posted her slides on PolySocial Reality]. My bit was about the proximate future of pervasive computing, as seen from a particular viewpoint. These ideas are not especially original here in 02012, but hopefully they can serve as a useful nudge toward awareness, insight and mindful action.

What follows is a somewhat pixelated re-rendering of my part of the talk.

This talk is titled “in digital anima mundi (the digital soul of the world).” As far as I know Latin doesn’t have a direct translation for ‘digital’, so this might not be perfect usage. Suggestions welcomed. Anyway, “the digital soul of the world” is my attempt to put a name to the thing that is emerging, as the Net begins to seep into the very fabric of the physical world. I’m using terms like ‘soul’ and ‘enspirited’ deliberately — not because I want to invoke a sacred or supernatural connection, but rather to stand in sharp contrast to technological formulations like “the Internet of Things”, “smart cities”, “information shadows” and the like.

The image here is from Transcendenz, the brilliant thesis project of Michaël Harboun. Don’t miss it.

 

The idea of anima mundi, a world soul, has been with us for a long time. Here’s Plato in the 4th century BC.

 

Fast forward to 1969. This is a wonderful passage from Philip K. Dick’s novel Ubik, where the protagonist Joe Chip has a spirited argument with his apartment door. So here’s a vision of a world where physical things are animated with some kind of lifelike force. Think also of the dancing brooms and talking candlesticks from Disney’s animated films.

 

In 1982, William Gibson coined the term ‘cyberspace’ in his short story Burning Chrome, later elaborated in his novel Neuromancer. Cyberspace was a new kind of destination, a place you went to through the gateway of a console and into the network. We thought about cyberspace in terms of…

 

Cities of data…

 

Worlds of Warcraft…

 

A Second Life.

 

Around 1988, Mark Weiser and a team of researchers at Xerox PARC invented a new computing paradigm they called ubiquitous computing, or ubicomp. The idea was that computing technologies would become ubiquitous, embedded in the physical world around us. Weiser’s group conceived of and built systems of inch-scale, foot-scale and yard-scale computers; these tabs, pads and boards have come to life in today’s iPods, smartphones, tablets and flat panel displays, in form factor if not entirely in function.

 

In 1992 Rich Gold, a member of the PARC research team, gave a talk titled Art in the Age of Ubicomp. This sketch from Gold’s talk describes a world of everyday objects enspirited with ubicomp. More talking candlesticks, but with a very specific technological architecture in mind.

 

Recently, Gibson described things this way: cyberspace has everted. It has turned inside out, and we no longer go “into the network”.

 

Instead, the network has gone into us. Digital data and services are embedded in the fabric of the physical world.

 

Digital is emerging as a new dimension of reality, an integral property of the physical world. Length, width, height, time, digital.

 

Since we only have this world, it’s worth exploring the question of whether this is the kind of world we want to live in.

 

A good place to begin is with augmented reality, the idea that digital data and services are overlaid on the physical world in context, visible only when you look through the right kind of electronic window. Today that’s smartphones and tablets; at some point that might be through a heads-up display, the long-anticipated AR glasses.
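For the technically curious, here’s a minimal sketch of the core calculation behind that kind of overlay (my own illustration, not any particular AR browser’s code): given the device’s GPS fix and compass heading, work out the distance and bearing to a geotagged point, then decide whether it falls inside the camera’s field of view. The point data, field of view and range cutoff are all assumptions for the example.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees) from point 1 to point 2."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    brg = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, brg

def visible_overlays(device_lat, device_lon, heading_deg, points, fov_deg=60, max_range_m=1000):
    """Return the geotagged points the AR view should draw, with their angular offset from center."""
    hits = []
    for p in points:  # each p is {"name": ..., "lat": ..., "lon": ...}
        dist, brg = distance_and_bearing(device_lat, device_lon, p["lat"], p["lon"])
        offset = (brg - heading_deg + 180) % 360 - 180  # signed angle relative to where the camera points
        if dist <= max_range_m and abs(offset) <= fov_deg / 2:
            hits.append({"name": p["name"], "distance_m": round(dist), "offset_deg": round(offset, 1)})
    return hits
```

A real AR browser would then map that angular offset (plus the device’s pitch) into screen coordinates and draw the annotation there; the principle is the same whether the window is a phone or a pair of glasses.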

 

Game designers are populating AR space around us with ghosts and zombies.

 

Geolocative data are being visualized in AR, like this crime database from SpotCrime.

 

History is implicit in our world; historical photos and media can make these stories explicit and visible, like this project on the Stanford University quad.

 

Here’s a 3D reconstruction, a simulation of the Berlin Wall as it ran through the city of Berlin.

 

Of course AR has been applied to a lot of brand marketing campaigns in the last year or two, like this holiday cups app from Starbucks.

 

AR is also being adopted by artists and culture jammers, in part as a way to reclaim visual space from the already pervasive brand encroachment we are familiar with.

 

We also have the Internet of Things, the notion that in just a few years there will be 20, 30, 50 billion devices connected to the Net. Companies like Cisco and Intel see huge commercial opportunities and a wide range of new applications.

 

An Internet of Things needs hyperlinks, and you can think of RFID tags, QR codes and the like as physical hyperlinks. You "click" on them in some way, and they invoke a nominally relevant digital service.
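As a hedged sketch of what that "click" might involve (the registry entries and function names below are invented for illustration): the tag itself carries nothing but an identifier, and a network-side resolver maps that identifier to whatever digital service is currently bound to the object.

```python
# Hypothetical resolver for physical hyperlinks: the tag holds only an ID;
# the binding to a digital service lives on the network and can change over time.
TAG_REGISTRY = {
    "urn:epc:id:sgtin:0614141.107346.2017": "https://example.com/products/2017",  # RFID/EPC-style ID
    "bus-stop-42-poster": "https://example.com/timetables/route-7",               # QR-code payload
}

def resolve_tag(tag_id: str):
    """Look up the service currently bound to a scanned tag, if any."""
    return TAG_REGISTRY.get(tag_id)

def on_scan(tag_id: str) -> None:
    """Called by the reader (an NFC phone, a QR scanner) when a tag is 'clicked'."""
    url = resolve_tag(tag_id)
    if url:
        print(f"Opening {url} for tag {tag_id}")
    else:
        print(f"No service registered for tag {tag_id}")
```

The interesting design question is who controls that registry, which is exactly the open-versus-app-store tension I come back to at the end of this talk.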

 

RFID and NFC have seen significant uptake in transit and transportation. In London, your Will and Kate commemorative Oyster card is your ticket to ride the Underground. In Hong Kong and Japan, the Octopus and Suica cards let you not only ride the trains but also buy from vending machines and pay for on-street parking. In California we have FasTrak for our cars, allowing automated payment at toll booths. These systems improve the efficiency of infrastructure services and provide convenience to citizens. However, they are also becoming control points for access to public resources, and vast amounts of data are generated and mined from the digital footprints we leave behind.

 

Sensors are key to the IoT. Botanicalls is a product from a few years ago, a communicating moisture sensor for your houseplants. When the soil gets dry, the Botanicall sends you a tweet to let you know your plant is thirsty.
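A hedged sketch of the idea, not Botanicalls’ actual firmware (the threshold, the sensor read and the posting function are all stand-ins): poll the soil moisture sensor, and when the reading drops below a threshold, send the message once and don’t nag again until the plant has been watered.

```python
import random
import time

DRY_THRESHOLD = 0.25  # illustrative value: fraction of full-scale moisture

def read_moisture() -> float:
    """Stand-in for the real sensor read (an ADC channel on the plant probe)."""
    return random.uniform(0.0, 1.0)

def send_tweet(message: str) -> None:
    """Stand-in for posting via whatever social/notification API the device uses."""
    print(f"tweet: {message}")

def monitor(poll_seconds: int = 3600) -> None:
    thirsty = False
    while True:
        moisture = read_moisture()
        if moisture < DRY_THRESHOLD and not thirsty:
            send_tweet("Water me, please. Soil moisture is low.")
            thirsty = True   # don't repeat the complaint on every poll
        elif moisture >= DRY_THRESHOLD:
            thirsty = False  # the plant was watered; re-arm the alert
        time.sleep(poll_seconds)
```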

 

More recently, the EOS Talking Tree is an instrumented tree that has a Facebook page and a Twitter account with more than 4000 followers. That’s way more than me.

 

This little gadget is the Rymble, billed by its creators as an emotional Internet device. You connect it with your Facebook profile, and it responds to activity by spinning around, playing sounds and flashing lights in nominally meaningful ways. This is interesting; not only are physical things routinely connected to services, but services are sprouting physical manifestations.

 

This is a MEMS sensor, about 1mm across, an accelerometer & gyroscope that measures motion. If you have a smartphone or tablet, you have these inside to track the tilt, rotation and translation of the device. These chips are showing up in a lot of places.
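To give a feel for how those raw readings become the tilt your phone reports, here’s a minimal sketch (my own illustration, not any vendor’s driver) that estimates pitch and roll from a single 3-axis accelerometer sample; real devices fuse this with the gyroscope to stay accurate while the device is moving.

```python
import math

def tilt_from_accelerometer(ax: float, ay: float, az: float):
    """Estimate pitch and roll (degrees) from one accelerometer sample,
    assuming the device is roughly static so gravity dominates the reading."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat on a table: gravity is all on the z axis, so pitch and roll are both 0.
print(tilt_from_accelerometer(0.0, 0.0, 9.81))
```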

 

Some of you probably have a Fitbit, Nike+, FuelBand or Withings scale. Welcome to the ‘quantified self’ movement. These devices sense your physical activity, your sleep and so on, and feed the data into services and dashboards. They can be useful, fun and motivating, but know also that your physical activities are being tracked, recorded, gamified, shared and monetized.

 

Insurance companies are now offering sensor modules you can install on your car. They will provide you with metered, pay-as-you-drive insurance, with variable pricing based on the risk of when, where and how safely you drive.
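To illustrate what ‘variable pricing based on when, where and how safely you drive’ might reduce to, here’s a toy calculation; the per-mile rate and the risk multipliers are entirely made up, not any insurer’s actual model.

```python
# Entirely hypothetical pay-as-you-drive pricing: a per-mile base rate scaled by
# risk multipliers derived from the telematics module's data.
BASE_RATE_PER_MILE = 0.05  # dollars, illustrative only

def monthly_premium(miles: float, night_fraction: float, harsh_brakes_per_100mi: float) -> float:
    night_factor = 1.0 + 0.5 * night_fraction             # more night driving -> higher assumed risk
    braking_factor = 1.0 + 0.02 * harsh_brakes_per_100mi   # frequent hard braking -> higher assumed risk
    return round(miles * BASE_RATE_PER_MILE * night_factor * braking_factor, 2)

# 600 miles in a month, 20% of them at night, 5 harsh braking events per 100 miles:
print(monthly_premium(600, 0.20, 5))  # -> 36.3
```

The point is less the arithmetic than the fact that every trip you take becomes an input to someone else’s pricing model.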

 

Green Goose wants you to brush your teeth. If you do a good job, you’ll get a nice badge.

 

How about the Internet of Babies? This is a real product, announced a couple of weeks ago at Mobile World Congress. Sensors inside the onesie detect baby’s motion and moisture content.

 

Here’s a different wearable concept from Philips Design, the Bubelle Dress that senses your mood and changes colors and light patterns in response.

 

So physical things, places and people are becoming gateways to services, and services are colonizing the physical world. Microsoft’s Kinect is a great example of a sensor that bridges physical and digital; the image is from a Kinect depth camera stream. This is how robots see us.

 

If I were a service, I think I’d choose some of these robots for my physical instantiation. You’ve probably seen these — DARPA’s Alpha Dog all-terrain robotic pack horse, DARPA’s robot hummingbird, Google’s self-driving cars. You might not think of cars as robots, but these are pretty much the same kinds of things.

 

Robots also come in swarms. This is a project called Electronic Countermeasures by Liam Young. A swarm of quadrotor drones forms a dynamic pirate wireless network, bringing connectivity to spaces where the network has failed or been jammed. When the police drones come to shoot them down, they disperse and re-form elsewhere in the city.

 

A team at Harvard is creating RoboBees. This is a flat multilayer design that can be stamped out in volume. It is designed so that the robot bee pops up and folds like origami into the shape at top right. I wonder what kind of service wants to be a swarm of robotic bees?

 

On a larger scale, IBM wants to build you a smarter city. There are large smart city projects around the globe, being built by companies like IBM, Cisco and Siemens. They view the city as a collection of networks and systems – energy, utilities, transportation etc – to be measured, monitored, managed and optimized. Operational efficiency for the city, and convenience for citizens.

 

But we as individuals don’t experience the city as a stack of infrastructures to be managed. Here’s Italo Calvino in his lovely book Invisible Cities. “Cities, like dreams, are made of desires and fears…the thread of their discourse is secret, their rules absurd.”

 

Back at ground level in the not-so-smart city of today, displays are proliferating. Everywhere you turn, public screens are beaming messages from storefronts, billboards and elevators.

 

We’re getting close to the point where inexpensive, flexible plastic electronics and displays will be available. When that happens, every surface will be a potential site for displays.

 

We’re also seeing cameras becoming pervasive in public places. When you see a surveillance camera, do you think it’s being monitored by a security guard sitting in front of a bank of monitors, as in so many movies? More likely, what’s behind the camera is a sophisticated computer vision system like this one from Quividi, which constantly analyzes the scene to determine things like the gender, age and attention of people passing by.
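As a rough sketch of the kind of analysis such a system performs (a toy illustration using OpenCV’s stock face detector, not Quividi’s product): simply counting frontal faces in the camera feed gives a crude ‘how many people are facing this screen right now’ signal, and the commercial systems layer age, gender and dwell-time estimation on top of that.

```python
import cv2

# OpenCV's bundled Haar cascade for frontal faces; a detected frontal face is a
# crude proxy for "someone is facing the display".
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(frame) -> int:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

cap = cv2.VideoCapture(0)  # the camera behind the sign
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(f"people facing the display: {count_faces(frame)}")
cap.release()
```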

 

A similar system from Intel called CognoVision is being used in a service called SceneTap, which monitors the activity in local nightclubs to let you know where the hottest spots are at any given moment.

 

You’ve probably seen something like this. It’s worth remembering that our technologies are all too brittle, and you should expect to see more of this kind of less-than-graceful degradation.

 

In case the city isn’t big enough, IBM wants to bring us a smarter planet. HP wants to deploy a trillion sensors to create a central nervous system for the earth. “The planet will be instrumented, interconnected and intelligent. People want it.” But do we? Maybe yes, maybe no?

 

So we come back to the question: what kind of world do you want to live in? Almost everything I’ve talked about is happening today. The world is being digitally transformed.

 

Many of these technologies hold great promise and will add tremendous value to our lives. But digital technology is not neutral — it has inherent affordances and biases that influence what gets built. These technologies are extremely good at concrete, objective tasks: calculating, connecting, distributing and storing, measuring and analyzing, transactions and notifications, control and optimization. So these are often fundamental characteristics of the systems that we see deployed; they reflect the materials from which they are made.

 

We are bringing the Internet into the physical world. Will the Internet of people, places and things be open like the Net, a commons for the good of all? Or will it be more like a collection of app stores? Will there be the physical equivalents of spam, cookies, click-wrap licensing and contextual advertising? Will Apple, Google, Facebook and Amazon own your pocket, your wallet and your identity?

 

And what about the abstract, subjective qualities that we value in our lives? Technology does not do empathy well. What about reflection, emotion, trust and nuance? What about beauty, grace and soul? In digital anima mundi?

 

In conclusion, I’d like to share two quotes. First, something Bruce Sterling said at an AR conference two years ago: “You are the world’s first pure play experience designers.” We are remaking our world, and this is a very different sort of design than we are used to.

 

What it is, is up to us. Howard first said it more than 25 years ago, and it has never been more true than today.

 

I want to acknowledge these sources for many of the images herein.

 

Polysocial Reality and the Enspirited World

Tuesday, February 28th, 2012

I’m doing a talk at SxSW in a couple of weeks, with my exceedingly smart friend Sally Applin (@anthropunk). Our topic is “Polysocial Reality and the Enspirited World” reflecting Sally’s PhD work on PoSR and our related interests in ubicomp, social media and augmented reality. Originally Bruce Sterling was part of our panel, but sadly the SxSW people overruled that as Bruce is already doing another keynote. Apparently nobody gets to speak twice, not even the venerable @bruces. It’s too bad, as he would have injected a great contrapuntal viewpoint. Nonetheless we are going to have a lot of fun, and I hope that a few people will show up to hear us. If you’re in Austin on the 13th, come by and say hello, we would love to see you!

Polysocial Reality and the Enspirited World

The merging of the physical and digital into a blended reality is a profound change to our world that demands examination. In this session we will explore this concept through the lenses of technology, anthropology and cyberculture. We will debate the ideas of PolySocial Reality, which describes our multiple, multiplexed, synchronous and asynchronous data connections; Augmented Reality; and the Enspirited World of people, places and things suffused with digital information energy.

 

AR photography

Tuesday, April 26th, 2011

Last week at Where 2.0 and Wherecamp, the air was full of AR augments. Between the locative photos in the Instagram layer, the geotagged tweets in TweepsAround, and the art/protest layer called freespace, there were many highly visual, contextually interesting AR objects being generated, occupying and flowing through the event spaces. These were invisible of course, until viewed through the AR lens. I found myself becoming very aware of this hidden dimension, wondering what new objects might have appeared, what I might encounter if I peered through the looking glass right here, right now. And then I found myself taking pictures in AR, because I was discovering moments that seemed worth capturing and sharing.

Larry and Mark weren’t physically at Where 2.0, but their perceived presence loomed large over the proceedings. Those are clever mashups on the Obey Giant theme as well; what are they trying to say here?

At Wherecamp on the Stanford campus, locative social media were very much in evidence. Here, camp organizer @anselm and AR developer @pmark were spotted in physical/digital space.

The freespace cabal apparently thought the geo community would be receptive to their work, although it seemed some of the messages were aimed at a different audience. The detention of Chinese artist Ai Weiwei is a charged topic, certainly.

So you’ll note that although these are all screenshots from the AR view in Layar, I’m referring to them as photographs in their own right. It’s a subtle shift, but an interesting one. For me, this new perspective is driven by several factors: the emergence of visually interesting and contextually relevant AR content, the idea that AR objects are vectors for targeted messages, and the new screenshot and share functions which make Layar seem more like a social digital camera app. I’m finding myself actively composing AR photos, and thinking about what content I could create that would make good AR pictures other people would want to take. Oh, and that awkward AR holding-your-phone-up gesture? I’m taking pictures, what could be more natural?

AR photography feels like it might be important. What do you think?

 

augmented hypersocial media

Monday, April 25th, 2011

Christopher and I had this funny exchange the other day. Physical, digital and social worlds interwoven, with many border crossings; I guess this would be an example of what @anthropunk calls “polysocial reality.”

It started when I found @jewelia‘s Instagram pic from the Where 2.0 stage in the new Instagram AR layer in Layar. I took a screenshot:

and shared it on Twitter:

A bit later, I saw my tweet in the TweepsAround layer, and I took a screenshot:

and shared that one to Twitter too:

Then Christopher @endurablegoods got in on the fun:

Of course that was bait, so I snapped a photo in Color:

and shared it on Twitter:

But Christopher was not to be outdone:

And in the end:

We live in interesting times.

 

hacking space and time

Tuesday, April 19th, 2011

[cross-posted from the Layar blog]

In my recent Ignite talk Hijacking the Here and Now: Adventures in Augmented Reality, I showed examples of how creative people are using AR in ways that modify our perceptions about time and space. Now, Ignite talks are only 5 minutes long and I think this is a big idea that’s worth a deeper look. So here’s my claim: I assert that one of the most natural and important uses of AR as a creative medium is hacking space and time to explore and make sense of the emerging physical+digital world.

When you look at who the true AR enthusiasts are, who is doing the cutting edge creative work in AR today, it’s artists, activists and digital humanities geeks. Their projects explore and challenge the ideas of ownership and exclusivity of physical space, and the flowing irreversibility of time. They are starting to see AR as the emergence of a new construction of reality, where the physical and digital are no longer distinct but instead are irreversibly blended. Artist Sander Veenhof is attracted to the “infinite dimensions” of AR. Stanford Knight Fellow Adriano Farano sees AR ushering in an era of “multi-layer journalism”. Archivist Rick Prelinger says “History should be like air,” immersive, omnipresent and free. And in their recent paper Augmented Reality and the Museum Experience, Schavemaker et al write:

In the 21st century the media are going ambient. TV, as Anna McCarthy pointed out in Ambient Television (2001), started this great escape from domesticity via the manifold urban screens and the endless flat screens in shops and public transportation. Currently the Internet is going through a similar phase as GPS technology and our mobile devices offer via the digital highway a move from the purely virtual domain to the ‘real’ world. We can collect our data everywhere we desire, and thus at any given moment transform the world around us into a sort of media hybrid, or ‘augmented reality’. [emphasis mine]

When the team behind PhillyHistory.org augments the city of Philadelphia with nearly 90,000 historical photographs in AR, they are actively modifying our experience of the city’s space and connecting us to moments in time long past. With its ambitious scope and scale, this seems a particularly apt example of transforming the world into a media hybrid.

In their AR piece US/Iraq War Memorial, artists Mark Skwarek and John Craig Freeman transpose the locative datascape of casualties in the Iraq War from Wikileaks onto the northeastern United States, with the location of Baghdad mapped onto the coordinates of Washington DC. In addition to spatial hackery evocative of Situationist psychogeographic play, this work makes a strong political statement about control of information, nationalist perspectives and the cultural abstraction of war.
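The underlying geometry is a straightforward translation of the datascape. Here’s a hedged sketch of that transposition (my reading of the piece, not the artists’ actual code, and a crude flat offset rather than a proper map projection): shift every casualty coordinate by the offset that carries Baghdad onto Washington DC.

```python
# Approximate anchor coordinates (latitude, longitude).
BAGHDAD = (33.3152, 44.3661)
WASHINGTON_DC = (38.9072, -77.0369)

# Offset that carries Baghdad onto Washington DC.
DLAT = WASHINGTON_DC[0] - BAGHDAD[0]
DLON = WASHINGTON_DC[1] - BAGHDAD[1]

def transpose(lat: float, lon: float):
    """Map a coordinate from the Iraq datascape onto the northeastern United States."""
    return lat + DLAT, lon + DLON

# A point recorded near Baghdad lands near DC in the transposed memorial.
print(transpose(33.40, 44.40))  # -> roughly (38.99, -77.00)
```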


Now let’s talk about this word, ‘hacking’. Actually, you’ll note that I used the term ‘hijacking’ as well, so let’s include that too. My intent is to evoke the tension of multiple meanings: Hacking in the sense of gaining deep understanding and mastery of a system in order to modify and improve it, and as a visible demonstration of a high degree of proficiency. Also, hacking in the sense of making unauthorized intrusions into a system, including both white hat and black hat variations. I use ‘hijacking’ in the sense of a mock takeover, like the Black Eyed Peas playfully hijacking the myspace.com website for publicity purposes, but also hijacking as an antagonistic, possibly malign, and potentially unlawful attack. In the physical+digital augmented world, I expect we will see a wide variety of hacking and hijacking behaviors, with both positive and negative effects. For example, in Skwarek’s piece with Joseph Hocking, the leak in your hometown, the corporate logo of BP becomes the trigger for an animated re-creation of the iconic broken pipe at the Macondo wellhead, spewing AR oil into your location. It is possible to see this as an inspired spatial hack and a biting social commentary, but I have no doubt BP executives would consider it a hijacking of their brand in the worst way.

In his book Smart Things, ubicomp experience designer Mike Kuniavsky asks us to think of digital media about physical entities as ‘information shadows’; I believe the work of these AR pioneers points us toward a future where digital information is not a subordinate ‘shadow’ of the physical, but rather a first-class element of our experience of the world. Even at this early stage in the development of the underlying technology, AR is a consequential medium of expression that is being used to tell meaningful stories, make critical statements, and explore the new dimensionality of a blended physical+digital world. Something important is happening here, and hacking space and time through AR is how we’re going to understand and make sense of it.

Ozzie to MSFT execs: you’re doomed kthxbye

Tuesday, October 26th, 2010

Ray_Ozzie_Wired-250px

I paraphrase, obviously. But seriously, did you read Ray Ozzie’s Dawn of a New Day? It’s his manifesto for the post-PC era, and a poignant farewell letter to Microsoft executives as he unwinds himself from the company. In Ozzie’s post, frequent readers of this space will recognize what I’ve been calling ‘the new revolution in personal computing’, the rise of a connected world of mobile, embedded and ubiquitous devices, services, sensors & actuators, and contextual transmedia; a physical, social, immersive Internet of People, Places & Things.

“All these new services will be cloud-centric ‘continuous services’ built in a way that we can all rely upon.  As such, cloud computing will become pervasive for developers and IT – a shift that’ll catalyze the transformation of infrastructure, systems & business processes across all major organizations worldwide.  And all these new services will work hand-in-hand with an unimaginably fascinating world of devices-to-come.  Today’s PC’s, phones & pads are just the very beginning; we’ll see decades to come of incredible innovation from which will emerge all sorts of ‘connected companions’ that we’ll wear, we’ll carry, we’ll use on our desks & walls and the environment all around us.  Service-connected devices going far beyond just the ‘screen, keyboard and mouse’:  humanly-natural ‘conscious’ devices that’ll see, recognize, hear & listen to you and what’s around you, that’ll feel your touch and gestures and movement, that’ll detect your proximity to others; that’ll sense your location, direction, altitude, temperature, heartbeat & health.”

– Ray Ozzie, Dawn of a New Day

Frankly, there’s nothing especially surprising about this vision of the future; many of us (including Gates and Ozzie) have been working toward similar ideas for at least 20 years. Former HP Labs head Joel Birnbaum was predicting a world of appliance/utility computing (pdf) in the ’90s. I’m sure that many of these ideas are actively being researched in Microsoft’s own labs.

What I find really interesting is that Ozzie is speaking to (and for) Microsoft, one of the largest companies in tech and also the one company that stands to be most transformed and disrupted by the future he describes. He’s giving them a wake-up call, and letting them know that no matter how disruptive the last 5 years may have seemed to the core Windows and Office franchises, despite the wrenching transition to a web-centric world, the future is here and you ain’t seen nothing yet.

And now at “the dawn of a new day – the sun having now arisen on a world of continuous services and connected devices”, Ray Ozzie is riding off into the sunset. I don’t see how that can be interpreted as a good sign.

(photo credit: WIRED)

the new revolution in personal computing

Friday, February 19th, 2010

One of our core themes for the connected world is that we are living through an unprecedented confluence of new technologies that unleashes innovation and fundamentally transforms industries. In keeping with this view, in the last few months we have seen a tremendous wave of new technology products and developments from a wide range of companies. Taken separately, many of these announcements are significant and a few are game-changing, not so unusual for our industry. However, when viewed collectively they add up to nothing less than a new revolution in personal computing.

Innovation is happening at every level of personal systems, from processor architecture and devices to social media and advertising. The very idea of a personal system is broadening rapidly to encompass mobile, embedded and cloud systems, identity, context and the physical world.

In core hardware, Qualcomm’s Snapdragon platform garnered significant design wins at HP, Google and many others, while Apple has developed their own ARM-based A4 system-on-a-chip. Nvidia’s new Tegra 2 mobile processor also appears to be picking up design wins.

In system software, Google released the open source Chromium OS while their Android platform continued to gather design wins. Microsoft announced a completely redesigned Windows Phone 7 OS, Intel and Nokia merged Moblin and Maemo into the MeeGo linux platform, Symbian3 launched, and Apple extended their iPhone OS to a major new platform.

In devices, Apple announced the iPad, Google jumped into the hardware business with the Nexus One, HP and others previewed tablet PCs, and a stack of new E-readers launched from Barnes & Noble, Hearst, Plastic Logic, and more.

Application ecosystems continued to heat up as Apple’s App Store crossed 100,000 apps and 3 billion downloads. Intel introduced the AppUp store for netbooks, and a broad consortium of carriers and device makers launched the oddly named Wholesale Applications Community for mobile apps.

The social media frenzy continued unabated, with Facebook hitting 400 million users (third in country population behind China and India!), Twitter passing 1 billion tweets per month, and Google’s Buzz launching like a rocket with millions of users before running into a buzzsaw of criticism for their tone-deaf approach to privacy and usability. A recent analysis showed Facebook driving more traffic to major web destinations than Google, signalling a dramatic shift from organic search to friend recommendations for finding information online.

Google acquired AdMob while Apple bought Quattro Wireless, pointing to a major battle for mobile advertising as well as a very provocative business model play for Apple.

Foursquare, the mobile location-based social game that is a favorite of the early-adopter tribe, inked deals with major media properties including Bravo TV, Conde Nast’s Lucky Magazine, Zagat guides, HBO, Warner and the New York Times.

The race to capture, index and augment the physical world further intensified. Microsoft’s Bing Maps and Google’s Street View each showed major new features, including integrating users’ photographs seamlessly into their visual canvases. Street View now has capture operations in 30 countries on 6 continents, and they are managing a fast-growing multi-petabyte store of image and lidar data (1 PB = 1 million GB). Meanwhile NYC startup Everyscape raised $6M from SK Telecom to expand their real-world capture into Asia, and SF-based Earthmine opened their high-resolution 3D city point cloud database to developers.

Google also released Goggles, a mobile app for Android devices that provides visual recognition, identification, OCR and search for physical world objects such as books, products, and landmarks. Nokia began a pilot of their mobile Point & Find service with bus shelter advertising in Colchester UK. Augmented reality startup Layar added $3.4M in funding and a global mobile phone distribution deal, signalling growing commercial interest in overlaying the real world with digital media and experiences.

In the realm of open innovation we saw grass-roots networks mount a groundswell of response to the disastrous earthquake in Haiti. Open source platform Ushahidi, mapping and geoweb experts from OpenStreetMap, and hackers at worldwide self-organizing Crisis Camps provided tools and expertise to support a wide range of relief efforts on the ground in Port-au-Prince.

Lastly, in two fascinating signs that the future is upon us, HP announced that it was getting into 3D printers through a deal with Stratasys, while San Diego outfit Organovo announced the first commercial 3D bio-printer for manufacturing human tissue and organs. It really doesn’t get much more personal than that.

In the 40-plus years since Douglas Engelbart created the mother of all demos, the personal computer has fundamentally transformed the way we work, play, create, communicate, shop, learn and live. Now we find ourselves at the cusp of a new revolution, where personal computing is no longer synonymous with the personal computer. The new personal computing is mobile, embedded, networked, virtual, social, contextual, wearable and physical. And it’s here. Are you ready?