Archive for the ‘future computing visions’ Category

Google’s Project Glass is a new model for personal computing

Friday, April 6th, 2012

The concept video of Google’s Project Glass has whipped up an Internet frenzy since it was released earlier this week, with breathless coverage (and more than a little skepticism) about the alpha-stage prototype wearable devices. Most of the reporting has focused on the ‘AR glasses’ angle, with headlines like “Google Shows Off, Teases Augmented Reality Spectacles”, but I don’t think Project Glass is about augmented reality at all. The way I see it, Glass is actually about creating a new model for personal computing.

Think about it. In the concept video, you see none of the typical AR tropes like 3D animated characters, pop-up object callouts and video-textured building facades. And tellingly, there’s not even a hint of Google’s own Goggles AR/visual search product. Instead, what we see is a heads-up, hands-free, continuous computing experience tightly integrated with the user’s physical and social context. Glass posits a new use model based on a novel hardware platform, new interaction modalities and new design patterns, and it fundamentally alters our relationship to digital information and the physical environment.

This is a much more ambitious idea than AR or visual search. I think we’re looking at Sergey’s answer to Apple’s touch-based model of personal computing. It’s audacious, provocative and it seems nearly impossible that Google could pull it off, which puts it squarely in the realm of things Google most loves to do. Unfortunately in this case I believe they have tipped their hand too soon.

Photo: Thomas Hawk / Flickr

Let’s suspend disbelief for a moment and consider some of the implications of Glass-style computing. There’s a long list of quite difficult engineering, design, cultural and business challenges that Google has to resolve. Of these, I’m particularly interested in the aspects related to experience design:

Continuous computing

The rapid adoption of smartphones is ample evidence that people want to have their digital environment with them constantly. We pull them out in almost any circumstance, we allow people and services to interrupt us frequently and we feed them with a steady stream of photos, check-ins, status updates and digital footprints. An unconsciously wearable heads-up device such as Glass takes the next step, enabling a continuous computing experience interwoven with our physical senses and situation. It’s a model that is very much in the Bush/Engelbart tradition of augmenting human capabilities, but it also has the potential to exacerbate the problematic complexity of interactions as described by polysocial reality.

A continuous computing model needs to be designed in a way that complements human sensing and cognition. Transferring focus of attention between cognitive contexts must be effortless; in the Glass video, the subject shifts his attention between physical and digital environments dozens of times in a few short vignettes. Applications must also respect the unique properties of the human visual system. Foveal interaction must co-exist and not interfere with natural vision. Our peripheral vision is highly sensitive to motion, and frequent graphical activity will be an undesirable distraction. The Glass video presents a simplistic visual model that would likely fail as a continuous interface.

Continuous heads-up computing has the potential to enable useful new capabilities such as large virtual displays, telepresent collaboration, and enhanced multi-screen interactions. It might also be the long-awaited catalyst for adoption of locative and contextual media. I see continuous computing as having enormous potential and demanding deep insight and innovation; it could easily spur a new wave of creativity and economic value.

Heads-up, hands-free interaction

The interaction models and mechanisms for heads-up, hands-free computing will be make-or-break for Glass. Speech recognition, eye tracking and head motion modalities are on display in the concept video, and their accuracy and responsiveness are idealized. The actual state of these technologies is somewhat less than ideal today, although much progress has been made in the last few years. Our non-shiny-happy world of noisy environments, sweaty brows and unreliable network performance will present significant challenges here.

Assuming the baseline I/O technologies can be made to work, Glass will need an interaction language. What are the hands-free equivalents of select, click, scroll, drag, pinch, swipe, copy/paste, show/hide and quit? How does the system differentiate between an interface command and a nod, a word, a glance meant for a friend?
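One plausible answer to the "glance meant for a friend" problem, at least for gaze input, is dwell-based selection: a gaze "click" fires only when the eyes hold steady on a target for a sustained interval, which is a common way to avoid the so-called Midas-touch problem in eye-tracking interfaces. Here is a minimal sketch; the threshold values and function names are my own assumptions, not anything Google has shown.

```python
import math

DWELL_SECONDS = 0.8   # assumed dwell threshold; real systems tune this per task
RADIUS_PX = 40        # how far the gaze may wander and still count as dwelling

def detect_dwell(samples, dwell_seconds=DWELL_SECONDS, radius_px=RADIUS_PX):
    """samples: time-ordered list of (timestamp, x, y) gaze points.
    Returns the (x, y) anchor of the first completed dwell, or None."""
    anchor = None  # (t0, x0, y0) where the current candidate dwell began
    for t, x, y in samples:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if math.hypot(x - x0, y - y0) <= radius_px:
            if t - t0 >= dwell_seconds:
                return (x0, y0)  # dwell completed: treat as a "select"
        else:
            anchor = (t, x, y)  # gaze moved away: restart the dwell timer
    return None
```

A natural nod or passing glance never stays put long enough to trip the timer, so only a deliberate, held gaze registers as a command. The real design problem, of course, is doing this across speech and head motion too.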

Context

Physical and social context can add richness to the experience of Glass. But contextual computing is a hard problem, and again the Glass video treats context in a naïve and idealized way. We know from AR that the accuracy of device location and orientation is limited and can vary unpredictably in urban settings, and indoor location is still an unsolved problem. We also know that geographic location (i.e., latitude & longitude) does not translate to semantic location (e.g., “in Strand Books”).
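To make the geographic-versus-semantic gap concrete: even a perfect lat/lng fix only narrows "where you are" to every venue within the positioning error, so semantic location is at best a ranked guess. A toy sketch (the venues and coordinates below are entirely made up):

```python
import math

def meters_between(lat1, lon1, lat2, lon2):
    """Equirectangular approximation; adequate at city scale."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(dx, dy)

def candidate_venues(lat, lon, accuracy_m, venues):
    """Return venue names that plausibly contain the user, nearest first.
    venues: list of (name, lat, lon) tuples."""
    hits = []
    for name, vlat, vlon in venues:
        d = meters_between(lat, lon, vlat, vlon)
        if d <= accuracy_m:
            hits.append((d, name))
    return [name for d, name in sorted(hits)]
```

With the 20-50 meter accuracy typical of urban GPS, several storefronts usually qualify at once, which is exactly why "in Strand Books" cannot be read straight off a coordinate.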

On the other hand, simple contextual information such as time, velocity of travel, day/night, in/outdoors is available and has not been exploited by most apps. Google’s work in sensor fusion and recognition of text, images, sounds & objects could also be brought to bear on the continuous computing model.
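These simple signals are cheap enough to sketch in a few lines. The thresholds and field names below are assumptions for illustration, but they show how little it takes to derive context a continuous interface could act on, such as suppressing visual interruptions while the user is moving fast:

```python
def simple_context(hour, speed_mps):
    """hour: local hour 0-23; speed_mps: estimated from GPS deltas."""
    ctx = {}
    ctx["daypart"] = "night" if hour < 6 or hour >= 21 else "day"
    if speed_mps < 0.5:
        ctx["motion"] = "stationary"
    elif speed_mps < 3:
        ctx["motion"] = "walking"
    else:
        ctx["motion"] = "vehicle"
    # Example policy: no graphical pop-ups while traveling at vehicle speed.
    ctx["allow_visual_alerts"] = ctx["motion"] != "vehicle"
    return ctx
```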

Continuous capture

With its omnipresent camera, mic and sensors, Glass could be the first viable life recorder, enabling the “life TiVo” and total recall capabilities explored by researchers such as Steve Mann and Gordon Bell. Continuous capture will be a tremendous accelerant for participatory media services, from YouTube and Instagram-style apps to citizen journalism. It will also fuel the already heated discussions about privacy and the implications of mediated interpersonal relationships.

Of course there are many other unanswered questions here. Will Glass be an open development platform or a closed Google garden? What is the software model — are we looking at custom apps? Some kind of HTML5/JS/CSS rendering? Will there be a Glass equivalent to Cocoa Touch? Is it Android under the hood? How much of the hard optical and electrical engineering work has already been done? And of course, would we accept an even more intimate relationship with a company that exists to monetize our every act and intention?

The idea of a heads-up, hands-free, continuous model of personal computing is very interesting, and done well it could be a compelling advance. But even if we allow that Google might have the sophistication and taste required, it feels like there’s a good 3-5+ years of work to be done before Glass could evolve from concept prototype into a credible new computing experience. And that’s why I think Google has tipped their hand far too soon.

Polysocial Reality and the Enspirited World

Tuesday, February 28th, 2012

I’m doing a talk at SxSW in a couple of weeks, with my exceedingly smart friend Sally Applin (@anthropunk). Our topic is “Polysocial Reality and the Enspirited World” reflecting Sally’s PhD work on PoSR and our related interests in ubicomp, social media and augmented reality. Originally Bruce Sterling was part of our panel, but sadly the SxSW people overruled that as Bruce is already doing another keynote. Apparently nobody gets to speak twice, not even the venerable @bruces. It’s too bad, as he would have injected a great contrapuntal viewpoint. Nonetheless we are going to have a lot of fun, and I hope that a few people will show up to hear us. If you’re in Austin on the 13th, come by and say hello, we would love to see you!

Polysocial Reality and the Enspirited World

The merging of the physical and digital into a blended reality is a profound change to our world that demands examination. In this session we will explore this concept through the lenses of technology, anthropology and cyberculture. We will debate the ideas of PolySocial Reality, which describes our multiple, multiplexed, synchronous and asynchronous data connections; Augmented Reality; and the Enspirited World of people, places and things suffused with digital information energy.


je mixe ce soir!

Wednesday, November 3rd, 2010

In honor of Facebook’s announcements today about making mobile more social, I’d like to remind you of this visionary portrayal of what it will be like when Facebook is truly mobile. Looks like we’ve got a long way to go.


That’s right AR fans, it’s the Toxic Avenger feat. Orelsan performing last summer’s monster Internet dance hit, N’Importe Comment. So slip on your mindglasses, turn up the bass in your earplants, and prepare to “Like” this French frat-boy fantasy from the future. Watch carefully, because this is a precious, fleeting snapshot of the way our connected culture felt, circa mid-2010. Someday, cyborg anthropologists are going to have a field day with this thing. Je mixe ce soir!

Ozzie to MSFT execs: you’re doomed kthxbye

Tuesday, October 26th, 2010


I paraphrase, obviously. But seriously, did you read Ray Ozzie’s Dawn of a New Day? It’s his manifesto for the post-PC era, and a poignant farewell letter to Microsoft executives as he unwinds himself from the company. In Ozzie’s post, frequent readers of this space will recognize what I’ve been calling ‘the new revolution in personal computing’, the rise of a connected world of mobile, embedded and ubiquitous devices, services, sensors & actuators, and contextual transmedia; a physical, social, immersive Internet of People, Places & Things.

“All these new services will be cloud-centric ‘continuous services’ built in a way that we can all rely upon.  As such, cloud computing will become pervasive for developers and IT – a shift that’ll catalyze the transformation of infrastructure, systems & business processes across all major organizations worldwide.  And all these new services will work hand-in-hand with an unimaginably fascinating world of devices-to-come.  Today’s PC’s, phones & pads are just the very beginning; we’ll see decades to come of incredible innovation from which will emerge all sorts of ‘connected companions’ that we’ll wear, we’ll carry, we’ll use on our desks & walls and the environment all around us.  Service-connected devices going far beyond just the ‘screen, keyboard and mouse’:  humanly-natural ‘conscious’ devices that’ll see, recognize, hear & listen to you and what’s around you, that’ll feel your touch and gestures and movement, that’ll detect your proximity to others; that’ll sense your location, direction, altitude, temperature, heartbeat & health.”

– Ray Ozzie, Dawn of a New Day

Frankly, there’s nothing especially surprising about this vision of the future; many of us (including Gates and Ozzie) have been working toward similar ideas for at least 20 years. Former HP Labs head Joel Birnbaum was predicting a world of appliance/utility computing (pdf) in the ’90s. I’m sure that many of these ideas are actively being researched in Microsoft’s own labs.

What I find really interesting is that Ozzie is speaking to (and for) Microsoft, one of the largest companies in tech and also the one company that stands to be most transformed and disrupted by the future he describes. He’s giving them a wake-up call, and letting them know that no matter how disruptive the last 5 years may have seemed to the core Windows and Office franchises, despite the wrenching transition to a web-centric world, the future is here and you ain’t seen nothing yet.

And now at “the dawn of a new day – the sun having now arisen on a world of continuous services and connected devices”, Ray Ozzie is riding off into the sunset. I don’t see how that can be interpreted as a good sign.

(photo credit: WIRED)

toward virtuosity, reflection and a conscious computing experience

Friday, October 22nd, 2010

@lindastone published this short post titled The Look & Feel of Conscious Computing, which I found compelling and resonant with thoughts that have been rattling around in my head for a while:

“With a musical instrument, it’s awkward at first. All thumbs. Uncomfortable. Noise. With practice, instrument and musician become as one. Co-creating music. So it will be with personal technology. Now, a prosthetic of mind, it will become a prosthetic of being. A violinist with a violin. Us with our gadgets, embodied, attending as we choose.”

For context, Linda also pointed me toward another of her posts, A new era of post-productivity computing? where she closes with the question

“How do we usher in an era of Conscious Computing? What tools, technologies, and techniques will it take for personal technologies to become prosthetics of our full human potential?”

I’ve wrestled with similar questions in the past:

“In the arts, we speak of a talented and communicative practitioner as a virtuoso. The virtuoso performer combines technical mastery of her medium with a great depth of human expressiveness, communicating with her audience at symbolic, intuitive and emotional levels. Can we imagine a similar kind of virtuosity of communication, applied to domains that are not traditionally considered art? Can we further make this possibility accessible to more people, allowing a richer level of discourse in the walks of everyday life?

“When groups of musicians play together, they establish communication channels among themselves through the give and take of listening and leading. Great ensemble players know how to establish a state of flow, a groove, where the music takes on a vitality and life of its own, greater than the sum of the individual rhythms, pitches and timbres. What are the conditions that make such a group ‘chemistry’ possible? Could we capture that essence and apply it to the work of organizations, the building of communities, the life of families?

“As information technologies increasingly become integral to our activities, the information we use, even to our ways of thinking and perceiving, we must confront some difficult, elusive notions about the relationships between people and their tools. For instance, in what sense can the technology enhance creative, playful thinking — are we having fun? What about beauty, inspiration, spirituality, mystery? These are qualities for which humans have striven over our entire history; shall we subjugate them in the name of efficiency, convenience and immediacy? Do the artifacts we make allow people space for reflection and insight, or merely add to the numbing cacophony of digital voices demanding our attention? Is it strange to ask such questions? Not at all. The economics of information technologies seem to dictate a future where more and more of our lives will be mediated by networks and interfaces and assorted other paraphernalia of progress. We must recognize the importance of such uniquely human concerns and integrate them into our vision, or risk further dehumanization in our already fractured society.”

Okay, that was from 1994, so where are we on this? I have to say, it seems like mainstream computing has advanced very little in these areas. Apple has good intentions, and the iPad actually does a nice job of getting out of the way, letting you interact directly and physically with individually embodied apps. It’s the best of the bunch, but the iPad is no violin, no instrument of human expression. Certainly the current crop of PCs, netbooks and phones are no better.

There are a few non-mainstream computing paradigms that give me hope for a conscious computing experience. The Nike+ running system, my favorite example of embodied ubi-media, creates an inherently physical experience augmented with media and social play. Nike+ doesn’t have a broadly expressive vocabulary, but it does bring your whole body into the equation and closes the feedback loop with contextually suitable music and audio prompts.

At its best, Twitter starts to feel like a global jam session between connected minds. The rapid fire exchange of ideas, the riffing off others’ posts, the flow of a well-curated stream can sometimes feel uniquely expressive. Yes, it is primarily a mental activity and mostly disembodied, but the occasional flashes of genuine group chemistry are wonderfully suggestive of the potential for an interconnected humanity.

For me, the most interesting possibilities arise from games. There are the obvious examples of physical interaction and expression that the Wii and Kinect deliver, primarily in action games today but over time in a broader range of immersive and reflective experiences (is Wii Yoga any good?). I’m also thinking of the emerging genre of out-in-the-world games like SF0, SCVNGR and Top Secret Dance Off that send you on creative, social missions involving physicality, play, performance and discovery. Finally there is the next generation of “gameful” games, as proposed by Jane McGonigal:

What is Gameful?

We invented the word gameful! It means to have the spirit, or mindset, of a gamer: someone who is optimistic, curious, motivated, and always up for a tough challenge. It’s like the word “playful” — but gamier! Gameful games are games that have a positive impact on our real lives, or on the real world. They’re games that make us:

  • happier
  • smarter
  • stronger
  • healthier
  • more collaborative
  • more creative
  • better connected to our friends and family
  • more resilient
  • better problem-solvers
  • and better at WHATEVER we love to do when we’re not playing games.

I think the future of expressive, improvisational, conscious computing will be found at the intersection of personal sensing tools like Nike+ and Kinect, collective action tools like Twitter, and the playful engagement of gameful games. It won’t look like computing, and it won’t come in a box. It won’t be dumbed down for ‘ease of use’; it will be flowing experiences designed to make us more complex, capable and creative. It will augment our humanity, as embodied individuals embedded in a physical and social world.

the history of the future, circa 1994

Thursday, October 21st, 2010

[From the archives, a high level prediction piece that I wrote 2^4 years ago. Some of this came up in a long chat I had with @anthropunk today, and it seemed appropriate to post (apologies to longtime readers who have seen this before). To give you some reference points, in 1994 Intel shipped the 75MHz Pentium processor, Apple shipped the Newton Message Pad, Marc Andreessen and Jim Clark founded Mosaic Communications Corp (soon to become Netscape), and David Filo and Jerry Yang founded Yahoo!  How far we’ve come, and yet…]

Some things about the technological landscape of the future are fairly certain, mapped out by the trends we see today. While we cannot predict the precise manifestations of products or their impact on society, we can extrapolate along fairly straight lines to imagine the lay of the land.

Microprocessors, semiconductor memory and magnetic storage will continue to plunge headlong down the spiral of shrinking dimensions and expanding performance. The central processing element of the personal computers of 15 years ago is now the central processor of your coffeepot. Fifteen years hence, a device of that complexity may well be the central processor in your credit card while the RISC and CISC marvels of today’s desktop workstations power learning toys and portable entertainment products. We understand this trend, and we fully expect it to continue.

Networks for communication among digital devices and systems will continue to proliferate. The imperative to connect and communicate will drive organizations and individuals alike to go ‘on-line’. Islands of disconnected computers will evolve to isthmi, peninsulae, continents of computing. Home PCs will aggregate into community networks. Enterprises will resemble Internets; Internets will become Meganets. Developments occurring in research laboratories right now will lead to low cost, low power wireless components, enabling a fabric of invisible connections among people and between devices.

Information will continue to move toward a digital lingua franca. Images, sounds and words are well on their way; film and video, coming soon. Tactile, olfactory information next, perhaps? Even a semblance of virtual experience is already becoming available in digital form. The physical world literally radiates information, much of it beyond human sensory capabilities; physical, biological and chemical sensors will increasingly translate the world into binary representations. Paper, canvas, real life — these media are not dead, but their roles stand to be augmented and reexamined due to the rapid incursion of digital bits into their traditional domains. In an era of television, radio survives and thrives, movies are still shown in theaters, newspapers are still delivered, books are still read. In the coming era of digital media, we will experience even greater richness and diversity of form.

Many aspects of the future of technology are rather more uncertain, yet they carry vast potential for change. The ability to model, fabricate and manipulate structures at molecular scale leads to new conceptual approaches for the chemical and biological sciences, and indeed for electronics, optics and mechanics as well. The mathematics of nonlinear dynamic systems and complexity, still in its infancy, begins to describe a world view where the future is undetermined but the brushstrokes of the next few seconds might be predictable, and where systems behavior emerges from the undirected interaction of individual entities.

Where does all this lead? The future is uncertain if nothing else. We can merely speculate that this backdrop of pervasive digital technologies and media will weave a dense fabric of information through our lives. Much as electric power snakes invisibly through every wall in the developed world, an information utility may become an expected part of the backdrop of day to day life, with information appliances providing the interface to its users. Perhaps ‘gratuitous computing’ describes a world where ordinary objects sprout features like consumer appliances run amok and mumble to one another in vague digital whispers as we pass. Perhaps the loose associations of people and places, objects and ideas and experiences which make up our identities will coalesce into a tangible web facilitated by technology. Perhaps people’s lives will be markedly improved by technology. Perhaps not. The world will continue to change, and the outcome is far from settled. Our part is to advance the state of the art, to foment change and forward progress, and to maintain a clear perspective on the value of our work to society.

level up your life: the real world as a neverending game

Saturday, February 20th, 2010

Game designer and CMU ETC professor Jesse Schell gave this creative and mesmerizing talk at DICE 2010 that you pretty much need to watch. He starts with the unexpected success of Facebook games, the Wii and Webkinz; segues into describing the ways that games are reaching out into the physical world; and moves on to observations about game mechanisms appearing in everything from TV shows to cars. Finally, he launches into a wild extrapolation of what happens when every aspect of the world is instrumented and every action you take in your life has gameplay elements and scoring mechanisms. It’s a vision that’s more than a little dystopian, kind of like the panopticon with points, but I think you’ll find it bracing and thought provoking.  Watch for the sly iPad joke around 17:00 ;-)

+1 Knowledge Sharing to @mikeliebhold and @avantgame for pointing this out!

the massively multiplayer magazine

Friday, February 19th, 2010

This idea is a quick brainstorming sketch that brings together several threads in the spirit of combinatorial innovation. I’d love to have your feedback in the comments.

The future of magazines in a connected world

I love magazines, and I’ll bet you do too. Magazines are perhaps the most vibrant and culturally relevant form of print media, and their diversity mirrors the staggering range of human interests and obsessions. In the connected world, they have the potential to evolve into an incredibly interesting and engaging networked medium. In recent months we have seen two inspiring future design concepts: the lush Time/Sports Illustrated video, and the poetic Mag+ concept created by design firm @BERGLondon and publisher Bonnier. With high performance, connected e-reader and tablet platforms finally coming to market, it’s clear we are going to see some very exciting developments in this space. However, we should remember that the fundamental nature of connected media is very different from that of print media, and we should be careful about bringing a print-oriented mindset to a new networked medium. The features of electronic magazines should not simply be incremental extensions of the printed version, even if the physical artifacts are roughly similar in size, shape and appearance. With that in mind, I’d like to engage you in a thought experiment about what could happen when we collide digital magazines together with the global social Internet. One possibility we might imagine is massively multiplayer magazines.


Massively multiplayer what?

You’re probably familiar with massively multiplayer online games like World of Warcraft. The massively multiplayer magazine re-imagines the traditional periodical in the context of an online social game environment involving thousands of people. In this scenario, the magazine becomes a gateway into a universe of intertwined stories, knowledge, people and play, with experience design and game mechanics drawn from MMOGs, ARGs and social games. Readers become players who have profiles, scores, achievements and abilities. Players self-assemble into clans, guilds and communities. Game elements encompass traditional magazine fare such as stories, images, features and advertisements, alongside new aspects including collaborative quests, mini-games, social streams, location awareness, augmented reality and physical hyperlinks. Gameplay involves completing missions, defeating bosses, unlocking hidden features, and participating in experiences that add richness, engagement and dimensionality to the magazine’s thematic center.

Magazines are like printed Usenet

You may be thinking this is a pretty strange idea, because magazines and online games seem like completely different media with little apparent synergy.  But consider: A well-stocked newsstand’s magazine rack is a glossy reflection of a world of enthusiast niches, each one incredibly narrow and deep. From heavy metal music to needlepoint; from luxury island living to body modification culture, magazines are the proto-Usenet of publishing. Furthermore, every special interest grouping you can imagine has already established itself in some form of online presence, be it a mailing list, web forum, or social network. The inherently social, topical milieu of magazines and their enthusiast communities has much in common with the ecosystem of social, story-oriented worlds and deeply invested players of many online games. It’s not hard to imagine that online gamers and magazine readers would each be attracted to a medium that combined the best of both genres. In fact, there is ample precedent for communities passionately following and participating in stories and games across multiple media. Think of Pokemon, Survivor, and the Star Wars Universe as examples of huge cultural phenomena with stories that span books, television, games, the web and more. For more on that, you might enjoy going down the deep rabbit hole of transmedia storytelling. But let’s continue.

Possible user stories

Clearly, this new kind of magazine/game would be primarily a digital medium. Mobile tablet computers like the just-announced Apple iPad would be excellent platforms to build optimized experiences around. Moreover, a massively multiplayer magazine would also play out across websites, social forums and physical locations, in much the same way that many alternate reality games have done. With the addition of ‘clickable’ links via QR codes and similar physical links, even readers of printed magazines could be drawn into the game through their mobile phones.

So what would a massively multiplayer magazine be like? Here are a few possible user stories that begin to explore the concept; you should definitely add your own ideas in the comments:

* Each story is a context that you “check into”, much like a Foursquare location. This might show up in your Twitter stream as “I’m reading <article> with 5 other people http://j.mp/xG08U”, with a shortened link directly to the article. As you check in and comment about the article in your social stream, you accumulate points in your profile for each new reader that clicks through your link. If you are leafing through a paper copy of the magazine, you might find a QR code printed on the page, and scan it with your smartphone to “check in” and connect to the social stream about the article.

* Stories are customized based on your location. When you are physically in Paris, stories and games with a Parisian context are revealed. Reading those stories in their intended locations around the city earns you a special achievement badge for Paris. Meeting local players face to face grows your social circle and adds to your in-game reputation.

* A rock music magazine works with bands and concert promoters to place printed QR codes on posters at live shows, and readers earn badges by going to the show and scanning the codes.

* A pop culture fan magazine creates a series of 12 monthly challenges, each building on the previous one and taking players progressively deeper into a complex storyline. The challenges can only be worked out through large-scale cooperation by fans; the resolution leads to a hidden plot device in the upcoming season of a hit reality TV series.

* An advertiser sponsors a global treasure hunt, with rabbit holes, missions and puzzles embedded in the digital and print versions of a travel magazine. The prize is significant and the story engaging enough to attract tens of thousands of players and drive millions of social media mentions and impressions over the entire duration. For inspiration, take a look at the ARG called Perplex City, which offered a $200,000 prize for finding a game artifact called the Receda Cube.

* Collaboratively generated story/soundtrack pairings are recommended by your friends and other readers of the same stories. “One of your friends recommended the Cowboy Junkies channel on Pandora, to accompany this story on musician Townes Van Zandt.” Alternatively, writers and photographers offer their own musical pairings to convey mood and contextual cues for their work, similar to sound design for cinema.

* A media literacy foundation challenges teams to create an entirely new magazine, organized around crowdsourced recommendations and contributions for the best stories, photographs, video, audio and even advertisements. The contributors earn achievements and reputation scores based on readers’ ratings and social metrics. The winning team receives a grant funding the creation of their next 6 issues, and featured placement on a popular media blog.
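The check-in mechanic in the first user story above is simple enough to sketch in code. Everything here is hypothetical — the class names, the share-message format, and the one-point-per-referral rule are my own illustrative choices, not a proposed spec:

```python
class Article:
    def __init__(self, title):
        self.title = title
        self.readers = set()  # players who have checked into this story

class Magazine:
    def __init__(self):
        self.points = {}  # player name -> referral score

    def check_in(self, article, player):
        """Register a reader and return the text a client might share."""
        article.readers.add(player)
        self.points.setdefault(player, 0)
        others = len(article.readers) - 1
        return f"I'm reading {article.title} with {others} other people"

    def click_through(self, article, referrer, new_reader):
        """A new reader arrives via referrer's link: referrer earns a point."""
        if new_reader not in article.readers:
            article.readers.add(new_reader)
            self.points[referrer] = self.points.get(referrer, 0) + 1
```

The interesting design questions live just past this toy model: anti-gaming measures for the referral points, and how scores roll up into the clan and guild structures described earlier.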

These are only a few examples of the possibilities of a new kind of massively multiplayer media. There are many open questions here, obviously. Would publishers find this concept attractive? Would readers make the leap to become players in a worldwide game? How hard would such a magazine be to develop [see note 2 below], and at what cost? It’s no sure thing, but I am inclined to believe that the well-established cultural familiarity and affection for magazines, combined with the addictive and viral nature of online games like Farmville and Foursquare, and built on the mobile, social, contextual platform of the connected world, would make an incredible creative genre and a very interesting business opportunity.

What do you think? Leave comments, send me email, or tweet some feedback to @genebecker. Thanks for reading, and YMMV as always.


[1] Photo credit: mannobhai

[2] It’s worth noting that designing media to be massively multiplayer will require new skills, tools and workflows, well beyond those employed in today’s magazine publishing ecosystem; our hypothetical project will surely need to address the authoring process. If you are interested, Ben Hammersley makes this point well in a series of eloquent posts that are worth your time.