I am at the TAUS user conference at Seattle’s Edgewater Hotel this week. This morning’s keynote speaker, Chris Pratley from Microsoft Labs, shared his predictions about what’s coming in the next 5 to 10 years. His predictions are based on prototypes already in Microsoft and university labs. While his talk wasn’t about translation specifically, it’s part of the future he sees for mobile, wearable devices.
So what can we expect? Here’s a summary of the highlights. I think we’ll see a lot of these within five years.
Smartphones will go away, and will be replaced by wearable devices and foldable thin-film displays. Google has already demoed this with Google Glass. Microsoft has similar initiatives underway (we can assume Apple does too).
It is possible to project a 3D virtual object into your environment. These devices can track your eye and project light directly onto your retina. To the viewer, it appears to be a three-dimensional object that is part of the environment. Chris emphasized this is real, it works, and only needs to be miniaturized.
Ambient information. Augmented reality is a built-in feature of these devices. They’ll project information about your external environment, floating in front of you whenever you want it. No more fishing in your pocket to check your email or search for directions. Chris used the example of ambient translation, where the audio in your environment is translated continuously, not to provide perfect translation, but to give you a sense of what’s going on.
Gestures will replace touch screens. If you’ve played with Kinect, you’ve seen a glimpse of what this will be like. Gesture-based interfaces let you manipulate objects in a three-dimensional environment, draw, and type much more efficiently than with touch screens and pointing devices (like a mouse).
Situational interfaces. These devices will also be a lot smarter about figuring out what information matters most at the moment, based on where you are, your activity, and other factors. Rather than making you ask for information explicitly, the device will highlight what it thinks is most important at the time. For example, if it senses you are in motion, it will emphasize location and navigation feeds. If you’re sitting at your desk, it will emphasize other sources of information.
All in all, it’s one of the more interesting technology talks I’ve been to in a while. What’s most interesting to me is that everything is based on real-world devices that are currently in use in the labs and are well along the way to becoming consumer products.
It’s also good to see that Microsoft is innovating again. As a company, it’s been the butt of long-running jokes, but if what’s in the lab is a sign of things to come, I wouldn’t write them off. Whenever there is a major paradigm shift in technology (the last one was the switch to smartphones with broadband wireless), the pecking order of vendors usually changes as well. It seems like ancient history, but before the iPhone, Nokia and RIM were unbeatable.