By Warren Baird-Oracle on Mar 13, 2015
I've been intrigued by 'Augmented Reality' for a while now, and it seems like Microsoft may be on the way to providing a really solid solution for mixing 3D visualizations and the real world. I haven't been able to try one myself yet, but according to reports from people who have, the device displays well-defined, detailed 3D models accurately overlaid on the real world in front of you - and tracks smoothly and accurately as you move your head and body around. This is very different from most existing augmented reality systems, and it sounds much more compelling than something like Google Glass.
This could open up some pretty interesting scenarios for enterprise visualization. One of the examples Microsoft uses on their site is actually a product design scenario, showing a real product blended with some 'virtual' parts. I can't think of a better way to review proposed design changes than seeing those changes overlaid directly on the current physical product.
The technology also enables hand tracking - so you would be able to interact with the model as well. Perhaps reaching out, tapping on a component, and then speaking your comment out loud would be enough to start a change request process.
Of course, Microsoft hasn't announced a firm release date for this technology, so I suspect it will be a while before we see it used in enterprise visualization environments - but it's the first time I've seen something that looks like it could convincingly replace a traditional keyboard, mouse, and monitor for some enterprise visualization tasks.