Virtual Reality (VR) and Augmented Reality (AR) will play an important role in this year’s ST Developers Conference, and in preparation for the big event we sat down with Mahesh Chowdhary, who explained how ST makes the virtual so real.

The term “Virtual Reality” was coined by the French artist Antonin Artaud in 1938 to describe the fact that the illusions of the theater, with their fictitious characters and made-up objects, were nevertheless a reality not unlike the one we experience every day. The term remained popular because Artaud had captured something essential about the human condition: ours is the only species that inherently aims to transcend the physical world by building a virtual one on top of it. And if virtual reality was first the domain of dramatists, science has been working since the 1960s to make the virtual feel more real than we ever thought possible.
Indeed, virtual reality headsets are not new. In 1968, Ivan Sutherland, the “father of computer graphics,” developed the first head-mounted display to run simulations. Since then, the headsets have changed, the technology has improved, and the applications have become more diverse, but the principle has remained the same: Virtual Reality happens when a head-mounted display projects an immersive virtual environment that users can interact with to varying degrees. Hence, the main difference between VR and AR is the level of immersion, since the latter only superimposes virtual elements on a capture of the real world. For instance, a mobile app using AR will add virtual representations to a photo or a video of the user’s surroundings, as we recently saw with Pokémon GO.
VR, on the other hand, generates and displays the entire immersive environment, even if that environment is identical to the user’s surroundings. VR systems must therefore align the reference frame of the virtual world with that of the physical world, and AR systems likewise need to maintain accurate registration between real-world objects and computer-generated ones.
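To make this frame alignment a little more concrete, here is a minimal sketch in C, with purely illustrative types and names that are not part of any ST library, showing how an orientation quaternion estimated by sensor fusion can be used to bring a point defined in the virtual world’s frame into the headset’s frame:

```c
#include <stdio.h>

/* Illustrative types: a unit quaternion (w, x, y, z) representing the
 * headset's orientation in the world frame, and a 3D vector. */
typedef struct { float w, x, y, z; } quat_t;
typedef struct { float x, y, z; } vec3_t;

/* Rotate a vector expressed in the virtual (world) frame into the headset
 * frame using the conjugate of the orientation quaternion: v' = q_conj * v * q. */
static vec3_t world_to_headset(quat_t q, vec3_t v)
{
    /* Conjugate: the inverse rotation for a unit quaternion. */
    quat_t c = { q.w, -q.x, -q.y, -q.z };

    /* t = 2 * (c_vec x v) */
    vec3_t t = {
        2.0f * (c.y * v.z - c.z * v.y),
        2.0f * (c.z * v.x - c.x * v.z),
        2.0f * (c.x * v.y - c.y * v.x),
    };

    /* v' = v + c.w * t + c_vec x t */
    vec3_t r = {
        v.x + c.w * t.x + (c.y * t.z - c.z * t.y),
        v.y + c.w * t.y + (c.z * t.x - c.x * t.z),
        v.z + c.w * t.z + (c.x * t.y - c.y * t.x),
    };
    return r;
}

int main(void)
{
    /* Identity orientation: the headset frame coincides with the world frame. */
    quat_t q = { 1.0f, 0.0f, 0.0f, 0.0f };
    vec3_t p = { 0.0f, 0.0f, -1.0f };   /* a virtual object one meter ahead */
    vec3_t r = world_to_headset(q, p);
    printf("%.2f %.2f %.2f\n", r.x, r.y, r.z);
    return 0;
}
```

In a real headset, a transform like this runs for every rendered frame, which is why the underlying orientation estimate must be both fast and accurate.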
When the Virtual Is Not Real Enough
What AR and VR have in common is that both are grounded in the user’s reality. A virtual environment too far removed from the familiar will lead to a condition known as virtual reality sickness, which manifests itself in varying degrees of intensity, from a simple headache and disorientation to vomiting and anxiety attacks. For instance, if users move their head but the image doesn’t reflect this movement quickly enough (within about 20 ms) or faithfully (it moves in the wrong direction), the senses send conflicting messages to the brain, which often results in symptoms akin to motion sickness. AR is largely exempt from such problems since it is far less immersive, but the technology still depends on sensing the world: the system won’t know what to display, or when, if it can’t determine the user’s activity or location.
This is why ST’s inertial sensors and sensor fusion algorithms have been a driving force in this industry, and why Mahesh Chowdhary is one of the best people to talk about AR and VR. Director and Fellow of the Strategic Platforms and IoT Excellence Center at STMicroelectronics, he was previously a Senior Director of MEMS technology at CSR, where he worked on the integration of MEMS sensors, GPS, and wireless technologies. With a Ph.D. in Applied Science and more than 20 years of experience in signal processing, sensors, and location technologies, he understands that sensors are AR and VR’s hooks to reality.
Devices like the iNEMO LSM6DS0, which combines an accelerometer and a gyroscope, offer remarkably detailed and accurate information on the headset’s movements, ensuring the projection remains stable and the system responsive. However, what many people have yet to realize is that ST holds a lot more pieces of the AR/VR puzzle.
Virtual Reality: The Many Pieces of the Puzzle
As Dr. Chowdhary will also explain during his presentation, ST has developed software packages that help engineers quickly take advantage of the inertial sensors needed for AR or VR headsets. For instance, the newly updated STM32 ODE Function Packs already include a demo application as well as a sensor fusion library, to quickly teach programmers how to acquire and analyze sensor data. With this knowledge, they can immediately use the sensor fusion library to estimate device orientation for the virtual environment. Furthermore, the new MotionFX library is a great way to take advantage of sensor fusion, a process that combines data from the accelerometer and the gyroscope to accurately determine the orientation of VR devices in the real world.
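As a rough illustration of what sensor fusion does conceptually, the sketch below blends gyroscope and accelerometer data with a simple complementary filter to estimate a single orientation angle. It is not the MotionFX API; the function names, sample rate, and filter constant are assumptions made for this example only, and MotionFX implements far more sophisticated filtering across all three axes.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Conceptual accelerometer/gyroscope fusion with a complementary filter,
 * estimating the pitch angle of a headset. Names and constants are
 * illustrative assumptions, not ST library code. */

#define ALPHA   0.98f    /* weight given to the gyroscope integration   */
#define DT      0.005f   /* sample period in seconds (200 Hz, assumed)  */

static float pitch_deg = 0.0f;   /* fused orientation estimate */

/* Feed one sample: gyro rate about the pitch axis (deg/s) and raw
 * accelerometer readings (in g) along the Y and Z axes. */
void fuse_sample(float gyro_rate_dps, float acc_y, float acc_z)
{
    /* Short term: integrate the gyroscope rate (accurate, but drifts). */
    float gyro_pitch = pitch_deg + gyro_rate_dps * DT;

    /* Long term: derive pitch from gravity as seen by the accelerometer
     * (noisy, but drift-free when the headset is not accelerating). */
    float acc_pitch = atan2f(acc_y, acc_z) * 180.0f / (float)M_PI;

    /* Blend the two estimates. */
    pitch_deg = ALPHA * gyro_pitch + (1.0f - ALPHA) * acc_pitch;
}

int main(void)
{
    /* Simulated data: the headset pitches at a constant 10 deg/s for one
     * second, and the accelerometer sees gravity shift accordingly. */
    for (int i = 0; i < 200; i++) {
        float true_pitch_rad = 10.0f * (i * DT) * (float)M_PI / 180.0f;
        fuse_sample(10.0f, sinf(true_pitch_rad), cosf(true_pitch_rad));
    }
    printf("estimated pitch after 1 s: %.2f deg\n", pitch_deg);
    return 0;
}
```

The gyroscope keeps the estimate responsive to fast head movements, while the accelerometer slowly corrects the drift that pure integration would accumulate, which is exactly the trade-off sensor fusion is meant to balance.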
ST is also the industry leader in micromirrors, which are basically arrays of microscopic reflective surfaces. It is possible to change the orientation of the mirrors by applying a current to each of them, adjusting what they reflect to ultimately create an image. This technology is a leading contender for displays that create a virtual image in the field of view of one or both eyes in VR and AR systems.
Virtual Reality at the ST Developers Conference
AR and VR are complex problems, and engineers must bring together a lot of different parts to create compelling products. Dr. Chowdhary will walk developers through all of these technological aspects, so they leave with a much greater understanding of what makes the virtual so real. There will also be a demo of a Daydream stack, Google’s VR environment, running on a 2016 Pixel smartphone, showing that one can create highly immersive environments using resources that fit in our pockets.
Mahesh Chowdhary’s talk is entitled “Virtual & Augmented Reality Systems”. To prepare for it, participants can check out ST’s open-source applications dedicated to its MEMS, like the FP-SNS-ALLMEMS1 function pack, or Google’s Daydream open-source VR platform.
To attend the ST Developers Conference, please register on the event’s website while there are still spots available.