The team at Pelican Imaging has developed revolutionary new array sensor technology that captures in 3D. Pelican's depth-sensing array solution provides highly accurate depth data in real time, both for images and video.
Mobile and augmented reality applications require 3D data to drive the next generation of products. Exciting new user experiences depend on giving devices the ability to "see" like our eyes do.
Next-generation VR and AR headsets demand high-quality depth sensing to enhance the user experience and enable natural interaction with virtual objects. Requirements for real-time environment mapping, gesture control, and object tracking are driving the need for highly accurate 3D capture.
For AR applications, the solution offers multiple benefits: highly accurate near-field and far-field depth data captured in all lighting environments, real-time processing, dynamic calibration, and minimal occlusion zones. The layout of the array is flexible and can be customized to meet specific depth-resolution requirements.
The following video highlights the vision for depth-enabled imaging and features Kartik Venkataraman (Pelican co-founder and CTO), Raj Talluri (SVP of Product Management at Qualcomm), and Hao Li (Assistant Professor of Computer Science at USC).