The team at Pelican Imaging has developed revolutionary new array sensor technology that captures in 3D. Pelican's depth-sensing array solution (shown above in augmented reality glasses and a mobile phone) provides highly accurate depth data in real time, both for images and video.
Mobile and augmented reality applications require 3D data to drive the next generation of products. Exciting new user experiences depend on giving devices the ability to "see" like our eyes do.
According to IDC, by 2018, 85% of the global image capture volume will come from mobile devices. Users increasingly want more features and functionality from their phone's camera, since it's the one camera they always have in their pocket.
The Pelican sensor is well suited to mobile devices by virtue of its super-thin form factor (2.5-3mm), and because it has no auto-focus mechanism and no moving parts, it's capable of extremely fast shooting. When paired with the primary camera in a mobile device, the Pelican array can actually be used to decrease the shutter lag of the primary camera as well, while yielding beautiful, high-resolution, all-in-focus images.
This unique "array + main camera" approach allows mobile handset manufacturers to choose their own primary camera module (whether it's 8MP, 13MP, or 20MP) and benefit from excellent image quality paired with depth data from the scene. See the examples above of an image captured with the Pelican array paired with an iPhone 6: the result is a crisp, high-resolution image with a highly accurate depth map. See the full-size still image and depth map.
Mobile device manufacturers can differentiate their products from those built on simple stereo camera arrangements by being among the first to offer customers a meaningful depth capture solution. For a closer look at how mobile handset makers can realize the benefits of the Pelican depth-sensing array, see the recently published IDC report: How Computational Photography Can Drive Profits in the Mobile Device Market.
Still images can be edited with ease. Because the image contains depth information, it's extremely simple to refocus the photo or keep multiple objects in focus. Pelican's software also enables users to capture quick distance measurements, adjust lighting, apply filters to all or part of the image, and easily replace backgrounds or combine photos. See Pelican's 3D Image Viewer to view photos and depth maps captured with the array camera, and try out refocus and motion parallax.
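To make the idea concrete, here is a minimal sketch of two of the depth-based edits described above, assuming a per-pixel depth map aligned with the color image. The function names and array layout are illustrative only, not Pelican's actual software or API:

```python
import numpy as np

def replace_background(image, depth, background, threshold):
    """Swap in a new background wherever the depth map says a pixel
    is farther from the camera than `threshold` (same units as depth)."""
    mask = depth > threshold          # True for background pixels
    out = image.copy()
    out[mask] = background[mask]      # boolean mask broadcasts over color channels
    return out

def measure_distance(depth, y, x):
    """Read the distance to whatever lies under pixel (y, x)."""
    return depth[y, x]
```

Because every pixel carries a depth value, segmenting the background reduces to a simple threshold on the depth map rather than an error-prone color- or edge-based selection.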
Next-generation VR and AR headsets demand high quality depth sensing to enhance the user experience and enable natural interaction with virtual objects. Requirements for real-time environment mapping, gesture control, and object tracking are driving the need for very accurate 3D capture.
Pelican's depth-sensing array solution provides multiple benefits for AR applications, including highly accurate near-field and far-field depth data captured in all lighting environments, real time processing, dynamic calibration, and minimal occlusion zones. The layout of the array is flexible and can be customized to meet specific depth resolution requirements.
The following video highlights the vision for depth-enabled imaging, and features Kartik Venkataraman (Pelican co-founder and CTO), Raj Talluri (SVP of Product Management at Qualcomm), and Hao Li (Assistant Professor of Computer Science at USC).
In the brief video below, Pelican CEO Chris Pickett explains how the Pelican array camera captures in 3D.