The R-9 has 4 cameras, plus a MIPI interface for external add-on modules. The 4 cameras show up as 3 Android cameras (IDs 0, 1 & 2):
- 13 MP camera with auto focus (default)
- 170 degree mono fisheye (used by VR SDK 6DoF tracking system)
- 1080p stereo camera pair (left + right)
The external interface on the top of the R-9 will support a 5th MIPI sensor. Note that only 3 cameras can be active at once, so there will be trade-offs in what your use case can do. Also, adding a MIPI device requires MIPI drivers, which ODG must then integrate into the OS kernel, so expect some time and cost there.
The following diagram shows the cameras' relative positions and angles with respect to the user's view through the displays. Note that when worn by a typical user, the temples are horizontal, and the display optical axis is 8 degrees down from the horizontal axis.
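That 8-degree tilt matters when mapping camera or IMU data into the user's view frame. A minimal sketch of the correction, assuming a simple rotation about the horizontal axis (the helper and coordinate convention below are illustrative, not part of any R-9 SDK):

```python
import math

def rotate_pitch(v, degrees):
    """Rotate a (y, z) vector (y up, z forward) about the horizontal x-axis.

    Positive degrees pitches the vector downward (forward tips toward -y).
    """
    y, z = v
    a = math.radians(degrees)
    return (y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

# The display optical axis sits 8 degrees below the temple (horizontal) axis,
# so "straight ahead along the temples" maps to a slightly downward vector:
forward_temple = (0.0, 1.0)
display_axis = rotate_pitch(forward_temple, 8.0)
```

The same rotation (with the sign flipped) takes display-frame directions back into the temple-horizontal frame.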
This diagram shows the camera positions with respect to the IMU sensors (gyro/accelerometer & magnetometer):
The 13 MP camera is the primary camera, with auto focus, the highest image quality, and the most features. It is well suited to streaming for telepresence and to object recognition. The camera module uses the Sony IMX258 sensor. The diagonal full-frame field of view of the camera is ~78 degrees.
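Given the ~78 degree diagonal FOV, approximate horizontal and vertical FOVs can be derived. A sketch, assuming a 4:3 sensor aspect ratio and a rectilinear lens (both are our assumptions, not an ODG spec):

```python
import math

def fov_from_diagonal(diag_fov_deg, aspect_w=4, aspect_h=3):
    """Split a diagonal FOV into horizontal/vertical FOV for a rectilinear lens."""
    diag = math.hypot(aspect_w, aspect_h)
    half_tan = math.tan(math.radians(diag_fov_deg) / 2)
    h_fov = 2 * math.degrees(math.atan(half_tan * aspect_w / diag))
    v_fov = 2 * math.degrees(math.atan(half_tan * aspect_h / diag))
    return h_fov, v_fov

# 13 MP camera: ~78 degree diagonal on an (assumed) 4:3 sensor.
h_fov, v_fov = fov_from_diagonal(78)
```

Under those assumptions this works out to roughly 66 degrees horizontal by 52 degrees vertical.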
The stereo cameras can be used to capture S3D video for recording or streaming, and to collect depth / point-cloud data, mainly for detecting surfaces/planes for AR purposes. Each camera uses the OmniVision OV5675 sensor. The diagonal full-frame field of view of each camera is 90 to 100 degrees.
[w:5184, h:1944, format:RAW_SENSOR(32), min_duration:33333333, stall:100000000],
[w:2560, h:800, format:JPEG(256), min_duration:33333333, stall:49000000],
[w:1280, h:480, format:JPEG(256), min_duration:33333333, stall:45000000],
[w:1280, h:400, format:JPEG(256), min_duration:33333333, stall:45000000],
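Each entry above carries a minimum frame duration in nanoseconds, from which the maximum frame rate follows directly (33,333,333 ns is ~30 fps). A minimal sketch of parsing such entries (the parser itself is ours, not part of any SDK; it just mirrors the dump format above):

```python
import re

# Matches entries like:
# [w:2560, h:800, format:JPEG(256), min_duration:33333333, stall:49000000]
ENTRY = re.compile(
    r"\[w:(?P<w>\d+), h:(?P<h>\d+), format:(?P<fmt>\w+)\(\d+\), "
    r"min_duration:(?P<min_ns>\d+), stall:(?P<stall_ns>\d+)\]"
)

def parse_stream_config(line):
    """Parse one stream-configuration entry into a dict, adding max fps."""
    m = ENTRY.search(line)
    cfg = {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}
    cfg["max_fps"] = round(1e9 / cfg["min_ns"])  # ns per frame -> frames per second
    return cfg

cfg = parse_stream_config(
    "[w:2560, h:800, format:JPEG(256), min_duration:33333333, stall:49000000],"
)
```

On the real device the same numbers come from `StreamConfigurationMap` via the Android camera2 API, rather than from parsing text.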
We are also hoping to work with gesture-recognition partners who could use these sensors with their gesture SDKs for better gesture detection. No specific timeline has been defined yet for when this might be available.
Initially, the depth data will be made available via a Unity plug-in.
Any Android application can open any of the cameras, with the following limitations:
- If you intend to use our inside-out tracking (VR SDK), that platform service opens and controls the fisheye camera, so that camera won't be available while tracking is enabled. The data it collects is not shareable with other apps.
- If you use our platform depth system (under development right now), it will hold the stereo camera pair open. It is not yet known whether this camera data will be accessible to other apps while the depth system is running; if it is, it will be via an API other than the usual Android camera API.
We'll have APIs available for you to access the depth data, but they won't be available for a month or two, and initially only in Alpha.