The R-9 has four cameras plus a MIPI interface for external modules. The four cameras appear as three Android camera devices (IDs 0, 1, and 2):
- 13 MP camera with auto focus (default)
- 170 degree mono fisheye (used by VR SDK 6DoF tracking system)
- 1080p stereo camera (left) + 1080p stereo camera (right)
The external interface can support a fifth MIPI sensor. Note that only three cameras can be active at once, so there are trade-offs in what a given use case can do. Also, adding a MIPI device requires a MIPI driver, which ODG must integrate into the OS kernel; that integration adds time and cost.
The 13 MP camera is the primary camera, with auto focus, the highest image quality, and the most features. It is well suited to streaming for telepresence and to object recognition.
The stereo cameras can be used to capture S3D video for recording or streaming, and to collect depth / point-cloud data, mainly for detecting surfaces and planes for AR purposes. We also hope to work with gesture vendors who could wire their gesture SDKs to these cameras for better gesture detection; no specific timeline is defined for any of this yet.
Initially, the depth data will be made available via a Unity plug-in.
The fisheye camera is used by the 6DoF tracking system, along with the IMU, to provide markerless inside-out tracking via the VR SDK.
Example stream configurations reported for the cameras (the format codes are Android `ImageFormat` constants; durations are in nanoseconds):

```
[w:5184, h:1944, format:RAW_SENSOR(32), min_duration:33333333, stall:100000000],
[w:2560, h:800,  format:JPEG(256),      min_duration:33333333, stall:49000000],
[w:1280, h:480,  format:JPEG(256),      min_duration:33333333, stall:45000000],
[w:1280, h:400,  format:JPEG(256),      min_duration:33333333, stall:45000000],
```
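As a minimal sketch of how to read these entries, assuming the durations are nanoseconds as in the Android Camera2 API, the following converts each configuration into a maximum frame rate and a stall time in milliseconds. The `describe` helper and the hard-coded `configs` table are illustrative only, not part of any SDK:

```python
# Convert the stream-configuration entries above into readable numbers.
# min_duration and stall are in nanoseconds, per the Camera2 conventions.

NS_PER_SEC = 1_000_000_000

configs = [
    # (width, height, format, min_frame_duration_ns, stall_duration_ns)
    (5184, 1944, "RAW_SENSOR", 33_333_333, 100_000_000),
    (2560,  800, "JPEG",       33_333_333,  49_000_000),
    (1280,  480, "JPEG",       33_333_333,  45_000_000),
    (1280,  400, "JPEG",       33_333_333,  45_000_000),
]

def describe(width, height, fmt, min_dur_ns, stall_ns):
    """Summarize one stream configuration as max fps and stall in ms."""
    max_fps = NS_PER_SEC / min_dur_ns   # 33,333,333 ns per frame ~= 30 fps
    stall_ms = stall_ns / 1_000_000
    return f"{width}x{height} {fmt}: up to {max_fps:.0f} fps, {stall_ms:.0f} ms stall"

for cfg in configs:
    print(describe(*cfg))
```

All four modes share the same 30 fps minimum frame duration; they differ only in how long a JPEG or RAW capture stalls the pipeline.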