2D rendered objects in the glasses are focused at ~8 ft. However, it is possible to render objects with different parallax offsets in each eye so that they appear in perspective to be much closer, at arm's length, and even stereoscopically as 3D objects. Many of the AR tracking demos show this; you can bring the target up close and the content moves with it. The object will appear to be very close and your eyes will keep it in focus, even though there will be a slight difference in focus between the rendering and real-world objects at the same distance. In these demos we typically use the Vuforia for Digital Eyewear (VDE) toolset for tracking, which has been highly optimized for our glasses and calculates all of the frustum values for the platform. Vuforia also offers a calibration tool that tweaks the frustum for an individual user to minimize the subtle alignment errors that become more prevalent the closer the object is to the user's face. The rendering is typically done in the Unity 3D engine, which is well integrated with the Vuforia tracking plugin.
Note that only the optics control the fixed focus distance, not the OpenGL frustum values, Vuforia, etc. Adjusting the frustum will affect how good the S3D object looks at a given distance, but it will not affect its focus. Changing the frustum with a tool like the one Vuforia provides, or the one posted on our developer site, effectively changes the values used to calculate where to render the left and right images to make the object appear 3D, and thus affects how good the object looks.
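For intuition, the standard off-axis (asymmetric) stereo frustum math that such tools adjust can be sketched as follows. This is a generic illustration, not our actual calibration; the IPD, near-plane, field-of-view, and convergence numbers are placeholder assumptions:

```python
import math

def stereo_frustums(ipd, convergence, near, vfov_deg, aspect):
    """Return (left, right, bottom, top) glFrustum-style bounds at the
    near plane for each eye, using the common off-axis stereo method.
    Each eye's frustum is shifted horizontally so the two view volumes
    have zero parallax at the chosen convergence distance."""
    top = near * math.tan(math.radians(vfov_deg) / 2.0)
    bottom = -top
    half_w = aspect * top
    # Horizontal frustum shift at the near plane for each eye.
    shift = (ipd / 2.0) * near / convergence
    left_eye = (-half_w + shift, half_w + shift, bottom, top)
    right_eye = (-half_w - shift, half_w - shift, bottom, top)
    return left_eye, right_eye

# Placeholder numbers: 63.5 mm IPD, ~8 ft (2.44 m) convergence.
le, re = stereo_frustums(ipd=0.0635, convergence=2.44, near=0.1,
                         vfov_deg=30.0, aspect=16 / 9)
```

Note that lowering `convergence` here moves where the left and right images fuse, but, as stated above, it cannot move the optical focus of the displays.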
Take a look at this early introductory AR reference: Augmented-Reality-Development-for-R-6-Glasses
This was originally written for the R-6 and describes the AR rendering framework we developed before switching over to VDE with the Unity plug-in, which now renders most of our S3D content, especially when tracking a marker. You shouldn't need to write code at this level, but we are leaving the reference up as an introductory article.
Also note, the R-7 optics will soon support the use of a snap-in "focus shift element" (FSM), which shifts the fixed focus distance in from ~8 ft to other distances down to ~2 ft. To match the convergence distance to the focus distance for 2D content, the left and right images will need to be digitally shifted laterally by some number of pixels to force the user's eyes to converge at 2 ft (we are looking at doing this automatically so the app doesn't have to worry about it). Alternatively, a 3D rendering engine like Unity will automatically produce this result if you set up the rendering cameras for a 2-foot distance. In either case, the usable stereoscopic overlap field of view is about 10% smaller. If you are interested in using this feature in the future, all you will need to do is adjust the min/max rendering planes in Unity so nothing is rendered outside the comfortable depth of field of the modified glasses.
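As a rough sketch of the lateral-shift idea, a simple pinhole-eye model gives the per-eye inward shift from the change in convergence angle. The pixels-per-degree figure below is a placeholder assumption, not the actual R-7 display spec:

```python
import math

def lateral_shift_px(ipd_m, old_dist_m, new_dist_m, px_per_deg):
    """Per-eye inward pixel shift needed to move the convergence
    distance of identical (zero-disparity) 2D content from
    old_dist_m to new_dist_m, assuming a uniform px_per_deg."""
    half = ipd_m / 2.0
    old_deg = math.degrees(math.atan2(half, old_dist_m))
    new_deg = math.degrees(math.atan2(half, new_dist_m))
    return (new_deg - old_deg) * px_per_deg

# Placeholder values: 63.5 mm IPD, 8 ft -> 2 ft, ~20 px/deg display.
shift = lateral_shift_px(0.0635, 8 * 0.3048, 2 * 0.3048, 20.0)
```

The sign convention here is that a positive result means each image moves toward the nose; the closer the target distance, the larger the required shift.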
Here is the optics schematic of how the displays are set up:
The diagram provided is an illustration of the optical setup of our system. While the exact numbers for our optomechanical system are still proprietary, the frustum-generation utilities produce a calibration matrix that takes into account the user's actual IPD, the alignment of each display, and the offset and angle of the tracking camera. In the factory we process all glasses with a starting calibration for the nominal symmetric IPD of 63.5 mm (31.75 mm on each side).
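Conceptually, the symmetric IPD split just means each eye's rendering camera sits half the IPD from the head center. A minimal sketch of that translation part alone (plain row-major 4x4 matrices; a real calibration matrix would also fold in the per-display alignment and camera offsets mentioned above):

```python
def eye_view_offset(ipd_m, eye):
    """Row-major 4x4 view-offset matrix for a camera translated
    +/- half the IPD along x from the head-center origin.
    eye is 'left' or 'right'."""
    half = ipd_m / 2.0
    cam_x = -half if eye == "left" else half
    # View transform: the world moves opposite the camera, so negate.
    return [[1.0, 0.0, 0.0, -cam_x],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

left = eye_view_offset(0.0635, "left")    # camera at x = -31.75 mm
right = eye_view_offset(0.0635, "right")  # camera at x = +31.75 mm
```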
Though the errors of an uncalibrated system do increase the closer you get, what we have found in the practical world of rapid demos on many users is that the default calibration is usually more reliable than a hasty user calibration, so we don't usually bother with individual calibrations. How you implement your system in an academic or medical-care environment may differ from this subjective experience, so we are happy to help you sort through the adjustments you find necessary in your application.