SDKs - Vuforia & Unity

Using Vuforia or Unity for developing AR Experiences

Vuforia's SDK tracks markers and renders stereoscopic 3D (S3D) content registered to them; Unity3D provides an engine for rendering content in either 2D or S3D. Tracking vendors usually provide a Unity plugin.

You can get sample code from both Vuforia's and Unity3D's websites; both offer SDKs and samples. Vuforia has released its VDE (Vuforia Digital Eyewear) beta to all developers, along with a Unity plugin that interoperates with VDE. VDE provides a way to calibrate the camera field of view (FOV) against the display FOV, which lets developers track markers and render, move, and rotate S3D objects relative to the marker in real time. Our platform is one of two see-through devices supported by VDE.
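
To make the workflow concrete, here is a minimal sketch in Unity C#, following the pattern used in the Vuforia Unity samples: a handler attached to an ImageTarget that shows its child content only while the marker is tracked. The class and method names follow the VDE-era SDK and may differ in other versions.

    using UnityEngine;
    using Vuforia;

    // Attach to an ImageTarget. Vuforia drives this object's transform,
    // so child content follows the marker; we only toggle its visibility.
    public class MarkerContentHandler : MonoBehaviour, ITrackableEventHandler
    {
        void Start()
        {
            var trackable = GetComponent<TrackableBehaviour>();
            if (trackable != null)
                trackable.RegisterTrackableEventHandler(this);
        }

        public void OnTrackableStateChanged(
            TrackableBehaviour.Status previousStatus,
            TrackableBehaviour.Status newStatus)
        {
            bool tracked = newStatus == TrackableBehaviour.Status.DETECTED ||
                           newStatus == TrackableBehaviour.Status.TRACKED ||
                           newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

            // Show or hide all child renderers as the marker is found or lost.
            foreach (var r in GetComponentsInChildren<Renderer>(true))
                r.enabled = tracked;
        }
    }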

You'll need both the Android and Unity downloads.

AR Unity Samples

Also see the Digital Eyewear section again and get both the Android and Unity packages.

We are in the process of adding these details about our vendors' SDKs to our developer site, so keep an eye on it for upcoming news.

Look at their samples for the most recent version of Unity; in particular, take a look at their "extended tracking" examples.
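
In the samples, extended tracking is normally switched on in the ImageTarget inspector, but it can also be enabled from script. The sketch below follows the VDE-era API; StartExtendedTracking and the ImageTarget property may have been renamed or removed in later SDK versions.

    using UnityEngine;
    using Vuforia;

    // Keeps a target's pose updating after the marker leaves the camera view,
    // using the device's motion estimate to extrapolate the target position.
    public class EnableExtendedTracking : MonoBehaviour
    {
        void Start()
        {
            var target = GetComponent<ImageTargetBehaviour>();
            if (target != null && target.ImageTarget != null)
                target.ImageTarget.StartExtendedTracking();
        }
    }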

Most of our AR examples use Unity. If you sign up for the Vuforia Digital Eyewear SDK, there are a number of examples that show both Unity and OpenGL. Here's another reference for Vuforia VDE, which provides a Unity3D plug-in to control rendering. It also has lots of example code that will show you how to render objects without turning on the camera preview. This is called "Optical See-Through" mode, as opposed to "Video See-Through."
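
As a rough illustration of what "Optical See-Through" means in a Unity scene, the sketch below suppresses the camera preview so only the virtual content is drawn. In practice the VDE plugin handles this when the ARCamera is configured for optical see-through eyewear, so treat the BackgroundPlaneBehaviour lookup as a version-dependent assumption.

    using UnityEngine;
    using Vuforia;

    // Optical See-Through: draw only virtual content, never the camera feed.
    public class OpticalSeeThroughSetup : MonoBehaviour
    {
        void Start()
        {
            // The background plane is the quad Vuforia textures with the live
            // camera image in Video See-Through mode; hide it here.
            var bg = FindObjectOfType<BackgroundPlaneBehaviour>();
            if (bg != null)
                bg.gameObject.SetActive(false);

            // Clear to black: on an additive see-through display, black pixels
            // are transparent, so the user sees the real world behind them.
            var cam = Camera.main;
            cam.clearFlags = CameraClearFlags.SolidColor;
            cam.backgroundColor = Color.black;
        }
    }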

If you use "Video See-Through" mode (used on smartphones that are not see thru), you can overlay objects very precisely because you are overlaying it on top of the image. But, on a see thru display, this can be confusing because the video preview image has latency and is not always lined up with the real object you are looking at. This is because of the angles of your eye's FOV through the displays, versus the camera's FOV. Of course, you'd want to use "Optical See-Through" mode with see thru displays, and so there will be a bit of transformation of what is detected in the camera and how that lines up with your view of the world thru the display. VDE has a calibration App that calculates the geometry of the camera view to the display view.

On the R-7, the latency between marker tracking and the image on the display is much lower than on the R-6: on the order of tens of milliseconds, which is good. How accurately the real object and the rendered image align will depend on how you measure it and at what range. Remember that the angle of the glasses on your head, and how you see through them, will vary from user to user (ears, nose, eye positions, etc.).
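
To put tens of milliseconds in perspective, here is a back-of-the-envelope estimate (our own illustrative numbers, not a device spec) of how far a rendered object appears to trail a real one while the head is turning:

    using UnityEngine;

    public static class LatencyOffset
    {
        // Returns the apparent offset (meters) between a real object and its
        // rendered overlay, given end-to-end latency and head rotation rate.
        public static float Estimate(float latencySec, float headRateDegPerSec,
                                     float rangeMeters)
        {
            float angularErrorDeg = headRateDegPerSec * latencySec;
            return rangeMeters * Mathf.Tan(angularErrorDeg * Mathf.Deg2Rad);
        }
    }

    // Example: 20 ms of latency during a 100 deg/s head turn gives a 2 degree
    // angular error, i.e. about 3.5 cm of offset at a range of 1 meter:
    //   LatencyOffset.Estimate(0.02f, 100f, 1.0f)  // ~0.035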

We are always interested in hearing from developers what they think their requirements are for accuracy and latency. You may also take a look at this article for an example of how ODG smartglasses are used for AR applications that demand accurate alignment.
