
Location, GPS in an AR Context


The glasses support commercial GPS. The R-7 does support AGPS, insofar as, if iZAT is enabled and the device is connected to the internet, it downloads ephemeris and almanac data over IP, which speeds up the first fix. But the R-7 is not an LTE endpoint, so it does not benefit from the cell-tower triangulation that most smartphones support.
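One way to see the benefit of this assistance yourself is to clear the cached aiding data and then time how long the next fix takes with and without a network connection. Below is a minimal Java sketch of that test; note that "delete_aiding_data" is a vendor extra command honored by many Qualcomm GNSS stacks rather than a documented platform guarantee, and AgpsTestHelper is just an illustrative name.

```java
import android.content.Context;
import android.location.LocationManager;

// Minimal sketch: force a cold start so time-to-first-fix can be compared
// with and without iZAT/AGPS assistance. "delete_aiding_data" is a vendor
// extra command commonly honored on Qualcomm GNSS stacks; it is not
// guaranteed on every device. Requires the ACCESS_LOCATION_EXTRA_COMMANDS
// permission in the manifest.
public final class AgpsTestHelper {  // illustrative name
    public static boolean clearAidingData(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Drops cached ephemeris/almanac so the next fix starts cold.
        return lm.sendExtraCommand(LocationManager.GPS_PROVIDER,
                "delete_aiding_data", null);
    }
}
```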

The R-7 supports both GNSS (GPS/GLONASS) and iZAT wherever iZAT is supported on the globe. GNSS accuracy is similar to your average smartphone, roughly 10-15 ft.

If you enable iZAT (go to System Settings -> Location, check High Accuracy mode, and turn on Accelerated location) and are connected to the internet via WiFi to an LTE phone or MiFi, you should get better performance: a faster time to first fix and, once you have a fix, a better chance of keeping it as the number of satellites in view decreases. And, depending on where you are, you can even get a coarse location based on the WiFi access points visible from your position, without any satellites in view (this is an iZAT feature and applies only if you are in an area covered by their database). Accuracy from this method is generally lower than that of standard GPS.
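If your app relies on this behavior, you may want to verify the relevant setting at runtime. The sketch below checks for High Accuracy mode via the standard Settings API of that Android generation; the iZAT "Accelerated location" toggle itself is vendor-specific and, as far as we know, has no public API, so only the platform-level mode is checked. LocationModeCheck is an illustrative name.

```java
import android.content.Context;
import android.provider.Settings;

// Minimal sketch: check whether High Accuracy location mode is enabled
// before relying on WiFi-assisted positioning. Settings.Secure.LOCATION_MODE
// is the standard check on KitKat-era Android (API 19+).
public final class LocationModeCheck {  // illustrative name
    public static boolean isHighAccuracyEnabled(Context context) {
        int mode = Settings.Secure.getInt(
                context.getContentResolver(),
                Settings.Secure.LOCATION_MODE,
                Settings.Secure.LOCATION_MODE_OFF);
        return mode == Settings.Secure.LOCATION_MODE_HIGH_ACCURACY;
    }
}
```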

Note that you can view GPS information using the Dashboard app. You should see satellites; they show up at the bottom right, even before you get a fix. If you don't see any, or if you only see 2 or 3, you are not going to get a first fix. You'll need at least 4 satellites in view, each above a signal-to-noise ratio (SNR) of about 30 dB, before you will get a fix. Of course, you need to be outside, or by a window that has a view of some satellites.
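You can read the same information programmatically. The sketch below uses the GpsStatus listener (the standard API on KitKat/Marshmallow-era Android, superseded by GnssStatus in API 24) to count satellites in view and how many clear the ~30 dB threshold mentioned above; SatelliteMonitor is an illustrative name, and ACCESS_FINE_LOCATION is required.

```java
import android.location.GpsSatellite;
import android.location.GpsStatus;
import android.location.LocationManager;
import android.util.Log;

// Minimal sketch: mirror the Dashboard's satellite readout in code.
public final class SatelliteMonitor implements GpsStatus.Listener {  // illustrative name
    private final LocationManager lm;

    public SatelliteMonitor(LocationManager lm) {
        this.lm = lm;
        lm.addGpsStatusListener(this);
    }

    @Override
    public void onGpsStatusChanged(int event) {
        if (event != GpsStatus.GPS_EVENT_SATELLITE_STATUS) return;
        GpsStatus status = lm.getGpsStatus(null);
        int inView = 0, strong = 0;
        for (GpsSatellite sat : status.getSatellites()) {
            inView++;
            if (sat.getSnr() >= 30f) strong++;  // SNR is reported in dB
        }
        // Rule of thumb from above: a first fix needs ~4 strong satellites.
        Log.d("SatelliteMonitor", "in view=" + inView + ", SNR>=30 dB=" + strong);
    }
}
```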

[Screenshots omitted: first no satellites in view; after a minute or two, several satellites appear at the bottom right; finally, the upper information fields fill in once a fix happens.]

We are always continuing to work on GPS sensitivity. Note, though, that the physical constraints of the smart glasses design make shielding and antenna tuning quite challenging while minimizing weight, particularly because of sideband interference from our display. It will be hard to match the sensitivity of a phone, but we are doing our best to do so.

Other wireless details: WiFi is 2.4 & 5 GHz; Bluetooth is standard and includes BLE; GPS is as described above. There is no cellular and no NFC. Antennas are in the temples on the R-7, and in the main glasses chassis on the other glasses. A USB connection appears to decrease GPS sensitivity, so it may be necessary to get the initial fix while disconnected from USB.

 

Using GPS Heading and Head Tracking in AR Applications:

When the user is moving, GPS will return a heading, and this can be used to estimate head orientation (assuming the user is facing forward while moving). Android also provides a sensor API that uses the IMU to determine the user's head orientation. The least accurate component is the head's X rotation (the compass heading), which depends on the magnetometer, and the magnetic environment can influence it, sometimes very dramatically. See this reference; a sketch combining both heading sources follows it:

https://developer.osterhoutgroup.com/hc/en-us/articles/115000210870-IMU-Orientation
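As a rough illustration, here is a Java sketch that prefers GPS bearing while the user is moving and falls back to the rotation-vector sensor's azimuth otherwise. HeadingSources and bestHeading are illustrative names; a production app would also smooth and sanity-check both inputs.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.location.Location;

// Minimal sketch: two heading sources. GPS bearing is only valid while
// moving; the rotation-vector sensor fuses gyro, accelerometer, and
// magnetometer, so its azimuth drifts with the magnetic environment.
public final class HeadingSources implements SensorEventListener {  // illustrative name
    private float azimuthDeg;  // last IMU-derived heading

    public void register(SensorManager sm) {
        Sensor rv = sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sm.registerListener(this, rv, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotation, event.values);
        SensorManager.getOrientation(rotation, orientation);
        azimuthDeg = (float) Math.toDegrees(orientation[0]);  // yaw/azimuth
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    // Prefer GPS bearing when the fix carries one (i.e., the user is moving).
    public float bestHeading(Location lastFix) {
        if (lastFix != null && lastFix.hasBearing()) {
            return lastFix.getBearing();
        }
        return azimuthDeg;
    }
}
```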

For those who need highly accurate head position, there are a couple of options. One is to use object- or marker-tracking software (Vuforia and Wikitude are examples), which uses the camera and image processing to determine either relative or absolute head position based on the position of the recognized object.

If the application use case happens to be within a cockpit or vehicle, some developers have mounted an upward-facing camera and tracked markers on the ceiling/headliner to get very accurate positioning in the X axis.

For AR applications, rendering objects based on location and head position is often the goal. With GPS accuracy of roughly 10-15 ft, you would typically not try to render objects more precisely than the geometry of your use case allows, which depends on how far away and at what angle the object is to be rendered.

For example, suppose you are standing on a city street, your location is known to roughly +-10 ft, and your head position is determined via the Android sensor API using the IMU (figure X +-5 degrees, Y +-1 degree). If you are looking across the street at a building and your app is designed to render an info card on that building, you can calculate how accurately you can place that card. If it's a big building (the whole block), tagging it should be pretty easy. If it is a narrow building and the street is very wide, marking it may be more difficult because of the loose accuracy in the X axis. Some developers use computer vision to help in this case, with an algorithm that looks for the building or for something on it. The sketch below works through the arithmetic.
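Here is a minimal Java sketch of that calculation, under the accuracy figures assumed above (+-10 ft position, +-5 degrees azimuth): the worst-case bearing error from position uncertainty alone is atan(position error / distance), and placement is comfortable when the building subtends a wider angle than the combined error. CardPlacement and canTagReliably are illustrative names.

```java
import android.location.Location;

// Minimal sketch: back-of-envelope check for the city-street example.
public final class CardPlacement {  // illustrative name
    public static boolean canTagReliably(double distanceFt,
                                         double buildingWidthFt,
                                         double posErrorFt,
                                         double imuAzimuthErrorDeg) {
        // Worst-case bearing error caused by position uncertainty alone.
        double gpsErrDeg = Math.toDegrees(Math.atan2(posErrorFt, distanceFt));
        double totalErrDeg = gpsErrDeg + imuAzimuthErrorDeg;
        // Angle the building subtends from the viewer's position.
        double subtendedDeg =
                Math.toDegrees(2 * Math.atan2(buildingWidthFt / 2, distanceFt));
        return subtendedDeg > 2 * totalErrDeg;
    }

    // Bearing from the user's fix to a known anchor point, via the platform API.
    public static float bearingTo(Location user, double lat, double lon) {
        Location target = new Location("anchor");  // provider name is arbitrary
        target.setLatitude(lat);
        target.setLongitude(lon);
        return user.bearingTo(target);
    }
}
```

For a 60 ft wide street and a 200 ft wide building, position error contributes about 9.5 degrees and the IMU another 5; the building subtends about 118 degrees, so the card lands comfortably. For a 20 ft wide building at the same distance (about 19 degrees subtended, versus 29 degrees of combined error), it does not.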

 

Future Head Tracking in AR Applications:

In future products (R-9 & R-8), 6 DoF sensors will improve head-tracking accuracy and stability. With the addition of these sorts of sensors, we think AR location-tracking accuracy will not be so dependent on GPS accuracy, especially since people spend more time indoors, away from satellites. Getting an approximate location via GPS, WiFi, or 4G/5G triangulation is the first step, but then it is more about coming up with methods of mapping interior spaces and then using that information, via sensors, to determine one's location. That's the evolution of thinking going on right now, and where much R&D is focused. Think of it as "Google Maps" of the 3D space that we humans inhabit. Stay tuned.
