Google Glass is a relatively new platform that enables people to view and interact with an optical head-mounted display using either touch or voice recognition.
The Idea
We wanted to test the capabilities of Google Glass in combination with iBeacons, and create a simple contextually aware application. The app would look for specific iBeacons and, when one was detected, display information related to the area in which that iBeacon was placed.
Where to Start?
When we set out to make Glassware, I had absolutely no experience with Android development, or the Java programming language in general. Luckily, thanks to extensive exposure to ActionScript 3.0, I had plenty of experience with a strict, statically typed language. So where to start? RTFM! Yes, the documentation for Glass was my first port of call in order to gain more of an understanding of how to approach a Glassware project. There are various routes one can take, two of which are the ‘activity’ styles suggested by the good folks at Google – Live Cards and Immersion.
So before deciding upon which route was right for us, I needed to take a step back and think about what exactly we were hoping to achieve, which in this case was a fairly simple contextually aware application. This would give us a clearer idea of how the app would function and allow me to focus on learning Java via the Android SDK and GDK add-on, based on our end goal.
With that in mind, one of the first questions I asked myself was “has this been done before?”, so I had a look online for similar projects and found a few, most of which were very old – with ‘very old’ in this case meaning a few months old, due to the rapidly changing APIs for Glass and the introduction of Google’s new development tool, Android Studio.
There was one project on GitHub – Glasstimote – which was very closely aligned with what we wanted to achieve, so instead of reinventing the wheel, I forked the project and got to work analysing every little part of the code to gain an understanding of how Android projects are pieced together.
The neat and tidy approach Google have taken with Android, in terms of the XML-based manifest, layouts, and string values, is very intuitive, and I picked it up in no time.
In terms of the activity, I decided on the Immersion approach, as we wanted the application to run constantly and to stop Google Glass from going into sleep mode while in use. In order to ensure the application views would remain open, the android:keepScreenOn flag was set to ‘true’ within the XML file for each view layout.
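For reference, the same effect can also be achieved programmatically. The sketch below is a minimal, hypothetical Immersion-style activity – the class and layout names are illustrative and not taken from the project:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

// Hypothetical Immersion-style activity. The project itself sets
// android:keepScreenOn="true" in each layout XML; this is the
// programmatic equivalent.
public class BeaconInfoActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Prevent Glass from dimming or sleeping while this view is open.
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

        setContentView(R.layout.activity_beacon_info); // illustrative layout name
    }
}
```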
Data
We experimented with a version of the app which updated its views by using async tasks to retrieve data from a MySQL database, the advantage being dynamic data: it would be simple to update the view data without recompiling the application. This worked well in very controlled conditions, but suffered from a poor user experience: when WiFi dropped out (something that happened frequently), the user would have to rejoin a network and resume the previously failed data update. It felt like this approach wasn’t quite right for our intended purpose, so we shelved the prototype and went back to using static data.
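For illustration, a rough sketch of that shelved approach is below – assuming a hypothetical text/JSON endpoint sitting in front of the MySQL database; the class name, URL handling and the commented-out view update are all assumptions rather than the project’s actual code:

```java
import android.os.AsyncTask;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical background task: fetches region copy from a web service
// sitting in front of the MySQL database, then hands it to the view layer.
public class FetchRegionInfoTask extends AsyncTask<String, Void, String> {

    @Override
    protected String doInBackground(String... urls) {
        try {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(urls[0]).openConnection();
            connection.setConnectTimeout(5000);
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            StringBuilder result = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                result.append(line);
            }
            reader.close();
            return result.toString();
        } catch (Exception e) {
            // WiFi dropping out lands here, leaving the view stale.
            return null;
        }
    }

    @Override
    protected void onPostExecute(String response) {
        if (response != null) {
            // updateView(response); // hypothetical helper that refreshes the
            //                       // layout for the current region
        }
    }
}
```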
A great user experience is key; the application could be doing all kinds of fancy things in the background, but if that makes for a lousy user experience then nobody will want to use the app in the first place. Generally, users won’t care how the application gets its data as long as they’ve had a good experience.
Optimisation
Glass is only a little thing, and we wanted to avoid putting too much computational pressure on it, so optimising the application as much as possible seemed like a good option, especially since it was already necessary to rewrite much of the logic within the forked project in order for it to meet our requirements.
The original application worked by continually scanning for, and displaying information about, as many iBeacons as it could find. This behaviour didn’t correspond to our functional goals, as we only needed to find one of three beacons at a time. It therefore made sense for our version to scan for iBeacons only until one specific to the application was found. In order for this to work, I set up three distinct spatial regions within the output range of each iBeacon. The application would then stop scanning for iBeacons when in range of one of the regions and display information about that specific region. So, for example, it starts by scanning for three beacons, each with a named overall region – ‘kitchen’, ‘library’ and ‘creative’. When one is found, such as ‘kitchen’, and the wearer is within its entry region, the app stops scanning for the other two. If the application then detects that it is far enough outside the ‘kitchen’ entry region to be within, or further away than, the exit region, it once again starts scanning for all three iBeacons. This simple optimisation saves us tons of unnecessary processing, in turn prolonging the battery life of the Google Glass.
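To make that concrete, here is a minimal sketch of the scan-until-found logic. The class, method and region-handling names are hypothetical and not taken from the Glasstimote code; in the real app this would be driven by the beacon SDK’s distance callbacks:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: scan for all three named regions, lock onto the first
// beacon whose entry region we step into, and only resume scanning once we
// have drifted out past its exit region.
public class RegionScanner {

    private static final List<String> REGIONS =
            Arrays.asList("kitchen", "library", "creative");

    private final double entryRadiusMetres;
    private final double exitRadiusMetres;

    private String currentRegion = null; // null means "still scanning for all beacons"

    public RegionScanner(double entryRadiusMetres, double exitRadiusMetres) {
        this.entryRadiusMetres = entryRadiusMetres;
        this.exitRadiusMetres = exitRadiusMetres;
    }

    // Called whenever the beacon SDK reports a distance estimate for a beacon.
    public void onBeaconDistance(String regionName, double distanceMetres) {
        if (currentRegion == null) {
            // Still scanning: lock on as soon as we are inside any entry region.
            if (REGIONS.contains(regionName) && distanceMetres <= entryRadiusMetres) {
                currentRegion = regionName;
                stopScanningOtherBeacons();
                showRegionInfo(regionName);
            }
        } else if (regionName.equals(currentRegion) && distanceMetres >= exitRadiusMetres) {
            // Far enough away again: forget the region and rescan for all three.
            currentRegion = null;
            startScanningAllBeacons();
        }
    }

    // Stand-ins for the real SDK calls and view updates.
    private void stopScanningOtherBeacons() { /* ... */ }
    private void startScanningAllBeacons() { /* ... */ }
    private void showRegionInfo(String regionName) { /* ... */ }
}
```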
Another small optimisation was to use string values defined in the XML for updating views with region location information, instead of using string literals to display this data. This helps avoid unnecessary object allocation.
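As a hedged example of what that looks like in practice – the resource and view names are illustrative, not lifted from the project:

```java
import android.widget.TextView;

// Hypothetical helper: pulls region copy from res/values/strings.xml rather
// than building new string literals each time a view is refreshed.
public final class RegionText {

    private RegionText() {}

    public static void showKitchenInfo(TextView regionInfoView) {
        // R.string.region_kitchen_info is an illustrative resource name,
        // defined once as <string name="region_kitchen_info">…</string>.
        regionInfoView.setText(R.string.region_kitchen_info);

        // As opposed to passing a literal on every update:
        // regionInfoView.setText("Welcome to the kitchen");
    }
}
```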
Testing
iBeacons in general seem to be fairly unreliable when it comes to estimating distance. This is down to many changing environmental and atmospheric conditions, such as the number of people in a room, WiFi interference, and even the weather.
It’s also very difficult to debug Glassware efficiently when iBeacon regions are involved, due to the need for physical testing; much of the time I was walking around armed with a MacBook Pro, just so that I could read the logs in realtime.
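For anyone attempting the same, something along these lines is a reasonable way to watch distance estimates change in real time – the tag and message format are illustrative, with the output filtered on the laptop via adb logcat:

```java
import android.util.Log;

// Hypothetical debug logging while walking the beacon regions; watched
// from a laptop with `adb logcat -s GlasstimoteBeacons`.
public final class BeaconLog {

    private static final String TAG = "GlasstimoteBeacons";

    private BeaconLog() {}

    public static void logDistance(String regionName, double distanceMetres) {
        Log.d(TAG, String.format("region=%s distance=%.2fm", regionName, distanceMetres));
    }
}
```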
I experimented with setting various signal strengths on the iBeacons and in the end found it easiest to use a strong signal in order to emit at a fairly long distance, but to have the application logic look for smaller ranges within this large area; the iBeacons were set to emit at a range of 30m and within this total emitted range a 2m radius was used for entry regions and a 3.5m radius was used for exit regions.
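Wiring those figures into the earlier RegionScanner sketch might look like this – again hypothetical, and note that the 30m broadcast range is a signal-strength setting on the beacons themselves rather than anything in code:

```java
// Hypothetical configuration using the radii above; the beacons were set to
// broadcast at roughly 30m via their own signal-strength setting.
public final class ScannerConfig {

    private ScannerConfig() {}

    public static RegionScanner createScanner() {
        double entryRadiusMetres = 2.0; // step inside this to lock onto a beacon
        double exitRadiusMetres = 3.5;  // drift beyond this to resume scanning
        return new RegionScanner(entryRadiusMetres, exitRadiusMetres);
    }
}
```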
In the end the app worked really well, detecting iBeacons fairly reliably and providing a great experience for TMW Incubator Expo attendees. The final version of the app is open source and can be found on the TMW GitHub repo: https://github.com/tmwagency/Glasstimote.
Final Thoughts
With interesting apps such as Word Lens, which translates words in real time both with and without an Internet connection, and the ability to display contextually relevant information based on the user’s location via iBeacons, Google Glass has a lot of potential. It also seems very attractive to Android developers, since porting existing apps to Glassware became much easier after the release of the GDK. Even so, there is a tough road ahead for the £1000 glasses when compared to extension-focused wearables such as smart watches from Samsung and the soon-to-be-released Apple iWatch. The latter – in addition to being much more subtle in appearance – offer a less intrusive option for people wanting to interact with their digital world, as the watches are currently a simplified extension of smartphones and tablets; in contrast to Glass, they are literally not in your face when worn. Yet Google Glass retains a certain charm as the first of its kind, and we at TMW are hoping the competition will spawn some even greater products from the technology giants.