Introducing the Auto Voice Engine for RealWear

With the release of Auto Voice Engine, JourneyApps drastically simplifies the process of creating voice-enabled apps for use on RealWear devices.

Build-once-run-anywhere capability has been important for JourneyApps developers building apps that need to be accessed across platforms like Android, iOS, Windows, and macOS. With our Auto Voice Engine, that capability now reaches extended reality devices such as the RealWear Navigator® 500.

The challenge

Previously, developers building apps for RealWear needed to follow one of two approaches to allow users to interact with apps using voice:

  1. Rely on WearML numerical indicators that assign numbered voice commands to interactive elements.
  2. Manually add voice commands for each interactive element in their app.

While WearML numerical indicators are assigned automatically, they have significant user experience downsides. The biggest is that users must map the functionality they want to a numerical indicator that is displayed only briefly when a view loads. Because these indicators effectively add another UI layer the user needs to make sense of, the experience can feel slower and more tedious than what users expect from modern apps.

Manually added voice commands sit at the other end of the spectrum: they can make for a great user experience, but they make the process of developing voice-compatible apps slow and tedious – especially for larger apps with hundreds of voice commands.

A solution for developers and users

Instead of having to register a voice command by hand for each interactive element in your app, the Auto Voice Engine automatically adds voice commands for your UI based on element labels, available controls such as pagination, and more.

Users benefit from an experience built on intuitive say-what-you-see voice commands: they can see which voice commands are available simply by looking at their micro-display (or say “app help” to see a dynamically generated list of all accepted voice commands). Developers benefit because they don’t even have to think about how to make their apps voice-compatible: apps work with voice as soon as developers click deploy.
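
To make the idea concrete, here is a minimal sketch of how say-what-you-see commands can be derived from a view’s interactive elements. This is a conceptual illustration in TypeScript, not JourneyApps’ actual implementation: the UIElement and VoiceCommand types and the generateVoiceCommands function are hypothetical names.

```typescript
// Conceptual sketch only: UIElement, VoiceCommand, and
// generateVoiceCommands are hypothetical names, not JourneyApps APIs.

interface UIElement {
  label: string;           // visible text, e.g. "Submit Report"
  onActivate: () => void;  // the element's tap/click handler
}

interface VoiceCommand {
  phrase: string;          // what the user says
  action: () => void;      // what the app does in response
}

// Derive one say-what-you-see command per interactive element:
// the visible label becomes the spoken phrase.
function generateVoiceCommands(elements: UIElement[]): VoiceCommand[] {
  return elements.map((el) => ({
    phrase: el.label.trim().toLowerCase(),
    action: el.onActivate,
  }));
}

// Built-in controls such as pagination contribute commands as well.
const builtInCommands: VoiceCommand[] = [
  { phrase: "next page", action: () => { /* advance pagination */ } },
  { phrase: "previous page", action: () => { /* go back a page */ } },
];
```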

Key functionalities

The Auto Voice Engine has four key functionalities:

  1. Automatically generated voice commands for UI elements (which can be customized).
  2. Visual highlighting of UI elements as they are selected by users, resulting in more intuitive app navigation.
  3. An automated “app help” system that displays the voice commands available to users wherever they are in the app (see the sketch after this list).
  4. A voice debugging window on Windows or macOS for developers to see which voice commands are being registered on each view without having to use a RealWear device.
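
Building on the hypothetical sketch above, the “app help” overlay can be thought of as the current view’s command registry rendered as text. Again, renderAppHelp is an illustrative name, not a JourneyApps API:

```typescript
// Hypothetical continuation of the sketch above: the "app help" overlay
// is simply the current view's registered commands rendered as text.
function renderAppHelp(commands: VoiceCommand[]): string {
  return commands.map((cmd) => `Say "${cmd.phrase}"`).join("\n");
}
```

Conceptually, the voice debugging window (point 4 above) plays a similar role for developers, surfacing each view’s registered commands without requiring spoken input.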

These functionalities combine to provide significant benefits:

  • Developers no longer need to manually define voice commands.
  • App users can easily see which UI element they’ve selected, and false positives are prevented because voice commands unrelated to the selected element are ignored by the app.
  • Users can see a full range of available voice commands by simply saying “app help” wherever they are in an app.
  • Developers can test voice functionality without needing to repeatedly speak voice commands (even without having a RealWear device).

The video below demonstrates these functionalities:


To learn more about how our Auto Voice Engine works, see our documentation here.

