How to create dynamic voice commands for RealWear®

The information in this post has been updated with the launch of our Auto Voice Engine. Read more here.

Use cases

Dynamic voice commands are generated based on the data that is being displayed to users. This enables developers to follow the say-what-you-see design pattern, where users can look at the screen and speak the words they see in order to control the RealWear HMT device.

Typical use cases for dynamic voice commands include:

Displaying data in tables: This is useful in many scenarios, including:

  • Selecting items or parts during work order execution.
  • Selecting team members to call for remote assistance.
  • Selecting installed assets to perform maintenance or inspections on.

More intuitive navigation for integrated apps: Where data is being pulled in from existing systems, such as IBM® Maximo®, SAP®, Salesforce®, CMMS, or DMS systems, dynamic voice commands generate voice commands based on data being pulled in. This allows for more intuitive app navigation where commands can correspond to data being displayed and avoids the need for indicators or generic commands such as “Select item 2”.
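To make the contrast concrete, here is a minimal, self-contained sketch in plain TypeScript. The data, interface names and helper functions are hypothetical stand-ins for records pulled from an external system; they are not part of the JourneyApps API:

```typescript
// Hypothetical work-order items, standing in for data pulled from an
// external system such as a CMMS (names are made up for illustration).
interface WorkOrderItem {
  name: string;
}

// A voice command paired with the action to run when it is spoken.
interface VoiceCommand {
  command: string;
  callback: () => void;
}

// Generic approach: the user must say positional commands like "Select item 2".
function genericCommands(
  items: WorkOrderItem[],
  onSelect: (item: WorkOrderItem) => void
): VoiceCommand[] {
  return items.map((item, i) => ({
    command: `Select item ${i + 1}`,
    callback: () => onSelect(item),
  }));
}

// Dynamic approach: the command is the text the user already sees on screen.
function dynamicCommands(
  items: WorkOrderItem[],
  onSelect: (item: WorkOrderItem) => void
): VoiceCommand[] {
  return items.map((item) => ({
    command: item.name,
    callback: () => onSelect(item),
  }));
}

const items: WorkOrderItem[] = [{ name: "Pump 7" }, { name: "Valve 12" }];
let selected = "";
const commands = dynamicCommands(items, (item) => {
  selected = item.name;
});
commands[1].callback(); // simulate the user saying "Valve 12"
console.log(commands.map((c) => c.command)); // → [ "Pump 7", "Valve 12" ]
console.log(selected); // → Valve 12
```

With the dynamic approach the spoken phrase is the on-screen text itself, which is exactly the say-what-you-see pattern described above.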

What you’ll need

To use dynamic voice commands in your JourneyApps app for RealWear® you’ll need:

  • A RealWear® HMT device (for testing)
  • RealWear Explorer (for installing the JourneyApps APK if you haven’t already done so)
  • A JourneyApps OXIDE account – sign up here if you don’t already have one


Open OXIDE and navigate to the relevant app view

In OXIDE, click the Views tab and select the view where you’d like to implement dynamic voice commands from the list, or create a new view.

Example of views displayed in the left panel when the Views tab is selected.

Add dynamic voice command functionality

Dynamic voice commands make use of the data objects and structure defined in your app’s data model, or schema. Dynamic voice commands are implemented in your view’s logic, which is written in either JavaScript or TypeScript, depending on how you’ve set up your app.
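For context, a minimal asset model in the app’s data model might look like the sketch below. The element names, attribute names and types are illustrative assumptions; verify them against your own schema and the JourneyApps data model documentation:

```xml
<!-- Illustrative sketch only: an asset model with the two attributes used
     later in this post (name and photo). Verify element and type names
     against the JourneyApps data model docs. -->
<model name="asset" label="Asset">
    <field name="name" label="Name" type="text"/>
    <field name="photo" label="Photo" type="photo"/>
    <display>{name}</display>
</model>
```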

Reset voice service

Once you’ve selected the view where you want to implement dynamic voice commands, the first step is to reset the voice service so that any previously registered commands are cleared. We’ll call the reset inside the init() function, which is called whenever an app user navigates to the view.

For more best practices, refer to our documentation on RealWear® voice commands.

Here’s how we would do this using TypeScript in the view’s .ts file:

async function init() {
  // Reset voice service
  await journey.voice.reset();
}

Register dynamic voice commands

In our example, we would like to implement dynamic voice commands that allow the user to speak the name attribute of an asset data object in order to select a specific asset from multiple assets displayed in an object table. Once the user says the asset name, we would like to display a photo of that asset stored in the app’s database.

To do this we will define a function that will allow the voice service to listen for voice commands that match the name attributes of the assets being displayed.

We’ll set up this function to populate a constant that we’ll call units with the asset objects that exist in the app’s database. After that we’ll create another constant called commands, which holds a command and callback pair for each asset. The app will know to listen for asset names because a command entry is created for each asset name that exists in the app’s database. Once an asset’s name has been spoken, the callback calls a separate function, selectUnit(), which we’ll define later.

Here’s how we would do this using TypeScript in the view’s .ts file:

async function registerUnitListeners() {
  const units = await DB.asset.all().orderBy('name').toArray();
  const commands = units.map((unit: DB.asset) => {
    return {
      command: `${unit.name}`,
      callback: () => selectUnit(unit),
    };
  });
  await journey.voice.registerCommands(commands);
}

Display asset photo based on dynamic voice command

The next step is to create a function that displays a photo of the asset whose name has been spoken. The photo will be displayed through a display-image UI component, defined in the view’s .xml file. The purpose of this function will therefore be to set the view variable that is referenced by the display-image component as equal to the photo of the asset that has been spoken. We’ve called the view variable asset_photo.

Here’s how we would do this using TypeScript in the view’s .ts file:

function selectUnit(unit: DB.asset) {
  // Assumes the asset model stores the image in a 'photo' attribute
  view.asset_photo = unit.photo;
}
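For reference, the display-image component in the view’s .xml file would bind to the asset_photo view variable along these lines. This is a sketch only; the exact element and attribute syntax may differ, so check the JourneyApps component documentation:

```xml
<!-- Sketch: declares the view variable and binds the image component to it.
     Verify the exact syntax against the display-image component docs. -->
<var name="asset_photo" type="photo"/>

<display-image src="{asset_photo}"/>
```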

Add registerUnitListeners() function to init() function

The final step is to add the registerUnitListeners() function to the init() function to ensure that the dynamic voice commands are active whenever a user navigates to this view.

The full init() function will now look like this:

async function init() {
  // Reset voice service
  await journey.voice.reset();
  // Register dynamic voice commands
  await registerUnitListeners();
}


Last updated: 2021/11/11

JourneyApps provides a rapid way to build custom apps for RealWear® HMT, mobile and desktop. Auto voice commands are simple to set up and manage, offline support is provided out of the box, deploying apps happens with a single click, and prebuilt ERP integrations are included. If you are interested, please contact us to schedule a demo. You can also visit our RealWear page to learn more and subscribe for notifications about new blog posts.
