Add a TensorFlow Lite Modular Service

Viam provides an example modular resource written in C++ that extends the ML model service to run any TensorFlow Lite model. The example also includes an inference client program, which generates audio samples and uses the modular resource to classify them based on a pre-trained model.

This tutorial walks you through everything necessary to start using these example files with your machine, including building the C++ SDK, installing viam-server and configuring your machine, and generating results with the example inference client program.

The provided example code demonstrates the design, implementation, and usage of a custom module to help you write your own. This code is for instructional purposes only, and is not intended for production use.

You can find the example files in the src/viam/examples/ directory of the Viam C++ SDK.

Build the C++ SDK

To build the Viam C++ SDK, you will need a macOS or Linux computer. Follow the instructions below for your platform:

macOS

Follow the Viam C++ SDK build instructions to build the SDK on your macOS computer using the brew package manager.

While your specific build steps may differ slightly, your installation should generally resemble the following:

  1. Install all listed dependencies to support the Viam C++ SDK:

    brew install abseil cmake boost grpc protobuf xtensor pkg-config ninja buf
    
  2. Create a new example_workspace directory for this tutorial, and create an opt directory within it to house the build artifacts:

    mkdir -p ~/example_workspace
    cd ~/example_workspace
    mkdir -p opt
    
  3. Within the ~/example_workspace directory, clone the Viam C++ SDK:

    git clone git@github.com:viamrobotics/viam-cpp-sdk.git
    
  4. Change directory into the SDK, and create a build directory to house the build:

    cd viam-cpp-sdk/
    mkdir build
    cd build
    
  5. Create an environment variable PKG_CONFIG_PATH which points to the version of openssl installed on your system:

    export PKG_CONFIG_PATH="`brew --prefix`/opt/openssl/lib/pkgconfig"
    
  6. Build the C++ SDK by running the following commands:

    cmake .. -DVIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE=ON -DCMAKE_INSTALL_PREFIX=~/example_workspace/opt -G Ninja
    ninja all
    ninja install
    

    For this tutorial, the build process uses the following configuration options. See Viam C++ SDK Build Instructions for more information:

    • VIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE to request building the example module for this tutorial.
    • CMAKE_INSTALL_PREFIX to install to ~/example_workspace/opt instead of the default ./install location.
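
Optionally, you can check that the build artifacts were installed where this tutorial expects them. Assuming the CMAKE_INSTALL_PREFIX shown above, the following should list both the example module and the inference client used later in this tutorial:

    ls ~/example_workspace/opt/bin
    # Both example_mlmodelservice_tflite and example_audio_classification_client should appear in the listing.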

Linux

Follow the Viam C++ SDK build instructions to build the SDK on your Linux system.

While your specific build steps may differ slightly, your installation should generally resemble the following:

  1. Clone the Viam C++ SDK to your Linux system:

    git clone git@github.com:viamrobotics/viam-cpp-sdk.git
    
  2. Build and run the bullseye development Docker container included with the SDK. If you haven’t already, first install Docker Engine.

    cd viam-cpp-sdk/
    docker build -t cpp . -f etc/docker/Dockerfile.debian.bullseye
    docker run --rm -it -v "$PWD":/usr/src/viam-cpp-sdk -w /usr/src/viam-cpp-sdk cpp /bin/bash
    

    Alternatively, you can skip running the Docker container if you would prefer to use your own development environment.

  3. Install all listed dependencies to support the Viam C++ SDK:

    sudo apt-get install git cmake build-essential libabsl-dev libboost-all-dev libgrpc++-dev libprotobuf-dev pkg-config ninja-build protobuf-compiler-grpc
    
  4. If you are not using the bullseye container included with the SDK, you may need to install a newer version of cmake to build the SDK. Run the following to determine the version of cmake installed on your system:

    cmake --version
    

    If the version returned is 3.25 or later, skip to the next step. Otherwise, download and install cmake 3.25 or later from your system’s package manager. For example, if using Debian, you can run the following commands to add the bullseye-backports repository and install the version of cmake provided there:

    sudo apt-get install software-properties-common
    sudo apt-add-repository 'deb http://deb.debian.org/debian bullseye-backports main'
    sudo apt-get update
    sudo apt-get install -t bullseye-backports cmake
    
  5. Create an opt directory to install the build artifacts to:

    mkdir -p ~/example_workspace/opt
    
  6. Within the viam-cpp-sdk directory, create a build directory to house the build:

    mkdir build
    cd build
    
  7. Build the C++ SDK by running the following commands:

    cmake .. -DVIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE=ON -DCMAKE_INSTALL_PREFIX=~/example_workspace/opt -G Ninja
    ninja all
    ninja install
    

    For this tutorial, the build process uses the following configuration options. See Viam C++ SDK Build Instructions for more information:

    • VIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE to request building the example module for this tutorial.
    • CMAKE_INSTALL_PREFIX to install to ~/example_workspace/opt instead of the default ./install location.
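
As on macOS, you can optionally confirm that the example binaries were installed under the configured prefix. Run this in the same environment where you ran ninja install:

    ls ~/example_workspace/opt/bin
    # Both example_mlmodelservice_tflite and example_audio_classification_client should appear in the listing.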

Download the yamnet/classification model file

This example uses the yamnet/classification TensorFlow Lite model for audio classification.

  1. Download the yamnet/classification TensorFlow Lite model file and place it in your example_workspace directory:

    curl -Lo ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite https://tfhub.dev/google/lite-model/yamnet/classification/tflite/1?lite-format=tflite
    

    Alternatively, you can download the model file directly: yamnet classification tflite model. If you download it this way, move the downloaded file to your ~/example_workspace directory.

  2. Extract the labels file yamnet_label_list.txt from the downloaded model file:

    unzip ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite -d ~/example_workspace/
    

    The labels file provides pre-populated labels for the calculated scores, so that output scores can be associated and returned with their matching labels. You can omit this file if desired, which will cause the inference client to return the computed scores without labels.
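
If you would like to preview the extracted labels, an optional quick check (assuming the extraction path above) is to print the first few entries of the file:

    head ~/example_workspace/yamnet_label_list.txt
    # Prints the first ten class labels, one per line.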

Install viam-server

Next, install viam-server on your machine, if you have not done so already:

  1. Navigate to the Viam app in your browser and add a new machine.

  2. Navigate to the CONFIGURE tab and find your machine’s card. An alert directs you to Set up your machine part. Click View setup instructions to open the setup instructions.

  3. Select the platform you want to run viam-server on. Follow the steps listed until you receive confirmation that your machine is connected.

  4. Once complete, verify that step 3 of the setup instructions indicates that your machine has successfully connected.

  5. Stop viam-server by pressing Ctrl-C on your keyboard from within the terminal window where you entered the commands from step 3 above.
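
Before continuing, you can optionally confirm that viam-server is available on your PATH and that its configuration file was written to the default location used later in this tutorial. This assumes the standard installation performed by the setup instructions:

    which viam-server
    ls -l /etc/viam.json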

Generate your machine configuration

When you built the C++ SDK, the build process also built the example_audio_classification_client binary, which includes a --generate option that determines and creates the machine configuration necessary to support this example.

To generate your machine’s configuration using example_audio_classification_client:

  1. First, determine the full path to the yamnet/classification model you just downloaded. If you followed the instructions above, this path is: ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite.

  2. Next, determine the full path to the example_mlmodelservice_tflite modular resource example provided with the Viam C++ SDK. If you followed the instructions above, this path is: ~/example_workspace/opt/bin/example_mlmodelservice_tflite.

  3. Run the example_audio_classification_client binary with the --generate option, providing both paths as follows:

    cd ~/example_workspace/opt/bin
    ./example_audio_classification_client --generate --model-path ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite --tflite-module-path ~/example_workspace/opt/bin/example_mlmodelservice_tflite > ~/example_workspace/viam-example-mlmodel-config.json
    
  4. Verify that the resulting configuration file was created successfully:

    cat ~/example_workspace/viam-example-mlmodel-config.json
    
  5. Copy the contents of this file. Then return to your machine’s page on the Viam app, select the CONFIGURE tab, select JSON mode, and add the configuration into the text area.

  6. Click the Save button in the top right corner of the page. Now, when you switch back to Builder mode, you can see the new configuration settings.

This generated file contains the minimum configuration required to support this tutorial: services parameters for the ML model service and modules parameters for the example_mlmodelservice_tflite module.
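
As an optional sanity check, you can also confirm that the generated file references the example_mlmodelservice_tflite module binary and the tensor names (sample and categories) that the inference client relies on later in this tutorial. The exact contents of the file are produced by the --generate step:

    grep -E 'example_mlmodelservice_tflite|sample|categories' ~/example_workspace/viam-example-mlmodel-config.json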

Run the inference client

With everything configured and running, you can now run the inference client that connects to viam-server and uses the example_mlmodelservice_tflite module.

  1. First, determine your machine address, API key, and API key ID. To do so, navigate to your machine’s CONNECT tab on the Viam app and copy the machine address from the Code sample page. Then go to the API keys page on the CONNECT tab to get an API key and its API key ID. The API key resembles abcdef1g23hi45jklm6nopqrstu7vwx8, the API key ID resembles a1234b5c-678d-9012-3e45-67fabc8d9efa, and the machine address resembles my-machine-main.abcdefg123.viam.cloud.

  2. Next, start viam-server once more on your machine, this time as a background process:

    viam-server -config /etc/viam.json &
    
  3. Then, run the following to start the inference client, providing the necessary access credentials and the path to the labels file extracted earlier:

    cd ~/example_workspace/opt/bin
    ./example_audio_classification_client --model-label-path ~/example_workspace/yamnet_label_list.txt --robot-host my-machine-main.abcdefg123.viam.cloud --robot-api-key abcdef1g23hi45jklm6nopqrstu7vwx8 --robot-api-key-id a1234b5c-678d-9012-3e45-67fabc8d9efa
    

    The command should return output similar to:

    0: Static                               0.5
    1: Noise                                0.332031
    2: White noise                          0.261719
    3: Cacophony                            0.109375
    4: Pink noise                           0.0585938
    
    Measuring inference latency ...
    Inference latency (seconds), Mean: 0.012795
    Inference latency (seconds), Var : 0.000164449
    

    The labels shown in the example output require that you have provided the yamnet_label_list.txt labels file to example_audio_classification_client using the --model-label-path flag. If you have omitted the labels file, the computed scores will be returned without labels.

Understanding the code

The example_mlmodelservice_tflite module and the MLModelService modular resource it provides extend the existing ML model service to run any TensorFlow Lite model. The example_audio_classification_client inference client provides sample audio data and the yamnet/classification TensorFlow Lite model to the MLModelService modular resource and interprets the results it returns.

All example code is provided in the Viam C++ SDK in the src/viam/examples/ directory.

The code in src/viam/examples/mlmodel/example_audio_classification_client.cpp is richly commented to explain each step it takes in generating and analyzing the data provided. What follows is a high-level overview of the steps it takes when executed:

  1. The inference client generates two signals that meet the input requirements of the yamnet/classification model: the first signal is silence, while the second is noise. As written, the example analyzes only the noise signal, but you can change which signal is classified by changing which is assigned to the samples variable in the code.

  2. The client then populates an input tensor named sample as a tensor_view over the provided sample data. The tensor must be named according to the configured value under tensor_name_remappings in your machine configuration. If you followed the instructions above, the value sample was pre-populated in your generated machine configuration.

  3. The client invokes the infer method provided by the example_mlmodelservice_tflite module, providing it with the sample input tensor data it generated earlier.

  4. The example_mlmodelservice_tflite module returns a map of response tensors as a result.

  5. The client validates the result, including its expected type: a vector of float values. The expected output must be defined under tensor_name_remappings in your machine configuration for validation to succeed. If you followed the instructions above, the value categories was pre-populated in your generated machine configuration.

  6. If a labels file was provided, labels are read in as a vector of string values and the top 5 scores are associated with their labels.

  7. Finally, the client runs 100 rounds of inference using the same input data and reports the mean and variance of the measured inference latency.

Similarly, the example_mlmodelservice_tflite module can be found at src/viam/examples/modules/example_mlmodelservice_tflite.cpp and also offers rich comments explaining its features and considerations.

Next steps

This tutorial explores audio classification using the yamnet/classification TensorFlow Lite model, but the MLModelService modular resource provided by the example_mlmodelservice_tflite module can accept any TensorFlow Lite model as long as it fulfills the TFLite model constraints.

Once you have run the example and examined the module and client code, you might explore the following next steps:

  • Write a client similar to example_audio_classification_client that generates a different kind of data and provides a suitable TensorFlow Lite model for that data to the MLModelService modular resource. For example, you might find a new pre-trained TensorFlow Lite model that analyzes speech waveforms and write a client to provide these waveform samples to the MLModelService modular resource and interpret the results returned.
  • Write a client similar to example_audio_classification_client that trains its own model on existing or incoming data, as long as that model fulfills the TFLite model constraints. For example, you might add a movement sensor component to your machine that captures sensor readings to the built-in data management service. Then you could write a client that trains a new model based on the collected data, provides the model and new sensor data readings to the MLModelService modular resource, and interprets the results returned.
  • Write a module similar to example_mlmodelservice_tflite that accepts models for other inference engines besides TensorFlow Lite, then write a client that provides a valid model and source data for that inference engine.

Troubleshooting and additional documentation

You can find additional reference material in the C++ SDK documentation.

Have questions, or want to meet other people working on robots? Join our Community Discord.

If you notice any issues with the documentation, feel free to file an issue or edit this file.