Add a TensorFlow Lite Modular Service
Viam provides an example modular resource written in C++ that extends the ML model service to run any TensorFlow Lite model. The example includes an inference client program as well, which generates audio samples and uses the modular resource to classify the audio samples based on a pre-trained model.
This tutorial walks you through everything necessary to start using these example files with your machine, including building the C++ SDK, configuring your machine and installing viam-server, and generating results with the example inference client program.
The provided example code demonstrates the design, implementation, and usage of a custom module to help you write your own. This code is for instructional purposes only, and is not intended for production use.
You can find the example files in the Viam C++ SDK:
- `example_mlmodelservice_tflite.cpp` - an example module which provides the `MLModelService` modular resource, capable of running any TensorFlow Lite model.
- `example_audio_classification_client.cpp` - an example inference client which generates audio samples and invokes the `example_mlmodelservice_tflite` module to classify those samples using the `yamnet/classification` TensorFlow Lite model.
Build the C++ SDK
To build the Viam C++ SDK, you will need a macOS or Linux computer. Follow the instructions below for your platform:
Follow the Viam C++ SDK build instructions to build the SDK on your macOS computer using the `brew` package manager.
While your specific build steps may differ slightly, your installation should generally resemble the following:
Install all listed dependencies to support the Viam C++ SDK:
```sh
brew install abseil cmake boost grpc protobuf xtensor pkg-config ninja buf
```
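If you want to confirm the dependencies installed correctly before continuing (an optional check, not part of the build steps), you can ask Homebrew to list the versions it installed:

```sh
# Optional check: print the installed version of each build dependency
brew list --versions abseil cmake boost grpc protobuf xtensor pkg-config ninja buf
```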
Create a new `example_workspace` directory for this tutorial, and create an `opt` directory within it to house the build artifacts:

```sh
mkdir -p ~/example_workspace
cd ~/example_workspace
mkdir -p opt
```
Within the `~/example_workspace` directory, clone the Viam C++ SDK:

```sh
git clone git@github.com:viamrobotics/viam-cpp-sdk.git
```
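If you'd like to read through the example module and client sources referenced at the start of this tutorial before building them, you can locate them inside the cloned repository. The exact subdirectory may vary between SDK versions, so `find` is a safe way to look them up:

```sh
# Optional: locate the example sources within the cloned SDK (exact paths may vary by SDK version)
find ~/example_workspace/viam-cpp-sdk -name example_mlmodelservice_tflite.cpp -o -name example_audio_classification_client.cpp
```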
Change directory into the SDK, and create a `build` directory to house the build:

```sh
cd viam-cpp-sdk/
mkdir build
cd build
```
Create an environment variable `PKG_CONFIG_PATH` which points to the version of `openssl` installed on your system:

```sh
export PKG_CONFIG_PATH="`brew --prefix`/opt/openssl/lib/pkgconfig"
```
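As an optional sanity check, you can confirm that `pkg-config` now resolves the Homebrew-provided `openssl`:

```sh
# Should print the openssl version provided by Homebrew
pkg-config --modversion openssl
```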
Build the C++ SDK by running the following commands:

```sh
cmake .. -DVIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE=ON -DCMAKE_INSTALL_PREFIX=~/example_workspace/opt -G Ninja
ninja all
ninja install
```
For this tutorial, the build process uses the following configuration options. See Viam C++ SDK Build Instructions for more information:

- `VIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE` to request building the example module for this tutorial.
- `CMAKE_INSTALL_PREFIX` to install to `~/example_workspace/opt` instead of the default `./install` location.
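Once `ninja install` completes, the example binaries used later in this tutorial are installed under the configured install prefix. As an optional check, confirm that they are present:

```sh
# The example module and inference client binaries are installed under the CMAKE_INSTALL_PREFIX
ls ~/example_workspace/opt/bin
```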
Follow the Viam C++ SDK build instructions to build the SDK on your Linux system.
While your specific build steps may differ slightly, your installation should generally resemble the following:
Clone the Viam C++ SDK to your Linux system:

```sh
git clone git@github.com:viamrobotics/viam-cpp-sdk.git
```
Build and run the `bullseye` development Docker container included with the SDK. If you haven't already, first install Docker Engine.

```sh
cd viam-cpp-sdk/
docker build -t cpp . -f etc/docker/Dockerfile.debian.bullseye
docker run --rm -it -v "$PWD":/usr/src/viam-cpp-sdk -w /usr/src/viam-cpp-sdk cpp /bin/bash
```

Alternatively, you can skip running the Docker container if you would prefer to use your own development environment.
Install all listed dependencies to support the Viam C++ SDK:

```sh
sudo apt-get install git cmake build-essential libabsl-dev libboost-all-dev libgrpc++-dev libprotobuf-dev pkg-config ninja-build protobuf-compiler-grpc
```
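As an optional check, you can confirm that the gRPC and protobuf development packages listed above are now present on your system:

```sh
# Optional: confirm the gRPC/protobuf development packages installed
dpkg -l | grep -E 'libgrpc\+\+-dev|libprotobuf-dev|protobuf-compiler-grpc'
```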
If you are not using the `bullseye` container included with the SDK, you may need to install a newer version of `cmake` to build the SDK. Run the following to determine the version of `cmake` installed on your system:

```sh
cmake --version
```

If the version returned is 3.25 or later, skip to the next step. Otherwise, download and install `cmake` 3.25 or later from your system's package manager. For example, if using Debian, you can run the following commands to add the `bullseye-backports` repository and install the version of `cmake` provided there:

```sh
sudo apt-get install software-properties-common
sudo apt-add-repository 'deb http://deb.debian.org/debian bullseye-backports main'
sudo apt-get update
sudo apt-get install -t bullseye-backports cmake
```
Create an `opt` directory to install the build artifacts to:

```sh
mkdir -p ~/example_workspace/opt
```
Within the `viam-cpp-sdk` directory, create a `build` directory to house the build:

```sh
mkdir build
cd build
```
Build the C++ SDK by running the following commands:

```sh
cmake .. -DVIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE=ON -DCMAKE_INSTALL_PREFIX=~/example_workspace/opt -G Ninja
ninja all
ninja install
```
For this tutorial, the build process uses the following configuration options. See Viam C++ SDK Build Instructions for more information:

- `VIAMCPPSDK_BUILD_TFLITE_EXAMPLE_MODULE` to request building the example module for this tutorial.
- `CMAKE_INSTALL_PREFIX` to install to `~/example_workspace/opt` instead of the default `./install` location.
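Once `ninja install` completes, you can optionally confirm that the example module and inference client binaries, which later steps in this tutorial rely on, were installed:

```sh
# Both binaries should be installed under ~/example_workspace/opt/bin
ls -l ~/example_workspace/opt/bin/example_mlmodelservice_tflite ~/example_workspace/opt/bin/example_audio_classification_client
```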
Download the `yamnet/classification` model file

This example uses the `yamnet/classification` TensorFlow Lite model for audio classification.
Download the `yamnet/classification` TensorFlow Lite model file and place it in your `example_workspace` directory:

```sh
curl -Lo ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite https://tfhub.dev/google/lite-model/yamnet/classification/tflite/1?lite-format=tflite
```

Alternatively, you may download the model file here: yamnet classification tflite model. If you download in this fashion, move the downloaded file to your `~/example_workspace` directory.
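Whichever way you downloaded it, you can optionally confirm the model file is in place. Because the labels file is bundled inside the model file as a zip archive (which is why `unzip` is used in the next step), `unzip -l` can also list its contents:

```sh
# Optional: confirm the model file exists and inspect the bundled files
ls -lh ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite
unzip -l ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite
```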
Extract the labels file `yamnet_label_list.txt` from the downloaded model file:

```sh
unzip ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite -d ~/example_workspace/
```
The labels file provides pre-populated labels for the calculated scores, so that output scores can be associated and returned with their matching labels. You can omit this file if desired, which will cause the inference client to return the computed scores without labels.
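To see what those labels look like (optional), print the first few entries of the extracted file:

```sh
# Each line is one audio class label used to annotate the returned scores
head -n 5 ~/example_workspace/yamnet_label_list.txt
```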
Install viam-server
Next, install `viam-server` on your machine, if you have not done so already:
Navigate to the Viam app in your browser and add a new machine.
Navigate to the CONFIGURE tab and find your machine’s card. An alert will be present directing you to Set up your machine part. Click View setup instructions to open the setup instructions.
Select the platform you want to run `viam-server` on. Follow the steps listed until you receive confirmation that your machine is connected.

Important
If you are installing `viam-server` within the `bullseye` Docker container provided with the C++ SDK, you will need to run the following command instead to Download and install viam-server:

```sh
curl https://storage.googleapis.com/packages.viam.com/apps/viam-server/viam-server-stable-x86_64.AppImage -o viam-server && chmod 755 viam-server && sudo ./viam-server --appimage-extract-and-run -config /etc/viam.json
```
Once complete, verify that step 3 of the setup instructions indicates that your machine has successfully connected.
Stop `viam-server` by pressing CTRL-C on your keyboard from within the terminal window where you entered the commands from step 3 above.
Generate your machine configuration
When you built the C++ SDK, the build process also built the `example_audio_classification_client` binary, which includes a `--generate` function that determines and creates the necessary machine configuration to support this example.
To generate your machine's configuration using `example_audio_classification_client`:
First, determine the full path to the `yamnet/classification` model you just downloaded. If you followed the instructions above, this path is: `~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite`.

Next, determine the full path to the `example_mlmodelservice_tflite` modular resource example provided with the Viam C++ SDK. If you followed the instructions above, this path is: `~/example_workspace/opt/bin/example_mlmodelservice_tflite`.

Run the `example_audio_classification_client` binary, providing both paths to the `--generate` function in the following fashion:

```sh
cd ~/example_workspace/opt/bin
./example_audio_classification_client --generate --model-path ~/example_workspace/lite-model_yamnet_classification_tflite_1.tflite --tflite-module-path ~/example_workspace/opt/bin/example_mlmodelservice_tflite > ~/example_workspace/viam-example-mlmodel-config.json
```
Verify that the resulting configuration file was created successfully:

```sh
cat ~/example_workspace/viam-example-mlmodel-config.json
```
Copy the contents of this file. Then return to your machine’s page on the Viam app, select the CONFIGURE tab, select JSON mode, and add the configuration into the text area.
Important
If you already have other configured components, you will need to add each generated JSON object to the respective `modules` or `services` array. If you do not already have configured components, you can replace the contents in JSON with the generated contents.

Click the Save button in the top right corner of the page. Now, when you switch back to Builder mode, you can see the new configuration settings.
This generated configuration features the minimum required configuration to support this tutorial: `services` parameters for the ML model service and `modules` parameters for the `example_mlmodelservice_tflite` module.
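If you have `jq` installed (optional, not required by this tutorial), you can inspect just those two sections of the generated file to see exactly what you will be pasting or merging:

```sh
# Show only the services and modules arrays from the generated configuration
jq '{services: .services, modules: .modules}' ~/example_workspace/viam-example-mlmodel-config.json
```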
Run the inference client
With everything configured and running, you can now run the inference client that connects to `viam-server` and uses the `example_mlmodelservice_tflite` module.
First, determine your machine address, API key, and API key ID. To do so, navigate to your machine's CONNECT tab on the Viam app, and copy the machine address from the Code sample page. Go to the API keys page on the CONNECT tab to get an API key. The API key resembles `abcdef1g23hi45jklm6nopqrstu7vwx8`, the API key ID resembles `a1234b5c-678d-9012-3e45-67fabc8d9efa`, and the machine address resembles `my-machine-main.abcdefg123.viam.cloud`.

Caution
Do not share your API key or machine address publicly. Sharing this information could compromise your system security by allowing unauthorized access to your machine, or to the computer running your machine.
Next, start `viam-server` once more on your machine, this time as a background process:

```sh
viam-server -config /etc/viam.json
```
Important
If you are working within the `bullseye` Docker container on Linux, run the following command instead of the above, from within the directory you installed `viam-server` to:

```sh
./viam-server --appimage-extract-and-run -config /etc/viam.json &
```
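In either case, you can optionally confirm that `viam-server` is running before starting the inference client:

```sh
# Should show a running viam-server process
ps aux | grep '[v]iam-server'
```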
Then, run the following to start the inference client, providing the necessary access credentials and the path to the labels file extracted earlier:
```sh
cd ~/example_workspace/opt/bin
./example_audio_classification_client --model-label-path ~/example_workspace/yamnet_label_list.txt --robot-host my-machine-main.abcdefg123.viam.cloud --robot-api-key abcdef1g23hi45jklm6nopqrstu7vwx8 --robot-api-key-id a1234b5c-678d-9012-3e45-67fabc8d9efa
```
The command should return output similar to:
```
0: Static 0.5
1: Noise 0.332031
2: White noise 0.261719
3: Cacophony 0.109375
4: Pink noise 0.0585938
Measuring inference latency ...
Inference latency (seconds), Mean: 0.012795
Inference latency (seconds), Var : 0.000164449
```
The labels shown in the example output require that you have provided the `yamnet_label_list.txt` labels file to `example_audio_classification_client` using the `--model-label-path` flag. If you have omitted the labels file, the computed scores will be returned without labels.
Understanding the code
The `example_mlmodelservice_tflite` module, and the `MLModelService` modular resource it provides, extends the existing ML model service to run any TensorFlow Lite model.

The `example_audio_classification_client` inference client provides sample audio data and the `yamnet/classification` TensorFlow Lite model to the `MLModelService` modular resource and interprets the results it returns.
All example code is provided in the Viam C++ SDK in the `example_mlmodelservice_tflite.cpp` and `example_audio_classification_client.cpp` files.

The code in `example_audio_classification_client.cpp` proceeds as follows:
1. The inference client generates two signals that meet the input requirements of the `yamnet/classification` model: the first signal is silence, while the second is noise. As written, the example analyzes only the noise signal, but you can change which signal is classified by changing which is assigned to the `samples` variable in the code.
2. The client then populates an input tensor named `sample` as a `tensor_view` over the provided sample data. The tensor must be named according to the configured value under `tensor_name_remappings` in your machine configuration. If you followed the instructions above to generate your machine configuration, the value `sample` was pre-populated for you in your generated machine configuration.
3. The client invokes the `infer` method provided by the `example_mlmodelservice_tflite` module, providing it with the `sample` input tensor data it generated earlier.
4. The `example_mlmodelservice_tflite` module returns a map of response tensors as a result.
5. The client validates the result, including its expected type: a vector of `float` values. The expected output must be defined under `tensor_name_remappings` in your machine configuration for validation to succeed. If you followed the instructions above to generate your machine configuration, the value `categories` was pre-populated for you in your generated machine configuration.
6. If a labels file was provided, labels are read in as a vector of `string` values and the top 5 scores are associated with their labels.
7. Finally, the client runs 100 rounds of inference using the determined label and score pairs, and returns the results of the rounds, including mean and variance values.
Similarly, the implementation of the `example_mlmodelservice_tflite` module can be found in `example_mlmodelservice_tflite.cpp`.
Next steps
This tutorial explores audio classification using the `yamnet/classification` TensorFlow Lite model, but the `MLModelService` modular resource provided by the `example_mlmodelservice_tflite` module can accept any TensorFlow Lite model, so long as it fulfils the TFLite model constraints.
Once you have run the example and examined the module and client code, you might explore the following next steps:
- Write a client similar to `example_audio_classification_client` that generates a different kind of data and provides a suitable TensorFlow Lite model for that data to the `MLModelService` modular resource. For example, you might find a new pre-trained TensorFlow Lite model that analyzes speech waveforms and write a client to provide these waveform samples to the `MLModelService` modular resource and interpret the results returned.
- Write a client similar to `example_audio_classification_client` that trains its own model on existing or incoming data, as long as that model fulfils the TFLite model constraints. For example, you might add a movement sensor component to your machine that captures sensor readings to the built-in data management service. Then you could write a client that trains a new model based on the collected data, provides the model and new sensor data readings to the `MLModelService` modular resource, and interprets the results returned.
- Write a module similar to `example_mlmodelservice_tflite` that accepts models for other inference engines besides TensorFlow Lite, then write a client that provides a valid model and source data for that inference engine.
Troubleshooting and additional documentation
- If you experience issues building the C++ SDK, see C++ SDK: Limitations, Known Issues, and Troubleshooting.
- To customize your C++ build process or make adjustments to fit your platform or deployment requirements, see C++ SDK: Options to Configure or Customize the Build.
You can find additional reference material in the C++ SDK documentation.