ML Model Service

The ML model service allows you to deploy machine learning models to your robots.

Create an ML model service

In the Viam app, navigate to the page of the robot you wish to add the ML model service to. Select the Config tab, and click Services.

Scroll to the Create Service section.

  1. Select mlmodel as the Type.
  2. Enter a name for your service in the Name field.
  3. Select tflite_cpu as the Model.
  4. Click Create Service.

You can configure your service either with an existing model on the robot or by deploying a new model onto the robot:

To configure your service with an existing model on the robot, select Path to Existing Model On Robot for the Deployment field.

Then specify the absolute Model Path and any Optional Settings such as the absolute Label Path and the Number of threads.
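For example, a service configured this way might look like the following in your raw JSON configuration (a minimal sketch; the service name and the /home/user/models/ paths are hypothetical stand-ins for the locations of your own model and label files):

"services": [
  {
    "name": "my_classifier",
    "type": "mlmodel",
    "model": "tflite_cpu",
    "attributes": {
      "model_path": "/home/user/models/my_model.tflite",
      "label_path": "/home/user/models/labels.txt",
      "num_threads": 1
    }
  }
]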

To configure your service and deploy a model onto your robot, select Deploy Model On Robot for the Deployment field.

Then select the model to deploy in the Models field, along with any Optional Settings such as the Number of threads.

Add the tflite_cpu ML model object to the services array in your raw JSON configuration:

"services": [
  {
    "name": "<mlmodel_name>",
    "type": "mlmodel",
    "model": "tflite_cpu",
    "attributes": {
      "model_path": "${packages.<model-name>}/<model-name>.tflite",
      "label_path": "${packages.<model-name>}/labels.txt",
      "num_threads": <number>
    }
  },
  ... // Other services
]

For example:

"services": [
  {
    "name": "fruit_classifier",
    "type": "mlmodel",
    "model": "tflite_cpu",
    "attributes": {
      "model_path": "${packages.<model-name>}/<model-name>.tflite",
      "label_path": "${packages.<model-name>}/labels.txt",
      "num_threads": 1
    }
  }
]

The following parameters are available for a "tflite_cpu" model:

Parameter | Inclusion | Description
--- | --- | ---
model_path | Required | The absolute path to the .tflite model file, as a string.
label_path | Optional | The absolute path to a .txt file that holds class labels for your TFLite model, as a string. The SDK expects this text file to contain an ordered listing of the class labels. Without this file, classes will read as “1”, “2”, and so on.
num_threads | Optional | An integer that defines how many CPU threads to use to run inference. Default: 1.

Save the configuration and your model will be added to your robot at $HOME/.viam/packages/<model-name>/<file-name>.

You can get the version number for a specific model version by clicking COPY on the model in the models tab of the DATA page. The model package config looks like this:

{"package":"<model_id>/allblack","version":"1234567891011","name":"<model_name>"}

tflite_cpu Limitations

We strongly recommend that you package your .tflite model with metadata in the standard form.

In the absence of metadata, your tflite_cpu model must satisfy the following requirements:

  • A single input tensor representing the image, of type UInt8 (expecting values from 0 to 255) or Float32 (values from -1 to 1).
  • At least 3 output tensors (the rest won’t be read) containing the bounding boxes, class labels, and confidence scores (in that order).
  • The bounding box output tensor must be ordered [x x y y], where x is an x-boundary (xmin or xmax) of the bounding box, and the same is true for y. Each value should be between 0 and 1, designating the percentage of the image at which the boundary can be found (see the example after this list).
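For instance, under this ordering, a hypothetical bounding box output of [0.2, 0.6, 0.3, 0.8] places the box between 20% (xmin) and 60% (xmax) of the image width, and between 30% (ymin) and 80% (ymax) of its height.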

These requirements are satisfied by a few publicly available model architectures including EfficientDet, MobileNet, and SSD MobileNet V1. You can use one of these architectures or build your own.

Next Steps

To make use of your new model, follow the instructions to create a detector or classifier with the vision service.
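For example, a vision service backed by the fruit_classifier service configured above might look like the following (a sketch, assuming the vision service's mlmodel model; the service name my_vision_service is a hypothetical stand-in):

"services": [
  ... // The tflite_cpu ML model service configured above,
  {
    "name": "my_vision_service",
    "type": "vision",
    "model": "mlmodel",
    "attributes": {
      "mlmodel_name": "fruit_classifier"
    }
  }
]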

Have questions, or want to meet other people working on robots? Join our Community Discord.