ML Model Service
The ML Model service allows you to deploy machine learning models to your smart machine.
Create an ML model service
1. Navigate to your robot’s Config tab on the Viam app.
2. Click the Services subtab and click Create service in the lower-left corner.
3. Select the ML Model type, then select the TFLite CPU model.
4. Enter a name for your service and click Create.
You can choose to configure your service with an existing model on the smart machine or deploy a model onto your smart machine:

- To configure your service with an existing model on the robot, select Path to Existing Model On Robot for the Deployment field. Then specify the absolute Model Path and any Optional Settings, such as the absolute Label Path and the Number of threads.
- To configure your service and deploy a model onto your robot, select Deploy Model On Robot for the Deployment field. Then select the Models and any Optional Settings, such as the Number of threads.
Add the tflite_cpu ML model object to the services array in your raw JSON configuration:
"services": [
{
"name": "<mlmodel_name>",
"type": "mlmodel",
"model": "tflite_cpu",
"attributes": {
"model_path": "${packages.<model-name>}/<model-name>.tflite",
"label_path": "${packages.<model-name>}/labels.txt",
"num_threads": <number>
}
},
... // Other services
]
"services": [
{
"name": "fruit_classifier",
"type": "mlmodel",
"model": "tflite_cpu",
"attributes": {
"model_path": "${packages.<model-name>}/<model-name>.tflite",
"label_path": "${packages.<model-name>}/labels.txt",
"num_threads": 1
}
}
]
The following parameters are available for a "tflite_cpu" model:
Parameter | Inclusion | Description
--- | --- | ---
model_path | Required | The absolute path to the .tflite model file, as a string.
label_path | Optional | The absolute path to a .txt file that holds class labels for your TFLite model, as a string. The SDK expects this text file to contain an ordered listing of the class labels. Without this file, classes will read as “1”, “2”, and so on.
num_threads | Optional | An integer that defines how many CPU threads to use to run inference. Default: 1.
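The ordering of the labels file matters: the model’s integer class indices map to the lines of the file, top to bottom. A minimal sketch of how a consumer might read it (the file name and class names are hypothetical):

# labels.txt, one class per line, in the model's output order:
#   apple
#   banana
#   orange
with open("labels.txt") as f:
    labels = [line.strip() for line in f]

print(labels[0])  # class index 0 maps to "apple"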
Save the configuration, and the model will be added to your robot.
Info
If you upload or train a new version of a model, Viam automatically deploys the latest version of the model to the robot. If you do not want the latest version deployed automatically, you can change the packages configuration in the Raw JSON robot configuration.
You can get the version number for a specific model version by clicking COPY on the model in the models tab of the DATA page. The model package config looks like this:
{
  "package": "<model_id>/allblack",
  "version": "YYYYMMDDHHMMSS",
  "name": "<model_name>"
}
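With the service configured and the model deployed, you can run inference against it from code. The following is a minimal sketch using the Viam Python SDK’s MLModelClient; it assumes an already-connected RobotClient named machine (connection code omitted), and the input shape shown is only illustrative, so check your model’s metadata for the real tensor names and shapes:

import numpy as np
from viam.services.mlmodel import MLModelClient

# Minimal sketch: assumes `machine` is an already-connected RobotClient.
async def run_inference(machine):
    mlmodel = MLModelClient.from_robot(machine, "fruit_classifier")

    # Inspect the model's expected input and output tensors.
    md = await mlmodel.metadata()
    input_name = md.input_info[0].name

    # Placeholder 224x224 RGB UInt8 image; use your model's real input shape.
    image = np.zeros((1, 224, 224, 3), dtype=np.uint8)

    outputs = await mlmodel.infer({input_name: image})
    for name, tensor in outputs.items():
        print(name, tensor.shape)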
tflite_cpu Limitations
We strongly recommend that you package your tflite_cpu model with metadata in the standard form. In the absence of metadata, your tflite_cpu model must satisfy the following requirements:
- A single input tensor representing the image, of type UInt8 (expecting values from 0 to 255) or Float32 (values from -1 to 1).
- At least 3 output tensors (the rest won't be read) containing the bounding boxes, class labels, and confidence scores (in that order).
- A bounding box output tensor ordered [x x y y], where x is an x-boundary (xmin or xmax) of the bounding box, and the same is true for y. Each value should be between 0 and 1, designating the percentage of the image at which the boundary can be found.
These requirements are satisfied by a few publicly available model architectures including EfficientDet, MobileNet, and SSD MobileNet V1. You can use one of these architectures or build your own.
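As an illustration of the [x x y y] layout described above, the following minimal sketch converts normalized boundaries to pixel coordinates. It assumes an [xmin, xmax, ymin, ymax] ordering, which is an assumption; verify the exact order your model emits:

import numpy as np

# Convert normalized [x x y y] boxes to pixel coordinates.
# Assumes [xmin, xmax, ymin, ymax] ordering; verify against your model.
def decode_boxes(boxes: np.ndarray, img_w: int, img_h: int) -> np.ndarray:
    pixels = boxes.astype(float)
    pixels[:, 0:2] *= img_w  # x-boundaries scale with image width
    pixels[:, 2:4] *= img_h  # y-boundaries scale with image height
    return pixels

boxes = np.array([[0.10, 0.45, 0.20, 0.60]])  # one detection, values in [0, 1]
print(decode_boxes(boxes, img_w=640, img_h=480))  # [[ 64. 288.  96. 288.]]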
Next Steps
To make use of your new model, follow the instructions to create a vision service, such as an mlmodel detector or classifier.
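For instance, a vision service backed by this ML model service could be queried from the Python SDK. This is a hedged sketch: the service name my_classifier and the camera name cam are hypothetical, and it again assumes a connected RobotClient named machine:

from viam.services.vision import VisionClient

# Hypothetical vision service that uses the ML model service configured above.
async def classify(machine):
    vision = VisionClient.from_robot(machine, "my_classifier")
    classifications = await vision.get_classifications_from_camera("cam", 1)
    for c in classifications:
        print(c.class_name, c.confidence)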
Have questions, or want to meet other people working on robots? Join our Community Discord.