ML Models
The Viam Registry provides Machine Learning (ML) models that can recognize patterns in your data.
ML models in the registry
Usage
To use an ML model with a machine, you have to deploy it using the ML model service. Services like the vision service can then use the ML model service to provide your machine with information about its surroundings.
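For example, a machine configuration that runs a TensorFlow Lite model with the tflite_cpu ML model service and exposes it to the vision service's mlmodel detector might look roughly like the sketch below. The service names, file paths, and attribute values are placeholders, and the exact attributes available can vary by viam-server version, so treat this as an illustration rather than a copy-paste configuration:

```json
{
  "services": [
    {
      "name": "my-mlmodel-service",
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "model_path": "/path/to/model.tflite",
        "label_path": "/path/to/labels.txt",
        "num_threads": 1
      }
    },
    {
      "name": "my-detector",
      "type": "vision",
      "model": "mlmodel",
      "attributes": {
        "mlmodel_name": "my-mlmodel-service"
      }
    }
  ]
}
```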
Add support for other models
ML models must be designed in particular shapes to work with the mlmodel
classification or detection model of Viam’s vision service.
See ML Model Design to design a modular ML model service with models that work with the vision service.
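As a rough illustration of what those shapes look like, an object detection model that works with the mlmodel vision model takes an image tensor as input and returns output tensors for locations, categories, and scores. The tensor names, shapes, and data types below are illustrative placeholders rather than a specification; refer to ML Model Design for the exact requirements:

```json
{
  "inputs": [
    { "name": "image", "shape": [1, 300, 300, 3], "data_type": "uint8" }
  ],
  "outputs": [
    { "name": "location", "shape": [1, 25, 4], "data_type": "float32" },
    { "name": "category", "shape": [1, 25], "data_type": "float32" },
    { "name": "score", "shape": [1, 25], "data_type": "float32" }
  ]
}
```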
Versions
When you deploy a model to a machine, Viam automatically deploys the latest version of the model.
This also means that as new versions of the ML model become available, the machine automatically receives the latest version.
If you do not want Viam to automatically deploy the latest
version of the model, you can change the packages
configuration in the JSON machine configuration to use a specific version:
```json
{
  "package": "<model_id>/<model_name>",
  "version": "YYYY-MM-DDThh-mm-ss",
  "name": "<model_name>",
  "type": "ml_model"
}
```
For models you have uploaded or trained, you can get the version number for a specific model version by navigating to the models page, finding the model's row, clicking the right-side menu marked with …, and selecting Copy package JSON. For example: 2024-02-28T13-36-51.
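Putting this together, a packages entry pinned to that version might look like the following sketch, where the model ID and model name are placeholders you would replace with the values copied from the registry:

```json
{
  "packages": [
    {
      "package": "<model_id>/<model_name>",
      "version": "2024-02-28T13-36-51",
      "name": "<model_name>",
      "type": "ml_model"
    }
  ]
}
```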
Model framework support
Viam currently supports the following frameworks:
| Model Framework | ML Model Service | Hardware Support | System Architecture | Description |
| --- | --- | --- | --- | --- |
| TensorFlow Lite | tflite_cpu | Any CPU, Nvidia GPU | Linux, Raspbian, macOS | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the model requirements. |
| ONNX | onnx_cpu | Any CPU, Nvidia GPU | Android, macOS, Linux arm-64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. |
| TensorFlow | triton | Nvidia GPU | Linux (Jetson) | A full framework that is made for more production-ready systems. |
| PyTorch | triton | Nvidia GPU | Linux (Jetson) | A full framework that was built primarily for research. Because of this, it is much faster to do iterative development with (the model doesn't have to be predefined), but it is not as "production ready" as TensorFlow. It is the most common framework for OSS models because it is the go-to framework for ML researchers. |
Next steps
Use the ML model service to deploy a model to your machine, or learn how to train and deploy your own models.
To see machine learning in action, follow one of the machine learning tutorials.