The Machine Learning (ML) model service allows you to deploy machine learning models to your machine. The service works with models trained inside and outside the Viam app:
Navigate to the CONFIGURE tab of one of your machines in the Viam app. Add an ML model service that supports the ML model you trained or the one you want to use from the registry.
Viam currently supports the following frameworks. For configuration information, click on the ML model service name:
| Model Framework | ML Model Service | Hardware Support | Description |
| --- | --- | --- | --- |
| TensorFlow Lite | tflite_cpu | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the model requirements. |
| ONNX | onnx-cpu, triton | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. |
| TensorFlow | tensorflow-cpu, triton | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework made for production-ready systems. |
| PyTorch | torch-cpu, triton | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework built primarily for research. Iterative development is faster because the model doesn't have to be predefined, but it is not as production-ready as TensorFlow. It is the most common framework for OSS models because it is the go-to framework for ML researchers. |
For some ML model services, such as the Triton ML model service for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU.
For example, use the ML model / TFLite CPU service for TFLite ML models.
If you used the built-in training, this is the ML model service you need to use.
If you used a custom training script, you may need a different ML model service.
To deploy a model, click Select model and select the model from your organization or the registry. Save your config.
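After saving, the service entry in your machine's JSON configuration might look roughly like the following minimal sketch for a tflite_cpu deployment. The service name, package name, and file names here are illustrative assumptions, not required values; the exact attributes depend on which ML model service you chose, so check that service's documentation:

```json
{
  "services": [
    {
      "name": "my-mlmodel-service",
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "model_path": "${packages.ml_model.my-model}/my-model.tflite",
        "label_path": "${packages.ml_model.my-model}/labels.txt",
        "num_threads": 1
      }
    }
  ]
}
```

In this sketch, the ${packages.ml_model.my-model} placeholder stands in for the path of the model package downloaded from the registry.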
You can search the machine learning models that are available to deploy on this service in the Viam registry.
On its own, the ML model service only runs the model.
After deploying your model, you need to configure an additional service to use the deployed model.
For example, you can configure an mlmodel vision service to visualize the inferences your model makes. Follow our docs to run inference to add an mlmodel vision service and see inferences.
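As an illustration, a minimal mlmodel vision service configuration that consumes the ML model service above might look like this sketch. The service names are assumptions carried over from the earlier example; mlmodel_name must match the name of your ML model service:

```json
{
  "services": [
    {
      "name": "my-vision-service",
      "type": "vision",
      "model": "mlmodel",
      "attributes": {
        "mlmodel_name": "my-mlmodel-service"
      }
    }
  ]
}
```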
For other use cases, consider creating custom functionality with a module.
ML models must be designed in particular shapes to work with the mlmodel classification or detection model of Viam's vision service. See ML Model Design to design a modular ML model service with models that work with vision.