Use a machine learning (ML) model service to deploy an ML model to your machine.
An ML model service is a Viam service that runs machine learning models on your machine. It works with models trained on Viam or elsewhere.
Viam currently supports the following frameworks:
| Model framework | ML model service | Hardware support | Description |
| --- | --- | --- | --- |
| TensorFlow Lite | `tflite_cpu` | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow with reduced model compatibility but broader hardware support. Uploaded models must adhere to the model requirements. |
| ONNX | `onnx-cpu`, `triton` | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware-specific inference but runs on a wide variety of machines. |
| TensorFlow | `tensorflow-cpu`, `triton` | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework designed for more production-ready systems. |
| PyTorch | `torch-cpu`, `triton` | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework built primarily for research. Iterative development is faster because the model does not have to be predefined, but it is less production-oriented than TensorFlow. It is the most common framework for open-source models because it is the go-to framework for ML researchers. |
For some ML model services, like the Triton ML model service for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU.
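As a rough sketch of what an ML model service entry can look like in a machine's raw JSON configuration (the service name `my-tflite-model`, the package name, and the file paths are placeholders; exact attribute names may vary by service version):

```json
{
  "services": [
    {
      "name": "my-tflite-model",
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "model_path": "${packages.my-model}/my-model.tflite",
        "label_path": "${packages.my-model}/labels.txt",
        "num_threads": 1
      }
    }
  ]
}
```

In practice, configuring the service through the app's builder interface generates an equivalent entry for you.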
ML model / TFLite CPU: a service for TFLite ML models that you trained with Viam's built-in training. For configuration information, click on the model name.
The service works with models from various sources, including publicly available machine learning models.
ML models must be designed with particular input and output tensor shapes to work with the `mlmodel` classification or detection models of Viam's vision service. See ML Model Design to design a modular ML model service with models that work with vision.
When you add a model to the ML model service in the app interface, it automatically uses the latest version. In the ML model service panel, you can change the version in the version dropdown. Save your config to use your specified version of the ML model.
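In raw JSON, the deployed model appears as a package entry in your machine config. The fragment below is a sketch of pinning a specific version; the package name and version string are placeholders, and the exact field names follow Viam's package configuration format at the time of writing:

```json
{
  "packages": [
    {
      "name": "my-model",
      "package": "my-org-id/my-model",
      "type": "ml_model",
      "version": "2024-07-10T16-22-59"
    }
  ]
}
```

Pinning a version this way keeps the machine on a known model artifact even after you upload newer training runs.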
On its own, the ML model service only runs the model.
After deploying your model, you need to configure an additional service to use the deployed model.
For example, you can configure an `mlmodel` vision service to visualize the inferences your model makes. Follow our documentation on running inference to add an `mlmodel` vision service and view its inferences.
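As a minimal sketch of querying such a vision service with the Viam Python SDK (assuming a configured `mlmodel` vision service named `my-detector` and a camera named `my-camera`; the machine address, API key, and resource names are placeholders you must replace):

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Connect to the machine; address and API key values are placeholders.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # The vision service wraps the deployed ML model for inference.
    detector = VisionClient.from_robot(machine, "my-detector")

    # Run detection on a frame from the configured camera.
    detections = await detector.get_detections_from_camera("my-camera")
    for d in detections:
        print(d.class_name, d.confidence)

    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())
```

This requires a running machine to connect to, so it is a template rather than a standalone script.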
For other use cases, consider creating custom functionality with a module.