Deploy a model

The Machine Learning (ML) model service allows you to deploy machine learning models to your machine. The service works with models trained inside and outside the Viam app.

Deploy your ML model

Navigate to the CONFIGURE tab of one of your machines in the Viam app. Add an ML model service that supports the ML model you trained, or one you want to use from the registry.

For configuration information, see the documentation for the ML model service you select.

Model framework support

Viam currently supports the following frameworks:

| Model Framework | ML Model Service | Hardware Support | Description |
| --------------- | ---------------- | ---------------- | ----------- |
| TensorFlow Lite | tflite_cpu | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the model requirements. |
| ONNX | onnx-cpu, triton | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. |
| TensorFlow | tensorflow-cpu, triton | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework that is made for more production-ready systems. |
| PyTorch | torch-cpu, triton | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework that was built primarily for research. Because of this, it is much faster to do iterative development with (the model doesn't have to be predefined), but it is not as production-ready as TensorFlow. It is the most common framework for OSS models because it is the go-to framework for ML researchers. |

For example, use the ML model / TFLite CPU service for TFLite ML models. If you used the built-in training, this is the ML model service you need to use. If you used a custom training script, you may need a different ML model service.

To deploy a model, click Select model and select the model from your organization or the registry. Save your config.
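For reference, here is a minimal sketch of what the resulting raw JSON configuration might look like for the built-in TFLite CPU service. The service name, package name, and file names below are placeholders, and the exact attribute names and package reference syntax depend on the ML model service and registry model you select:

```json
{
  "packages": [
    {
      "name": "my-model",
      "package": "my-org/my-model",
      "type": "ml_model",
      "version": "latest"
    }
  ],
  "services": [
    {
      "name": "mlmodel-1",
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "model_path": "${packages.ml_model.my-model}/my-model.tflite",
        "label_path": "${packages.ml_model.my-model}/labels.txt"
      }
    }
  ]
}
```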

Machine learning models from registry

You can search the machine learning models that are available to deploy on this service in the Viam registry.

Next steps

On its own, the ML model service only runs the model. After deploying your model, you need to configure an additional service to use the deployed model. For example, you can configure an mlmodel vision service to visualize the inferences your model makes. Follow our docs to run inference to add an mlmodel vision service and see inferences.
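As an illustrative sketch, assuming you have configured an mlmodel vision service named "my-detector" and a camera named "my-camera", a short Python SDK script along these lines can fetch and print detections. The machine address, API key, and component names are placeholders:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Placeholders: substitute your machine's address and API credentials.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "my-detector" is an assumed name for your mlmodel vision service.
    detector = VisionClient.from_robot(machine, "my-detector")

    # Run inference on a frame from an assumed camera named "my-camera".
    detections = await detector.get_detections_from_camera("my-camera")
    for detection in detections:
        print(detection.class_name, detection.confidence)

    await machine.close()


asyncio.run(main())
```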

For other use cases, consider creating custom functionality with a module.