Deploy a model

Use a machine learning (ML) model service to deploy an ML model to your machine.

What is an ML model service?

An ML model service is a Viam service that runs machine learning models on your machine. The service works with models trained on Viam or elsewhere, and supports various frameworks including TensorFlow Lite, ONNX, TensorFlow, and PyTorch.

Supported frameworks and hardware

Viam currently supports the following frameworks:

  • TensorFlow Lite — ML model service: tflite_cpu. Hardware support: linux/amd64, linux/arm64, darwin/arm64, darwin/amd64. A quantized version of TensorFlow with reduced model compatibility but broader hardware support. Uploaded models must adhere to the model requirements.
  • ONNX — ML model services: onnx-cpu, triton. Hardware support: Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64. A universal format that is not optimized for hardware-specific inference but runs on a wide variety of machines.
  • TensorFlow — ML model services: tensorflow-cpu, triton. Hardware support: Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64. A full framework designed for more production-ready systems.
  • PyTorch — ML model services: torch-cpu, triton. Hardware support: Nvidia GPU, linux/arm64, darwin/arm64. A full framework built primarily for research. Iterative development is much faster because the model doesn’t have to be predefined, but it is not as production-ready as TensorFlow. It is the most common framework for open-source models because it is the go-to framework for ML researchers.

Deploy your ML model

  1. Navigate to the CONFIGURE tab of one of your machines.
  2. Add an ML model service that supports the ML model you want to use.
    • For example, use the ML model / TFLite CPU service for TFLite ML models that you trained with Viam’s built-in training.
  3. Click Select model and select a model from your organization or the registry.
  4. Save your config.
  5. Use the Test panel to test your model.
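Behind the app interface, these steps are saved as JSON in your machine's configuration. As a rough sketch only (the service name, organization ID, model name, and package-reference syntax below are placeholders; check the documentation for your chosen ML model service for the exact attribute names), a tflite_cpu service deployed from the registry might look like:

```json
{
  "services": [
    {
      "name": "my-mlmodel",
      "type": "mlmodel",
      "model": "viam:mlmodel:tflite_cpu",
      "attributes": {
        "model_path": "${packages.ml_model.my-model}/my-model.tflite"
      }
    }
  ],
  "packages": [
    {
      "name": "my-model",
      "package": "my-org-id/my-model",
      "type": "ml_model",
      "version": "latest"
    }
  ]
}
```

Selecting a model in the Select model dropdown populates the packages entry and the model path reference for you; you rarely need to write this by hand.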

Available ML model services

For configuration information, click on the model name in the registry's list of ML model services; each listing includes the model name and a description.

Available machine learning models

You can use publicly available machine learning models from the registry. Each model's listing includes its type, framework, and a description.

Model sources

The service works with models from various sources:

  • You can train TensorFlow, TensorFlow Lite, or other models on data from your machines.
  • You can use ML models from the registry.
  • You can upload externally trained models from a model file on the MODELS tab.
  • You can use models trained outside the Viam platform whose files are on your machine. Choose an ML model service that supports your model framework and see its documentation for instructions.
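For a model file already on your machine, the service's attributes typically point at the local file path instead of a registry package. A hedged sketch (the attribute names below follow the tflite_cpu service; the paths are placeholders, and other ML model services may use different attributes):

```json
{
  "name": "local-mlmodel",
  "type": "mlmodel",
  "model": "viam:mlmodel:tflite_cpu",
  "attributes": {
    "model_path": "/home/user/models/my-model.tflite",
    "label_path": "/home/user/models/labels.txt"
  }
}
```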

Deploy a specific version of an ML model

When you add a model to the ML model service in the app interface, the service automatically uses the latest version. To pin a specific version instead, select it in the version dropdown in the ML model service panel, then save your config to use that version of the ML model.
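In the underlying JSON, pinning corresponds to setting the package's version field to a fixed version string rather than latest. A sketch, assuming the registry package layout shown here (the package path and version string are placeholders; the app's version dropdown shows the actual version identifiers available for your model):

```json
{
  "packages": [
    {
      "name": "my-model",
      "package": "my-org-id/my-model",
      "type": "ml_model",
      "version": "2024-03-11T18-42-55"
    }
  ]
}
```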

Next steps

On its own, the ML model service only runs the model. After deploying your model, you need to configure an additional service to use the deployed model. For example, you can configure an mlmodel vision service to visualize the inferences your model makes. Follow the run inference guide to add an mlmodel vision service and view its inferences.
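As a sketch, an mlmodel vision service references the ML model service by name. The names below are placeholders, and the attribute key is an assumption based on the mlmodel vision service; verify it against that service's configuration docs:

```json
{
  "name": "my-vision",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "my-mlmodel"
  }
}
```

The vision service pulls tensors from the named ML model service and turns its outputs into detections or classifications you can view in the app.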

For other use cases, consider creating custom functionality with a module.