Deploy a model

After training or uploading a machine learning model, use a machine learning (ML) model service to deploy the ML model to your machine.

Deploy your ML model on an ML model service

  1. Navigate to the CONFIGURE tab of one of your machines in the Viam app.
  2. Add an ML model service that supports the ML model you want to use.
    • For example, use the ML model / TFLite CPU service for TFLite ML models that you trained with Viam’s built-in training.

For configuration information, see the documentation for the ML model service you choose.

Viam currently supports the following frameworks:

| Model framework | ML model service | Hardware support | Description |
| --- | --- | --- | --- |
| TensorFlow Lite | tflite_cpu | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the model requirements. |
| ONNX | onnx-cpu, triton | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. |
| TensorFlow | tensorflow-cpu, triton | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework made for more production-ready systems. |
| PyTorch | torch-cpu, triton | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework built primarily for research. Because of this, it is much faster for iterative development (the model doesn’t have to be predefined), but it is not as production-ready as TensorFlow. It is the most common framework for open-source models because it is the go-to framework for ML researchers. |
  3. Click Select model and select a model from your organization or the registry.
  4. Save your config.
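As a rough illustration, a tflite_cpu ML model service that loads a model deployed from the registry might look like the following in your machine’s JSON config. The service name, package name, and file name here are placeholders, and the `${packages.ml_model...}` placeholder syntax assumes a corresponding entry in your config’s "packages" section:

```json
{
  "name": "mlmodel-1",
  "type": "mlmodel",
  "model": "tflite_cpu",
  "attributes": {
    "model_path": "${packages.ml_model.my-model}/my-model.tflite"
  }
}
```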

Models available to deploy on the ML Model service

You can also use publicly available machine learning models from the registry with an ML model service.

Deploy a specific version of an ML model

When you add a model to the ML model service in the app interface, the service automatically uses the latest version. To deploy a specific version of an ML model instead, you must edit the raw JSON of your machine; you cannot select a version through the UI:

  1. Go to the Models page on the DATA tab.
  2. Click the > icon to expand the versions of a model, then open the menu on your desired version.
  3. Click Copy package JSON.
  4. Return to your machine page, enter JSON mode, and find the "packages" section of your config.
  5. Replace "version": "latest" with the "version" from the package reference you just copied, for example "version": "2024-11-14T15-05-26".
  6. Save your config to use your specified version of the ML model.
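For illustration, a pinned entry in the "packages" section of your machine’s JSON config might look roughly like the following. The package name and organization ID path are placeholders; the version string is an example timestamp:

```json
{
  "packages": [
    {
      "name": "my-model",
      "package": "<organization-id>/my-model",
      "type": "ml_model",
      "version": "2024-11-14T15-05-26"
    }
  ]
}
```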

How the ML model service works

The service works with models trained inside and outside the Viam app:

  • You can train TFLite models, or models in other frameworks, on data from your machines.
  • You can use ML models from the Viam Registry.
  • You can upload externally trained models from a model file on the MODELS tab in the DATA section of the Viam app.
  • You can use a model trained outside the Viam platform whose files are on your machine. See the documentation of the ML model service you’re using (pick one that supports your model framework) for instructions.

On its own, the ML model service only runs the model. After deploying your model, you need to configure an additional service to use the deployed model. For example, you can configure an mlmodel vision service to visualize the inferences your model makes. Follow the run inference docs to add an mlmodel vision service and see inferences.
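As a sketch, an mlmodel vision service that consumes an ML model service might be configured like this in JSON mode. Both names are placeholders; "mlmodel-1" is assumed to be the name of your deployed ML model service:

```json
{
  "name": "vision-1",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "mlmodel-1"
  }
}
```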

For other use cases, consider creating custom functionality with a module.