Deploy an ML Model with the ML Model Service

The Machine Learning (ML) model service allows you to deploy machine learning models to your machine. This can mean deploying:

  • a model you trained
  • a model from the registry that another user has shared publicly
  • a model trained outside the Viam platform that you have uploaded to the registry privately or publicly
  • a model trained outside the Viam platform that’s already available on your machine

After deploying your model, configure an additional service to use it. For example, you can configure an mlmodel vision service and a transform camera to visualize the predictions your model makes.
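For example, a minimal mlmodel vision service configuration that consumes the deployed model might look like this (a sketch; the service and model names are placeholders):

{
  "name": "my_vision_service",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "my_mlmodel_service"
  }
}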

Supported models

For configuration information, click on the model name:

(Table of supported models with columns Model, Description, and Used with.)

Models from registry

You can search the machine learning models that are available to deploy on this service from the registry:

(Searchable table of registry models with columns Model and Description.)

Versioning for deployed models

If you upload or train a new version of a model, Viam automatically deploys the latest version of the model to the machine. If you do not want Viam to automatically deploy the latest version of the model, you can edit the "packages" array in the JSON configuration of your machine. This array is automatically created when you deploy the model and is not embedded in your service configuration.

To get the version number of a specific model version, navigate to the models page, find the model’s row, click the menu on the right side of the row, and select Copy package JSON. Version numbers are timestamps of the form 2024-02-28T13-36-51. The model package config looks like this:

"packages": [
  {
    "package": "<model_id>/<model_name>",
    "version": "YYYY-MM-DDThh-mm-ss",
    "name": "<model_name>",
    "type": "ml_model"
  }
]
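For example, to pin the deployment to the specific version shown above instead of tracking the latest upload, set version to that timestamp (a sketch; the package path is a placeholder):

"packages": [
  {
    "package": "<model_id>/<model_name>",
    "version": "2024-02-28T13-36-51",
    "name": "<model_name>",
    "type": "ml_model"
  }
]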

API

The MLModel service supports the following methods:

  • Infer: Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map.
  • Metadata: Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model.
  • Reconfigure: Reconfigure this resource.
  • DoCommand: Execute model-specific commands that are not otherwise defined by the service API.
  • GetResourceName: Get the ResourceName for this instance of the ML model service with the given name.
  • Close: Safely shut down the resource and prevent further use.

Infer

Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map.

Parameters:

  • input_tensors (Dict[str, typing.NDArray]) (required): A dictionary of input flat tensors as specified in the metadata.
  • timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:

  • (Dict[str, typing.NDArray]): A dictionary of output flat tensors as specified in the metadata.

Example:

import numpy as np

from viam.services.mlmodel import MLModelClient

my_mlmodel = MLModelClient.from_robot(robot=robot, name="my_mlmodel_service")

nd_array = np.array([1, 2, 3], dtype=np.float64)
input_tensors = {"0": nd_array}

output_tensors = await my_mlmodel.infer(input_tensors)

For more information, see the Python SDK Docs.

Parameters:

  • ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
  • tensors (ml.Tensors): The input map of tensors, as specified in the metadata.

Returns:

  • (ml.Tensors): The output map of tensors, as specified in the metadata, after being run through an inference engine.
  • (error): An error, if one occurred.

Example:

// ml is provided by go.viam.com/rdk/ml; tensor by gorgonia.org/tensor.
input_tensors := ml.Tensors{"0": tensor.New(tensor.WithShape(1, 2, 3), tensor.WithBacking([]int{1, 2, 3, 4, 5, 6}))}

output_tensors, err := myMLModel.Infer(context.Background(), input_tensors)

For more information, see the Go SDK Docs.

Metadata

Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model.

Parameters:

  • timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:

  • (viam.services.mlmodel.mlmodel.Metadata): The metadata.

Example:

my_mlmodel = MLModelClient.from_robot(robot=robot, name="my_mlmodel_service")

metadata = await my_mlmodel.metadata()

For more information, see the Python SDK Docs.
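The returned metadata is useful for constructing valid inputs for Infer. A minimal sketch in Python, assuming the model expects a single input tensor (real tensor names, shapes, and data types come from your own model’s metadata):

import numpy as np

# Inspect the first expected input tensor from the model's metadata.
metadata = await my_mlmodel.metadata()
input_info = metadata.input_info[0]

# Build a zero-filled array matching the expected shape as a smoke test,
# substituting 1 for any dynamic (negative) dimension.
shape = [dim if dim > 0 else 1 for dim in input_info.shape]
dummy = np.zeros(shape, dtype=np.float32)

output_tensors = await my_mlmodel.infer({input_info.name: dummy})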

Parameters:

  • ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.

Returns:

  • (MLMetadata): Name, type, expected tensor/array shape, inputs, and outputs associated with the ML model.
  • (error): An error, if one occurred.

Example:

metadata, err := myMLModel.Metadata(context.Background())

For more information, see the Go SDK Docs.

Reconfigure

Reconfigure this resource. Reconfigure must reconfigure the resource atomically and in place.

Parameters:

  • ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
  • deps (Dependencies): The resource dependencies.
  • conf (Config): The resource configuration.

Returns:

  • (error): An error, if one occurred.

For more information, see the Go SDK Docs.
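If you implement your own ML model service with the Python SDK, the analogous hook is the reconfigure method, which viam-server calls when the service’s configuration changes. A minimal sketch, assuming a hypothetical MyMLModelService class and a placeholder model_path attribute:

from typing import Mapping

from viam.proto.app.robot import ComponentConfig
from viam.proto.common import ResourceName
from viam.resource.base import ResourceBase


class MyMLModelService:  # hypothetical modular ML model service
    def reconfigure(
        self,
        config: ComponentConfig,
        dependencies: Mapping[ResourceName, ResourceBase],
    ) -> None:
        # Re-read attributes from the new configuration in place.
        self.model_path = config.attributes.fields["model_path"].string_value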

DoCommand

Execute model-specific commands that are not otherwise defined by the service API. For built-in service models, any model-specific commands available are covered with each model’s documentation. If you are implementing your own ML model service and add features that have no built-in API method, you can access them with DoCommand.

Parameters:

  • command (Mapping[str, ValueTypes]) (required): The command to execute.
  • timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:

  • (Mapping[str, viam.utils.ValueTypes]): The result of the executed command.

Example:

my_mlmodel = MLModelClient.from_robot(robot=robot, name="my_mlmodel_service")

my_command = {
  "cmnd": "dosomething",
  "someparameter": 52
}

# DoCommand can be used with any resource; here it is called on the ML model service.
await my_mlmodel.do_command(command=my_command)

For more information, see the Python SDK Docs.

Parameters:

  • ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
  • cmd (map[string]interface{}): The command to execute.

Returns:

  • (map[string]interface{}): The command response.
  • (error): An error, if one occurred.

Example:

// This example shows using DoCommand with the ML model service.
myMLModel, err := mlmodel.FromRobot(machine, "my_mlmodel_service")

command := map[string]interface{}{"cmd": "test", "data1": 500}
result, err := myMLModel.DoCommand(context.Background(), command)

For more information, see the Go SDK Docs.

GetResourceName

Get the ResourceName for this instance of the ML model service with the given name.

Parameters:

  • name (str) (required): The name of the Resource.

Returns:

  • (viam.proto.common.ResourceName): The ResourceName of this Resource.

Example:

# Can be used with any resource, using the ML model service as an example
my_mlmodel_name = MLModelClient.get_resource_name("my_mlmodel_service")

For more information, see the Python SDK Docs.

Close

Safely shut down the resource and prevent further use.

Parameters:

  • None.

Returns:

  • None.

Example:

await my_mlmodel.close()

For more information, see the Python SDK Docs.

Parameters:

  • ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.

Returns:

  • (error): An error, if one occurred.

Example:

// This example shows using Close with the ML model service.
myMLModel, err := mlmodel.FromRobot(machine, "my_mlmodel_service")

err = myMLModel.Close(context.Background())

For more information, see the Go SDK Docs.

Use the ML model service with the Viam Python SDK

To use the ML model service from the Viam Python SDK, install the Python SDK using the mlmodel extra:

pip install 'viam-sdk[mlmodel]'

You can also run this command on an existing Python SDK install to add support for the ML model service.
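Once installed, you can connect to your machine and use the service from a script. A minimal sketch, assuming placeholder credentials and an ML model service named my_mlmodel_service:

import asyncio

from viam.robot.client import RobotClient
from viam.services.mlmodel import MLModelClient


async def main():
    # Replace the placeholders with your machine's address and API key credentials.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")
    print(await my_mlmodel.metadata())

    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())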

See the Python documentation for more information about the MLModel service in Python.

See Program a machine for more information about using an SDK to control your machine.

Next steps

To use your model with your machine, add a vision service or a modular resource.