Deploy an ML Model with the ML Model Service
The Machine Learning (ML) model service allows you to deploy machine learning models to your machine. You can deploy:
- a model you trained
- a model from the registry that another user has shared publicly
- a model trained outside the Viam platform that you have uploaded to the MODELS tab in the DATA section of the Viam app
- a model trained outside the Viam platform that’s already available on your machine
After deploying your model, you need to configure an additional service to use the deployed model.
For example, you can configure an mlmodel vision service to visualize the predictions your model makes.
Available ML model service models
You must deploy an ML model service to use machine learning models on your machines. Once you have deployed the ML model service, you can select an ML model.
For configuration information, click on the model name:
Add support for other models
If none of the existing models of the ML model service fit your use case, you can create a modular resource to add support for it.
ML models must be designed in particular shapes to work with the mlmodel classification or detection model of Viam’s vision service.
Follow these instructions to design your modular ML model service with models that work with vision.
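For orientation, a minimal skeleton of a modular ML model service in Python might look like the following sketch. The class name and method bodies are hypothetical placeholders, and registration and configuration boilerplate are omitted:
from typing import Dict, Optional

from numpy.typing import NDArray

from viam.services.mlmodel import Metadata, MLModel


class MyMLModel(MLModel):
    # Hypothetical modular ML model service; model loading, registration,
    # and reconfiguration are omitted from this sketch.

    async def infer(
        self, input_tensors: Dict[str, NDArray], *, timeout: Optional[float] = None
    ) -> Dict[str, NDArray]:
        # Run your model on the named input tensors and return named output tensors.
        raise NotImplementedError

    async def metadata(self, *, timeout: Optional[float] = None) -> Metadata:
        # Describe the model's name, type, and expected input/output tensor shapes.
        raise NotImplementedError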
Note
For some models of the ML model service, like the Triton ML model service for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU.
Used with
Machine learning models from registry
You can search the machine learning models that are available to deploy on this service in the Viam registry.
API
Viam Python SDK Support
To use the ML model service from the Viam Python SDK, install the Python SDK using the mlmodel extra:
pip install 'viam-sdk[mlmodel]'
The MLModel service supports the following methods:
| Method Name | Description |
| --- | --- |
| Infer | Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map. |
| Metadata | Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model. |
| Reconfigure | Reconfigure this resource. |
| DoCommand | Execute model-specific commands that are not otherwise defined by the service API. |
| GetResourceName | Get the ResourceName for this instance of the ML model service with the given name. |
| Close | Safely shut down the resource and prevent further use. |
Tip
The following code examples assume that you have a machine configured with an MLModel service, and that you add the required code to connect to your machine and import any required packages at the top of your code file.
Go to your machine’s CONNECT tab’s Code sample page on the Viam app for sample code to connect to your machine.
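As a starting point, a connection and client setup in Python might look like the following sketch; the address and API key values are placeholders you must replace with your machine’s credentials from the CONNECT tab:
import asyncio

from viam.robot.client import RobotClient
from viam.services.mlmodel import MLModelClient


async def connect() -> RobotClient:
    # Placeholder credentials: copy the real values from your machine's CONNECT tab.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    return await RobotClient.at_address("<MACHINE-ADDRESS>", opts)


async def main():
    machine = await connect()
    my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")
    # Call ML model service methods here.
    await machine.close()


asyncio.run(main())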
Infer
Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map.
Parameters:
- input_tensors (Dict[str, typing.NDArray]) (required): A dictionary of input flat tensors, as specified in the metadata.
- timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.
Returns:
- (Dict[str, typing.NDArray]): A dictionary of output flat tensors as specified in the metadata.
Example:
import numpy as np

from viam.services.mlmodel import MLModelClient

my_mlmodel = MLModelClient.from_robot(robot=robot, name="my_mlmodel_service")

# Build a one-dimensional input tensor, keyed by the tensor name from the model's metadata.
nd_array = np.array([1, 2, 3], dtype=np.float64)
input_tensors = {"0": nd_array}

output_tensors = await my_mlmodel.infer(input_tensors)
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- tensors (ml.Tensors): The input map of tensors, as specified in the metadata.
Returns:
- (ml.Tensors): The output map of tensors, as specified in the metadata, after being run through an inference engine.
- (error): An error, if one occurred.
Example:
input_tensors := ml.Tensors{"0": tensor.New(tensor.WithShape(1, 2, 3), tensor.WithBacking([]int{1, 2, 3, 4, 5, 6}))}
output_tensors, err := myMLModel.Infer(context.Background(), input_tensors)
For more information, see the Go SDK Docs.
Metadata
Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model.
Parameters:
- timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.
Returns:
- (viam.services.mlmodel.mlmodel.Metadata): The metadata.
Example:
my_mlmodel = MLModelClient.from_robot(robot=robot, name="my_mlmodel_service")
metadata = await my_mlmodel.metadata()
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
Returns:
- (MLMetadata): Name, type, expected tensor/array shape, inputs, and outputs associated with the ML model.
- (error): An error, if one occurred.
Example:
metadata, err := myMLModel.Metadata(context.Background())
For more information, see the Go SDK Docs.
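Because Infer expects tensor names and shapes that match the model’s metadata, it can be useful to inspect the metadata before building input tensors. A minimal Python sketch, assuming the returned Metadata message exposes input_info entries with name and shape fields:
metadata = await my_mlmodel.metadata()

# Print the name and expected shape of each input tensor so you can build
# a matching input_tensors dictionary for infer().
for info in metadata.input_info:
    print(info.name, info.shape)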
Reconfigure
Reconfigure this resource. Reconfigure must reconfigure the resource atomically and in place.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- deps (Dependencies): The resource dependencies.
- conf (Config): The resource configuration.
Returns:
- (error): An error, if one occurred.
For more information, see the Go SDK Docs.
DoCommand
Execute model-specific commands that are not otherwise defined by the service API.
For built-in service models, any model-specific commands available are covered with each model’s documentation.
If you are implementing your own ML model service and add features that have no built-in API method, you can access them with DoCommand.
Parameters:
- command (Mapping[str, ValueTypes]) (required): The command to execute.
- timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.
Returns:
- (Mapping[str, viam.utils.ValueTypes]): The result of the executed command.
Example:
# DoCommand can be used with any resource; here it is called on the ML model service.
my_mlmodel = MLModelClient.from_robot(robot=robot, name="my_mlmodel_service")
my_command = {
    "cmnd": "dosomething",
    "someparameter": 52
}
await my_mlmodel.do_command(command=my_command)
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- cmd (map[string]interface{}): The command to execute.
Returns:
- (map[string]interface{}): The command response.
- (error): An error, if one occurred.
Example:
// This example shows using DoCommand with an arm component.
myArm, err := arm.FromRobot(machine, "my_arm")
command := map[string]interface{}{"cmd": "test", "data1": 500}
result, err := myArm.DoCommand(context.Background(), command)
For more information, see the Go SDK Docs.
GetResourceName
Get the ResourceName for this instance of the ML model service with the given name.
Parameters:
- name (str) (required): The name of the Resource.
Returns:
- (viam.proto.common.ResourceName): The ResourceName of this Resource.
Example:
# Can be used with any resource; here it is called on the ML model service.
my_mlmodel_name = MLModelClient.get_resource_name("my_mlmodel_service")
For more information, see the Python SDK Docs.
Close
Safely shut down the resource and prevent further use.
Parameters:
- None.
Returns:
- None.
Example:
await my_mlmodel.close()
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
Returns:
- (error): An error, if one occurred.
Example:
// This example shows using Close with an arm component.
myArm, err := arm.FromRobot(machine, "my_arm")
err = myArm.Close(context.Background())
For more information, see the Go SDK Docs.
Next steps
The ML model service only runs your model on the machine. To use the inferences from the model, you must use an additional service, such as a vision service or a modular resource.
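For example, a vision service configured with the mlmodel model can turn your deployed model’s inferences into classifications. A minimal Python sketch, assuming a vision service named "my_vision" and a camera named "my_camera" are configured on the machine:
from viam.services.vision import VisionClient

my_vision = VisionClient.from_robot(robot=machine, name="my_vision")

# Get the top three classifications from the deployed model for the camera's current image.
classifications = await my_vision.get_classifications_from_camera("my_camera", count=3)
for classification in classifications:
    print(classification.class_name, classification.confidence)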