The ML model service API allows you to make inferences based on a provided ML model.
The ML Model service supports the following methods:
| Method Name | Description |
| --- | --- |
| `Infer` | Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map. |
| `Metadata` | Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model. |
| `Reconfigure` | Reconfigure this resource. |
| `DoCommand` | Execute model-specific commands that are not otherwise defined by the service API. |
| `GetResourceName` | Get the ResourceName for this instance of the ML model service with the given name. |
| `Close` | Safely shut down the resource and prevent further use. |
To use the ML model service from the Viam Python SDK, install the Python SDK using the mlmodel extra:

```sh
pip install 'viam-sdk[mlmodel]'
```
To get started using Viam’s SDKs to connect to and control your machine, go to your machine’s page on the Viam app, navigate to the CONNECT tab’s Code sample page, select your preferred programming language, and copy the sample code.
To show your machine’s API key in the sample code, toggle Include API key.
We strongly recommend that you add your API key and machine address as environment variables. Anyone with these secrets can access your machine and the computer running it.
When executed, this sample code creates a connection to your machine as a client.
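For reference, a minimal Python connection sketch might look like the following; the environment variable names here are illustrative, not required by the SDK:

```python
import asyncio
import os

from viam.robot.client import RobotClient


async def connect() -> RobotClient:
    # Read secrets from environment variables instead of hard-coding them.
    opts = RobotClient.Options.with_api_key(
        api_key=os.environ["VIAM_API_KEY"],
        api_key_id=os.environ["VIAM_API_KEY_ID"],
    )
    return await RobotClient.at_address(os.environ["MACHINE_ADDRESS"], opts)


async def main():
    machine = await connect()
    # ... use your machine's resources here ...
    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())
```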
The following examples assume that you have a machine configured with an MLModel service called "my_mlmodel_service", and that you have installed the mlmodel extra for the Python SDK. If your ML model service has a different name, change the name in the code.
Import the mlmodel package for the SDK you are using:

```python
from viam.services.mlmodel import MLModelClient
```

```go
import (
    "go.viam.com/rdk/services/mlmodel"
)
```
Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map.
Parameters:

- `input_tensors` (Dict[str, typing.NDArray]) (required): A dictionary of input flat tensors as specified in the metadata.
- `extra` (Mapping[str, Any]) (optional): Extra options to pass to the underlying RPC call.
- `timeout` (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:

- (Dict[str, typing.NDArray]): A dictionary of output flat tensors as specified in the metadata.
Example:

```python
import numpy as np

my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")

image_data = np.zeros((1, 384, 384, 3), dtype=np.uint8)

# Create the input tensors dictionary
input_tensors = {
    "image": image_data
}

output_tensors = await my_mlmodel.infer(input_tensors)
```
For more information, see the Python SDK Docs.
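The returned dictionary maps each output tensor's name to a NumPy array. As a sketch, assuming a hypothetical classification output named "output0", you could read a prediction like this:

```python
import numpy as np

# "output0" is a hypothetical name; check your model's metadata
# for the actual output tensor names and shapes.
scores = output_tensors["output0"]
predicted_class = int(np.argmax(scores))
```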
Parameters:

- `ctx` (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- `tensors` (ml.Tensors): The input map of tensors, as specified in the metadata.

Returns:

- (ml.Tensors): The output map of tensors, as specified in the metadata.
- (error): An error, if one occurred.
Example:

```go
import (
    "context"

    "go.viam.com/rdk/ml"
    "gorgonia.org/tensor"
)

myMLModel, err := mlmodel.FromRobot(machine, "my_mlmodel")

// Build a zero-filled input tensor matching the model's expected shape
inputTensors := ml.Tensors{
    "image": tensor.New(
        tensor.Of(tensor.Uint8),
        tensor.WithShape(1, 384, 384, 3),
        tensor.WithBacking(make([]uint8, 1*384*384*3)),
    ),
}

outputTensors, err := myMLModel.Infer(context.Background(), inputTensors)
```
For more information, see the Go SDK Docs.
Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model.
Parameters:

- `extra` (Mapping[str, Any]) (optional): Extra options to pass to the underlying RPC call.
- `timeout` (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:

- (Metadata): The metadata.
Example:

```python
my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")

metadata = await my_mlmodel.metadata()
```
For more information, see the Python SDK Docs.
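The returned metadata describes each input and output tensor, which is useful for constructing correctly shaped inputs for Infer. A sketch, assuming the standard MLModel metadata fields (input_info and output_info lists of tensor descriptions):

```python
metadata = await my_mlmodel.metadata()

# Print the name, data type, and shape of each tensor the model
# expects and produces.
for info in metadata.input_info:
    print(f"input '{info.name}': type={info.data_type}, shape={list(info.shape)}")
for info in metadata.output_info:
    print(f"output '{info.name}': type={info.data_type}, shape={list(info.shape)}")
```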
Parameters:

- `ctx` (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.

Returns:

- (MLMetadata): The metadata of the model.
- (error): An error, if one occurred.
Example:

```go
myMLModel, err := mlmodel.FromRobot(machine, "my_mlmodel")

metadata, err := myMLModel.Metadata(context.Background())
```
For more information, see the Go SDK Docs.
Reconfigure this resource. Reconfigure must reconfigure the resource atomically and in place.
Parameters:

- `ctx` (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- `deps` (Dependencies): The resource dependencies.
- `conf` (Config): The resource configuration.

Returns:

- (error): An error, if one occurred.
For more information, see the Go SDK Docs.
Execute model-specific commands that are not otherwise defined by the service API.
For built-in service models, any model-specific commands available are covered with each model’s documentation.
If you are implementing your own ML model service and add features that have no built-in API method, you can access them with DoCommand.
Parameters:

- `ctx` (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- `cmd` (map[string]interface{}): The command to execute.

Returns:

- (map[string]interface{}): The command response.
- (error): An error, if one occurred.
Example:

```go
myMlmodelSvc, err := mlmodel.FromRobot(machine, "my_mlmodel_svc")

command := map[string]interface{}{"cmd": "test", "data1": 500}

result, err := myMlmodelSvc.DoCommand(context.Background(), command)
```
For more information, see the Go SDK Docs.
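From the Python SDK, the equivalent method is do_command. A sketch with an illustrative payload:

```python
my_mlmodel_svc = MLModelClient.from_robot(robot=machine, name="my_mlmodel_svc")

# The command contents are model-specific; this payload is illustrative.
command = {"cmd": "test", "data1": 500}
result = await my_mlmodel_svc.do_command(command)
```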
Get the ResourceName for this instance of the ML model service with the given name.
Parameters:

- `name` (str) (required): The name of the Resource.

Returns:

- (viam.proto.common.ResourceName): The ResourceName of this Resource.
Example:

```python
my_mlmodel_svc_name = MLModelClient.get_resource_name("my_mlmodel_svc")
```
For more information, see the Python SDK Docs.
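The returned ResourceName can be used wherever a full resource identifier is expected, for example to check whether the service is present on a connected machine:

```python
my_mlmodel_svc_name = MLModelClient.get_resource_name("my_mlmodel_svc")

# machine is the connected RobotClient; resource_names lists every
# resource the machine exposes.
print(my_mlmodel_svc_name in machine.resource_names)
```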
Safely shut down the resource and prevent further use.
Parameters: None.

Returns: None.
Example:

```python
my_mlmodel_svc = MLModelClient.from_robot(robot=machine, name="my_mlmodel_svc")

await my_mlmodel_svc.close()
```
For more information, see the Python SDK Docs.
Parameters:

- `ctx` (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.

Returns:

- (error): An error, if one occurred.
Example:

```go
myMLModel, err := mlmodel.FromRobot(machine, "my_ml_model")

err = myMLModel.Close(context.Background())
```
For more information, see the Go SDK Docs.