After deploying an ML model, you need to configure an additional service to use the inferences the deployed model makes. You can run inference on an ML model with a vision service, or use an SDK to process inferences further.
Vision services work to provide computer vision. They use an ML model and apply it to the stream of images from your camera.
For configuration information, click on the model name:
If none of the existing models fit your use case, you can create a modular resource to add support for it.
Note that many of these services include built-in ML models, and thus do not need to be run alongside an ML model service.
One vision service you can use to run inference on a camera stream, if you have an ML model service configured, is the mlmodel service.
Add the vision / ML model service to your machine.
Then, from the Select model dropdown, select the name of the ML model service you configured when deploying your model (for example, mlmodel-1).
Save your changes.
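As an illustration, the resulting vision service entry in your machine's JSON configuration might look roughly like the following. The service name vision-1 and the ML model service name mlmodel-1 are placeholders; substitute the names you chose:

```json
{
  "name": "vision-1",
  "namespace": "rdk",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "mlmodel-1"
  }
}
```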
You can test a deployed vision service by clicking on the Test area of its configuration panel or from the CONTROL tab.
The camera stream shows when the vision service identifies something. Try pointing the camera at a scene similar to your training data.
For more detailed information, including optional attribute configuration, see the mlmodel docs.
You can also run inference using a Viam SDK.
You can use the Infer method of the ML Model API to make inferences.
For example:
```python
import numpy as np

from viam.services.mlmodel import MLModelClient

# Get the ML model service from the machine
my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")

# Create example image data matching the model's expected input shape
image_data = np.zeros((1, 384, 384, 3), dtype=np.uint8)

# Create the input tensors dictionary, keyed by input tensor name
input_tensors = {
    "image": image_data
}

output_tensors = await my_mlmodel.infer(input_tensors)
```
Or, in Go:

```go
// Requires the "context", "go.viam.com/rdk/ml", and "gorgonia.org/tensor" imports.
input_tensors := ml.Tensors{"0": tensor.New(tensor.WithShape(1, 2, 3), tensor.WithBacking([]int{1, 2, 3, 4, 5, 6}))}
output_tensors, err := myMLModel.Infer(context.Background(), input_tensors)
```
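Camera frames rarely arrive in the exact shape a model expects, so you typically resize them and add a batch dimension before building the input tensors. A minimal sketch using only NumPy; the 384×384 input shape and uint8 dtype mirror the Python example above, but your model's metadata may specify different values:

```python
import numpy as np


def to_input_tensor(frame: np.ndarray, size: int = 384) -> np.ndarray:
    """Resize an HxWx3 uint8 frame to (1, size, size, 3) by nearest-neighbor sampling."""
    h, w, _ = frame.shape
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    resized = frame[rows][:, cols]      # nearest-neighbor resize
    return resized[np.newaxis, ...].astype(np.uint8)  # add batch dimension


frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
input_tensors = {"image": to_input_tensor(frame)}
print(input_tensors["image"].shape)  # (1, 384, 384, 3)
```

In practice you would also check the model's metadata for the expected tensor name, dtype, and normalization, rather than hard-coding them.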
After adding a vision service, you can use a vision service API method with a classifier or a detector to get inferences programmatically. For more information, see the ML Model and Vision API documentation.
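Once you have detections or classifications back from a vision service, a common next step is to keep only confident results. A hypothetical post-processing sketch, using plain dicts in place of the SDK's detection objects:

```python
def filter_detections(detections, min_confidence=0.6):
    """Keep detections at or above the confidence threshold, highest first."""
    kept = [d for d in detections if d["confidence"] >= min_confidence]
    return sorted(kept, key=lambda d: d["confidence"], reverse=True)


# Example results, shaped like class-name/confidence pairs
detections = [
    {"class_name": "cat", "confidence": 0.91},
    {"class_name": "dog", "confidence": 0.42},
    {"class_name": "cat", "confidence": 0.77},
]
confident = filter_detections(detections)
print(confident)  # the two detections at or above 0.6, highest first
```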