Configure an obstacles_pointcloud Segmenter
Changed in RDK v0.2.36 and API v0.1.118
obstacles_pointcloud is a segmenter that identifies well-separated objects above a flat plane.
It first identifies the biggest plane in the scene, eliminates that plane, and clusters the remaining points into objects.
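To get an intuition for these stages, here is a minimal sketch of the same plane-removal-then-clustering idea using the open-source Open3D library. This is only an illustration, not how viam-server implements the segmenter; the file name and all threshold values are hypothetical placeholders.
# Illustration only: not Viam's implementation.
# "scene.pcd" and the thresholds below are hypothetical placeholders.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")

# 1. Find the largest plane (for example, the floor) with RANSAC.
plane_model, plane_idx = pcd.segment_plane(
    distance_threshold=0.01, ransac_n=3, num_iterations=1000)

# 2. Drop the plane's points, keeping everything above it.
remaining = pcd.select_by_index(plane_idx, invert=True)

# 3. Cluster the remaining points into candidate objects.
labels = np.asarray(remaining.cluster_dbscan(eps=0.05, min_points=10))
print(f"found {labels.max() + 1} candidate objects")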
First, make sure your camera is connected to your machine’s computer and both are powered on. Then, configure the service:
1. Navigate to the CONFIGURE tab of your machine’s page in the Viam app.
2. Click the + icon next to your machine part in the left-hand menu and select Service.
3. Select the vision type, then select the obstacles_pointcloud model.
4. Enter a name or use the suggested name for your service and click Create.
5. In your vision service’s panel, fill in the attributes field:
{
"min_points_in_plane": <integer>,
"min_points_in_segment": <integer>,
"max_dist_from_plane_mm": <number>,
"ground_plane_normal_vec": {
"x": <integer>,
"y": <integer>,
"z": <integer>
},
"ground_angle_tolerance_degs": <integer>,
"clustering_radius": <integer>,
"clustering_strictness": <integer>
}
Add the vision service object to the services array in your raw JSON configuration:
"services": [
{
"name": "<segmenter_name>",
"type": "vision",
"namespace": "rdk",
"model": "obstacles_pointcloud"
"attributes": {
"min_points_in_plane": <integer>,
"min_points_in_segment": <integer>,
"max_dist_from_plane_mm": <number>,
"ground_plane_normal_vec": {
"x": <integer>,
"y": <integer>,
"z": <integer>
},
"ground_angle_tolerance_degs": <integer>,
"clustering_radius": <integer>,
"clustering_strictness": <integer>
}
},
... // Other services
]
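For example, the following raw JSON configures an obstacles_pointcloud segmenter named rc_segmenter: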
"services": [
{
"name": "rc_segmenter",
"type": "vision",
"namespace": "rdk",
"model": "obstacles_pointcloud",
"attributes": {
"min_points_in_plane": 1500,
"min_points_in_segment": 250,
"max_dist_from_plane_mm": 10.0,
"ground_plane_normal_vec": {"x": 0, "y":0, "z": 1},
"ground_angle_tolerance_degs": 20.0,
"clustering_radius": 5,
"clustering_strictness": 3
}
}
]
The following parameters are available for an obstacles_pointcloud segmenter:
Parameter | Required? | Description |
---|---|---|
min_points_in_plane | Optional | An integer that specifies how many points to put on the flat surface or ground plane when clustering. This is to distinguish between large planes, like the floors and walls, and small planes, like the tops of bottle caps. Default: 500 |
min_points_in_segment | Optional | An integer that sets a minimum size for the returned objects and filters out any found objects below that size. Default: 10 |
clustering_radius | Optional | An integer that specifies which neighboring points count as being “close enough” to be potentially put in the same cluster. This parameter determines how big the candidate clusters should be, or how many points should be put on a flat surface. A small clustering radius is likely to split different parts of a large cluster into distinct objects. A large clustering radius is likely to aggregate closely spaced clusters into one object. Default: 1 |
clustering_strictness | Optional | An integer that determines the probability threshold for sorting neighboring points into the same cluster, or how “easy” viam-server should consider it to be to sort the points the machine’s camera sees into this pointcloud. While clustering_radius determines the size of the candidate clusters, clustering_strictness determines whether the candidates count as a cluster. If clustering_strictness is set to a large value, many small clusters are likely to be made, rather than a few big clusters. The lower the number, the bigger your clusters will be. Default: 5 |
max_dist_from_plane_mm | Optional | A float that determines how much area above and below an ideal ground plane should count as the plane for which points are removed, in millimeters. For fields with tall grass, this should be a high number. Default: 100 |
ground_plane_normal_vec | Optional | An (x, y, z) vector that represents the normal vector of the ground plane. Different cameras have different coordinate systems. For example, a lidar’s ground plane will point in the +z direction (0, 0, 1). On the other hand, the Intel RealSense’s +z direction points out of the camera lens, and its ground plane is in the negative y direction (0, -1, 0). Default: (0, 0, 1) |
ground_angle_tolerance_degs | Optional | An integer that determines how strictly the found ground plane should match the ground_plane_normal_vec. For example, even if the ideal ground plane is purely flat, a rover may encounter slopes and hills. The algorithm should find a ground plane even if the found plane is at a slant, up to a certain point. Default: 30 |
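For instance, because an Intel RealSense camera’s ground plane points in the negative y direction, an attributes block for that camera might start from values like the following. Only the normal vector is dictated by the camera’s coordinate system; the other numbers are placeholders to tune for your scene.
"attributes": {
  "min_points_in_plane": 1500,
  "min_points_in_segment": 250,
  "max_dist_from_plane_mm": 10.0,
  "ground_plane_normal_vec": { "x": 0, "y": -1, "z": 0 },
  "ground_angle_tolerance_degs": 20,
  "clustering_radius": 5,
  "clustering_strictness": 3
}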
Click the Save button in the top right corner of the page and proceed to test your segmenter.
Test your segmenter
The following code uses the GetObjectPointClouds method to run a segmenter vision model on an image from the machine’s camera "cam1":
from viam.services.vision import VisionClient
# Connect to the machine (assumes a connect() helper like the one in your machine's code sample)
robot = await connect()
# Grab Viam's vision service for the segmenter
my_segmenter = VisionClient.from_robot(robot, "my_segmenter")
objects = await my_segmenter.get_object_point_clouds("cam1")
await robot.close()
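Before closing the connection, you can also inspect what the segmenter returned. The sketch below assumes the PointCloudObject fields point_cloud (raw PCD bytes) and geometries (bounding geometries) from the Python SDK’s protobuf definitions:
# Sketch: report how many objects were found and summarize each one.
print(f"found {len(objects)} objects")
for i, obj in enumerate(objects):
    # obj.point_cloud holds raw PCD bytes; obj.geometries holds bounding geometries.
    print(f"object {i}: {len(obj.point_cloud)} bytes of point cloud data")
    print(obj.geometries)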
To learn more about how to use segmentation, see the Python SDK docs.
import (
  "context"

  "go.viam.com/rdk/services/vision"
)

cameraName := "cam1" // Use the same component name that you have in your machine configuration

// Get the vision service you configured with name "my_segmenter" from the machine
mySegmenter, err := vision.FromRobot(robot, "my_segmenter")
if err != nil {
  logger.Fatalf("Cannot get vision service: %v", err)
}

// Get segments
segments, err := mySegmenter.GetObjectPointClouds(context.Background(), cameraName, nil)
if err != nil {
  logger.Fatalf("Could not get segments: %v", err)
}

if len(segments) > 0 {
  logger.Info(segments[0])
}
To learn more about how to use segmentation, see the Go SDK docs.
Tip
To see more code examples of how to use Viam’s vision service, see our example repo.
Next Steps
For general configuration and development info, see: