Configure an obstacles_depth Segmenter

Changed in RDK v0.2.36 and API v0.1.118

The obstacles_depth vision service model is for depth cameras and is best suited for motion planning with transient obstacles. Use this segmenter to identify well-separated objects above a flat plane.

First, make sure your camera is connected to your machine’s computer and both are powered on. Then, configure an obstacles_depth segmenter:

Navigate to the CONFIGURE tab of your machine’s page in the Viam app. Click the + icon next to your machine part in the left-hand menu and select Service. Select the vision type, then select the obstacles depth model. Enter a name or use the suggested name for your service and click Create.

In your vision service’s panel, fill in the attributes field.

Template:

```json
{
  "min_points_in_plane": <integer>,
  "min_points_in_segment": <integer>,
  "max_dist_from_plane_mm": <number>,
  "ground_angle_tolerance_degs": <integer>,
  "clustering_radius": <integer>,
  "clustering_strictness": <integer>
}
```

Example:

```json
{
  "min_points_in_plane": 1500,
  "min_points_in_segment": 250,
  "max_dist_from_plane_mm": 10.0,
  "ground_angle_tolerance_degs": 20,
  "clustering_radius": 5,
  "clustering_strictness": 3
}
```
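
A common configuration mistake is giving an attribute a value of the wrong type, such as quoting a number. You can sanity-check the value types with a few lines of Python before pasting the attributes into your config. This is a hypothetical helper for illustration only, not part of any Viam SDK:

```python
# Expected value type for each obstacles_depth attribute
EXPECTED_TYPES = {
    "min_points_in_plane": int,
    "min_points_in_segment": int,
    "max_dist_from_plane_mm": (int, float),
    "ground_angle_tolerance_degs": int,
    "clustering_radius": int,
    "clustering_strictness": int,
}

def mistyped_attributes(attrs):
    """Return the names of attributes whose values have the wrong type."""
    return [
        name for name, expected in EXPECTED_TYPES.items()
        if name in attrs and not isinstance(attrs[name], expected)
    ]

attrs = {
    "min_points_in_plane": 1500,
    "min_points_in_segment": 250,
    "max_dist_from_plane_mm": 10.0,
    "ground_angle_tolerance_degs": 20,
    "clustering_radius": 5,
    "clustering_strictness": 3,
}
print(mistyped_attributes(attrs))  # [] -> every provided value has the right type
```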

Add the following vision service object to the services array in your raw JSON configuration:

```json
"services": [
  {
    "name": "<segmenter_name>",
    "type": "vision",
    "namespace": "rdk",
    "model": "obstacles_depth",
    "attributes": {
      "min_points_in_plane": <integer>,
      "min_points_in_segment": <integer>,
      "max_dist_from_plane_mm": <number>,
      "ground_angle_tolerance_degs": <integer>,
      "clustering_radius": <integer>,
      "clustering_strictness": <integer>
    }
  },
  ... // Other services
]
```
For example:

```json
"services": [
  {
    "name": "rc_segmenter",
    "type": "vision",
    "namespace": "rdk",
    "model": "obstacles_depth",
    "attributes": {
      "min_points_in_plane": 1500,
      "min_points_in_segment": 250,
      "max_dist_from_plane_mm": 10.0,
      "ground_angle_tolerance_degs": 20,
      "clustering_radius": 5,
      "clustering_strictness": 3
    }
  }
]
```

The following parameters are available for an "obstacles_depth" segmenter:

| Parameter | Required? | Description |
| --------- | --------- | ----------- |
| `min_points_in_plane` | Optional | An integer that specifies how many points must lie on the flat surface or ground plane when clustering. This distinguishes between large planes, like floors and walls, and small planes, like the tops of bottle caps. Default: `500` |
| `min_points_in_segment` | Optional | An integer that sets a minimum size for the returned objects; all found objects below that size are filtered out. Default: `10` |
| `max_dist_from_plane_mm` | Optional | A float that determines how far above and below an ideal ground plane points should still count as part of the plane and be removed. For fields with tall grass, this should be a high number. Default: `100.0` |
| `ground_angle_tolerance_degs` | Optional | An integer that determines how strictly the found ground plane must match the `ground_plane_normal_vec`. For example, even if the ideal ground plane is purely flat, a rover may encounter slopes and hills; the algorithm should still find a ground plane even if the found plane is at a slant, up to a certain point. Default: `30` |
| `clustering_radius` | Optional | An integer that specifies which neighboring points count as being "close enough" to be potentially put in the same cluster. This parameter determines how big the candidate clusters should be. A small clustering radius is likely to split different parts of a large cluster into distinct objects; a large clustering radius is likely to aggregate closely spaced clusters into one object. Default: `1` |
| `clustering_strictness` | Optional | An integer that determines the probability threshold for sorting neighboring points into the same cluster, or how "easy" viam-server considers it to sort the points the machine's camera sees into a given pointcloud. While `clustering_radius` determines the size of the candidate clusters, `clustering_strictness` determines whether the candidates count as a cluster. A large value tends to produce many small clusters rather than a few big ones; the lower the number, the bigger your clusters will be. Default: `5` |
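
The interplay between `clustering_radius` and cluster size can be easier to see with a toy example. The following sketch is not how viam-server actually clusters point clouds; it is a minimal radius-based clustering of 1D points that only illustrates the intuition: a small radius keeps nearby groups separate, while a large radius merges them into one object.

```python
def cluster_1d(points, radius):
    """Group sorted 1D points: a point joins the current cluster if it is
    within `radius` of the previous point, otherwise it starts a new cluster."""
    clusters = []
    for p in sorted(points):
        if clusters and p - clusters[-1][-1] <= radius:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

points = [0, 1, 2, 10, 11, 12]

# A small radius keeps the two groups as distinct objects ...
print(cluster_1d(points, radius=2))   # [[0, 1, 2], [10, 11, 12]]
# ... while a large radius aggregates them into one object.
print(cluster_1d(points, radius=8))   # [[0, 1, 2, 10, 11, 12]]
```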

If you want to identify multiple boxes over the flat plane with your segmenter:

  • First, use the frame system service to configure the relative spatial orientation of the components of your machine, including your camera.
    • After you configure your frame system, your camera populates its own Properties with the spatial intrinsic parameters from the frame system.
    • You can retrieve those parameters from your camera through the camera API.
  • The segmenter then returns multiple boxes within the GeometryInFrame object it captures.

Click the Save button in the top right corner of the page and proceed to test your segmenter.

Test your segmenter

The following code uses the GetObjectPointClouds method to run a segmenter vision model on an image from the machine’s camera "cam1":

```python
from viam.services.vision import VisionClient

# connect() is the connection helper from your machine's code sample,
# which establishes a connection to the machine
robot = await connect()

# Grab the vision service you configured as a segmenter
my_segmenter = VisionClient.from_robot(robot, "my_segmenter")

# Run the segmenter on an image from the camera "cam1"
objects = await my_segmenter.get_object_point_clouds("cam1")

await robot.close()
```
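
How you rank the returned obstacles is up to your application. As a minimal sketch, using plain `(x, y, z)` tuples as stand-ins for box centers in millimeters (hypothetical values, not the SDK's actual return types), you might pick the obstacle closest to the camera:

```python
import math

# Stand-in (x, y, z) box centers in mm, as if extracted from the
# geometries of each returned object (hypothetical values)
centers = [(0.0, 0.0, 500.0), (100.0, 0.0, 250.0), (-50.0, 20.0, 800.0)]

def distance_mm(center):
    """Euclidean distance of a box center from the camera origin."""
    return math.hypot(*center)

nearest = min(centers, key=distance_mm)
print(nearest)  # -> (100.0, 0.0, 250.0)
```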

To learn more about how to use segmentation, see the Python SDK docs.

The following Go code does the same:

```go
import (
    "context"

    "go.viam.com/rdk/services/vision"
)

cameraName := "cam1" // Use the same component name that you have in your machine configuration

// Get the vision service you configured with name "my_segmenter" from the machine
mySegmenter, err := vision.FromRobot(robot, "my_segmenter")
if err != nil {
    logger.Fatalf("Cannot get vision service: %v", err)
}

// Get segments
segments, err := mySegmenter.GetObjectPointClouds(context.Background(), cameraName, nil)
if err != nil {
    logger.Fatalf("Could not get segments: %v", err)
}
if len(segments) > 0 {
    logger.Info(segments[0])
}
```

To learn more about how to use segmentation, see the Go SDK docs.

Next Steps


Have questions, or want to meet other people working on robots? Join our Community Discord.
