Configure a color_detector

The color_detector vision service model is a heuristic detector that draws bounding boxes around objects based on their hue. Color detectors do not detect black, white, or perfect grays (grays where the red, green, and blue color component values are equal); they only detect hues found on the color wheel.
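For intuition, hue is only meaningful for colors with nonzero saturation, which is why grays, black, and white are ignored. The following sketch (illustrative only, not part of the vision service) uses Python's colorsys module to show that a saturated blue has a usable hue while a perfect gray does not:

import colorsys

def hue_and_saturation(hex_color):
    """Return (hue in degrees, saturation) for a #RRGGBB string."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return round(h * 360), round(s, 2)

print(hue_and_saturation("#1C4599"))  # saturated blue: roughly (220, 0.82), a clear hue
print(hue_and_saturation("#7F7F7F"))  # perfect gray: (0, 0.0), no usable hue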

Navigate to the CONFIGURE tab of your machine’s page in the Viam app. Click the + icon next to your machine part in the left-hand menu and select Service. Select the vision type, then select the color detector model. Enter a name or use the suggested name for your service and click Create.

In your vision service’s panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels):

(Image: color detector panel with color and hue tolerance selection and a field for the segment size.)

Add the vision service object to the services array in your JSON configuration:

"services": [
  {
    "name": "<service_name>",
    "type": "vision",
    "model": "color_detector",
    "attributes": {
      "segment_size_px": <integer>,
      "detect_color": "#ABCDEF",
      "hue_tolerance_pct": <number>,
      "saturation_cutoff_pct": <number>,
      "value_cutoff_pct": <number>
    }
  },
  ... // Other services
]

For example, the following configuration defines two color detectors, one that detects a blue square and one that detects a green triangle:

"services": [
  {
    "name": "blue_square",
    "type": "vision",
    "model": "color_detector",
    "attributes": {
      "segment_size_px": 100,
      "detect_color": "#1C4599",
      "hue_tolerance_pct": 0.07,
      "value_cutoff_pct": 0.15
    }
  },
  {
    "name": "green_triangle",
    "type": "vision",
    "model": "color_detector",
    "attributes": {
      "segment_size_px": 200,
      "detect_color": "#62963F",
      "hue_tolerance_pct": 0.05,
      "value_cutoff_pct": 0.20
    }
  }
]

The following parameters are available for a color_detector:

segment_size_px (required): An integer that sets the minimum size (in pixels) of a contiguous color region to be detected; smaller regions are filtered out.

detect_color (required): The color to detect in the image, written as a hexadecimal string of the form "#RRGGBB".

hue_tolerance_pct (required): A number greater than 0.0 and less than or equal to 1.0 that defines how strictly the detector must match the hue of the requested color. A value near 0.0 requires an almost exact hue match, while 1.0 matches every hue regardless of the input color. 0.05 is a good starting value.

saturation_cutoff_pct (optional): A number greater than 0.0 and less than or equal to 1.0 that defines the minimum saturation below which a color is ignored. Defaults to 0.2.

value_cutoff_pct (optional): A number greater than 0.0 and less than or equal to 1.0 that defines the minimum value below which a color is ignored. Defaults to 0.3.
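To build intuition for how these attributes interact, the following Python sketch gives a rough, simplified approximation of the kind of per-pixel check a hue-based detector performs. It is purely illustrative and is not the detector's actual implementation; the exact matching logic is internal to the vision service.

import colorsys

def roughly_matches(pixel_hex, detect_color="#1C4599", hue_tolerance_pct=0.07,
                    saturation_cutoff_pct=0.2, value_cutoff_pct=0.3):
    """Rough approximation of a hue-match check (illustrative only)."""
    def to_hsv(hex_color):
        hex_color = hex_color.lstrip("#")
        r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
        return colorsys.rgb_to_hsv(r, g, b)

    target_hue, _, _ = to_hsv(detect_color)
    h, s, v = to_hsv(pixel_hex)

    # Pixels that are too gray or too dark have no reliable hue.
    if s < saturation_cutoff_pct or v < value_cutoff_pct:
        return False

    # Compare hues on a circle; both values are fractions of a full turn.
    hue_distance = min(abs(h - target_hue), 1 - abs(h - target_hue))
    return hue_distance <= hue_tolerance_pct

print(roughly_matches("#2A55B0"))  # a nearby blue: True
print(roughly_matches("#62963F"))  # a green: False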

Click the Save button in the top right corner of the page. Proceed to test your detector.

Test your detector

You can test your detector with live camera footage or existing images.

Live camera footage

If you intend to use the detector with a camera that is part of your machine, you can test your detector from the CONTROL tab or with code:

  1. Configure a camera component.

  2. (Optional) If you would like to see detections from the Control tab, configure a transform camera with the following attributes:

    {
      "pipeline": [
        {
          "type": "detections",
          "attributes": {
            "confidence_threshold": 0.5,
            "detector_name": "<vision-service-name>",
            "valid_labels": ["<label>"]
          }
        }
      ],
      "source": "<camera-name>"
    }
    
  3. After adding the components and their attributes, click the Save button in the top right corner of the page.

  4. Navigate to the CONTROL tab, click on your transform camera, and toggle it on. The transform camera now shows detections with bounding boxes around detected objects.

    (Image: Viam app CONTROL tab showing bounding boxes around two office chairs, both labeled "chair" with confidence score 0.50.)

  5. To access detections with code, use the vision service methods with the camera you configured in step 1. The following code gets the machine's vision service and then runs a color detector vision model on output from the machine's camera "cam1":

    from viam.components.camera import Camera
    from viam.services.vision import VisionClient
    
    robot = await connect()
    camera_name = "cam1"
    
    # Grab camera from the machine
    cam1 = Camera.from_robot(robot, camera_name)
    # Grab Viam's vision service for the detector
    my_detector = VisionClient.from_robot(robot, "my_detector")
    
    detections = await my_detector.get_detections_from_camera(camera_name)
    
    # If you need to store the image, get the image first
    # and then run detections on it. This process is slower:
    img = await cam1.get_image()
    detections_from_image = await my_detector.get_detections(img)
    
    await robot.close()
    

    To learn more about how to use detection, see the Python SDK docs.

    import (
      "context"

      "go.viam.com/rdk/components/camera"
      "go.viam.com/rdk/services/vision"
    )
    
    // Grab the camera from the machine
    cameraName := "cam1" // make sure to use the same component name that you have in your machine configuration
    myCam, err := camera.FromRobot(robot, cameraName)
    if err != nil {
      logger.Fatalf("cannot get camera: %v", err)
    }
    
    myDetector, err := vision.FromRobot(robot, "my_detector")
    if err != nil {
        logger.Fatalf("Cannot get vision service: %v", err)
    }
    
    // Get detections from the camera output
    detections, err := myDetector.DetectionsFromCamera(context.Background(), myCam, nil)
    if err != nil {
        logger.Fatalf("Could not get detections: %v", err)
    }
    if len(detections) > 0 {
        logger.Info(detections[0])
    }
    
    // If you need to store the image, get the image first
    // and then run detections on it. This process is slower:
    
    // Get the stream from a camera
    camStream, err := myCam.Stream(context.Background())
    
    // Get an image from the camera stream
    img, release, err := camStream.Next(context.Background())
    defer release()
    
    // Apply the color classifier to the image from your camera (configured as "cam1")
    detectionsFromImage, err := myDetector.Detections(context.Background(), img, nil)
    if err != nil {
        logger.Fatalf("Could not get detections: %v", err)
    }
    if len(detectionsFromImage) > 0 {
        logger.Info(detectionsFromImage[0])
    }
    

    To learn more about how to use detection, see the Go SDK docs.
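Continuing from the Python example above, you can post-process the returned detections in your own code, for example to filter by confidence and log bounding boxes. This sketch assumes each detection exposes x_min, y_min, x_max, y_max, class_name, and confidence fields, as the Detection type in the SDK does; check the SDK docs for the exact schema.

# Filter and log the detections returned by get_detections_from_camera().
MIN_CONFIDENCE = 0.5

for d in detections:
    if d.confidence < MIN_CONFIDENCE:
        continue
    width = d.x_max - d.x_min
    height = d.y_max - d.y_min
    print(f"{d.class_name}: {width}x{height} box at ({d.x_min}, {d.y_min}), "
          f"confidence {d.confidence:.2f}")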

Existing images

If you would like to test your detector with existing images, load the images and pass them to the detector:

from viam.services.vision import VisionClient
from PIL import Image

robot = await connect()
# Grab Viam's vision service for the detector
my_detector = VisionClient.from_robot(robot, "my_detector")

# Load an image
img = Image.open('test-image.png')

# Apply the detector to the image
detections_from_image = await my_detector.get_detections(img)

await robot.close()

To learn more about how to use detection, see the Python SDK docs.
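To visualize the results, one option (a sketch, not part of the vision service API) is to draw the returned bounding boxes onto the image with Pillow's ImageDraw. This continues from the Python example above and assumes the same Detection fields (x_min, y_min, x_max, y_max, class_name, confidence):

from PIL import ImageDraw

# Draw each detection's bounding box and label onto the loaded image.
draw = ImageDraw.Draw(img)
for d in detections_from_image:
    draw.rectangle([d.x_min, d.y_min, d.x_max, d.y_max], outline="red", width=3)
    draw.text((d.x_min, max(d.y_min - 12, 0)),
              f"{d.class_name} {d.confidence:.2f}", fill="red")

img.save("test-image-annotated.png")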

import (
  "context"
  "image/jpeg"
  "os"

  "go.viam.com/rdk/services/vision"
)

myDetector, err := vision.FromRobot(robot, "my_detector")
if err != nil {
    logger.Fatalf("Cannot get Vision Service: %v", err)
}

// Read image from existing file
file, err := os.Open("test-image.jpeg")
if err != nil {
    logger.Fatalf("Could not get image: %v", err)
}
defer file.Close()
img, err := jpeg.Decode(file)
if err != nil {
    logger.Fatalf("Could not decode image: %v", err)
}

// Apply the detector to the image
detectionsFromImage, err := myDetector.Detections(context.Background(), img, nil)
if err != nil {
    logger.Fatalf("Could not get detections: %v", err)
}
if len(detectionsFromImage) > 0 {
    logger.Info(detectionsFromImage[0])
}

To learn more about how to use detection, see the Go SDK docs.

Next steps

Have questions, or want to meet other people working on robots? Join our Community Discord.
