Transform a Camera
Use the transform model to apply transformations to input source images. The transformations are applied in the order they are written in the pipeline.
1. Navigate to the CONFIGURE tab of your machine’s page in the Viam app.
2. Click the + icon next to your machine part in the left-hand menu and select Component.
3. Select the camera type, then select the transform model.
4. Enter a name or use the suggested name for your camera and click Create.
5. Click the {} (Switch to Advanced) button in the top right of the component panel to edit the component’s attributes directly with JSON. Copy and paste the following attribute template into the attributes field, then remove and fill in the attributes as applicable to your camera, according to the table below.
{
"source" : "<your-camera-name>",
"pipeline": [
{ "type": "<transformation-type>", "attributes": { ... } },
],
"intrinsic_parameters": {
"width_px": <int>,
"height_px": <int>,
"fx": <float>,
"fy": <float>,
"ppx": <float>,
"ppy": <float>
},
"distortion_parameters": {
"rk1": <float>,
"rk2": <float>,
"rk3": <float>,
"tp1": <float>,
"tp2": <float>
}
}
For example, the following attributes rotate images from a source camera named my-webcam and then resize them to 200 x 100 pixels:
{
"source": "my-webcam",
"pipeline": [
{ "type": "rotate", "attributes": {} },
{ "type": "resize", "attributes": { "width_px": 200, "height_px": 100 } }
]
}
The complete JSON configuration for a transform camera follows this template:
{
"name": "<your-camera-name>",
"model": "transform",
"type": "camera",
"namespace": "rdk",
"attributes" : {
"source" : "<your-source-camera-name>",
"pipeline": [
{ "type": "<transformation-type>", "attributes": { ... } },
],
"intrinsic_parameters": {
"width_px": <int>,
"height_px": <int>,
"fx": <float>,
"fy": <float>,
"ppx": <float>,
"ppy": <float>
},
"distortion_parameters": {
"rk1": <float>,
"rk2": <float>,
"rk3": <float>,
"tp1": <float>,
"tp2": <float>
}
}
}
The following attributes are available for transform views:

Name | Type | Required? | Description |
---|---|---|---|
source | string | Required | The name of the camera to transform. |
pipeline | array | Required | An array of transformation objects, applied in order. |
intrinsic_parameters | object | Optional | The intrinsic parameters of the camera used to do 2D <-> 3D projections: width_px, height_px, fx, fy, ppx, ppy. |
distortion_parameters | object | Optional | Modified Brown-Conrady parameters used to correct for distortions caused by the shape of the camera lens: rk1, rk2, rk3, tp1, tp2. |
debug | boolean | Optional | Enables the debug outputs from the camera if true. Default: false. |
The following are the transformation objects available for the pipeline:
The classifications transform overlays text from the GetClassifications method of the vision service onto the image.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "classifications",
"attributes": {
"classifier_name": "<name>",
"confidence_threshold": <float>,
"max_classifications": <int>,
"valid_labels": [ "<label>" ]
}
}
]
}
Attributes:
- classifier_name: The name of the classifier in the vision service.
- confidence_threshold: The threshold above which to display classifications.
- max_classifications: Optional. The maximum number of classifications to display on the camera stream at any given time. Default: 1.
- valid_labels: Optional. An array of labels that you want to see classifications for on the camera stream. If not specified, all labels from the classifier are used.
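For example, the following pipeline overlays up to three classifications with at least 60% confidence from a hypothetical classifier named my_classifier onto the stream of a source camera named my-webcam:
{
  "source": "my-webcam",
  "pipeline": [
    {
      "type": "classifications",
      "attributes": {
        "classifier_name": "my_classifier",
        "confidence_threshold": 0.6,
        "max_classifications": 3
      }
    }
  ]
}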
The Crop transform takes an image and crops it to a rectangular area specified by two points: the top left point (x_min, y_min) and the bottom right point (x_max, y_max).
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "crop",
"attributes": {
"x_min_px": <int>,
"y_min_px": <int>,
"x_max_px": <int>,
"y_max_px": <int>
}
}
]
}
Attributes:
- x_min_px: The x coordinate of the top left point of the rectangular area to crop the image to.
- y_min_px: The y coordinate of the top left point of the rectangular area to crop the image to.
- x_max_px: The x coordinate of the bottom right point of the rectangular area to crop the image to.
- y_max_px: The y coordinate of the bottom right point of the rectangular area to crop the image to.
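For example, the following pipeline crops the image to a 300 x 200 pixel region whose top left corner is at (100, 50), assuming a source camera named my-webcam:
{
  "source": "my-webcam",
  "pipeline": [
    {
      "type": "crop",
      "attributes": {
        "x_min_px": 100,
        "y_min_px": 50,
        "x_max_px": 400,
        "y_max_px": 250
      }
    }
  ]
}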
The Depth Edges transform applies a Canny edge detector to detect edges in an input depth map.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "depth_edges",
"attributes": {
"high_threshold_pct": <float>,
"low_threshold_pct": <float>,
"blur_radius_px": <float>
}
}
]
}
Attributes:
- high_threshold_pct: The high threshold value, between 0.0 and 1.0.
- low_threshold_pct: The low threshold value, between 0.0 and 1.0.
- blur_radius_px: The blur radius used to smooth the image before applying the filter.
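For example, the following pipeline detects depth edges using illustrative threshold values and a hypothetical depth camera named my-depth-cam; tune the thresholds and blur radius for your own camera:
{
  "source": "my-depth-cam",
  "pipeline": [
    {
      "type": "depth_edges",
      "attributes": {
        "high_threshold_pct": 0.85,
        "low_threshold_pct": 0.4,
        "blur_radius_px": 3.0
      }
    }
  ]
}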
Depth Preprocessing applies some basic hole-filling and edge smoothing to a depth map.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "depth_preprocess",
"attributes": {}
}
]
}
Attributes:
- None.
The Depth-to-Pretty transform takes a depth image and turns it into a colorful image, with blue indicating distant points and red indicating nearby points. The actual depth information is lost in the transform.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "depth_to_pretty",
"attributes": {}
}
]
}
Attributes:
- None.
The Detections transform takes the input image and overlays the detections from a given detector configured within the vision service.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "detections",
"attributes": {
"detector_name": string,
"confidence_threshold": <float>,
"valid_labels": ["<label>"]
}
}
]
}
Attributes:
- detector_name: The name of the detector configured in the vision service.
- confidence_threshold: Only display detections above the specified threshold (decimal between 0 and 1).
- valid_labels: Optional. An array of labels that you want to see detections for on the camera stream. If not specified, all labels from the detector are used.
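For example, the following pipeline overlays only "person" detections with at least 50% confidence from a hypothetical detector named my_detector, assuming a source camera named my-webcam:
{
  "source": "my-webcam",
  "pipeline": [
    {
      "type": "detections",
      "attributes": {
        "detector_name": "my_detector",
        "confidence_threshold": 0.5,
        "valid_labels": ["person"]
      }
    }
  ]
}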
The Identity transform does nothing to the image. You can use this transform to change the underlying camera source’s intrinsic parameters or stream type, for example.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "identity"
}
]
}
Attributes:
- None.
The Overlay transform overlays the depth and the color 2D images. Useful for debugging the alignment of the two images.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "overlay",
"attributes": {
"intrinsic_parameters": {
"width_px": <int>,
"height_px": <int>,
"ppx": <float>,
"ppy": <float>,
"fx": <float>,
"fy": <float>,
}
}
}
]
}
Attributes:
- intrinsic_parameters: The intrinsic parameters of the camera used to do 2D <-> 3D projections.
  - width_px: The width of the image in pixels. Value must be >= 0.
  - height_px: The height of the image in pixels. Value must be >= 0.
  - ppx: The image center x point.
  - ppy: The image center y point.
  - fx: The image focal x.
  - fy: The image focal y.
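For example, the following pipeline overlays the depth and color images using illustrative intrinsics for a 640 x 480 camera and a hypothetical source camera named my-depth-cam; substitute the calibrated values for your own camera:
{
  "source": "my-depth-cam",
  "pipeline": [
    {
      "type": "overlay",
      "attributes": {
        "intrinsic_parameters": {
          "width_px": 640,
          "height_px": 480,
          "ppx": 320.0,
          "ppy": 240.0,
          "fx": 600.0,
          "fy": 600.0
        }
      }
    }
  ]
}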
The Resize transform resizes the image to the specified height and width.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "resize",
"attributes": {
"width_px": <int>,
"height_px": <int>
}
}
]
}
Attributes:
- width_px: The expected width of the resized image. Value must be >= 0.
- height_px: The expected height of the resized image. Value must be >= 0.
The Rotate transform rotates the image by the angle specified in angle_degs. Default: 180 degrees.
This is useful when the camera is installed upside down or sideways on your machine.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "rotate",
"attributes": {
"angle_degs": <float>
}
}
]
}
Attributes:
- angle_degs: The angle, in degrees, by which to rotate the image.
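For example, the following pipeline rotates images from a source camera named my-webcam by 90 degrees instead of the default 180:
{
  "source": "my-webcam",
  "pipeline": [
    {
      "type": "rotate",
      "attributes": {
        "angle_degs": 90
      }
    }
  ]
}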
The Undistort transform undistorts the input image according to the intrinsic and distortion parameters specified within the camera parameters. It currently supports only a Brown-Conrady model of distortion (as of 20 September 2022). For further information, please refer to the OpenCV docs.
{
"source": "<your-source-camera-name>",
"pipeline": [
{
"type": "undistort",
"attributes": {
"intrinsic_parameters": {
"width_px": <int>,
"height_px": <int>,
"ppx": <float>,
"ppy": <float>,
"fx": <float>,
"fy": <float>
},
"distortion_parameters": {
"rk1": <float>,
"rk2": <float>,
"rk3": <float>,
"tp1": <float>,
"tp2": <float>
}
}
}
]
}
Attributes:
- intrinsic_parameters: The intrinsic parameters of the camera used to do 2D <-> 3D projections.
  - width_px: The expected width of the aligned image in pixels. Value must be >= 0.
  - height_px: The expected height of the aligned image in pixels. Value must be >= 0.
  - ppx: The image center x point.
  - ppy: The image center y point.
  - fx: The image focal x.
  - fy: The image focal y.
- distortion_parameters: Modified Brown-Conrady parameters used to correct for distortions caused by the shape of the camera lens.
  - rk1: The radial distortion x.
  - rk2: The radial distortion y.
  - rk3: The radial distortion z.
  - tp1: The tangential distortion x.
  - tp2: The tangential distortion y.
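For example, the following pipeline undistorts images from a source camera named my-webcam using illustrative intrinsic and distortion values for a 640 x 480 camera; replace them with the parameters from your own camera calibration:
{
  "source": "my-webcam",
  "pipeline": [
    {
      "type": "undistort",
      "attributes": {
        "intrinsic_parameters": {
          "width_px": 640,
          "height_px": 480,
          "ppx": 320.0,
          "ppy": 240.0,
          "fx": 600.0,
          "fy": 600.0
        },
        "distortion_parameters": {
          "rk1": 0.11,
          "rk2": -0.25,
          "rk3": 0.01,
          "tp1": 0.0,
          "tp2": 0.0
        }
      }
    }
  ]
}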
Example
The following configuration rotates and then resizes the images from a source camera named physical_cam:
{
"name": "camera_name",
"model": "transform",
"type": "camera",
"namespace": "rdk",
"attributes": {
"source": "physical_cam",
"pipeline": [
{ "type": "rotate", "attributes": {} },
{ "type": "resize", "attributes": { "width_px": 200, "height_px": 100 } }
]
}
}
View the camera stream
Once your camera is configured and connected, expand the TEST panel on the CONFIGURE or CONTROL tabs. If everything is configured correctly, you will see the live feed from your camera.
Troubleshooting
If your camera is not working as expected, follow these steps:
- Review your machine logs on the LOGS tab for errors.
- Review this camera model’s documentation to ensure you have configured all required attributes.
- Click on the TEST panel on the CONFIGURE or CONTROL tab and test if you can use the camera there.
If none of these steps work, reach out to us on the Community Discord and we will be happy to help.
Next steps
For more configuration and usage info, see: