Configure Visual Odometry
Viam provides a `viam-visual-odometry` modular resource that uses monocular visual odometry to enable any calibrated camera to function as a movement sensor. In this way, you can add basic movement sensing to your camera-equipped robot without needing a dedicated hardware movement sensor.
The `viam-visual-odometry` module implements the following two methods of the movement sensor API:

- `GetLinearVelocity()`
- `GetAngularVelocity()`
Note that `GetLinearVelocity()` returns an estimate of the instantaneous linear velocity without a scale factor. Therefore, you should not treat the returned unit measurements as trustworthy: instead, use `GetLinearVelocity()` as a direction estimate only.
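For example, a minimal, hypothetical post-processing sketch (plain Python, not part of the module) that discards the untrustworthy magnitude of a velocity reading and keeps only its direction:

```python
import math

def direction_only(vx: float, vy: float, vz: float):
    """Normalize a velocity reading to a unit direction vector.

    Because monocular visual odometry has no absolute scale, the
    magnitude of the vector returned by GetLinearVelocity() is not
    trustworthy; only the direction carries useful information.
    """
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    if norm == 0.0:
        return (0.0, 0.0, 0.0)
    return (vx / norm, vy / norm, vz / norm)

# The raw magnitude (5.0 here) is arbitrary; only the direction survives.
print(direction_only(3.0, 4.0, 0.0))  # (0.6, 0.8, 0.0)
```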
While `viam-visual-odometry` enables you to add movement sensing abilities to your robot without specialized hardware, a dedicated movement sensor will generally provide more accurate readings. If your robot requires precise awareness of its location and movement, consider using a dedicated movement sensor in addition to the `viam-visual-odometry` module.
The `viam-visual-odometry` module is available from the Viam registry. See Modular resources for instructions on using a module from the Viam registry on your robot. The source code for this module is available in the `viam-visual-odometry` GitHub repository.
Requirements
Follow the instructions below to download and set up the `viam-visual-odometry` module on your robot:

1. Clone the `viam-visual-odometry` repository to your system:

   git clone git@github.com:viamrobotics/viam-visual-odometry.git
   cd viam-visual-odometry

2. Install the necessary Python dependencies:

   pip install -r requirements.txt

3. If you haven't already, install `viam-server` on your robot.
Configuration
To configure the `viam-visual-odometry` module on your robot, follow the instructions below:
1. Navigate to the Config tab of your robot's page in the Viam app.
2. Click on the Components subtab and find the Create component pane.
3. Enter a name for your camera, select the `camera` type, and select the `webcam` model. Click Create component.
4. In the resulting `camera` component configuration pane, select a Video path for your camera. If your robot is live, the drop-down menu auto-populates any identified camera stream paths.
5. Switch to the Modules subtab and find the Add module pane.
6. Enter a name for your visual odometry module, provide the full path to the `run.sh` file in the Executable path field, then click Add module.
7. Switch back to the Components subtab and find the Create component pane.
8. Enter a name for your odometry movement sensor, select the `movement_sensor` type, and enter `viam:visual_odometry:opencv_orb` for the model, then click Create component.
9. In the resulting `movement_sensor` component configuration pane, paste the following configuration into the Attributes text window:

   {
     "camera_name": "<your-camera-name>",
     "time_between_frames_s": <time_seconds>,
     "lowe_ratio_threshold": <lowe_ratio_threshold>
   }

   Provide the same camera name as you used in step 3. See the Attributes section for more information on the other attributes.
Click Save config at the bottom of the page.
{
"modules": [
{
"name": "<your-visual-odometer-name>",
"executable_path": "</path/to/run.sh>",
"type": "local"
}
],
"components": [
{
"name": "<your-camera-name>",
"type": "camera",
"model": "webcam",
"attributes": {
"video_path": "<path-to-video-stream>",
"height_px": <height>,
"width_px": <width>,
"intrinsic_parameters": {
"ppx": <ppx>,
"ppy": <ppy>,
"fx": <fx>,
"fy": <fy>
},
"distortion_parameters": {
"rk3": <rk3>,
"tp1": <tp1>,
"tp2": <tp2>,
"rk1": <rk1>,
"rk2": <rk2>
}
},
"depends_on": []
},
{
"name": "<your_movement_sensor_name>",
"type": "movement_sensor",
"namespace": "rdk",
"model": "viam:visual_odometry:opencv_orb",
"attributes": {
"camera_name": "<your-camera-name>",
"time_between_frames_s": <time_seconds>,
"lowe_ratio_threshold": <lowe_ratio_threshold>
},
"depends_on": []
}
]
}
{
"modules": [
{
"name": "my-odometry-module",
"executable_path": "/path/to/run.sh",
"type": "local"
}
],
"components": [
{
"name": "my-camera",
"type": "camera",
"model": "webcam",
"attributes": {
"video_path": "FDF90FEB-59E5-4FCF-AABD-DA03C4E19BFB",
"height_px": 720,
"width_px": 1280,
"intrinsic_parameters": {
"ppx": 446,
"ppy": 585,
"fx": 1055,
"fy": 1209
},
"distortion_parameters": {
"rk3": -0.03443,
"tp1": 0.01364798,
"tp2": -0.0107569,
"rk1": -0.1621,
"rk2": 0.13632
}
},
"depends_on": []
},
{
"name": "my_movement_sensor",
"type": "movement_sensor",
"namespace": "rdk",
"model": "viam:visual_odometry:opencv_orb",
"attributes": {
"camera_name": "my-camera",
"time_between_frames_s": 0.2,
"lowe_ratio_threshold": 0.75
},
"depends_on": []
}
]
}
Camera calibration
Once you have configured a camera component, you need to calibrate it. Because the `viam-visual-odometry` module performs visual odometry calculations, its visual data source (the camera) must be as well defined as possible. These calibration steps ensure that the video stream data that reaches the module is as uniform as possible when calculating measurements.
- Follow the Calibrate a camera procedure to generate the required intrinsic parameters specific to your camera.
- Copy the resulting intrinsics data into your robot configuration, either in the Config builder or in the Raw JSON. See the JSON Example tab above for an example intrinsics configuration.
Camera calibration results should look similar to the following example, with readings specific to your camera:
Example output:
"intrinsic_parameters": {
"fy": 940.2928257873841,
"height_px": 480,
"ppx": 320.6075282958033,
"ppy": 239.14408757087756,
"width_px": 640,
"fx": 939.2693584627577
},
"distortion_parameters": {
"rk2": 0.8002516496932317,
"rk3": -5.408034254951954,
"tp1": -0.000008996658362365533,
"tp2": -0.002828504714921335,
"rk1": 0.046535971648456166
}
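For reference, the `fx`, `fy`, `ppx`, and `ppy` fields above correspond to the entries of the standard 3×3 pinhole intrinsic matrix. A minimal sketch (pure Python; the helper name is illustrative, not part of any Viam tooling):

```python
def camera_matrix(fx: float, fy: float, ppx: float, ppy: float):
    """Assemble the standard 3x3 pinhole intrinsic matrix K from the
    calibration fields used in the camera configuration."""
    return [
        [fx, 0.0, ppx],   # focal length x, principal point x
        [0.0, fy, ppy],   # focal length y, principal point y
        [0.0, 0.0, 1.0],
    ]

# Values from the example calibration output above:
K = camera_matrix(fx=939.2693584627577, fy=940.2928257873841,
                  ppx=320.6075282958033, ppy=239.14408757087756)
print(K[0][2])  # 320.6075282958033 (principal point x, in pixels)
```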
Copy calibration data
When you copy the calibration results into your `camera` component configuration, be sure to provide these values to the correct attributes in the target `camera` configuration. Specifically, note that the `height_px` and `width_px` attributes are not contained within the `intrinsic_parameters` object in the `camera` configuration, but are located outside of it.
Attributes
The following attributes are available to configure the `viam-visual-odometry` module:
| Name | Type | Inclusion | Default | Description |
| ---- | ---- | --------- | ------- | ----------- |
| `camera_name` | string | Required | | Camera name to be used for inferring the motion. |
| `time_between_frames_s` | float | Optional | `0.1` | Target time between two successive frames, in seconds. Depending on the inference time and the time to get an image, the sleep time after each inference is auto-tuned to reach this target. Additionally, if the time between two successive frames is 5x larger than `time_between_frames_s`, another frame is requested. This value depends on the speed of your system. |
| `orb_n_features` | int | Optional | `10000` | Maximum number of features to retain. |
| `orb_edge_threshold` | int | Optional | `31` | Size of the border where features are not detected. It should roughly match the `orb_patch_size` attribute. |
| `orb_patch_size` | int | Optional | `31` | Size of the patch used by the oriented BRIEF descriptor. |
| `orb_n_levels` | int | Optional | `8` | Number of pyramid levels. |
| `orb_first_level` | int | Optional | `0` | Level of the pyramid to put the source image into. |
| `orb_fast_threshold` | int | Optional | `20` | FAST threshold. |
| `orb_scale_factor` | float | Optional | `1.2` | Pyramid decimation ratio, greater than 1. |
| `orb_WTA_K` | int | Optional | `2` | Number of points that produce each element of the oriented BRIEF descriptor. |
| `matcher` | string | Optional | `"flann"` | Either `"flann"` for the FLANN-based matcher or `"BF"` for the brute-force matcher. The FLANN matcher looks for the two best matches using the KNN method so that Lowe's ratio test can be performed afterward. The brute-force matcher uses the Hamming norm. |
| `lowe_ratio_threshold` | float | Optional | `0.8` | Threshold value to check whether the best match is significantly better than the second-best match. This value is not used if the brute-force matcher is chosen. |
| `ransac_prob` | float | Optional | `0.99` | Probability of finding a subset without outliers in it. Defines the number of iterations used to filter outliers. The number of iterations is roughly given by $k = \frac{\log(1-p)}{\log(1-w^n)}$, where $n$ is the number of points and $w$ is the ratio of inliers to total points. |
| `ransac_threshold_px` | float | Optional | `0.5` | Maximum error for a point to be classified as an inlier. |
See the ORB OpenCV documentation for more details.
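To get a feel for how `ransac_prob` translates into work, this sketch evaluates the iteration formula from the table above. The 5-point sample size is an assumption based on OpenCV's five-point essential-matrix estimator, not something the module documents:

```python
import math

def ransac_iterations(p: float, w: float, n: int) -> int:
    """Approximate RANSAC iteration count k = log(1-p) / log(1-w^n),
    where p is the desired probability of drawing at least one
    outlier-free sample, w is the inlier ratio, and n is the number
    of points per sample."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** n))

# With the default ransac_prob of 0.99, an assumed 50% inlier ratio,
# and a 5-point sample:
print(ransac_iterations(p=0.99, w=0.5, n=5))  # 146
```

A higher inlier ratio shrinks the count quickly, which is why well-matched ORB features keep the RANSAC stage cheap.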
Test the movement sensor
After you configure your movement sensor, navigate to the Control tab and select the dedicated movement sensor dropdown panel. This panel presents the data collected by the movement sensor. The sections in the panel include the position, orientation, angular velocity, linear velocity, and linear acceleration.