Configure a SCUTTLE Robot with a Camera
Configure a SCUTTLE Robot on the Viam platform.
Use a webcam, a video file, or a stream as a camera.
Configure a camera that gets color or depth image frames from a file path.
Configure a camera that uses a Velodyne lidar.
Configure a standard camera that streams camera data.
Configure a streaming camera with an MJPEG track.
Configure a camera to use for testing.
Configure a camera that streams image data from an HTTP endpoint.
Use the intrinsics of the color and depth camera, as well as the extrinsic pose between them, to align two images.
Combine the streams of two camera servers to create colorful point clouds.
Use a homography matrix to align the color and depth images.
Combine and align the streams of a color and a depth camera.
A camera captures 2D or 3D images and sends them to the computer controlling the robot.
Combine the point clouds from multiple camera sources and project them to be from the point of view of target_frame.
Use the Vision Service in the Viam app to detect a color with the Viam Rover.
How to turn a light on when your webcam sees a person.
Calibrate a camera and extract the intrinsic and distortion parameters.
Use the Vision Service and the Python SDK to send yourself a text message when your webcam detects a person.
Build a line-following robot that relies on a webcam and color detection.
Instructions to run Cartographer SLAM with an RPLidar or a sample dataset.
Harness AI to add life to your Viam rover.
Instructions for transforming a camera.
Build a foam dart launcher with a wheeled rover and a Raspberry Pi.
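Many of the pages above walk through configuring a camera component. As a rough illustration only, a webcam component in a robot's JSON config might look something like the sketch below; the `webcam` model name and the `video_path` attribute follow common Viam examples, but the exact schema should be checked against the current camera documentation:

```json
{
  "components": [
    {
      "name": "my-webcam",
      "model": "webcam",
      "type": "camera",
      "attributes": {
        "video_path": "0"
      }
    }
  ]
}
```

Here `video_path` selects which video device to stream from (for example, device index 0 on a Linux machine); other camera models listed above, such as the file-based or transform cameras, take different attributes.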