Changelog
Builtin models moved to modules
The following resource models have moved to modules.
| Resource | Model |
| --- | --- |
| board | `customlinux` |
| board | `jetson` |
| board | `pca9685` |
| board | `odroid` |
| board | `ti` |
| board | `pi` |
| board | `orange-pi` |
| board | `upboard` |
| motor | `tmc5072` |
| motor | `28byj-48` |
| encoder | `ams-as5048` |
| movement sensor | `adxl345` |
| movement sensor | `dual-gps-rtk` |
| movement sensor | `gps-nmea-rtk-pmtk` |
| movement sensor | `gps-nmea-rtk-serial` |
| movement sensor | `gps-nmea` |
| movement sensor | `imu-wit` |
| movement sensor | `imu-wit-hwt905` |
| movement sensor | `mpu6050` |
| power sensor | `ina219` |
| power sensor | `ina226` |
| sensor | `bme280` |
| sensor | `sensirion-sht3xd` |
| sensor | `pi` |
| ML model | TFLite CPU |
The following models were removed:
| Resource | Model |
| --- | --- |
| gripper | `softrobotics` |
| motor | `encoded-motor` |
| motor | `gpiostepper` |
| motor | `roboclaw` |
| sensor | `ds18b20` |
Set data retention policies
You can now set how long data collected by a component should remain stored in the Viam Cloud in the component’s data capture configuration. For more information, see Data management service.
Pi models moved to module
The Raspberry Pi 4, 3, and Zero 2 W boards are now supported by `viam:raspberry-pi:rpi`.
ESP32 cameras
`viam-micro-server` now supports cameras on ESP32s.
For more information, see Configure an esp32-camera.
Micro-RDK now called viam-micro-server
The lightweight version of `viam-server` that is built from the micro-RDK is now referred to as `viam-micro-server`.
For more information, see viam-micro-server.
Provisioning
You can now configure provisioning for machines with the Viam Agent. For more information, see Configure provisioning with viam-agent.
Data capture for vision
Data capture is now possible for the vision service. For more information, see Supported components and services.
Create custom training scripts
You can now upload custom training scripts to the Viam Registry and use them to train machine learning models. For more information, see Create custom training scripts.
Operators can now view data
The operator role now has view permissions for the data in the respective resource a user has access to. For more information, see Data and machine learning permissions.
Python get_robot_part_logs parameters
The `errors_only` parameter has been removed from `get_robot_part_logs()` and replaced with `log_levels`.
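For example, where you previously passed `errors_only=True`, you now pass a list of level names. A minimal sketch with the Python app client (the part ID and level names are placeholders; `app_client` stands in for an already-connected `AppClient`, run inside an async function):

```python
# Fetch only error- and warning-level logs for one machine part.
logs = await app_client.get_robot_part_logs(
    robot_part_id="<PART-ID>",        # placeholder part ID
    log_levels=["error", "warn"],     # replaces the removed errors_only=True
)
for entry in logs:
    print(entry.level, entry.message)
```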
Return type of analog Read
The board analog API `Read()` method now returns an `AnalogValue` struct instead of a single int.
The struct contains an int representing the value of the reading, the minimum and maximum of the reading's range, and the precision of the reading.
CaptureAllFromCamera and GetProperties to vision API
The vision service now supports two new methods: `CaptureAllFromCamera` and `GetProperties`.
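A sketch of the Python equivalents, assuming `detector` is a connected vision service client and `"my-camera"` is a configured camera name (run inside an async function):

```python
# Capture an image and its detections in one consistent call.
result = await detector.capture_all_from_camera(
    "my-camera",
    return_image=True,
    return_detections=True,
)
print(result.detections)

# Check which kinds of results this vision service can produce.
props = await detector.get_properties()
print(props.detections_supported, props.classifications_supported)
```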
Renamed GeoObstacle to GeoGeometry
The motion service API parameter `GeoObstacle` has been renamed to `GeoGeometry`.
This affects users of the `MoveOnGlobe()` method.
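A sketch of constructing the renamed type in the Python SDK, assuming `GeoGeometry` keeps the same `location` and `geometries` fields that `GeoObstacle` had; the coordinates and radius are illustrative:

```python
from viam.proto.common import GeoGeometry, GeoPoint, Geometry, Sphere

# An obstacle passed to MoveOnGlobe: a 100 mm sphere at the given coordinates.
obstacle = GeoGeometry(
    location=GeoPoint(latitude=40.7, longitude=-73.98),
    geometries=[Geometry(sphere=Sphere(radius_mm=100))],
)
```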
Return type of GetImage
The Python SDK introduced a new image container class called `ViamImage`.
The camera component's `GetImage()` method now returns a `ViamImage` type, and the vision service's `GetDetections()` and `GetClassifications()` methods take in `ViamImage` as a parameter.
You can use the helper functions `viam_to_pil_image` and `pil_to_viam_image` provided by the Python SDK to convert the `ViamImage` into a PIL `Image` and vice versa.
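A minimal sketch of the round trip, assuming the helpers are exported from `viam.media.utils.pil` and that `camera` and `detector` are connected clients (run inside an async function):

```python
from viam.media.utils.pil import pil_to_viam_image, viam_to_pil_image
from viam.media.video import CameraMimeType

viam_img = await camera.get_image()        # now returns a ViamImage
pil_img = viam_to_pil_image(viam_img)      # convert to PIL for local processing
cropped = pil_img.crop((0, 0, 320, 240))
viam_img2 = pil_to_viam_image(cropped, CameraMimeType.JPEG)
detections = await detector.get_detections(viam_img2)  # now takes a ViamImage
```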
WriteAnalog from Go SDK
The `WriteAnalog()` method has been removed from the Go SDK.
Use `AnalogByName()` followed by `Write()` instead.
Python SDK data retrieval behavior
`tabular_data_by_filter()` and `binary_data_by_filter()` now return paginated data.
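A sketch of paging through results, assuming each call returns the data page, a count, and a `last` token to pass into the next call; `data_client` and `my_filter` stand in for your connected data client and existing `Filter` (run inside an async function):

```python
all_data = []
last = None
while True:
    tabular_data, count, last = await data_client.tabular_data_by_filter(
        filter=my_filter,   # your existing Filter
        last=last,          # resume where the previous page ended
    )
    if not tabular_data:
        break
    all_data.extend(tabular_data)
```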
Renamed AnalogReader to Analog
`AnalogReader` has been renamed to `Analog`.
The functionality remains the same, but code that uses analogs must be updated.
`AnalogReaderByName()` and `AnalogReaderNames()` have become `AnalogByName()` and `AnalogNames()` (since deprecated), respectively.
Part online and part offline triggers
You can now configure triggers to execute actions when a machine part comes online or goes offline.
Status from Board API
Viam has removed support for the following board API methods: `Status()`, `AnalogStatus()`, `DigitalInterruptStatus()`, `Close()`, `Tick()`, `AddCallback()`, and `RemoveCallback()`.
Removed and replaced camera models
Viam has removed support for the following builtin camera models: `single_stream`, `dual_stream`, `align_color_depth_extrinsics`, and `align_color_depth_homography`.
Updated GetCloudMetadata response
In addition to the existing returned metadata, the `GetCloudMetadata` method now returns `machine_id` and `machine_part_id`.
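For example, in the Python SDK (assuming `machine` is a connected `RobotClient`, inside an async function), the new fields are available alongside the existing ones:

```python
metadata = await machine.get_cloud_metadata()
print(metadata.machine_id)        # newly returned
print(metadata.machine_part_id)   # newly returned
```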
Viam app interface
The Viam app machine page UI has been updated significantly. In addition to other improvements, your component, service, and other resource config cards are all displayed on one page instead of in separate tabs.
Additional ML models
Viam has added support for the TensorFlow, PyTorch, and ONNX ML model frameworks, expanding upon the existing support for TensorFlow Lite models. You can now upload your own ML model using any of these frameworks for use with the vision service.
Ultrasonic sensor for `viam-micro-server`
You can now use the ultrasonic sensor component with `viam-micro-server` to integrate an HC-SR04 ultrasonic distance sensor into a machine running `viam-micro-server`.
Edit a machine configuration that uses a fragment
You can now edit the configuration of an existing machine that has been configured with a fragment by using the `fragment_mods` object in your configuration.
You can use `fragment_mods` to deploy a fragment to a fleet of machines while still making additional per-machine edits as needed.
Dual GPS movement sensor
You can now use the dual GPS movement sensor component to integrate a movement sensor that employs two GPS sensors into your machine. The dual GPS movement sensor calculates a compass heading from both GPS sensors, and returns the midpoint position between the two sensors as its position.
Viam Agent
You can now use the Viam Agent to provision your machine or fleet of machines during deployment.
The Viam Agent is a software provisioning manager that you can install on your machine. It manages your `viam-server` installation, including installation and ongoing updates, and provides flexible deployment configuration options, such as pre-configured WiFi network credentials.
Generic service
You can now use the generic service to define new, unique types of services that do not already have an appropriate API defined for them.
ML models in the registry
You can now upload machine learning (ML) models to the Viam Registry, in addition to modules. You may upload models you have trained yourself using the Viam app, or models you have trained outside of the app. When uploading, you have the option to make your model available to the general public for reuse.
Sensor-controlled base
Viam has added a sensor-controlled base component model, which supports a robotic base that receives feedback control from a movement sensor.
Visualize captured data
You can now visualize your data using many popular third-party visualization tools, including Grafana, Tableau, Google’s Looker Studio, and more. You can visualize any data, such as sensor readings, that you have synced to the Viam app from your machine.
See Visualize data with Grafana for a full walkthrough focused on Grafana specifically.
Use triggers to trigger actions
You can now configure triggers (previously called webhooks) to execute actions when certain types of data are sent from your machine to the cloud.
Filtered camera module
Viam has added a `filtered-camera` module that selectively captures and syncs only the images that match the detections of an ML model.
For example, you could train an ML model that is focused on sports cars, and only capture images from the camera feed when a sports car is detected in the frame.
Check out this guide for more information.
Raspberry Pi 5 Support
You can now run `viam-server` on a Raspberry Pi 5 with the new board model `pi5`.
Role-based access control
Users can now have access to different fleet management capabilities depending on whether they are an owner or an operator of a given organization, location, or machine.
Authenticate with location API key
You can now use API keys for authentication. API keys allow you to assign the minimum required permissions for usage. Location secrets, the previous method of authentication, is deprecated and will be removed in a future release.
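A minimal connection sketch using an API key with the Python SDK; the key, key ID, and machine address are placeholders:

```python
import asyncio

from viam.robot.client import RobotClient


async def connect() -> RobotClient:
    # Build connection options from an API key instead of a location secret.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    return await RobotClient.at_address("<MACHINE-ADDRESS>", opts)


async def main():
    machine = await connect()
    print("connected:", machine)
    await machine.close()


asyncio.run(main())
```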
Queryable sensor data
Once you have added the data management service and synced data, such as sensor readings, to the Viam app, you can now run queries against both the captured data and its metadata using either SQL or MQL.
For more information, see Query Data with SQL or MQL.
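A sketch of a SQL query through the Python data client; the organization ID, parameter names, and query text are illustrative, and `data_client` stands in for your connected data client (run inside an async function):

```python
# Query synced sensor readings with SQL; an MQL variant is available through a parallel method.
rows = await data_client.tabular_data_by_sql(
    organization_id="<ORG-ID>",
    sql_query="SELECT * FROM readings LIMIT 5",
)
print(rows)
```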
Model training from datasets
To make it easier to iterate while training machine learning models from image data, you now train models from datasets.
Manage users access
You can now manage user access to machines, locations, and organizations. For more information, see Access Control.
Test an ML model in browser
After you upload and train a machine learning model, you can test its results in the Data tab.
This allows you to refine models by iteratively tagging more images for training based on observed performance.
For more information, see Test classification models with existing images in the cloud.
To use this update, the classifier must have been trained or uploaded after September 19, 2023. The current version of this feature exclusively supports classification models.
PLC support
The Viam platform now supports the Revolution Pi line of PLCs from KUNBUS in the form of a module. This collaboration allows you to leverage the Raspberry Pi-based Revolution Pi, which runs on Linux and has specially designed I/O modules for streamlined interaction with industrial controls, eliminating the need for additional components.
Read the Viam PLC Support blog post for a step-by-step guide on using a PLC with Viam.
SLAM map creation
The Cartographer module now runs in Viam’s cloud for creating or updating maps. This enhancement allows you to:
- Generate larger maps without encountering session timeouts
- Provide IMU input to improve map quality
- Save maps to the SLAM library
- Create or update maps using previously captured LiDAR and IMU data
- Deploy maps to machines
Modular registry
The Modular Registry enables you to use, create, and share custom modules, extending the capabilities of Viam beyond the components and services that are natively supported.
You can:
- Publish modules on the registry
- Add modules to any machine’s configuration with a few clicks
- Select the desired module version for deployment, make changes at your convenience, and deploy the updates to a single machine or an entire fleet.
Mobile app
You can use a mobile application, available for download now in the Apple and Google Play app stores, to connect to and control your Viam-powered machines directly from your mobile device.
Power sensor component
You now have the capability to use a power sensor component to monitor the voltage, current, and power consumption within your machine’s system.
Filter component’s data before the cloud
Viam has written a module that allows you to filter data based on specific criteria before syncing it to Viam’s cloud. It equips machines to:
- Remove data that is not of interest
- Facilitate high-interval captures while saving data based on your defined metrics
- Prevent the upload of unnecessary data
To learn more, see this tutorial on creating and configuring a data filtration module.
Configure a custom Linux board
You can now use boards like the Mediatek Genio 500 Pumpkin that run Linux operating systems with the `customlinux` board model.
Image inspection for ML training
This update enables you to get a closer examination of your image and streamline your image annotation experience by making it easier to add bounding boxes and labels in the Data tab.
With the latest improvements, you can now:
- Navigate between images using the arrow keys in the main image view
- Expand images for a more detailed inspection by clicking the expand button on the right image panel
- Move between full-screen images effortlessly with the <> arrow buttons or arrow keys
- Return to the standard view by using the escape key or collapse button
Duplicate component button
You now have the ability to duplicate any config component, service, module, remote, or process.
To use this feature:
- Click on the duplicate component icon at the top right of any resource
- Optionally, you can modify the component name to distinguish it
- Adjust any attributes, such as motor pin numbers
Apple SSO authentication
Viam now supports sign-up/log-in through Apple Single Sign-On.
Note that currently, accounts from different SSO providers are treated separately, with no account merging functionality.
Arm component API
Arm models now support the `GetKinematics` method in the arm API, allowing you to request and receive kinematic information.
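A sketch of calling it from the Python SDK, assuming the method is exposed as `get_kinematics()` and returns the kinematics file format along with the raw file data; `arm` stands in for a connected arm client (run inside an async function):

```python
# Request the arm's kinematic description (for example, an SVA or URDF file).
file_format, kinematics_data = await arm.get_kinematics()
print(file_format, len(kinematics_data))
```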
View sensor data within Viam
You can now view your sensor data directly in the Viam app to verify data creation and accuracy. If you depend on sensor data to plan and control machine operations, this feature increases access to data and supports a more efficient workflow.
Session management in the Python SDK
The Python SDK now includes sessions, a safety feature that automatically cancels operations if the client loses connection to your machine.
Session management helps you to ensure safer operation of your machine when dealing with actuating controls. Sessions are enabled by default, with the option to disable sessions.
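Sessions require no extra code to use; to opt out, a sketch of the relevant option (assuming the `Options` dataclass exposes a `disable_sessions` flag) looks like this, run inside an async function:

```python
from viam.robot.client import RobotClient
from viam.rpc.dial import DialOptions

opts = RobotClient.Options(
    dial_options=DialOptions(),   # add your credentials here
    disable_sessions=True,        # sessions are enabled by default
)
machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)
```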
Connect an ODrive motor controller as a Viam module
You can integrate and control ODrive motor controllers with Viam using the `odrive` module from the Viam Registry.
See the Odrive module readme to learn how to connect and use an ODrive motor controller with Viam, and view the sample configurations.
Implement custom robotic arms as Viam modules
When prototyping a robotic arm, you can now facilitate movement without creating your own motion planning. This update enables you to implement custom models of an arm component as a modular resource by coding three endpoints of the Arm API:
- `GetJointPositions`
- `MoveToJointPositions`
- `GetKinematics`
Then, use the motion planning service to specify poses, and Viam handles the rest.
For more information, see this tutorial on creating a custom arm.
Apply a crop transform to camera views
You can now apply a crop transform to the views of your connected cameras in the Viam app.
This feature enables you to focus on a specific area of your camera feed.
For example, crop a video stream of a busy street to just the sidewalk.
Gantry component
To better control gantries with Viam, you can now:
- Specify speed values when calling the `MoveToPosition` method on Gantry components. This allows you to define the speed at which each axis moves to the desired position, providing enhanced precision and control over the gantry’s movement.
- Set a home position for Gantry components to facilitate position resetting or maintain consistent starting points.
Optimized Viam-trained object detection models
This update for object detection models trained with the machine learning service brings significant improvements, including:
- 76% faster model inference for camera streams
- 64% quicker model training for object detection
- 46% reduction in compressed model size
TypeScript SDK beta release
The beta release of the TypeScript SDK allows you to create a web interface to work with your machine, as well as create custom components and services.
Train object detection ML models
You now have the capability to directly train object detection models in addition to image classification models from within the Viam app.
This update allows you to:
- Add labels by drawing bounding boxes around specific objects in your images or a single image.
- Create a curated subset of data for training by filtering images based on labels or tags.
Permissions for organizations in Viam
Now when you invite collaborators to join your organization, you can assign permissions to members by setting one of these roles:
- Owner: These members can see and edit every tab on the machine page, as well as manage users in the app. This role is best for those on your team who are actively engineering and building machines.
- Operator: These members can only see and use the remote control tab. This role is best for those on your team who are teleoperating or remotely controlling machines.
For more information about assigning permissions and collaborating with others on Viam, see Fleet Management.
Control RoboClaw motor controllers with the driver
When using a RoboClaw motor controller without encoders connected to your motors, you now have more direct control over the RoboClaw’s functionality within Viam or through the motor API.
For example, in the Viam app, you can now set Go For values for these motors, utilizing a time-based estimation for the number of revolutions.
Camera webcam names and setting framerates
The updates to the camera component have improved the process of connecting to and using cameras with your machines.
The latest updates enable you to:
- View readable webcam names in the video path of your camera component.
- Specify your preferred framerate by selecting the desired value in the newly added framerate field on the CONFIGURE tab.
Additions to code samples in the Viam app
The updated code samples now include:
- Options for C++ and TypeScript
- The ability to hide or display your machines’ secrets
Access these samples in the Code sample tab on your machine’s page to connect to your machine in various languages.
Delete data in bulk in the Viam app
You can manage the data synced to Viam’s cloud with the new capability for bulk data deletion on the Data tab.
Vision service
Important: Breaking Change
The vision service became more modular in RDK v0.2.36, API v0.1.118, and Python SDK v0.2.18.
Find more information on each of the changes below.
Use individual vision service instances
You need to create an individual vision service instance for each detector, classifier, and segmenter model. You can no longer create one vision service and register all of your detectors, classifiers, and segmenters within it.
Add and remove models using the machine config
You must add and remove models using the machine config. You will no longer be able to add or remove models using the SDKs.
Add machine learning vision models to a vision service
The way to add machine learning vision models is changing. You will need to first register the machine learning model file with the ML model service and then add that registered model to a vision service.
Machine learning for image classification models
You can now train and deploy image classification models with the data management service and use your machine’s image data directly within Viam. Additionally, you can upload and use existing machine learning models with your machines. For more information on using data synced to the cloud to train machine learning models, read Train a model.
Motion planning with new `constraint` parameter
A new parameter, `constraint`, has been added to the motion service API, allowing you to define restrictions on the machine’s movement.
The constraint system also provides flexibility to specify that obstacles should only impact specific frames of a machine.
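A sketch of supplying a linear constraint from the Python SDK, assuming the `Constraints` and `LinearConstraint` messages are importable from `viam.proto.service.motion`; `motion`, `gripper_name`, and `destination` stand in for your own motion client, resource name, and goal pose (run inside an async function):

```python
from viam.proto.service.motion import Constraints, LinearConstraint

# Keep the end effector within 0.5 mm of a straight-line path to the goal.
constraints = Constraints(linear_constraint=[LinearConstraint(line_tolerance_mm=0.5)])

await motion.move(
    component_name=gripper_name,
    destination=destination,
    constraints=constraints,
)
```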
Fragments in machine configuration
You can now access fragments in your machine configuration. The configurations you added will now show up automatically in the Builder view on your machine’s CONFIGURE tab. This makes it easier to monitor what fragments you’ve added to your machine and how they’re configured.
For more information, see Fragments.
Sticky GPS keys
GPS keys you enter are now saved in your local storage. This ensures that when you reload the page, your GPS keys remain accessible.
More reliable camera streams
The camera component’s streams are smoother and more reliable with recent improvements.
Additionally, camera streams automatically restart if you momentarily lose internet connection.
UI updates to Logs and History
The latest UI updates enable you to:
- Load a previous configuration for reverting changes made in the past
- Search logs by filtering keywords or log levels such as info or error messages
- Change your timestamp format to ISO or Local depending on your preference.
Rover reuse in Try Viam
You now have the option to reuse a machine config from a previous Try Viam session.
Dynamic code samples
The Viam app Code sample tab now dynamically updates as you add resources to your machine’s config.
The code samples instantiate each resource and include examples of how to call a `Get` method on it.
TypeScript SDK
Find more information in the TypeScript SDK docs.
Frame system visualizer
When adding frames to your machine’s config in the Viam app, you can now use the Frame System subtab of the CONFIGURE tab to more easily visualize the relative positions of frames.
Support for microcontrollers
`viam-micro-server` is a lightweight version of `viam-server` that can run on an ESP32.
Find more information in the `viam-micro-server` installation docs.
Remote control power input
On your machine’s CONTROL tab on the Viam app, you can now set the power of a base. The base control UI previously always sent 100% power to the base’s motors.
New encoder model: AMS AS5048
The AMS AS5048 is now supported.
GetLinearAcceleration method
The movement sensor API now includes a `GetLinearAcceleration` method.
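In the Python SDK this surfaces as `get_linear_acceleration()`, returning a vector of accelerations; a minimal sketch with a connected `movement_sensor` client (run inside an async function):

```python
accel = await movement_sensor.get_linear_acceleration()
print(accel.x, accel.y, accel.z)  # linear acceleration along each axis, in m/s²
```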
Support for capsule geometry
The motion service now supports capsule geometries.
The UR5 arm model has been improved using this new geometry type.
Modular resources
You can now implement your own custom resources as modular resources.
Important: Breaking Change
All users need to update to the latest version of the RDK (V3.0.0) to access machines using the Viam app.
URDF kinematic file support
You can now supply kinematic information using URDF files when implementing your own arm models.
New movement sensor models
Two new movement sensor models have been added.
Camera performance and reliability
- Improved server-side logic to choose a mime type based on the camera image type, unless a specified mime type is supplied in the request. The default mime type for color cameras is now JPEG, which improves the streaming rate across every SDK.
- Added discoverability when a camera reconnects without changing video paths. This now triggers the camera discovery process, where previously users would need to manually restart the RDK to reconnect to the camera.
Motion planning with remote components
The motion service is now agnostic to the networking topology of a machine.
- Kinematic information is now transferred over the robot API. This means that the motion service is able to get kinematic information for every component on the machine, regardless of whether it is on a main or remote `viam-server`.
- Arms are now an input to the motion service. This means that the motion service can plan for a machine that has an arm component regardless of whether the arm is connected to a main or remote part instance of `viam-server`.
Motion planning path smoothing
- RRT* paths now undergo rudimentary smoothing, resulting in improvements to path quality with negligible change to planning performance.
- Plan manager now performs direct interpolation for any solution within some factor of the best score, instead of only in the case where the best inverse kinematics solution could be interpolated.
Data synchronization reliability
Previously, data synchronization used bidirectional streaming. Now it uses a simpler unary approach that is more performant on batched unary calls, is easier to load balance, and maintains ordered captures.
Camera configuration
Changed the configuration schemes for the following camera models:
- Webcam
- FFmpeg
- Transform
- Join pointclouds
For information on configuring any camera model, see Camera Component.
New servo model
A new servo model called `gpio` supports servos connected to non-Raspberry Pi boards.
RTT indicator in the app
A badge in the Viam app now displays RTT (round trip time) of a request from your client to the machine. Find this indicator of the time to complete one request/response cycle on your machine’s CONTROL tab, in the Operations & Sessions card.
Python 3.8 support
The Python SDK now supports Python 3.8, in addition to 3.9 and 3.10.
New parameter: `extra`
A new API method parameter, `extra`, allows you to extend modular resource functionality by implementing the new field according to whatever logic you choose.
`extra` has been added to the following APIs: arm, data management, gripper, input controller, motion, movement sensor, navigation, pose tracker, sensor, SLAM, vision.
IMPORTANT: Breaking change
Users of the Go SDK must update code to specify `extra` in the arguments that pass into each request.
`extra` is an optional parameter in the Python SDK.
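For example, in the Python SDK, `extra` is a mapping passed through to the resource implementation; the key used here is illustrative and only meaningful if your model checks for it (run inside an async function with a connected `sensor` client):

```python
# A custom sensor model can read extra["mode"] and adjust its behavior accordingly.
readings = await sensor.get_readings(extra={"mode": "fast"})
```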
Service dependencies
`viam-server` now initializes and configures resources in the correct order.
For example, if the SLAM service depends on a LiDAR, it will always initialize the LiDAR before the SLAM service.
IMPORTANT: Breaking change
If you are using the SLAM service, you now need to specify sensors used by the SLAM service in the `depends_on` field of the SLAM configuration.
Other service configurations are not affected.
Width and height fields from camera API
Removed `width` and `height` from the response of the `GetImage` method in the camera API.
This does not impact any existing camera models.
If you write a custom camera model, you no longer need to implement the `width` and `height` fields.