Release Notes

25 April 2023

Vision Service

The vision service is becoming more modular in RDK v0.2.36, API v0.1.118, and Python SDK v0.2.18.

The following breaking changes will take effect:

Use individual vision service instances

You will need to create an individual vision service instance for each detector, classifier, and segmenter model. You will no longer be able to create one vision service and register all of your detectors, classifiers, and segmenters within it.

API calls

Change your existing API calls to get the new vision service instance for your detector, classifier, or segmenter model directly from the VisionClient:

# New: get the vision service instance for your model directly
my_object_detector = VisionClient.from_robot(robot, "find_objects")
img = await cam.get_image()
detections = await my_object_detector.get_detections(img)

# Old: pass the model name on each call
vision = VisionServiceClient.from_robot(robot)
img = await cam.get_image()
detections = await vision.get_detections(img, "find_objects")

Color Detector configurations

You can replace existing color detectors by configuring new ones in the UI or you can update the Raw JSON configuration of your robots:

"services": [
    {
        "name": "blue_square",
        "type": "vision",
        "model": "color_detector",
        "attributes": {
            "segment_size_px": 100,
            "detect_color": "#1C4599",
            "hue_tolerance_pct": 0.07,
            "value_cutoff_pct": 0.15
        }
    },
    {
        "name": "green_triangle",
        "type": "vision",
        "model": "color_detector",
        "attributes": {
            "segment_size_px": 200,
            "detect_color": "#62963F",
            "hue_tolerance_pct": 0.05,
            "value_cutoff_pct": 0.20
        }
    },
    ... // other services
]
"services": [
    {
        "name": "vision",
        "type": "vision",
        "attributes": {
            "register_models": [
            {
                "parameters": {
                    "segment_size_px": 100,
                    "detect_color": "#1C4599",
                    "hue_tolerance_pct": 0.07,
                    "value_cutoff_pct": 0.15
                },
                "name": "blue_square",
                "type": "color_detector"
            },
            {
                "parameters": {
                    "segment_size_px": 200,
                    "detect_color": "#62963F",
                    "hue_tolerance_pct": 0.05,
                    "value_cutoff_pct": 0.20
                },
                "name": "green_triangle",
                "type": "color_detector"
            }
            ]
        }
    },
    ... // other services
]

TFLite Detector configurations

You can replace existing TFLite detectors by configuring new ones in the UI or you can update the Raw JSON configuration of your robots:

"services": [
    {
        "name": "person_detector",
        "type": "mlmodel",
        "model": "tflite_cpu",
        "attributes": {
            "model_path": "/path/to/file.tflite",
            "label_path": "/path/to/labels.tflite",
            "num_threads": 1
        }
    },
    {
        "name": "person_detector",
        "type": "vision",
        "model": "mlmodel",
        "attributes": {
            "mlmodel_name": "person_detector"
        }
    },
    ... // other services
]
"services": [
    {
        "name": "vision",
        "type": "vision",
        "attributes": {
            "register_models": [
            {
                "parameters": {
                    "model_path": "/path/to/file.tflite",
                    "label_path": "/path/to/labels.tflite",
                    "num_threads": 1
                },
                "name": "person_detector",
                "type": "tflite_detector"
            }
            ]
        }
    },
    ... // other services
]

TFLite Classifier configurations

You can replace existing TFLite classifiers by configuring new ones in the UI or you can update the Raw JSON configuration of your robots:

"services": [
    {
        "name": "fruit_classifier",
        "type": "mlmodel",
        "model": "tflite_cpu",
        "attributes": {
            "model_path": "/path/to/classifier_file.tflite",
            "label_path": "/path/to/classifier_labels.txt",
            "num_threads": 1
        }
    },
    {
        "name": "fruit_classifier",
        "type": "vision",
        "model": "mlmodel",
        "attributes": {
            "mlmodel_name": "fruit_classifier"
        }
    },
    ... // other services
]
"services": [
    {
        "name": "vision",
        "type": "vision",
        "attributes": {
            "register_models": [
            {
                "parameters": {
                    "model_path": "/path/to/classifier_file.tflite",
                    "label_path": "/path/to/classifier_labels.txt",
                    "num_threads": 1
                },
                "name": "fruit_classifier",
                "type": "tflite_classifier"
            }
            ]
        }
    },
    ... // other services
]

Radius Clustering 3D Segmenter configurations

You can replace existing Radius Clustering 3D segmenters by configuring new ones in the UI or you can update the Raw JSON configuration of your robots:

"services": [
    {
        "name": "rc_segmenter",
        "type": "vision",
        "model": "obstacles_pointcloud"
        "attributes": {
            "min_points_in_plane": 1000,
            "min_points_in_segment": 50,
            "clustering_radius_mm": 3.2,
            "mean_k_filtering": 10
        }
    },
    ... // other services
]
"services": [
    {
        "name": "vision",
        "type": "vision",
        "attributes": {
            "register_models": [
            {
                "parameters": {
                    "min_points_in_plane": 1000,
                    "min_points_in_segment": 50,
                    "clustering_radius_mm": 3.2,
                    "mean_k_filtering": 10
                },
                "name": "rc_segmenter",
                "type": "radius_clustering_segmenter"
            }
            ]
        }
    },
    ... // other services
]

Detector to 3D Segmenter configurations

You can replace existing detector to 3D segmenters by configuring new ones in the UI or you can update the Raw JSON configuration of your robots:

"services": [
    {
        "name": "my_segmenter",
        "type": "vision",
        "model": "detector_3d_segmenter"
        "attributes": {
            "detector_name": "my_detector",
            "confidence_threshold_pct": 0.5,
            "mean_k": 50,
            "sigma": 2.0
        }
    },
    ... // other services
]
"services": [
    {
        "name": "vision",
        "type": "vision",
        "attributes": {
            "register_models": [
            {
                "parameters": {
                    "detector_name": "my_detector",
                    "confidence_threshold_pct": 0.5,
                    "mean_k": 50,
                    "sigma": 2.0
                },
                "name": "my_segmenter",
                "type": "detector_segmenter"
            }
            ]
        }
    },
    ... // other services
]

Add and remove models using the robot config

You must add and remove models using the robot config. You will no longer be able to add or remove models using the SDKs.

Add machine learning vision models to a vision service

The way to add machine learning vision models is changing. You will need to first register the machine learning model file with the ML model service and then add that registered model to a vision service.

28 February 2023

Release Versions

  • rdk - v0.2.18
  • api - v0.1.83
  • slam - v0.1.22
  • viam-python-sdk - v0.2.10
  • goutils - v0.1.13
  • rust-utils - v0.0.10

New Features

Reuse rovers in Try Viam

What is it? Users of Try Viam now have the option to reuse a robot config if they want to continue working on a project that they started in a prior session.

Dynamic Code Samples Tab

What is it? The code sample included for each SDK dynamically updates as resources are added to the config. We instantiate each resource and provide an example of how to call a simple Get method so that users can start coding right away without needing to import and provide the name of all of the components and services in their config.
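
The following is a minimal sketch of the kind of snippet the tab generates, assuming a robot with a camera named "cam"; the address and secret are placeholders to be copied from your own robot's page.

import asyncio

from viam.components.camera import Camera
from viam.robot.client import RobotClient
from viam.rpc.dial import Credentials, DialOptions


async def connect():
    # Placeholder credentials; copy the real values from your robot's page.
    creds = Credentials(type="robot-location-secret", payload="<LOCATION-SECRET>")
    opts = RobotClient.Options(refresh_interval=0, dial_options=DialOptions(credentials=creds))
    return await RobotClient.at_address("<ROBOT-ADDRESS>", opts)


async def main():
    robot = await connect()
    # One instantiation plus a simple Get call per configured resource:
    cam = Camera.from_robot(robot, "cam")
    img = await cam.get_image()
    print(img)
    await robot.close()


if __name__ == "__main__":
    asyncio.run(main())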

TypeScript SDK

What is it? Users who want to create web interfaces to control their robots can use the new TypeScript SDK as a client. Currently, only web browser apps are supported because of how networking is handled. The RDK server running on the robot can detect if a given SDK client session has lost communication because it tries to maintain a configurable heartbeat, by default once every 2 seconds. Users can choose to opt out of this session management.

Frame System Visualizer

What is it? Users can now set up a frame system on their robot using a 3D visualizer located in the Frame System tab of the config UI. Setting up the frame system hierarchy of a robot enables the RDK to transform poses between different component reference frames. Users can also give individual components a geometry so that the RDK's built-in motion planner can avoid obstacles while path planning.
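
As a hedged example of what a configured frame system enables, the sketch below assumes the Python SDK's RobotClient.transform_pose method, an already-connected robot client named robot, and a frame system that contains a camera frame named "cam"; the frame names and pose values are placeholders.

from viam.proto.common import Pose, PoseInFrame

# A pose expressed in the reference frame of a camera named "cam"
# (frame names and values are placeholders).
pose_in_camera = PoseInFrame(
    reference_frame="cam",
    pose=Pose(x=0, y=0, z=300, o_x=0, o_y=0, o_z=1, theta=0),
)

# Ask the robot's frame system to express the same pose in the world frame.
pose_in_world = await robot.transform_pose(pose_in_camera, "world")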

Viam for Microcontrollers

What is it? The Micro-RDK is a lightweight version of the RDK that can run on an ESP32. Examples and detailed setup instructions can be found in the Micro-RDK GitHub repo.

Improvements

Base control card UI

What is it? We have improved the UI of the base control card to make it easier to view multiple camera streams while remotely controlling a base. When a robot's config contains SLAM, we also now provide a view of the SLAM map with a dot to indicate where the robot is currently localized within that map.

Viam Server goes into restart loop instead of using cached config

Previously, when a viam-server that had cached its config while connected to the internet was restarted without internet access, it went into a restart loop instead of starting from the cached config.

Never have long-lived I2CHandle objects

Creating an I2CHandle locks the I2C bus that spawned it, and closing the handle unlocks the bus again. That way, only one device can talk over the bus at a time, and you don’t get race conditions. However, if a component creates a handle in its constructor, it locks the bus forever, which means no other component can use that bus.

We have changed components that stored an I2CHandle so that they instead store a pointer to the board's board.I2C bus itself, create a new handle when they want to send a command, and close it again as soon as they're done.
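
The following is a minimal sketch of that pattern in Python with hypothetical I2CBus, I2CHandle, and SensorDriver types (the RDK itself is written in Go; these names are illustrative only, not the RDK's actual API):

import threading


class I2CBus:
    """Hypothetical bus guarded by a lock so only one device talks at a time."""

    def __init__(self) -> None:
        self._lock = threading.Lock()

    def lock(self) -> None:
        self._lock.acquire()

    def unlock(self) -> None:
        self._lock.release()


class I2CHandle:
    """Opening a handle locks the bus; closing it unlocks the bus."""

    def __init__(self, bus: I2CBus) -> None:
        self._bus = bus
        self._bus.lock()

    def write(self, address: int, data: bytes) -> None:
        pass  # talk to the device at `address` here

    def close(self) -> None:
        self._bus.unlock()


class SensorDriver:
    """Stores the bus, not a handle, so the bus is only locked per command."""

    def __init__(self, bus: I2CBus, address: int) -> None:
        self._bus = bus          # keep the bus, never a long-lived handle
        self._address = address

    def send_command(self, data: bytes) -> None:
        handle = I2CHandle(self._bus)   # lock only for this command
        try:
            handle.write(self._address, data)
        finally:
            handle.close()              # always release the bus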

Sensor does not show GPS readings

We changed the sensor.Readings() "position" field to return the values of the *geo.Point being accessed.

Add implicit dependencies to servo implementation

All component drivers can now declare dependencies, which are used to infer the order of instantiation.

31 January 2023

Release Versions

  • rdk - v0.2.14
  • api - v0.1.63
  • slam - v0.1.17
  • viam-python-sdk - v0.2.8
  • goutils - v0.1.9
  • rust-utils - v0.0.9

New Features

Add Power Input to Remote Control

What is it? Users can now set the power of the base from the remote control UI. This sets the power percentage being sent to the motors that drive the base, which determines its overall speed.
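
The UI slider maps onto the base API's power call. A hedged Python SDK sketch, assuming an already-connected robot client named robot and a base named "my_base":

from viam.components.base import Base
from viam.proto.common import Vector3

my_base = Base.from_robot(robot, "my_base")  # "my_base" is a placeholder name

# Drive forward at 75% power with no rotation; values range from -1.0 to 1.0.
await my_base.set_power(
    linear=Vector3(x=0, y=0.75, z=0),
    angular=Vector3(x=0, y=0, z=0),
)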

New Drivers in the RDK: AMS AS5048 Encoder

What is it? RDK now natively supports the AMS AS5048 encoder. This is the encoder that is included in the SCUTTLE robot.

Improvements

Linear Acceleration

What is it? We added a GetLinearAcceleration method to the movement sensor API. This allows us to represent IMUs, which are commonly used by hobbyists, with the movement sensor interface.
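
A hedged Python SDK sketch of the corresponding call, assuming an already-connected robot client named robot and a movement sensor named "my_imu":

from viam.components.movement_sensor import MovementSensor

my_imu = MovementSensor.from_robot(robot, "my_imu")  # placeholder name

# Returns the linear acceleration as a Vector3 (x, y, z).
acceleration = await my_imu.get_linear_acceleration()
print(acceleration.x, acceleration.y, acceleration.z)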

Capsule Support & Improved UR5 Kinematics

What is it? We have added support for capsule geometries to our motion planning service. Using this new geometry type, we have also improved our representation of the kinematics of a UR5 arm.

Assertion Error

What is it? We were previously not able to send error messages over WebRTC to the Python SDK. This meant that users would see an unhelpful "Assertion Error" message. Now, we are able to surface those errors so that users have more feedback as they program in Python.

28 December 2022

Release Versions

  • rdk - v0.2.9
  • api - v0.1.31
  • slam - v0.1.13
  • viam-python-sdk - v0.2.6
  • goutils - v0.1.6
  • rust-utils - v0.0.6

New Features

Custom Modular Resources

What is it? This new feature allows users to implement their own custom components or component models using our Go SDK. We are now working to add support in each of our SDKs so that users can create custom resources in a variety of programming languages. Previously, the only way for users to implement custom resources was to use an SDK as a server. This meant that a user needed to run a viam-server for their custom component and add it to their main part as a remote. With custom modular resources, users no longer need to run separate server instances for each custom resource, which saves additional network requests.

URDF Support

What is it? Users who are implementing their own arms can now supply kinematic information using URDF files. This is a convenience for our users, since URDF files are readily available for common hardware.

New Movement Sensors

What is it? We added support for two new movement sensors. Refer to the Movement Sensor topic for more information.
  • ADXL345: A 3-axis accelerometer
  • MPU6050: A 6-axis accelerometer and gyroscope

Improvements

Improved Camera Performance/Reliability

What is it?
  1. Improved server-side logic to choose a mime type based on the camera image type, unless a specific mime type is supplied in the request. The default mime type for color cameras is now JPEG, which improves the streaming rate across every SDK (see the sketch after this list).
  2. Added discoverability when a camera reconnects without changing video paths. This now triggers the camera discovery process, where previously users would need to manually restart the RDK to reconnect to the camera.
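
A hedged Python SDK sketch of requesting a specific mime type, assuming an already-connected robot client named robot and a camera named "cam":

from viam.components.camera import Camera

cam = Camera.from_robot(robot, "cam")  # placeholder name

# Request JPEG explicitly; if mime_type is omitted, the server now chooses a
# default based on the camera's image type (JPEG for color cameras).
img = await cam.get_image(mime_type="image/jpeg")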

Motion Planning with Remote Components

What is it? We made several improvements to the motion service that make it agnostic to the networking topology of a user's robot.
What does it affect?
  1. Kinematic information is now transferred over the robot API. This means that the motion service is able to get kinematic information for every component on the robot, regardless of whether it is on a main or remote viam-server.
  2. Arms are now an input to the motion service. This means that the motion service can plan for a robot that has an arm component regardless of whether the arm is on a main or remote viam-server.

Motion Planning Path Smoothing

What is it? Various small improvements that follow the last major development.
What does it affect?
  1. Implementation of rudimentary smoothing for RRT* paths, resulting in improvements to path quality with negligible change to planning performance.
  2. Changes to plan manager behavior to perform direct interpolation for any solution within some factor of the best score, instead of only in the case where the best IK solution could be interpolated.

Improved Data Synchronization Reliability

What is it? We updated how captured data is uploaded from robots to app.viam.com.
What does it affect? We previously used bidirectional streaming, with the robot streaming sensor readings to the app and the app streaming acknowledgements of progress back to the robot. We switched to a simpler unary approach, which is more performant for batched unary calls, easier to load balance, and maintains ordered captures.

RDK Shutdown Failure

What is it? Fixed a bug where RDK shutdown requests sometimes failed when connected to serial components.

Python Documentation

What is it? Fixed issues around documentation rendering improperly in some browsers.

28 November 2022

Release Versions

  • rdk - v0.2.3
  • api - v0.1.12
  • slam - v0.1.9
  • viam-python-sdk - v0.2.0
  • goutils - v0.1.4
  • rust-utils - v0.0.5

Camera Reconnection Issue

What is it? When a camera loses connection, it now automatically closes the connection to its video path. Previously, when users supplied a video path in their camera configuration, they encountered issues if the camera tried to reconnect because the supplied video path was already being used for the old connection.
What does it affect? On losing their video path connection, cameras now automatically close the video path connection.

Improvements

Camera Configuration Changes

What is it? We updated the underlying configuration schemas for the following camera models. We are also migrating existing camera configurations to align with the new schemas. To learn more about the changes, please refer to our camera documentation.
  • Webcam
  • FFmpeg
  • Transform
  • Join Pointclouds

Robot Details Page

What is it? Based on user feedback, we changed the name of the CONNECT tab to CODE SAMPLE.

15 November 2022

Release Versions

  • rdk - v0.2.0
  • api - v0.1.7
  • slam - v0.1.7
  • viam-python-sdk - v0.2.0
  • goutils - v0.1.4
  • rust-utils - v0.0.5

New Features

New servo model

What is it? We added a new servo model called GPIO. This represents any servo that is connected directly to any board using GPIO pins. We created this component in response to the common practice of connecting servos to separate hats, such as the PCA9685, rather than connecting directly to the board. Our previous implementation required a direct connection from the servo to the Raspberry Pi.
What does it affect? While Viam continues to support the pi model of servo, we encourage users to begin using the GPIO model in all of their robots moving forward because it is board-agnostic.
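
The servo API itself is unchanged; the model only affects configuration. A hedged Python SDK sketch, assuming an already-connected robot client named robot and a servo named "my_servo" configured with either model:

from viam.components.servo import Servo

my_servo = Servo.from_robot(robot, "my_servo")  # placeholder name

# The same calls work whether the servo uses the pi model or the new GPIO model.
await my_servo.move(90)                   # move to 90 degrees
position = await my_servo.get_position()  # read back the current angle
print(position)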

Added RTT to remote control page

What is it? We added a new badge in the Current Operations card of the remote control page of the Viam app. This badge lists the RTT (round trip time) of a request from your client to the robot (the time to complete one request/response cycle).

Python 3.8 Support

What is it? Our Python SDK now supports Python 3.8, in addition to 3.9 and 3.10. You will need to update the Python SDK to access the new feature.

Improvements

New Parameter: extra

What is it? We added a new API method parameter named extra that gives users the option of extending existing resource functionality by implementing the new field according to whatever logic they choose. extra is available in requests for all methods in the following APIs:

  • Arm
  • Data Manager
  • Gripper
  • Input Controller
  • Motion
  • Movement Sensor
  • Navigation
  • Pose Tracker
  • Sensor
  • SLAM
  • Vision

What does it affect? Users of the Go SDK must update their code to specify extra in the arguments that they pass into each request.
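
A hedged Python SDK sketch of passing extra on one of the listed APIs, assuming an already-connected robot client named robot and a sensor named "my_sensor"; the keys and values are illustrative only:

from viam.components.sensor import Sensor

my_sensor = Sensor.from_robot(robot, "my_sensor")  # placeholder name

# Model-specific options pass through the extra parameter to the driver.
readings = await my_sensor.get_readings(extra={"calibrate": True})
print(readings)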

Add dependencies to services

What is it? Adding dependencies to services allows Viam to initialize and configure resources in the correct order. For example, if the SLAM service depends on a LiDAR, it will always initialize the LiDAR before the service.
What does it affect? Breaking Change: This impacts users of the SLAM service. Users must now specify which sensors they are using in the depends_on field of the SLAM configuration. Other service configurations are not affected.

Removed width & height fields from Camera API

What is it? We removed two fields (width and height) that were previously part of the response from the GetImage method in the camera API.
What does it affect? Breaking Change: This does not impact any existing camera implementations. Users writing custom camera API implementations no longer need to implement the width or height fields.


Have questions, or want to meet other people working on robots? Join our Community Discord.