Tipsy: Create an Autonomous Drink Carrying Robot

Tipsy makes it easy to replenish everyone's drinks at a party. Equipped with ultrasonic sensors and a camera, Tipsy detects obstacles and people in its surroundings. It avoids obstacles using ultrasonic distance measurements, identifies people using an ML object detection model, and drives toward them. Tipsy lets people grab a drink without ever leaving their spot by bringing a bucket of ice-cold drinks within arm's reach.
This tutorial will teach you how to build your own drink-carrying robot.
Requirements
Hardware
To build your own drink-carrying robot, you need the following hardware:
- Raspberry Pi, with microSD card, set up following the Raspberry Pi Setup Guide.
- Assembled SCUTTLE rover with the motors and motor driver that comes with it.
- T-slotted framing: 4 single 4 slot rails, 30 mm square, hollow, 3’ long. These are for the height of the robot.
- T-slotted framing: 2 single 4 slot rails, 30 mm square, hollow, 12 inches long. These are to create a base inside the robot to securely hold the drink box.
- T-slotted framing structural brackets: 30mm rail height.
- Five ultrasonic sensors
- A 12V battery with charger
- DC-DC converter, 12V in, 5V out
- USB camera
- A box to hold drinks
- Optional: Hook-and-loop tape
- Optional: Acrylic panels to cover the sides
- Optional: 3D printer
Software
To build your own drink-carrying robot, you need the following software:
- `viam-server`, installed following the Setup tab instructions below
- The Viam Python SDK
- The project code, available from the project repository
Wire your robot
Follow the wiring diagram below to wire together your Raspberry Pi, buck converter, USB camera, motors, motor driver, ultrasonic sensors, and battery:
The Tipsy robot uses an assembled SCUTTLE rover base with some modifications: Tipsy does not use the camera that comes with the SCUTTLE rover because the cable is not long enough to reach the top of the robot, and it does not use the encoders or the batteries that come with the kit. These changes are reflected in the wiring diagram.
Configure your components
In the Viam app, create a new robot and give it a name like `tipsy`.
Follow the instructions on the Setup tab to install `viam-server` on your Raspberry Pi and connect to your robot.
Navigate to the Config tab of your robot’s page in the Viam app. Click on the Components subtab.
Configure the board
Add a board component to represent the Raspberry Pi:
Click the Create component button in the lower-left corner of the page. Select type `board` and model `pi`. Enter `local` as the name, then click Create.
You can name your board whatever you want as long as you refer to it by the same name in your code.
Configure the motors
Add your right motor:
Click Create component in the lower-left corner of the page. Select type `motor`, then select model `gpio`. Enter `rightMotor` as the name, then click Create.
After clicking Create, a panel will pop up with empty sections for Attributes, Component Pin Assignment, and other information.
In the Board drop-down within Attributes, choose the name of the board `local` to which the motor is wired. This will ensure that the board initializes before the motor when the robot boots up. Then set Max RPM to `100` and enable direction flip.
In the Component Pin Assignment section, toggle the type to In1/In2. In the drop-downs for A/In1 and B/In2, choose `15 GPIO 22` and `16 GPIO 23` corresponding to the right motor wiring. Leave the PWM (pulse-width modulation) pin blank, because this specific motor driver's configuration does not require a separate PWM pin.
Now add the left motor, which is similar to the right motor. Add your left motor with the name `leftMotor`, type `motor`, and model `gpio`. Select `local` from the Board drop-down, set Max RPM to `100`, configure the motor's pins as A/In1 and B/In2 corresponding to `12 GPIO 18` and `11 GPIO 17` respectively (according to the wiring diagram), and leave PWM blank.
Configure the base
Next, add a base component, which describes the geometry of your chassis and wheels so the software can calculate how to steer the rover in a coordinated way:
Click Create component. Select `base` for type and `wheeled` for model. Name your base `tipsy-base`, then click Create.
In the Right Motors drop-down, select `rightMotor`, and in the Left Motors drop-down select `leftMotor`. Enter `250` for Wheel Circumference (mm) and `400` for Width (mm). The width describes the distance between the midpoints of the wheels. Add `local`, `rightMotor`, and `leftMotor` to the Depends on field.
Configure the camera
Add the camera component:
Click Create component. Select type `camera` and model `webcam`. Name it `cam` and click Create.
In the configuration panel, click the video path field. If your robot is connected to the Viam app, you will see a drop-down populated with available camera names.
Select the camera you want to use. If you are unsure which camera to select, select one, save the configuration, and go to the Control tab to confirm you can see the expected video stream.
Then make the camera depend on `local` so it initializes after the board component.
Configure the ultrasonic sensors
Add a sensor component:
Click Create component. Select type `sensor` and model `ultrasonic`. Name your sensor `ultrasonic`, then click Create.
Then fill in the attributes: enter `38` for `echo_interrupt_pin` and `40` for `trigger_pin`, according to the wiring diagram. Enter `local` for `board`.
Next, configure the other ultrasonic sensors. Tipsy uses five in total: two up top underneath the beer box, two on the sides of the robot, and one at the bottom. You can change the number based on your preference.
For each of the additional ultrasonic sensors, create a new component with a unique name like `ultrasonic2`, type `sensor`, and model `ultrasonic`. In the attributes textbox, fill in the `trigger_pin` and `echo_interrupt_pin` corresponding to the pins each ultrasonic sensor is connected to.
On the Raw JSON tab, replace the configuration with the following JSON configuration for your board, your motors, your base, your camera, and your ultrasonic sensors:
{
  "components": [
    {
      "model": "pi",
      "name": "local",
      "type": "board",
      "attributes": {
        "i2cs": [
          {
            "bus": "1",
            "name": "default_i2c"
          }
        ]
      },
      "depends_on": []
    },
    {
      "model": "gpio",
      "name": "rightMotor",
      "type": "motor",
      "attributes": {
        "max_rpm": 100,
        "pins": {
          "a": "15",
          "b": "16",
          "pwm": ""
        },
        "board": "local",
        "dir_flip": true
      },
      "depends_on": []
    },
    {
      "attributes": {
        "max_rpm": 100,
        "pins": {
          "pwm": "",
          "a": "12",
          "b": "11"
        },
        "board": "local"
      },
      "depends_on": [],
      "model": "gpio",
      "name": "leftMotor",
      "type": "motor"
    },
    {
      "depends_on": ["local", "rightMotor", "leftMotor"],
      "model": "wheeled",
      "name": "tipsy-base",
      "type": "base",
      "attributes": {
        "wheel_circumference_mm": 250,
        "width_mm": 400,
        "left": ["leftMotor"],
        "right": ["rightMotor"]
      }
    },
    {
      "depends_on": ["local"],
      "name": "cam",
      "type": "camera",
      "model": "webcam",
      "attributes": {
        "video_path": "video4"
      }
    },
    {
      "name": "ultrasonic",
      "type": "sensor",
      "model": "ultrasonic",
      "attributes": {
        "echo_interrupt_pin": "38",
        "board": "local",
        "trigger_pin": "40"
      },
      "depends_on": ["local"]
    },
    {
      "attributes": {
        "board": "local",
        "trigger_pin": "13",
        "echo_interrupt_pin": "7"
      },
      "depends_on": ["local"],
      "name": "ultrasonic2",
      "type": "sensor",
      "model": "ultrasonic"
    },
    {
      "model": "ultrasonic",
      "attributes": {
        "board": "local",
        "trigger_pin": "35",
        "echo_interrupt_pin": "37"
      },
      "depends_on": ["local"],
      "name": "ultrasonic3",
      "type": "sensor"
    },
    {
      "attributes": {
        "board": "local",
        "trigger_pin": "28",
        "echo_interrupt_pin": "32"
      },
      "depends_on": ["local"],
      "name": "ultrasonic4",
      "type": "sensor",
      "model": "ultrasonic"
    },
    {
      "model": "ultrasonic",
      "attributes": {
        "trigger_pin": "24",
        "echo_interrupt_pin": "26",
        "board": "local"
      },
      "depends_on": ["local"],
      "name": "ultrasonic5",
      "type": "sensor"
    },
    {
      "name": "imu",
      "type": "movement_sensor",
      "model": "gyro-mpu6050",
      "attributes": {
        "i2c_bus": "default_i2c",
        "board": "local"
      },
      "depends_on": ["local"]
    }
  ]
}
Click Save config in the bottom left corner of the screen.
Test your components
With the components configured, navigate to the Control tab. There you will see panels for each of your configured components.
Motors
Click on both motor panels and check that they run as expected by clicking RUN.
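If you prefer to verify components from code rather than the Control tab, you can use the Viam Python SDK. The following is a minimal sketch for the motors, assuming `robot` is a connected robot client (connecting is covered in the robot logic section below); the helper name `test_motors` is ours, not part of the tutorial code:
from viam.components.motor import Motor

async def test_motors(robot):
    # `robot` is a connected RobotClient (see "Add the robot logic" below)
    rightMotor = Motor.from_robot(robot, "rightMotor")
    await rightMotor.set_power(0.5)  # run the right motor at half power
    await rightMotor.stop()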
Base
Click on the base panel and enable the keyboard. Then move your rover base around by pressing A, S, W, and D on your keyboard.
You can also adjust the power level to your preference.
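The same base movements can be scripted; a minimal sketch, under the same assumption of a connected `robot` client:
from viam.components.base import Base

async def test_base(robot):
    base = Base.from_robot(robot, "tipsy-base")
    await base.move_straight(distance=200, velocity=100)  # forward 200 mm at 100 mm/s
    await base.spin(45, 45)  # spin 45 degrees at 45 degrees per second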
Camera
To see your camera working, click on the camera panel and toggle the camera on.
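To grab a frame from code instead, a minimal sketch, again assuming a connected `robot` client:
from viam.components.camera import Camera

async def test_camera(robot):
    cam = Camera.from_robot(robot, "cam")
    frame = await cam.get_image()
    print(type(frame))  # confirms a frame was returned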
Ultrasonic Sensors
Click on your ultrasonic sensors panels and test that you can get readings from all of them.
Click Get Readings to get the distance reading of the sensor.
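The same readings are available from code; a minimal sketch, assuming a connected `robot` client (the `name` parameter lets you loop over all five sensors):
from viam.components.sensor import Sensor

async def test_ultrasonic(robot, name="ultrasonic"):
    sensor = Sensor.from_robot(robot, name)
    readings = await sensor.get_readings()
    print(name, readings["distance"])  # distance to the nearest object, in meters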
Configure your services
This tutorial uses a pre-trained ML model package. If you want to use your own instead, you can train a model.
To use the provided machine learning model, copy the model file and the label file to your Raspberry Pi:
scp effdet0.tflite tipsy@tipsy.local:/home/tipsy/effdet0.tflite
scp labels.txt tipsy@tipsy.local:/home/tipsy/labels.txt
Click on the Services subtab.
Configure the ML model service
Add an mlmodel service:
Click Create service in the lower-left corner of the page. Select type `ML Model` and model `TFLite CPU`. Enter `people` for the name of your service, then click Create.
In the new ML Model service panel, configure your service.
Select Path to existing model on robot for the Deployment field. Then specify the absolute Model path as `/home/tipsy/effdet0.tflite` and any Optional settings, such as the absolute Label path as `/home/tipsy/labels.txt` and the Number of threads as `1`.
Configure an ML model detector
Add a vision service detector:
Click Create service in the lower-left corner of the page. Select type `Vision`, then select model `mlmodel`. Enter `myPeopleDetector` as the name, then click Create.
In the new vision service panel, configure your service.
Select `people` from the ML Model drop-down.
Configure the detection camera
To test that the vision service is working, add a transform camera, which adds bounding boxes and labels around the objects the service detects.
Click on the Components subtab, then click Create component in the lower-left corner of the page. Select type `camera`, then select model `transform`. Enter `detectionCam` as the name, then click Create.
In the new transform camera panel, replace the attributes JSON object with the following object, which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the `myPeopleDetector` detections:
{
  "source": "cam",
  "pipeline": [
    {
      "type": "detections",
      "attributes": {
        "detector_name": "myPeopleDetector",
        "confidence_threshold": 0.5
      }
    }
  ]
}
Click Save config in the bottom left corner of the screen.
Your configuration should now resemble the following:
On the Raw JSON tab, replace the configuration with the following complete JSON configuration, which adds the ML model service, the vision service, and the transform camera:
{
  "services": [
    {
      "attributes": {
        "mlmodel_name": "people"
      },
      "model": "mlmodel",
      "name": "myPeopleDetector",
      "type": "vision"
    },
    {
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "label_path": "/home/tipsy/labels.txt",
        "num_threads": 1,
        "model_path": "/home/tipsy/effdet0.tflite"
      },
      "name": "people"
    }
  ],
  "components": [
    {
      "model": "pi",
      "name": "local",
      "type": "board",
      "attributes": {
        "i2cs": [
          {
            "bus": "1",
            "name": "default_i2c"
          }
        ]
      },
      "depends_on": []
    },
    {
      "model": "gpio",
      "name": "rightMotor",
      "type": "motor",
      "attributes": {
        "max_rpm": 100,
        "pins": {
          "a": "15",
          "b": "16",
          "pwm": ""
        },
        "board": "local",
        "dir_flip": true
      },
      "depends_on": []
    },
    {
      "attributes": {
        "max_rpm": 100,
        "pins": {
          "pwm": "",
          "a": "12",
          "b": "11"
        },
        "board": "local"
      },
      "depends_on": [],
      "model": "gpio",
      "name": "leftMotor",
      "type": "motor"
    },
    {
      "depends_on": ["local", "rightMotor", "leftMotor"],
      "model": "wheeled",
      "name": "tipsy-base",
      "type": "base",
      "attributes": {
        "wheel_circumference_mm": 250,
        "width_mm": 400,
        "left": ["leftMotor"],
        "right": ["rightMotor"]
      }
    },
    {
      "depends_on": ["local"],
      "name": "cam",
      "type": "camera",
      "model": "webcam",
      "attributes": {
        "video_path": "video4"
      }
    },
    {
      "model": "transform",
      "attributes": {
        "pipeline": [
          {
            "type": "detections",
            "attributes": {
              "confidence_threshold": 0.5,
              "detector_name": "myPeopleDetector"
            }
          }
        ],
        "source": "cam"
      },
      "depends_on": [],
      "name": "detectionCam",
      "type": "camera"
    },
    {
      "name": "ultrasonic",
      "type": "sensor",
      "model": "ultrasonic",
      "attributes": {
        "echo_interrupt_pin": "38",
        "board": "local",
        "trigger_pin": "40"
      },
      "depends_on": ["local"]
    },
    {
      "attributes": {
        "board": "local",
        "trigger_pin": "13",
        "echo_interrupt_pin": "7"
      },
      "depends_on": ["local"],
      "name": "ultrasonic2",
      "type": "sensor",
      "model": "ultrasonic"
    },
    {
      "model": "ultrasonic",
      "attributes": {
        "board": "local",
        "trigger_pin": "35",
        "echo_interrupt_pin": "37"
      },
      "depends_on": ["local"],
      "name": "ultrasonic3",
      "type": "sensor"
    },
    {
      "attributes": {
        "board": "local",
        "trigger_pin": "28",
        "echo_interrupt_pin": "32"
      },
      "depends_on": ["local"],
      "name": "ultrasonic4",
      "type": "sensor",
      "model": "ultrasonic"
    },
    {
      "model": "ultrasonic",
      "attributes": {
        "trigger_pin": "24",
        "echo_interrupt_pin": "26",
        "board": "local"
      },
      "depends_on": ["local"],
      "name": "ultrasonic5",
      "type": "sensor"
    },
    {
      "name": "imu",
      "type": "movement_sensor",
      "model": "gyro-mpu6050",
      "attributes": {
        "i2c_bus": "default_i2c",
        "board": "local"
      },
      "depends_on": ["local"]
    }
  ]
}
Click Save config in the bottom left corner of the screen.
Test your detection camera
Now you can test if the detections work.
Navigate to the Control tab and click on the `detectionCam` panel.
Toggle the camera on to start the video stream.

You can also see your physical camera stream and detection camera stream together on the base panel.

At this point, it is a simple detection camera: it will detect any object in the `labels.txt` file.
When we write the code for the robot, we can differentiate between, say, a person or a chair.

Design your robot

Now that you have all your components wired, configured, and tested, you can assemble your robot.
Add the four 3'-long T-slotted framing rails to the corners of the SCUTTLE rover base to make it a tall structure. Then add the two 12-inch T-slotted framing rails in the middle of the structure at the height where you want to hold the drink box. Secure them using the T-slotted framing structural brackets.
Next, add the wired Raspberry Pi, motor driver, and 12V battery to the base.
You can use the 3D-printed holders that come with the assembled SCUTTLE base for the Raspberry Pi and the motor driver. You can also print holders based on SCUTTLE designs from GrabCAD.
Secure the 12V battery to the bottom using hook-and-loop tape or other tape, and secure the sides using T-slotted brackets.

Secure the buck converter with hook-and-loop tape, double-sided tape, or a 3D printed holder.

Use hook-and-loop fasteners or something else to secure the USB camera to the box that holds the drinks so it faces the front, towards any people who may interact with the robot.
For ultrasonic sensors to fit the framing, we recommend 3D printing enclosures. This step is optional but makes the project look more aesthetically pleasing and ensures that the sensors don’t fall out as your robot moves around.
You can design your own enclosure, or you can use our design:
The STL files we used can be found in our project repository.
SCUTTLE also has a design for a 3D-printed enclosure with a twist bracket that fits the rails.
If you decide not to use a 3D printer, you can tape the ultrasonic sensors to the rails. We recommend placing them within the frame, perhaps under the drink box and above the rover base, so they don't touch people or obstacles as the robot moves around, which could cause them to fall off or get damaged.
Info
The photo of the sensor being installed shows two batteries, but you will only use one for this tutorial.
Now we are ready to make Tipsy look pretty! Optionally, add acrylic side panels and cover the electronics.

We drilled and screwed the panels onto the railing. You can also use a laser cutter to cut them into the sizes you prefer if you want different side panels.
Add the robot logic
Download the full code onto your computer.
Let’s take a look at what it does. First, the code imports the required libraries:
import asyncio
import os

from viam.robot.client import RobotClient
from viam.rpc.dial import Credentials, DialOptions
from viam.components.sensor import Sensor
from viam.components.base import Base
from viam.services.vision import VisionClient
Then it connects to our robot using a robot location secret and address. Replace these values with your robot’s own location secret and address, which you can obtain from the Code sample tab:
robot_secret = os.getenv('ROBOT_SECRET') or ''
robot_address = os.getenv('ROBOT_ADDRESS') or ''
# change this if you named your base differently in your robot configuration
base_name = os.getenv('ROBOT_BASE') or 'tipsy-base'
# change this if you named your camera differently in your robot configuration
camera_name = os.getenv('ROBOT_CAMERA') or 'cam'
pause_interval = int(os.getenv('PAUSE_INTERVAL') or 3)
Location secret
By default, the sample code does not include your robot location secret. We strongly recommend that you add your location secret as an environment variable and import this variable into your development environment as needed.
To show your robot’s location secret in the sample code, toggle Include secret on the Code sample tab. You can also see your location secret on the locations page.
Caution
Do not share your location secret, part secret, or robot address publicly. Sharing this information could compromise your system security by allowing unauthorized access to your robot, or to the computer running your robot.
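The code then calls a `connect()` helper, which is defined in the full downloadable code. Here is a minimal sketch of what it looks like, assuming the standard credentials flow from the Code sample tab; copy your own tab's version rather than this one:
async def connect():
    # sketch of the Code sample tab boilerplate; your generated version is authoritative
    creds = Credentials(type='robot-location-secret', payload=robot_secret)
    opts = RobotClient.Options(
        refresh_interval=0,
        dial_options=DialOptions(credentials=creds))
    return await RobotClient.at_address(robot_address, opts)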
Next, the code defines functions for obstacle detection.
The first function, `obstacle_detect()`, gets a distance reading from a sensor. The second function, `obstacle_detect_loop()`, polls both sensors asynchronously and stops the base if it's closer than a certain distance (here, 0.4 meters) to an obstacle:
async def obstacle_detect(sensor):
    reading = (await sensor.get_readings())["distance"]
    return reading

async def obstacle_detect_loop(sensor, sensor2, base):
    while True:
        reading = await obstacle_detect(sensor)
        reading2 = await obstacle_detect(sensor2)
        if reading < 0.4 or reading2 < 0.4:
            # stop the base if moving straight
            if base_state == "straight":
                await base.stop()
                print("obstacle in front")
        await asyncio.sleep(.01)
Then we define a person detection loop in which the robot constantly looks for a person. If it finds one, it moves toward them as long as there are no obstacles in front of it. If it doesn't find a person, it continues looking by rotating the robot base 45 degrees at a time.
Lines 12 and 13 are where it checks specifically for detections with the label `Person`, rather than every object in the `labels.txt` file:
async def person_detect(detector, sensor, sensor2, base):
    while True:
        # look for a person
        found = False
        global base_state
        print("will detect")
        detections = await detector.get_detections_from_camera(camera_name)
        for d in detections:
            if d.confidence > .7:
                print(d.class_name)
                # specify it is just the person we want to detect
                if d.class_name == "Person":
                    found = True
        if found:
            print("I see a person")
            # first manually call obstacle_detect - don't even start moving if
            # someone is in the way
            distance = await obstacle_detect(sensor)
            distance2 = await obstacle_detect(sensor2)
            if distance > .4 or distance2 > .4:
                print("will move straight")
                base_state = "straight"
                await base.move_straight(distance=800, velocity=250)
                base_state = "stopped"
        else:
            print("I will turn and look for a person")
            base_state = "spinning"
            await base.spin(45, 45)
            base_state = "stopped"
        await asyncio.sleep(pause_interval)
Finally, the `main()` function initializes the base, the sensors, and the detector. It also creates two background tasks that run asynchronously: one looking for obstacles and avoiding them (`obstacle_task`), and one looking for people and moving toward them (`person_task`):
async def main():
    robot = await connect()
    base = Base.from_robot(robot, base_name)
    sensor = Sensor.from_robot(robot, "ultrasonic")
    sensor2 = Sensor.from_robot(robot, "ultrasonic2")
    detector = VisionClient.from_robot(robot, "myPeopleDetector")

    # create a background task that looks for obstacles and stops the base if
    # it's moving
    obstacle_task = asyncio.create_task(
        obstacle_detect_loop(sensor, sensor2, base))
    # create a background task that looks for a person and moves towards them,
    # or turns and keeps looking
    person_task = asyncio.create_task(
        person_detect(detector, sensor, sensor2, base))
    results = await asyncio.gather(obstacle_task,
                                   person_task,
                                   return_exceptions=True)
    print(results)
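The full script also needs the module-level `base_state` variable that both loops share, plus the usual entry point. Both are in the downloadable code; sketched here for completeness in case you assemble the script yourself:
base_state = "stopped"  # module-level state shared by the two loops

if __name__ == '__main__':
    asyncio.run(main())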
When you run the code, you should see results like this:
will detect
Person
I see a person
will move straight
obstacle in front
will detect
Person
I see a person
will move straight
obstacle in front
will detect
I will turn and look for a person
will detect
I will turn and look for a person
will detect
Person
I see a person
Summary

In this tutorial, you learned how to make your own drink-carrying robot. You no longer have to interrupt your conversations or activities just to grab another drink. Overall, Tipsy is the ultimate drink buddy for any social event. With its people detection and obstacle avoidance technology, convenient autonomous operation, and modern design, it’s sure to impress all your guests.
To make Tipsy even more advanced, you can try to:
- Add more ultrasonic sensors so that it doesn't hit objects at different heights; you could also attach them to a moving gantry along the side rails
- Add a depth camera to detect obstacles and determine how close they are to Tipsy
- Add an IMU to see if Tipsy is tipping backward (see the sketch after this list)
- Add a lidar
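The raw JSON configuration earlier already includes an mpu6050 movement sensor named `imu`, so a tipping check is one place to start. A minimal sketch of reading it with the Viam `MovementSensor` API, assuming a connected `robot` client; the helper name and threshold are illustrative, not tested values:
from viam.components.movement_sensor import MovementSensor

async def check_tipping(robot):
    imu = MovementSensor.from_robot(robot, "imu")
    accel = await imu.get_linear_acceleration()
    # an unusual acceleration along the axis of travel can indicate tipping;
    # 3 m/s^2 is a hypothetical threshold to tune on the real robot
    if abs(accel.y) > 3:
        print("Tipsy may be tipping!")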
You can also design another robot for collecting the empty beer cans, or a bartender robot with pumps that can mix drinks. Until then, sit back, relax, and let Tipsy handle the beer-carrying duties for you!
For more robotics projects, check out our other tutorials.
Have questions, or want to meet other people working on robots? Join our Community Discord.