A Person Detection Security Robot That Sends You Photos
Tip
This tutorial is part of our built-in webcam projects series. For a similar project that integrates a Kasa smart plug, see Use Object Detection to Turn Your Lights On.
In this tutorial, you will create a desk security system with no hardware other than your laptop and the built-in webcam.
Maybe you keep a box of chocolates on your desk to snack on when you are hungry. Maybe someone is eating your chocolates when you are away. You’re not sure who, but you suspect Steve. This robot will help you catch the culprit.
When someone comes to your desk, the robot will use the vision service and the ML model service to detect a person, take their photo, and text you an alert with a photo of the person.
Hardware requirements
You need the following hardware for this tutorial:
- Computer with a webcam
- This tutorial uses a MacBook Pro, but any computer running macOS or 64-bit Linux will work
- Mobile phone (to receive text messages)
Software requirements
You will use the following software in this tutorial:
- Python 3.8 or newer
- viam-server
- Viam Python SDK
- The Viam Python SDK (software development kit) lets you control your Viam-powered robot by writing custom scripts in the Python programming language. Install the Viam Python SDK by following these instructions.
- yagmail
- A Gmail account to send emails. You can use an existing account, or create a new one.
Configure your robot on the Viam app
If you followed the Use Object Detection to Turn Your Lights On tutorial, you already have a robot set up on the Viam app, connected and live, with a webcam configured.
Configure your services
This tutorial uses pre-trained ML packages. If you want to train your own, you can train a model.
To use the provided Machine Learning model, copy the .tflite model file and the labels.txt file to your computer and note the absolute paths where you save them.
Click the Services subtab.
Configure the ML model service
Add an mlmodel service: click Create service in the lower-left corner of the Services subtab. Select type mlmodel, then select model tflite_cpu. Enter people as the name, then click Create. In the new ML Model service panel, configure your service.
Select Path to Existing Model On Robot for the Deployment field. Then specify the absolute Model Path to your tflite file and, under Optional Settings, the absolute Label Path to your labels.txt file and the Number of threads as 1.
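If you prefer to configure the service in raw JSON mode instead of the builder, the equivalent service entry might look like the following sketch. The attribute names (model_path, label_path, num_threads) follow the tflite_cpu model's configuration, and the file paths are placeholders you should replace with the paths on your own machine:

```json
{
  "name": "people",
  "type": "mlmodel",
  "model": "tflite_cpu",
  "attributes": {
    "model_path": "/path/to/model.tflite",
    "label_path": "/path/to/labels.txt",
    "num_threads": 1
  }
}
```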
Configure an mlmodel detector
Add a vision service with the name myPeopleDetector, type vision, and model mlmodel. Click Create service. In the new vision service panel, configure your service by selecting people from the ML Model drop-down.
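In raw JSON mode, the vision service configuration might look like this sketch, where the mlmodel_name attribute points at the people ML model service configured above:

```json
{
  "name": "myPeopleDetector",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "people"
  }
}
```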
Configure the detection camera
To test that the vision service is working, add a transform camera, which will add bounding boxes and labels around the objects the service detects.
Click on the Components subtab and click Create component in the lower-left corner.
Create a transform camera with type camera and model transform. Name it detectionCam and click Create.
In the new transform camera panel, replace the attributes JSON object with the following object, which specifies the camera source that the transform camera will use and defines a pipeline that adds the myPeopleDetector detections:
{
"source": "my-camera",
"pipeline": [
{
"type": "detections",
"attributes": {
"detector_name": "myPeopleDetector",
"confidence_threshold": 0.5
}
}
]
}
Click Save config in the lower-left corner of the screen.
How to use yagmail
Install yagmail (Yet Another Gmail/SMTP client) by running the following command in a terminal on your computer:
pip3 install yagmail
Tip
As you are programming the yagmail section of this project, you will be prompted to use your Gmail username and password within the code. If you use 2-step verification for your email, some apps or devices may be blocked from accessing your Google account. You can get an “Application-Specific Password” following this guide.
App Passwords are 16-digit passcodes that allow the app or device access to your Google account. This step is optional.
Next, indicate whom to send the message to, the subject, and the contents of the text message (which can be a string, image, or audio). Here is example code; you don't have to use it yet, as it gets used in the next section:
yag.send('phone_number@gatewayaddress.com', 'subject', contents)
You will need access to your phone number through your carrier.
For this tutorial, you are going to send the text to yourself: replace phone_number@gatewayaddress.com with your phone number and your carrier's SMS gateway address.
You can find yours here: Gateway Addresses for Mobile Phone Carrier Text Message.
Some common ones:
- AT&T:
txt.att.net
- T-Mobile:
tmomail.net
- Verizon Wireless:
vtext.com
As an example, if you have T-Mobile your code will look like this:
yag.send('xxxxxxxxxx@tmomail.net', 'subject', contents)
This allows you to route the email to your phone as a text message.
Use the Viam Python SDK to control your security robot
If you followed the Use Object Detection to Turn Your Lights On tutorial, you already set up a folder with some Python code that connects to your robot and gets detections from your camera.
If you are starting with this tutorial, follow these steps to create the main script file and connect the code to the robot. Ignore the step about the Kasa smart plug host address.
Instead of using this person detector to activate a smart plug, you will send yourself a text message.
Make a copy of the main script from that tutorial and save it as chocolate_security.py.
Delete the from kasa import Discover, SmartPlug line and replace it with the following to import the yagmail Python library:
import yagmail
Now you need to rewrite the if/else function.
If a person is detected, your robot will print sending a message, take a photo, and save it to your computer.
Then you will create a yagmail.SMTP instance to initialize the server connection.
Refer to the code below and the yagmail instructions to edit your chocolate_security.py file.
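The exact script depends on your robot's configuration, so the following is only a sketch of the detection-handling logic. The detections are shown as plain data so the control flow is easy to follow; in the real script they come from the vision service, and the helper names (person_detected, send_text_alert, handle_detections), the credentials, and the recipient address are all placeholders of our own:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """Stand-in for a detection returned by the vision service."""
    class_name: str
    confidence: float


def person_detected(detections, threshold=0.5):
    # True if any detection is labeled "Person" above the confidence threshold
    return any(
        d.class_name.lower() == "person" and d.confidence >= threshold
        for d in detections
    )


def send_text_alert(photo_path):
    # yagmail is imported lazily so the rest of the file runs without it installed
    import yagmail
    # Placeholder credentials: use your Gmail address and app password
    yag = yagmail.SMTP("mygmailusername", "mygmailpassword")
    yag.send(
        "xxxxxxxxxx@tmomail.net",  # your phone number + SMS gateway address
        "subject",
        ["There is someone at your desk!", photo_path],  # text plus the saved photo
    )


def handle_detections(detections, alert=lambda: None):
    if person_detected(detections):
        print("sending a message")
        alert()  # in the real script: save a photo, then call send_text_alert(...)
    else:
        print("There's nobody here, don't send a message")
```

In your own script, the alert step would save the webcam frame to disk and pass that path to send_text_alert; yagmail accepts a list of contents and attaches any item that is a file path.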
Location secret
By default, the sample code does not include your robot location secret. We strongly recommend that you add your location secret as an environment variable and import this variable into your development environment as needed.
To show your robot’s location secret in the sample code, toggle Include secret on the Code sample tab. You can also see your location secret on the locations page.
Caution
Do not share your location secret, part secret, or robot address publicly. Sharing this information could compromise your system security by allowing unauthorized access to your robot, or to the computer running your robot.
Save your code file.
Run the code
You are ready to test your robot!
From a command line on your computer, navigate to the project directory and run the code with this command:
python3 chocolate_security.py
If you are in front of your computer’s webcam, you should get a text!
Your terminal output should look like this as your project runs, if you are in front of the camera for a bit and then move away from the screen:
python3 chocolate_security.py
This is a person!
sending a message
x_min: 7
y_min: 0
x_max: 543
y_max: 480
confidence: 0.94140625
This is a person!
sending a message
x_min: 51
y_min: 0
x_max: 588
y_max: 480
confidence: 0.9375
This is a person!
sending a message
There's nobody here, don't send a message
There's nobody here, don't send a message
Summary and next steps
In this tutorial, you learned how to build a security robot using the vision service, the ML model service, your computer, and your mobile phone, and we all learned not to trust Steve.
Have you heard about the chocolate box thief? He’s always got a few Twix up his Steve.
For more robotics projects, check out our other tutorials.
You can also ask questions in the Community Discord and we will be happy to help.