Adding facial recognition to Trilobot

I’ve had my Raspberry Pi Trilobot for a while and have been busy adding features to it. So far, Trilobot is controlled with an 8BitDo Lite gamepad, its camera is activated via a button, and the video is streamed to a web browser. As of today, my bot can also recognise faces.

If you’re just interested in the full code, skip ahead and have a look at my GitHub page. This blog post explains how the individual functions were implemented, in case you’re interested in the specifics.

Running functions in parallel with threading

In my previous article I had already managed to activate the camera using one of the buttons. Another button was assigned to remote control. As it turns out, installing the 64-bit Raspberry Pi OS wasn’t a particularly good idea, as it doesn’t work well with picamera. So I started from scratch with a 32-bit OS and revamped my script around picamera.

As before, start.py runs automatically at startup and takes care of the button assignments. To ensure that all functions run seamlessly in parallel, I’m using threading. The first thread runs activate_button(), and from there each button press launches its function in a separate thread: remote control, the camera video stream, facial recognition, and shutting down Trilobot.
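The dispatch pattern can be sketched with plain stdlib threading. This is a simplified stand-in, not my actual start.py: the queue of presses and the handler functions are placeholders for the real Trilobot button callbacks.

```python
import queue
import threading

def activate_button(pressed, handlers, stop_event):
    """Dispatch loop: each button press starts its handler in a daemon thread.

    ``pressed`` is a queue of button names, ``handlers`` maps names to
    functions. The loop polls until ``stop_event`` is set.
    """
    started = []
    while not stop_event.is_set():
        try:
            name = pressed.get(timeout=0.1)
        except queue.Empty:
            continue
        t = threading.Thread(target=handlers[name], daemon=True)
        t.start()
        started.append(t)
    return started
```

Because the workers are daemon threads, none of them can keep the process alive once the main thread decides to exit.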

The camera video stream and facial recognition both use the camera but serve it via different methods, http.server and Flask. I have therefore included a check for whether one of the two camera functions is already running. This ensures they are never active at the same time, since the single camera cannot be shared between them.
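A minimal sketch of such a check, using a non-blocking lock as a stand-in for my actual running-state flag (the function name is hypothetical):

```python
import threading

# One lock guards the single camera, so the HTTP stream and the Flask
# facial-recognition stream cannot run at the same time.
camera_lock = threading.Lock()

def start_camera_function(run):
    """Run a camera function only if no other camera function is active."""
    if not camera_lock.acquire(blocking=False):
        print("Camera already in use - stop the running stream first.")
        return False
    try:
        run()
        return True
    finally:
        camera_lock.release()
```

The non-blocking `acquire` means a second press simply reports the conflict instead of queueing up behind the running stream.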

Importantly, we need to be able to exit all threads cleanly at the press of a button. I have assigned button X, but any button works. A stop event has been included, which can be set either via SIGINT (hitting CTRL+C in the terminal) or by pressing button X. This ends the initial thread that handled button activation as well as any other threads activated in the meantime, which run as daemons in the background. It also switches off Trilobot’s underlighting and button lights.
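The clean-shutdown pattern might look like this, assuming a shared threading.Event that both the SIGINT handler and the button-X callback set (the worker is a placeholder for the real button threads):

```python
import signal
import threading

stop_event = threading.Event()

def request_shutdown(signum=None, frame=None):
    """Signal handler and button-X callback: set the shared stop event."""
    stop_event.set()

# CTRL+C (SIGINT) and pressing button X both funnel into the same path.
signal.signal(signal.SIGINT, request_shutdown)

def worker():
    # Daemon threads poll the event and exit on their own.
    while not stop_event.is_set():
        stop_event.wait(0.1)  # placeholder for real work
```

Using `stop_event.wait()` instead of `time.sleep()` lets the threads react to the shutdown request immediately rather than after a full sleep interval.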

As before, remote_active() and create_8bitdo_lite_controller() are taken from the Pimoroni Trilobot user guide.

Activating the camera

In this version of the script, the picamera video stream is sent directly to a web browser, so there is no need to mess around with terminal prompts. The code below is based mostly on the picamera documentation, with slight tweaks to the StreamingServer. After running it, you should be able to pick up the stream in your browser at http://<your bot's IP>:8000/index.html (plain HTTP, not HTTPS).

import io
import logging
import socketserver
from picamera import PiCamera
from threading import Condition
from http import server


output = None  # will hold the StreamingOutput instance once the camera starts

PAGE = """\
<html>
<head>
<title>PiCamera MJPEG streaming</title>
</head>
<body>
<h1>PiCamera MJPEG Streaming</h1>
<img src="stream.mjpg" width="640" height="480" />
</body>
</html>
"""


class StreamingOutput(object):
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)


class StreamingHandler(server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.send_response(301)
            self.send_header('Location', '/index.html')
            self.end_headers()
        elif self.path == '/index.html':
            content = PAGE.encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.send_header('Content-Length', len(content))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == '/stream.mjpg':
            self.send_response(200)
            self.send_header('Age', 0)
            self.send_header('Cache-Control', 'no-cache, private')
            self.send_header('Pragma', 'no-cache')
            self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
            self.end_headers()
            try:
                while True:
                    with output.condition:
                        output.condition.wait()
                        frame = output.frame
                    self.wfile.write(b'--FRAME\r\n')
                    self.send_header('Content-Type', 'image/jpeg')
                    self.send_header('Content-Length', len(frame))
                    self.end_headers()
                    self.wfile.write(frame)
                    self.wfile.write(b'\r\n')
            except Exception as e:
                logging.warning(
                    'Removed streaming client %s: %s',
                    self.client_address, str(e))
        else:
            self.send_error(404)
            self.end_headers()


class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
    allow_reuse_address = True
    daemon_threads = True

    def run(self):
        try:
            self.serve_forever()
        except KeyboardInterrupt:
            pass
        finally:
            self.server_close()


def main():
    global output
    address = ('', 8000)
    server_start = StreamingServer(address, StreamingHandler)

    with PiCamera(resolution='640x480', framerate=24) as camera:
        output = StreamingOutput()
        camera.start_recording(output, format='mjpeg')
        print("Starting streaming server...")
        try:
            server_start.run()
        finally:
            camera.stop_recording()


if __name__ == "__main__":
    main()

Facial recognition

Now that we can control Trilobot with a remote and activate the camera, things get more interesting. We can also use picamera to teach Trilobot how to recognise faces.

There are quite a few helpful tutorials out there that explain how to do this. The one I picked is an excellent guide from Tom’s Hardware. It uses the OpenCV, face_recognition, and imutils packages to train the Raspberry Pi to recognise a defined set of faces. It also includes a function to send email notifications when a person is recognised. You can find the details on how to install the dependencies and train the facial recognition model on their website.

During training, the model saves the criteria for identifying the faces you train it on in a file called encodings.pickle. Once training is done, you only need the encodings.pickle and haarcascade_frontalface_default.xml files to enable facial recognition on your Trilobot.
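To illustrate what the matching step does with those encodings, here is a toy sketch. It mimics face_recognition's distance-based comparison using short lists instead of 128-dimensional encodings, and assumes the tutorial's pickle layout of a dict with "encodings" and "names" keys; the function name and the in-memory pickle round trip are my own.

```python
import math
import pickle

def match_face(encoding, data, tolerance=0.6):
    """Return the name whose stored encoding is closest, or 'Unknown'.

    ``data`` has the same shape as encodings.pickle:
    {"encodings": [...], "names": [...]}. The 0.6 tolerance mirrors
    face_recognition's default.
    """
    best_name, best_dist = "Unknown", tolerance
    for known, name in zip(data["encodings"], data["names"]):
        dist = math.dist(encoding, known)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Toy data standing in for the encodings.pickle file on disk:
data = {"encodings": [[0.1, 0.2, 0.3]], "names": ["Ada"]}
blob = pickle.dumps(data)
loaded = pickle.loads(blob)
```

A live frame's encoding that lands within the tolerance of a stored one gets labelled with that person's name; everything else stays "Unknown".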

Save them both in a folder called facial_recognition, and once you press button Y, your Trilobot should be able to run facial-recognition-with-flask.py without errors.

Get email notifications

Before doing so, however, you should set up the email notification if you’d like to use it. I found Mailjet’s Email API quite helpful for this feature. It comes with a free tier for a limited number of emails and does not require a credit card. Get an API key and secret and add them to Trilobot as shown in Mailjet’s user guide. Next, add the email addresses and names for sender and recipients. You’re all set.
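The notification itself might be sketched like this. The payload shape follows Mailjet's Send API v3.1, while the function names, the subject line, and the lazy import are my own choices, not the tutorial's code.

```python
def build_notification(sender, recipient, person):
    """Build a Mailjet Send API v3.1 payload announcing a recognised face.

    ``sender`` and ``recipient`` are (email, name) tuples.
    """
    return {
        "Messages": [{
            "From": {"Email": sender[0], "Name": sender[1]},
            "To": [{"Email": recipient[0], "Name": recipient[1]}],
            "Subject": "Trilobot spotted a face",
            "TextPart": f"Trilobot recognised {person} on camera.",
        }]
    }

def send_notification(api_key, api_secret, payload):
    """Send via the mailjet_rest client (imported lazily so the
    payload-building part of this sketch runs without it installed)."""
    from mailjet_rest import Client
    mailjet = Client(auth=(api_key, api_secret), version="v3.1")
    return mailjet.send.create(data=payload)
```

Keeping payload construction separate from the send call makes it easy to rate-limit notifications so a recognised face doesn't trigger an email on every single frame.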

Once successfully set up, facial-recognition-with-flask.py will stream the video on port 8100 via a Flask application. Identified faces are marked with bounding boxes, just as in the original tutorial, with the name shown on top. The index.html file for the Flask application can be found in the templates folder.
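For illustration, this is the kind of multipart chunk a Flask MJPEG route yields per frame. The helper name and boundary string are assumptions, and the route shown in the comment mirrors the common Flask streaming pattern rather than the exact contents of my script.

```python
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG frame in the multipart chunk an MJPEG route yields."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n")

# In the Flask app, a generator yields these chunks, roughly like:
#   @app.route("/video_feed")
#   def video_feed():
#       return Response(generate_frames(),
#           mimetype="multipart/x-mixed-replace; boundary=frame")
```

The browser replaces the displayed image each time a new chunk with the boundary arrives, which is what turns a sequence of JPEGs into a live stream.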

So far so good. That concludes today’s tutorial on additional Trilobot features.

Stay tuned for more!

10 thoughts on “Adding facial recognition to Trilobot”

  1. I haven’t yet activated my camera or coded anything for it, but I’m curious — if your Trilobot is on the floor, how low does your head have to be for your face to be in the camera’s field of view? And btw, thanks for sharing — 30+ years of programming here, but with my limited experience with Python your code really helps jumpstart my efforts with this guy.

  2. Hi Jim, glad you found the code helpful! It’s indeed a bit of an issue that Trilobot sits so low on the floor and the angle isn’t quite right. I usually have to kneel in front of it or place it on my desk for it to recognise my face properly. It does recognise smaller faces (e.g. at a distance) as well, but with less accuracy. Maybe tilting the camera upwards a bit might help? Hope you’re having fun tinkering with yours.

  3. Thanks. Maybe I’ll incorporate a vertical broomstick on mine and then mount the camera on top! I just happen to have a camera ribbon cable-to-HDMI adapter set lying around from an old project. Beats crawling around on the floor, right? 😉

  4. Sure will. And of course I was kidding about the broomstick — I’ll be looking for a very lightweight telescoping rod and mount, hopefully with pan & scan capabilities.

  5. I was interested in this article and purchased the 8BitDo Lite 2, but your code does not connect to this controller. I have paired it using the RPi desktop Bluetooth pairing utility. Apparently, your (Pimoroni) code supports the 8BitDo Lite, not the Lite 2. What changes do you recommend to get this to work?

  6. @james martel: Yes, this code was only tested with the 8BitDo Lite. Looks like there are some small changes in the key mapping between this and the new Lite 2: L3/R3 used to be a cross and is now a stick. http://download.8bitdo.com/Manual/Controller/Lite2/Lite2_Manual.pdf

    You’d need to change that part in controller_mappings.py in the function create_8bitdo_lite_controller(). Maybe using controller.register_button() instead of controller.register_axis_as_button() works? Let me know how you’re getting on.

