Jetson Nano Setup: Drive Servos by Movement of Facial Landmarks


Hardware:

Jetson Nano

Camera: a cheap USB webcam by Foscam

Servo Driver: PCA9685 with 5-6V Power Supply

Servos: S-7361


Setup:

Flash this image to a microSD card (e.g. with Balena Etcher):

https://developer.nvidia.com/jetson-nano-sd-card-image

Housekeeping:

sudo apt update
sudo apt autoremove -y
sudo apt clean
sudo apt remove thunderbird libreoffice-* -y
sudo apt-get install -y python3-pip
sudo pip3 install -U pip testresources setuptools

Install OpenCV:

sudo apt-get install python3-opencv

(If you ever need to remove the apt build again, e.g. before switching to a pip-installed build, use: sudo apt-get remove python3-opencv)

You can import cv2 and check the version from a terminal with:

python3 -c "import cv2; print(cv2.__version__)"

You should end up with at least version 4.1.1 (depending on updates)


Install Mediapipe

https://github.com/anion0278/mediapipe-jetson#installation-for-clean-jetpack-46--461--python-wheel

Download the prebuilt wheel from there into a folder, e.g. /home/mediapipe.

Open a terminal in that folder and run:

### Preparing pip
sudo apt update
sudo apt install python3-pip
pip3 install --upgrade pip
### Remove previous versions of Mediapipe (if it was installed):
pip3 uninstall mediapipe
### Install from wheel with (run commands from mediapipe dir):
pip3 install protobuf==3.19.4 opencv-python==4.5.3.56 dataclasses mediapipe-0.8.9_cuda102-cp36-cp36m-linux_aarch64.whl
### Note: Building wheel for newer version of opencv-python may take quite some time (up to a few hours)!
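
Once the wheel is installed, a quick sanity check from Python (a minimal sketch; the version string should match the wheel, 0.8.9 in this case):

# sanity check: Mediapipe imports and FaceMesh can be constructed
import mediapipe as mp
print(mp.__version__)                               # should print 0.8.9 for the wheel above
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as fm:
    print("FaceMesh ready:", fm is not None)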

Get CircuitPython ServoKit

Steps adapted from: https://learn.adafruit.com/circuitpython-libraries-on-linux-and-the-nvidia-jetson-nano/initial-setup

Terminal:

sudo pip3 install -U \
adafruit-circuitpython-busdevice==5.1.2 \
adafruit-circuitpython-motor==3.3.5 \
adafruit-circuitpython-pca9685==3.4.1 \
adafruit-circuitpython-register==1.9.8 \
adafruit-circuitpython-servokit==1.3.8 \
Adafruit-Blinka==6.11.1 \
Adafruit-GPIO==1.0.3 \
Adafruit-MotorHAT==1.4.0 \
Adafruit-PlatformDetect==3.19.6 \
Adafruit-PureIO==1.1.9 \
Adafruit-SSD1306==1.6.2

pip3 freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 pip3 install -U
sudo bash
pip3 freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 pip3 install -U

sudo /opt/nvidia/jetson-io/jetson-io.py

Select Configure 40-pin expansion header at the bottom. Then select Configure header pins manually.

Select spi1 (19, 21, 23, 24, 26) and then select Back

Finally select Save pin changes and then Save and reboot to reconfigure pins. This will create a config file and reboot the Jetson Nano.

Adjust permissions:

sudo groupadd -f -r gpio
# use your username instead of "benno"; if you are unsure what it is, run "whoami"
sudo usermod -a -G gpio benno
sudo usermod -a -G i2c benno

cd ~
git clone https://github.com/NVIDIA/jetson-gpio.git
sudo cp ~/jetson-gpio/lib/python/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d
sudo chown root:gpio /dev/gpiochip0
sudo chmod 660 /dev/gpiochip0
sudo udevadm control --reload-rules && sudo udevadm trigger

reboot
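
After the reboot you can verify that the group memberships from the usermod commands above are in place. A minimal sketch (it reads the configured membership from /etc/group):

# check that the current user is listed in the gpio and i2c groups
import grp, os, pwd
user = pwd.getpwuid(os.getuid()).pw_name
member_of = [g.gr_name for g in grp.getgrall() if user in g.gr_mem]
print("gpio:", "gpio" in member_of, "| i2c:", "i2c" in member_of)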

Check that your device is found with:

sudo i2cdetect -r -y 1

The PCA9685 should show up at its default address, 0x40.
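
Before moving on, you can sweep a servo directly from Python to confirm the whole driver stack works. A minimal sketch, assuming a servo is plugged into channel 0 of the PCA9685 and the 5-6V supply is connected:

# quick servo test: sweep channel 0 and park it in the middle
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)    # PCA9685, default address 0x40
kit.servo[0].angle = 0
time.sleep(1)
kit.servo[0].angle = 180
time.sleep(1)
kit.servo[0].angle = 90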

Install Visual Studio Code

Download the Visual Studio Code Arm64 .deb from:

https://code.visualstudio.com/download

Install it by double-clicking the .deb file.

Launch Visual Studio Code

Select Extensions and install the "Python" extension.

Ctrl+Shift+P > Python: Select Interpreter > Python 3.6

Create a new file with a .py extension, e.g. OpenCVServos.py:

import cv2
import sys, time
from adafruit_servokit import ServoKit
import mediapipe as mp
import math
kit = ServoKit(channels=16)

mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
face_mesh = mp_face_mesh.FaceMesh(static_image_mode=False,
                                  max_num_faces=1,
                                  min_detection_confidence=0.8)


def get_face_landmarks(image):
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return []

    return results.multi_face_landmarks[0].landmark  # assuming only one face

# Calculate Euclidean distance between two landmarks
def calculate_distance(landmark1, landmark2):
    x1, y1, z1 = landmark1.x, landmark1.y, landmark1.z
    x2, y2, z2 = landmark2.x, landmark2.y, landmark2.z
    distance = math.sqrt((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)
    return distance

# Map a value from one range to another
def map_value(value, from_min, from_max, to_min, to_max):
    return (value - from_min) / (from_max - from_min) * (to_max - to_min) + to_min

def get_face_mesh(image):
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    # Print and draw face mesh landmarks on the image.
    if not results.multi_face_landmarks:
        return image
    annotated_image = image.copy()
    for face_landmarks in results.multi_face_landmarks:
        # uncomment for live printout of all coordinates
        # print(' face_landmarks:', face_landmarks)
        mp_drawing.draw_landmarks(
            image=annotated_image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_CONTOURS,
            # Change to FACEMESH_TESSELATION for mesh view
            landmark_drawing_spec=drawing_spec,
            connection_drawing_spec=drawing_spec)
        # uncomment for printout of total number of processed landmarks
        # print('%d facemesh_landmarks' % len(face_landmarks.landmark))
        if len(face_landmarks.landmark) > 223:
            # Pick two landmarks whose distance you want to track
            # (here: indices 23 and 223; see the landmark references below)
            landmark1 = face_landmarks.landmark[23]
            landmark2 = face_landmarks.landmark[223]

            # Calculate the distance between the two landmarks
            distance = calculate_distance(landmark1, landmark2)
            print(distance)
            # Map the distance to servo angle (adjust the mapping values as needed)
            if distance < 0.03:
                distance = 0.03
            if distance > 0.08:
                distance = 0.08
            servo_angle = int(map_value(distance, 0.03, 0.08, 0, 180))
            print(servo_angle)
            # Set servo angle
            kit.servo[0].angle = servo_angle

            # Do the same with more landmark pairs and additional servos: print
            # the distance, experiment until you find the landmarks that move the
            # way you want, then map that distance to your desired angle range
            # (see the mouth-opening sketch after this script)
    return annotated_image


font = cv2.FONT_HERSHEY_SIMPLEX
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

if not cap.isOpened():
    print("Unable to read camera feed")
while cap.isOpened():
    s = time.time()
    ret, img = cap.read()
    if not ret:
        print('WebCAM Read Error')
        sys.exit(0)

    annotated = get_face_mesh(img)

    # you can do the same for the other landmarks and servos

    e = time.time()
    fps = 1 / (e - s)
    cv2.putText(annotated, 'FPS:%5.2f' % (fps), (10, 50), font, fontScale=1, color=(0, 255, 0), thickness=1)
    cv2.imshow('webcam', annotated)
    key = cv2.waitKey(1)
    if key == 27:  # ESC
        break
cap.release()
cv2.destroyAllWindows()
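
As the comment in get_face_mesh() suggests, the same pattern extends to more landmark pairs and servos. Below is a minimal sketch for a second servo on channel 1 driven by mouth opening; the landmark indices (13 and 14 are often cited as the inner upper/lower lip) and the clamp range are assumptions you should verify against the annotated landmark image linked below and by printing the distance for your own face.

# sketch: drive a second servo (channel 1) from mouth opening
def drive_mouth_servo(face_landmarks):
    upper_lip = face_landmarks.landmark[13]   # assumed inner upper lip
    lower_lip = face_landmarks.landmark[14]   # assumed inner lower lip
    distance = calculate_distance(upper_lip, lower_lip)
    # placeholder clamp range - print the distance and tune it for your setup
    distance = max(0.0, min(distance, 0.06))
    kit.servo[1].angle = int(map_value(distance, 0.0, 0.06, 0, 180))

Call drive_mouth_servo(face_landmarks) inside the for loop of get_face_mesh(), next to the existing servo code for channel 0.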

Mediapipe References:

Overview:

https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md

Face landmark name list:

https://github.com/tensorflow/tfjs-models/blob/838611c02f51159afdd77469ce67f0e26b7bbb23/face-landmarks-detection/src/mediapipe-facemesh/keypoints.ts

Image with landmark numbers overlaid:

https://github.com/rcsmit/python_scripts_rcsmit/blob/master/extras/Gal_Gadot_by_Gage_Skidmore_4_5000x5921_annotated_black_letters.jpg

Explanation of extraction mode for individual landmark coordinates:

https://stackoverflow.com/questions/67141844/python-how-to-get-face-mesh-landmarks-coordinates-in-mediapipe
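
For reference, this is roughly what that extraction looks like with the get_face_landmarks() helper from the script above (the landmark index here is an arbitrary example; x and y are normalized to the image size, z is relative depth):

landmarks = get_face_landmarks(img)        # helper defined in the script above
if landmarks:
    h, w = img.shape[:2]
    lm = landmarks[1]                      # arbitrary example index
    print("pixel x:", int(lm.x * w), "pixel y:", int(lm.y * h), "z:", lm.z)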

Camera:

This Mediapipe build does not support GStreamer. That means: USB cameras only!

Here is what I used to get mine to work:

https://github.com/jetsonhacks/USB-Camera/blob/main/usb-camera-simple.py

https://stackoverflow.com/questions/64272731/open-cv-shows-green-screen-on-jetson-nano
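
If you hit the green-screen problem from the second link, a commonly suggested workaround is to open the camera through the V4L2 backend and request MJPG frames. A minimal sketch, assuming the webcam is device index 0:

import cv2

# open the USB camera via V4L2 and force MJPG to avoid the green-screen issue
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
ret, frame = cap.read()
print("frame grabbed:", ret)
cap.release()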

Further resources:

https://toptechboy.com/starting-the-raspberry-pi-camera-or-a-web-camera-on-the-jetson-nano/

Great tutorials! Thank you very much. These gave me all the basics I needed.