The OCTv2 system supports two methods for open mouth detection: the dlib 68-point facial landmark method and a basic variance-based method. To use the dlib method, first install dlib and its build dependencies:
# On Raspberry Pi
sudo apt update
sudo apt install cmake libopenblas-dev liblapack-dev
# Install dlib (this takes 10-20 minutes on Pi)
pip3 install dlib
# Alternative: Use pre-compiled wheel if available
pip3 install dlib --find-links https://github.com/ageitgey/dlib-wheels/releases
# Create models directory
mkdir -p ~/octv2_v2/models
cd ~/octv2_v2/models
# Download the 68-point facial landmark predictor
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
# Extract the model
bunzip2 shape_predictor_68_face_landmarks.dat.bz2
# Move to project directory
mv shape_predictor_68_face_landmarks.dat ../
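As a quick sanity check (a minimal sketch, assuming the ~/octv2_v2 layout created by the commands above), you can confirm that dlib loads the extracted model:
import os
import dlib

# Path assumes the model was moved into ~/octv2_v2 as shown above
model_path = os.path.expanduser("~/octv2_v2/shape_predictor_68_face_landmarks.dat")
predictor = dlib.shape_predictor(model_path)
print("Loaded landmark model from", model_path)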
Edit octv2_server_v2.py if the model file is in a different location:
# Line 117: Update path to your model file
self.predictor = dlib.shape_predictor("/path/to/shape_predictor_68_face_landmarks.dat")
The dlib method measures how open the mouth is with a simple ratio: ratio = mouth_height / mouth_width, where mouth_height is the vertical gap between the inner lips and mouth_width is the distance between the mouth corners. If the ratio is greater than 0.5, the mouth is considered "open". For example, a mouth 60 px wide with a 35 px inner-lip gap gives a ratio of about 0.58 (open), while a 10 px gap gives about 0.17 (closed).
# Mouth landmarks in the 68-point model:
# 48-54: Outer lip, upper contour (left corner to right corner)
# 55-59: Outer lip, lower contour (right to left)
# 60-64: Inner lip, upper contour (left to right)
# 65-67: Inner lip, lower contour (right to left)
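As a minimal sketch of how those indices are used (point 62 is the middle of the upper inner lip, 66 the middle of the lower inner lip, and 48/54 are the mouth corners), the measurement reduces to a small helper; the name mouth_open_ratio is illustrative, not the exact function in octv2_server_v2.py:
def mouth_open_ratio(landmarks):
    # landmarks is the dlib full_object_detection returned by the predictor
    inner_top = landmarks.part(62)      # middle of upper inner lip
    inner_bottom = landmarks.part(66)   # middle of lower inner lip
    left_corner = landmarks.part(48)    # left mouth corner
    right_corner = landmarks.part(54)   # right mouth corner
    mouth_height = abs(inner_top.y - inner_bottom.y)
    mouth_width = abs(right_corner.x - left_corner.x)
    return mouth_height / mouth_width if mouth_width > 0 else 0.0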
Edit these values in octv2_server_v2.py:
# For dlib method
open_threshold = 0.5 # Lower = more sensitive (0.3-0.7)
# For basic method
confidence = min(1.0, variance / 1000.0) # Adjust divisor (500-2000)
if confidence > 0.3: # Minimum confidence (0.2-0.5)
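How these values gate a detection can be sketched as follows; the function names and the variance input are illustrative assumptions, not the exact code in octv2_server_v2.py:
def dlib_mouth_open(ratio, open_threshold=0.5):
    # dlib method: ratio = mouth_height / mouth_width from the landmarks
    return ratio > open_threshold

def basic_mouth_open(variance, divisor=1000.0, min_confidence=0.3):
    # basic method: scale the measured variance into a 0-1 confidence
    confidence = min(1.0, variance / divisor)
    return confidence > min_confidence, confidence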
Run this test script to tune parameters; watch the displayed ratio while opening and closing your mouth, then set open_threshold between the closed and open readings:
import cv2
import dlib

# Load your camera
cap = cv2.VideoCapture(0)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    for face in faces:
        landmarks = predictor(gray, face)
        # Get mouth measurements
        inner_top = landmarks.part(62)
        inner_bottom = landmarks.part(66)
        left_corner = landmarks.part(48)
        right_corner = landmarks.part(54)
        mouth_height = abs(inner_top.y - inner_bottom.y)
        mouth_width = abs(right_corner.x - left_corner.x)
        ratio = mouth_height / mouth_width if mouth_width > 0 else 0
        # Display measurements
        cv2.putText(frame, f'Ratio: {ratio:.2f}', (50, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        if ratio > 0.5:
            cv2.putText(frame, 'OPEN MOUTH!', (50, 100),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('Mouth Detection Test', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
# Check that dlib installed correctly
python3 -c "import dlib; print('dlib version:', dlib.__version__)"
# If the import fails, install the prerequisites and reinstall dlib
sudo apt install cmake libopenblas-dev liblapack-dev gfortran
pip3 install dlib
# Download the model file
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bunzip2 shape_predictor_68_face_landmarks.dat.bz2
# Place in same directory as octv2_server_v2.py
If the mouth registers as open too often, raise open_threshold (try 0.6-0.7); if open mouths are being missed, lower open_threshold (try 0.3-0.4).
If detection is slow on the Pi:
# Check CPU usage
htop
# Reduce camera resolution if needed
# Edit octv2_server_v2.py camera config:
config = self.camera.create_preview_configuration(
    main={"size": (320, 240)},   # Smaller resolution
    lores={"size": (160, 120)}
)
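Outside the server class, the same lower-resolution setup can be tried on its own with picamera2 (a minimal sketch, assuming the Pi camera is accessed through picamera2 as the snippet above suggests):
from picamera2 import Picamera2

camera = Picamera2()
config = camera.create_preview_configuration(
    main={"size": (320, 240)},   # Smaller resolution to reduce CPU load
    lores={"size": (160, 120)}
)
camera.configure(config)
camera.start()
frame = camera.capture_array()   # numpy array from the "main" stream
print("Captured frame with shape:", frame.shape)
camera.stop()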
With proper mouth detection setup, your OCTv2 will accurately target open mouths for optimal Oreo delivery!