Drowsiness detection with OpenCV | Face_recognition | Numpy


One of the main motivations for this project was driver drowsiness detection, a car safety technology that helps prevent accidents caused by the driver getting drowsy.
"It's better to stop driving when drowsy than sinking one foot in grave"
Road accidents are very common these days. In India, about 1,214 crashes happen every day on average. Various studies suggest that around 20% of all road accidents are fatigue-related, and on certain roads the figure rises to as much as 50%. One solution to this problem is to identify when the driver is falling asleep and alert the passengers so that appropriate measures can be taken.
"Drowsiness is the biggest reason for road accidents and Data Science is the best remedy for it"
Today, we are going to build a detector that determines how long a given person's eyes have been closed. If their eyes have been closed for a certain amount of time, we'll assume they are starting to doze off and play an alarm to wake them up and grab their attention.
I'll be demonstrating how we can implement our own drowsiness detector using OpenCV, face_recognition, NumPy, and Python.

The drowsiness detector algorithm

The general flow of our drowsiness detection algorithm is fairly straightforward.
  • First, we'll set up a camera/webcam that monitors a stream for faces
  • If a face is found, we apply facial landmark detection and extract the eye regions
  • Now that we have the eye regions, we can compute the eye aspect ratio (EAR) to determine whether the eyes are closed (a toy EAR calculation follows this list)
  • If the eye aspect ratio indicates that the eyes have been closed for a sufficiently long time (in our case, 5 seconds), we'll sound an alarm to wake up the driver
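
To build some intuition for the EAR test in the flow above, here is a toy calculation with made-up eye landmark coordinates. The specific points, and the rough 0.25 threshold mentioned in the comments, are illustrative assumptions rather than values from the project.

    # Toy illustration of the eye aspect ratio (EAR), using invented landmark points.
    # EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)
    import numpy as np

    def toy_ear(points):
        p1, p2, p3, p4, p5, p6 = (np.array(p, dtype=float) for p in points)
        vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
        horizontal = np.linalg.norm(p1 - p4)
        return vertical / (2.0 * horizontal)

    # An "open" eye: the upper and lower landmark pairs are far apart.
    open_eye = [(0, 5), (10, 10), (20, 10), (30, 5), (20, 0), (10, 0)]
    # A "closed" eye: same corners, but the lids have nearly met.
    closed_eye = [(0, 5), (10, 6), (20, 6), (30, 5), (20, 4), (10, 4)]

    print(round(toy_ear(open_eye), 2))    # ~0.33 -> eye open
    print(round(toy_ear(closed_eye), 2))  # ~0.07 -> eye closed, well below a ~0.25 threshold

The ratio stays roughly constant while the eye is open and collapses towards zero when it closes, which is exactly the signal the detector watches for.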

Implementing the drowsiness detector

The key points I will discuss here:
  1. Dependencies: import our required Python packages. We'll need the imutils package, a set of computer vision and image processing convenience functions that make working with OpenCV easier. We'll need SciPy so we can compute the Euclidean distance between facial landmark points in the eye aspect ratio calculation. We'll also import the Thread class so we can play our alarm in a separate thread from the main thread, ensuring our script doesn't pause execution while the alarm sounds. To actually play our WAV/MP3 alarm, we need the playsound library, a pure-Python, cross-platform implementation for playing simple sounds. To recognize and manipulate faces from Python, we'll need face_recognition. Finally, we need NumPy to convert the facial landmark (x, y)-coordinates of the important face regions into arrays we can draw and measure with. (The imports are sketched in the first code block after this list.)
  2. We also need to define the eye_aspect_ratio function, which computes the ratio of the distances between the vertical eye landmarks to the distance between the horizontal eye landmarks. The eye aspect ratio is approximately constant while the eye is open, then rapidly decreases towards zero during a blink. If the eye stays closed, the ratio again remains approximately constant, but much smaller than when the eye is open. (A sketch of this function follows the list.)
    Figure: Top-left: eye landmarks when the eye is open. Top-right: eye landmarks when the eye is closed. Bottom: the eye aspect ratio plotted over time; the dip indicates a blink.
  3. MIN_AER: if the eye aspect ratio falls below this threshold, we start counting the number of consecutive frames the person has had their eyes closed. If that count exceeds EYE_AR_CONSEC_FRAMES, we sound an alarm. (Illustrative values for both constants are sketched after the list.)
  4. The facial landmarks produced by face_recognition come back as a dictionary of named regions for each detected face, so to extract the eye regions we simply look up the left-eye and right-eye entries.
  5. The next step is to apply facial landmark detection to localize each of the important regions of the face. We loop over each of the detected faces; in practice we assume there is only one face (the driver), but I left this for loop in place in case we want to apply the technique to videos with more than one face.
  6. We can then visualize each of the eye regions on our frame using the cv2.polylines function; this is often helpful when debugging the script, to make sure the eyes are being correctly detected and localized.
  7. Finally, we check whether the person in our video stream is starting to show symptoms of drowsiness. (Steps 4-7 are sketched together in the main-loop code block after this list.)
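
To make the walkthrough above concrete, the following snippets sketch each step. They are minimal illustrations written against my understanding of these libraries, not a copy of the project's code, so treat the names and values as assumptions. First, the imports from step 1:

    # Step 1: required packages (a minimal sketch)
    from threading import Thread                 # play the alarm without blocking the video loop
    from scipy.spatial import distance as dist   # Euclidean distance between landmark points
    from imutils.video import VideoStream        # convenience wrapper around the webcam stream
    from playsound import playsound              # simple cross-platform WAV/MP3 playback
    import face_recognition                      # face detection and facial landmarks
    import numpy as np
    import cv2
    import time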
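
Next, a sketch of the eye_aspect_ratio function from step 2. It assumes the eye is passed in as six (x, y) points ordered p1..p6 around the eye, which is the order face_recognition returns the eye landmarks in:

    # Step 2: eye aspect ratio
    # EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)
    def eye_aspect_ratio(eye):
        # distances between the two pairs of vertical eye landmarks
        A = dist.euclidean(eye[1], eye[5])
        B = dist.euclidean(eye[2], eye[4])
        # distance between the horizontal eye landmarks (the eye corners)
        C = dist.euclidean(eye[0], eye[3])
        return (A + B) / (2.0 * C)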
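
Step 3 boils down to a couple of constants plus a small helper for the alarm thread. The numeric values below are assumptions; EYE_AR_CONSEC_FRAMES in particular should be chosen so that it corresponds to roughly 5 seconds at your camera's measured frame rate:

    # Step 3: thresholds and state (values are illustrative, tune for your setup)
    MIN_AER = 0.25               # below this, the eye is treated as closed
    EYE_AR_CONSEC_FRAMES = 48    # consecutive "closed" frames before the alarm fires
    COUNTER = 0                  # how many consecutive frames have been below MIN_AER
    ALARM_ON = False             # whether the alarm thread is currently running

    def sound_alarm(path):
        # runs in a background thread so the main loop keeps processing frames
        playsound(path)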
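
Finally, steps 4-7 come together in the main loop. Continuing from the snippets above, this sketch assumes the default webcam (index 0) and a hypothetical alarm.wav file next to the script:

    # Steps 4-7: grab frames, find the eyes, draw them, and check for drowsiness
    vs = VideoStream(src=0).start()   # assumes the default webcam
    time.sleep(1.0)                   # give the camera a moment to warm up

    while True:
        frame = vs.read()
        if frame is None:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # face_recognition expects RGB

        # Steps 4-5: one landmark dictionary per detected face; we loop over all of
        # them even though we normally expect only the driver
        for landmarks in face_recognition.face_landmarks(rgb):
            left_eye = np.array(landmarks["left_eye"], dtype=np.int32)
            right_eye = np.array(landmarks["right_eye"], dtype=np.int32)
            ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0

            # Step 6: outline the eye regions so we can see what is being detected
            for eye in (left_eye, right_eye):
                cv2.polylines(frame, [eye.reshape(-1, 1, 2)], True, (0, 255, 0), 1)

            # Step 7: count consecutive "closed" frames and trigger the alarm
            if ear < MIN_AER:
                COUNTER += 1
                if COUNTER >= EYE_AR_CONSEC_FRAMES:
                    if not ALARM_ON:
                        ALARM_ON = True
                        # "alarm.wav" is a placeholder path for your alarm sound
                        Thread(target=sound_alarm, args=("alarm.wav",), daemon=True).start()
                    cv2.putText(frame, "DROWSINESS ALERT!", (10, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            else:
                COUNTER = 0
                ALARM_ON = False

        cv2.imshow("Frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cv2.destroyAllWindows()
    vs.stop()

Note that the script-level COUNTER and ALARM_ON variables are updated directly inside the loop; if you wrap this in a function you would need to manage that state explicitly.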

Summary

In this project post, I demonstrated how to build a drowsiness detector using OpenCV, face_recognition, NumPy, and Python.
Our drowsiness detector hinged on two important computer vision techniques:
  • Facial landmark detection
  • Eye aspect ratio
Once we have our eye regions, we can apply the eye aspect ratio to determine whether the eyes are closed. If the eyes have been closed for a sufficiently long period of time, we can assume the user is at risk of falling asleep and sound an alarm to grab their attention.

Project GitHub link: Click Here
