
Eye control virtual mouse with source code and synopsis

Introduction to the Eye Control Virtual Mouse (ECVM)

The eye-controlled virtual mouse (ECVM) is a communication aid designed for the severely disabled. Specifically, it lets the user connect to and control their PC with eye movements such as blinking and staring. By constantly monitoring the user's electrooculogram (EOG) signal, the mouse can recognize several intentional eye motions and, in turn, control a cursor on the PC screen.

The system is a mouse-like, eye-based interface that converts eye movements such as blinking, staring, and squinting into mouse cursor actions.

If the user's eyes and facial features, as well as the direction in which they are gazing, can be tracked, the movement of those facial features can be transferred to the cursor, allowing the user to move the cursor at will.

An eye-tracking mouse is a device that takes commands from the user's eye and head movements. The project relies on detecting facial features in the video and mapping them to the cursor. When the camera is opened, the application must extract the frames of the video one by one.

Moving the pointer across the screen with a computer mouse, with eye movements, or with hand detection has become fairly common in today's technology. Every movement of the mouse or the eyes is detected and mapped to the movement of the pointer by the system.
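A minimal sketch of how the frames can be read from the webcam with OpenCV (assuming the default camera is at index 0; the window name and quit key are arbitrary choices):

import cv2

cam = cv2.VideoCapture(0)                  # open the default webcam
while True:
    ok, frame = cam.read()                 # grab the next frame
    if not ok:                             # stop if the camera returns nothing
        break
    cv2.imshow('Preview', frame)           # show the raw frame
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break
cam.release()
cv2.destroyAllWindows()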

Objective:

The objective of the project is to make computer use easier for amputees (people whose arms are not operational). Amputees and quadriplegics (people paralysed in all four limbs) can benefit from our project: they can operate and use the mouse through their facial expressions and eye actions.

To deliver user-friendly human-computer interaction, the project designs a system that requires only a camera and uses the human eyes and facial characteristics as a pointing device for the computer.

The main objectives are as follows:

•  Eye & face detection

•  Eye & face extraction

•  Develop a GUI that shows the result

•  Move the cursor with head movement

Software and Hardware Used

The code is written in PyCharm and the programming language used is Python. PyCharm is a Python IDE (Integrated Development Environment) that provides code analysis, a graphical debugger, an integrated unit tester, and integration with version control systems. In this project we use the following Python libraries (an install sketch is given after the hardware list):

•  OpenCV (cv2)

•  MediaPipe

•  PyAutoGUI

In the hardware section we list the necessary devices so that the project does not face any difficulties while in use:

•  Monitor or PC

•  Web camera

•  Lighting devices

OpenCV runs on Windows, Linux, Android, iOS, and macOS. Using a recent-generation operating system (e.g. Windows 10) makes access faster within a limited time period.
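A minimal setup sketch, assuming Python 3 and pip are available (these are the standard PyPI package names):

pip install opencv-python mediapipe pyautogui

A quick check that the three libraries import correctly:

import cv2, mediapipe, pyautogui
print(cv2.__version__)          # OpenCV version
print(mediapipe.__version__)    # MediaPipe version
print(pyautogui.__version__)    # PyAutoGUI version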

GitHub source code: click here

Conclusion

We are developing a system to control the computer cursor with a real-time camera.

The system is based on computer vision algorithms and can perform the same tasks that are normally performed with a mouse cursor.

However, it is difficult to get stable results because of the variety of skin colours and lighting conditions across users.

The system will help in presentations and reduces the required workspace.

Features such as shrinking and enlarging windows, closing windows, etc. can be driven by head movements and eye gestures.

The system has many useful applications for the new generation as well as for users who want to avoid contact with shared objects.

Future Scope

✓ Provides non-contact human-computer interaction.

✓ Useful for amputees and patients with paralysis of all four limbs.

✓ Reduces hardware cost by eliminating the need for a mouse.

✓ Convenient for users who are not comfortable with a touchpad.

✓ The framework may be useful for controlling games that work on user-defined gestures.

✓ Useful for applications that need user gestures to complete their operations.

✓ Useful in public places, letting different users avoid touching shared devices during a pandemic situation.

Modules:

cv2:-  cv2 is a powerful library for working with images in Python. Its most commonly used functions cover image loading and display, image manipulation, and image filtering, and in this project it also handles webcam capture and the on-screen preview window.
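A minimal sketch of those cv2 basics (the image file name is a placeholder):

import cv2

img = cv2.imread('example.jpg')                   # load an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # manipulation: convert to grayscale
blur = cv2.GaussianBlur(gray, (5, 5), 0)          # filtering: Gaussian blur
cv2.imshow('Original', img)                       # display both versions
cv2.imshow('Filtered', blur)
cv2.waitKey(0)                                    # wait for any key press
cv2.destroyAllWindows()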

mediapipe:- MediaPipe is an open-source framework for building pipelines to perform computer vision inference over arbitrary sensory data such as video or audio. Using MediaPipe, such a perception pipeline can be built as a graph of modular components.
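A minimal sketch of the FaceMesh solution this project relies on, run on a single image (the file name is a placeholder):

import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)  # refine_landmarks adds iris points
frame = cv2.imread('face.jpg')                    # any BGR image containing a face
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # MediaPipe expects RGB input
result = face_mesh.process(rgb)
if result.multi_face_landmarks:
    landmarks = result.multi_face_landmarks[0].landmark
    print(len(landmarks), 'normalized landmarks found')  # 478 when refine_landmarks=True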

pyautogui:-  PyAutoGUI is a cross-platform GUI automation Python module. It is used to programmatically control the mouse and keyboard.
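A minimal sketch of the PyAutoGUI calls this project uses:

import pyautogui

screen_w, screen_h = pyautogui.size()             # screen resolution in pixels
pyautogui.moveTo(screen_w / 2, screen_h / 2)      # move the cursor to the screen centre
pyautogui.click()                                 # left-click at the current position
pyautogui.sleep(1)                                # pause for one second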


Source code


import cv2
import mediapipe as mp
import pyautogui

cam = cv2.VideoCapture(0)                                            # open the default webcam
face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)  # enable iris landmarks
screen_w, screen_h = pyautogui.size()                                # screen resolution in pixels

while True:
    _, frame = cam.read()
    frame = cv2.flip(frame, 1)                           # mirror the frame so movements feel natural
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input
    output = face_mesh.process(rgb_frame)
    landmark_points = output.multi_face_landmarks
    frame_h, frame_w, _ = frame.shape
    if landmark_points:
        landmarks = landmark_points[0].landmark
        # Iris landmarks (indices 474-477) drive the cursor position
        for id, landmark in enumerate(landmarks[474:478]):
            x = int(landmark.x * frame_w)
            y = int(landmark.y * frame_h)
            cv2.circle(frame, (x, y), 3, (2, 2, 255))
            if id == 1:                                  # use a single iris point for the cursor
                screen_x = screen_w / frame_w * x        # scale frame coordinates to screen coordinates
                screen_y = screen_h / frame_h * y
                pyautogui.moveTo(screen_x, screen_y)
        # Landmarks 145 (lower lid) and 159 (upper lid) of the left eye detect a blink
        left = [landmarks[145], landmarks[159]]
        for landmark in left:
            x = int(landmark.x * frame_w)
            y = int(landmark.y * frame_h)
            cv2.circle(frame, (x, y), 3, (2, 255, 255))
        if (left[0].y - left[1].y) < 0.004:              # lids nearly touching -> treat as a blink
            pyautogui.click()
            pyautogui.sleep(1)                           # pause so one blink produces one click
    cv2.imshow('Eye Mouse', frame)
    if cv2.waitKey(1) & 0xFF == 27:                      # press Esc to quit
        break

cam.release()
cv2.destroyAllWindows()
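In this listing the cursor position comes from a single iris landmark: its normalized (x, y) coordinates are converted to pixel positions in the camera frame and then rescaled to the screen resolution before pyautogui.moveTo is called. A click is triggered when the vertical distance between the upper and lower eyelid landmarks of the left eye drops below a small threshold (0.004 in normalized coordinates), which corresponds to a blink; this threshold may need tuning for different cameras and lighting conditions.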

Output:



Download source code: click here

Download synopsis: click here

