“Smart Glasses for Blind Persons using ArduCam and AI”
“The device consists of glasses with an obstacle-detection module at the centre, a processing unit, and a power supply. The processing unit is coupled to the obstacle-detection module and the output device, and receives power from the power supply. An ultrasonic sensor serves as the obstacle-detection module, a control module serves as the processing unit, and a buzzer serves as the output unit. The control unit activates the ultrasonic sensor, which gathers information about the obstacle or object in front of the person; the control unit analyses this information and sends the result over a serial connection. The result is then announced as voice output, letting the user know whether an obstacle is near or far away. Currency detection is also enabled through AI integration.”
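The obstacle-detection step described above can be sketched in Python. This is a minimal illustration, not the project's actual code: the function names, the near/far threshold, and the assumption of an HC-SR04-style sensor (distance derived from the echo round-trip time) are all mine.

```python
# Speed of sound in air at roughly 20 °C, in cm/s (assumed constant).
SPEED_OF_SOUND_CM_PER_S = 34300

def echo_to_distance_cm(echo_duration_s: float) -> float:
    """Convert an ultrasonic echo round-trip time to distance.

    The pulse travels to the obstacle and back, so we halve the total.
    """
    return echo_duration_s * SPEED_OF_SOUND_CM_PER_S / 2

def classify_obstacle(distance_cm: float, threshold_cm: float = 100) -> str:
    """Hypothetical near/far decision used to choose the voice message."""
    return "near" if distance_cm < threshold_cm else "far"

# Example: a 2 ms echo corresponds to an obstacle about 34 cm away.
d = echo_to_distance_cm(0.002)  # 34.3 cm
status = classify_obstacle(d)   # "near"
```

In the real device this calculation would run on the control module, with the result sent over the serial connection to the speech stage.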
-
“The glasses detect obstacles in front of the wearer and give a voice notification. They can also recognize Indian Rupee notes and announce them with a voice message.”
-
“The Indian currency notes are annotated using the Roboflow software for image training.”
Source Code:
The voice module:
“We have used the Python eSpeak module. eSpeak is a small open-source software speech synthesizer for Linux and Windows that supports English and other languages. eSpeak employs a technique known as “formant synthesis”, which allows a large number of languages to be supported in a small footprint. It essentially converts text to speech. There are several voices to choose from, each with its own set of attributes that can be changed. In this project we use eSpeak for two purposes. The first is to convert the distance calculated by the ultrasonic sensor into speech, letting the user know how safe it is to move forward. The other is currency detection: when a 100, 200, or 500 rupee note is brought in front of the camera, the detected note is spoken out loud using the eSpeak module.”
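A rough sketch of how those two eSpeak uses could be wired up is below. The post says "Python eSpeak Module" without naming a specific binding, so I assume here that the `espeak` command-line tool is installed and invoked via `subprocess`; the message-building functions and their exact phrasing are hypothetical.

```python
import subprocess

def speak(text: str) -> None:
    """Send text to the speech synthesizer.

    Assumes the `espeak` CLI is available on the system; the project may
    instead use a Python binding for eSpeak.
    """
    subprocess.run(["espeak", text], check=False)

def distance_message(distance_cm: float) -> str:
    """Hypothetical phrasing for the obstacle-distance announcement."""
    if distance_cm < 100:
        return f"Obstacle near, {int(distance_cm)} centimeters ahead"
    return "Path is clear, safe to move forward"

def currency_message(label: str) -> str:
    """Hypothetical phrasing for a detected note label such as '500'."""
    return f"{label} rupee note detected"

# Usage: speak(distance_message(50)) or speak(currency_message("500"))
```

Separating message construction from the `speak()` call keeps the phrasing easy to test and to translate into eSpeak's other supported languages.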
Cool project! Does this give anyone ideas for improvements to a project you’re currently on?