Indian Discern Gestures Using TensorFlow

Main Article Content

Dr. Mohammed Abdul Waheed
Shreeta
Zohara Begum

Abstract

The primary means of communication between individuals is speech. Because they are unable to communicate verbally, deaf and mute individuals must rely on visual means of expression. Numerous languages are spoken and translated around the world. In this context, "special people" refers to those who have trouble hearing and/or speaking. People who are deaf or mute have trouble hearing or comprehending what another person is saying, and misunderstandings may occur when they rely on lip reading, lip syncing, or sign language to communicate. Our system is designed to help these persons with special needs participate equally in society. The proposed system captures video and splits it into frames. The frames are processed by a trained model in the backend, which identifies the hand gesture performed by the impaired individual; the meaning of the sign is then translated into spoken language. Our study uses the MobileNet convolutional neural network, the TensorFlow library for training and retraining MobileNet, and TensorFlow Lite for executing the learned model on an Android smartphone.
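The per-frame classification and translation step described above can be sketched in pure Python. This is a minimal illustration, not the authors' implementation: the retrained MobileNet (not shown) is assumed to return a probability vector over gesture classes for each frame, and the label set and confidence threshold below are hypothetical placeholders.

```python
# Hypothetical gesture label set; in the paper this would come from the
# classes used to retrain MobileNet.
GESTURE_LABELS = ["hello", "thank_you", "yes", "no"]

def classify_frame(scores):
    """Pick the most probable gesture for one frame's model output."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return GESTURE_LABELS[best], scores[best]

def translate_video(frame_scores, threshold=0.6):
    """Collapse per-frame predictions into a spoken-language string.

    Keeps only confident predictions and drops consecutive repeats,
    since a held gesture spans many frames.
    """
    words = []
    for scores in frame_scores:
        label, confidence = classify_frame(scores)
        if confidence >= threshold and (not words or words[-1] != label):
            words.append(label)
    return " ".join(words)
```

For example, three frames where the model is confident about "hello" twice and then "thank_you" once would translate to the string "hello thank_you"; on an Android device the probability vectors would instead come from a TensorFlow Lite interpreter running the converted model.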

Article Details

Section
Articles
Author Biographies

Dr. Mohammed Abdul Waheed

Department of Computer Science, Visvesvaraya Technological University Belagavi, CPGS Kalaburagi, Karnataka, India

Shreeta

Department of Computer Science, Visvesvaraya Technological University Belagavi, CPGS Kalaburagi, Karnataka, India

Zohara Begum

Assistant Professor, Dept. of E&CE, FENT, KBN University, Kalaburagi, Karnataka, India