American Sign Language Fingerspell Detection Using Convolution Neural Network Within A 2-Way Translator

Authors

  • Jihan Angrila
  • Neno Ruseno
  • Normalisa

Abstract

With time, computing and neural networks have become more sophisticated and can be applied to
various uses. One common and vital use of this technology is computer vision. Just like the human brain,
which the neural network imitates, computer vision can be used to classify and detect objects for various
purposes, ranging from smart systems to quality control in factories. An up-and-coming use of computer
vision is to help those with impaired hearing and users of sign language communicate with those who cannot sign.
This is done by detecting the sign language and converting it to other forms of communication such as text,
audio output, and more. In this project, a two-way translator application is built with three features: gesture
detection, text-to-speech, and speech-to-text translation. A Convolutional Neural Network (CNN) is used as the
model for gesture detection, and the languages used are English for the verbal side and American Sign Language (ASL)
for the signed side. A confusion matrix and the evolution of loss and accuracy were analysed to assess the effectiveness
of the model. The accuracy of the model was found to be 82.6% under certain independent-variable conditions (bright
lighting, a single-coloured background, using the front-facing left hand, and fully covering the ROI box with the gesture).
The application's Graphical User Interface (GUI) was made using Tkinter in Python. Although the detected signs are not
directly translated to speech due to coding limitations, the detection feature is an important component, as it shows
the potential of what the translator could become.
Keywords: Convolution Neural Network, American Sign Language, Python, Object Classification, Object
Detection, Impaired Hearing Translator, Tkinter, Graphical User Interface.
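As a rough illustration of the evaluation described in the abstract, a model's overall accuracy can be read off a confusion matrix by dividing the correct predictions (the diagonal) by the total number of predictions. The 3×3 matrix below is a hypothetical example for three fingerspelled classes, not data from the paper:

```python
# Hypothetical confusion matrix for three ASL fingerspelling classes
# (rows = true class, columns = predicted class). Values are illustrative
# only and do not come from the paper's results.
confusion = [
    [45, 3, 2],   # true "A"
    [4, 40, 6],   # true "B"
    [1, 5, 44],   # true "C"
]

def accuracy(matrix):
    """Overall accuracy = correct predictions (diagonal) / all predictions."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

print(round(accuracy(confusion), 3))  # → 0.86
```

The same trace-over-total calculation yields the paper's reported 82.6% when applied to its own confusion matrix.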


Published

2022-06-30
