Sign-To-Speech

You can find the source code at the Project Repository. If you'd like to read about the results, you can do so at the Paper Link.

Brief Description

Accessibility for the deaf and hearing-impaired community is significantly limited by the financial and human-capital costs of relying on signing professionals. Although recent innovations in virtual interpreting have improved how interpreters are connected with those who need them, further gains are possible by automating digital interpreting and sign language learning. This project takes a step toward automatic interpreting by evaluating the performance of common machine learning models on a sign language classification problem and applying them to a sign language learning task. Comparing model performance shows that a shallow neural network maximizes static single-hand-sign recognition accuracy and can improve signing proficiency through automated visualizations.

Overview

Researched supervised learning models for American Sign Language (ASL) translation using Google’s MediaPipe hand-landmark identification algorithm, image convolution, and Python. Created a set of automated visualizations to aid in ASL learning without the need for a signing professional: https://observablehq.com/d/790228ffa9ae0f19
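As a rough illustration of that pipeline, the sketch below extracts the 21 MediaPipe hand landmarks from a single image and classifies the static sign with a shallow neural network. It uses scikit-learn's MLPClassifier with placeholder training data; the function names, hidden-layer size, labels, and file paths are assumptions for illustration, not the project's actual implementation.

```python
# Sketch: MediaPipe hand landmarks -> shallow neural network sign classifier.
# Names and data below are illustrative placeholders, not the project's code.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neural_network import MLPClassifier

mp_hands = mp.solutions.hands

def extract_landmarks(image_bgr):
    """Return a flat (63,) array of x/y/z landmark coordinates, or None if no hand is found."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    hand = results.multi_hand_landmarks[0]
    return np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark]).flatten()

# X: (n_samples, 63) landmark features, y: sign labels.
# In practice these come from a labeled image dataset; placeholders are used here.
X = np.random.rand(100, 63)             # placeholder features
y = np.random.choice(list("ABC"), 100)  # placeholder labels

# A single hidden layer is a simple stand-in for the "shallow neural network" above.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)

# Predict the sign in a new frame (hypothetical file name).
frame = cv2.imread("sign.jpg")
features = extract_landmarks(frame)
if features is not None:
    print(clf.predict([features])[0])
```

Because MediaPipe already normalizes landmark coordinates to the image frame, a compact feature vector like this tends to make a shallow network sufficient for static single-hand signs, consistent with the result summarized in the brief description.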