Gesture recognition

Development of a Wearable Device for Sign Language Translation

A wearable device for sign language translation, called Talking Hands, is presented. It is composed of a custom data glove, designed to optimize data acquisition, and a smartphone application, which offers user personalization. Although Talking Hands cannot translate an entire sign language, it enables effective communication between deaf and mute people and everyone else through scenario-based translation. The different challenges of a gesture recognition system have been overcome with simple solutions, since the main goal of this work is a user-oriented product.

Exploiting Recurrent Neural Networks and Leap Motion Controller for the Recognition of Sign Language and Semaphoric Hand Gestures

Hand gesture recognition is still a topic of great interest for the computer vision community. In particular, sign language and semaphoric hand gestures are two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. Any hand gesture can be represented by a set of feature vectors that change over time. Recurrent Neural Networks (RNNs) are well suited to analysing such sets thanks to their ability to model the long-term contextual information of temporal sequences.
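To make the idea concrete, the sketch below shows how a gesture, represented as a sequence of feature vectors over time, can be fed through a vanilla recurrent cell whose hidden state accumulates temporal context before a final classification. This is a minimal illustrative example with randomly initialised weights and assumed dimensions, not the architecture used in the paper (which relies on trained RNNs and Leap Motion Controller features).

```python
import numpy as np

# Minimal sketch of a recurrent cell processing a gesture as a sequence of
# feature vectors (e.g. hand-joint positions per frame). All sizes and
# weights here are illustrative assumptions, not the paper's actual model.

rng = np.random.default_rng(0)

feat_dim, hidden_dim, n_classes = 12, 16, 5   # assumed dimensions
T = 30                                        # frames in one gesture

# Randomly initialised parameters (a real model would learn these).
W_x = rng.normal(scale=0.1, size=(hidden_dim, feat_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)
W_o = rng.normal(scale=0.1, size=(n_classes, hidden_dim))

def recognise(sequence):
    """Run the recurrent cell over a (T, feat_dim) sequence and return
    class probabilities computed from the final hidden state."""
    h = np.zeros(hidden_dim)
    for x_t in sequence:                         # one feature vector per frame
        h = np.tanh(W_x @ x_t + W_h @ h + b_h)   # hidden state carries context
    logits = W_o @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                           # softmax over gesture classes

gesture = rng.normal(size=(T, feat_dim))         # stand-in for real sensor data
probs = recognise(gesture)
print(probs.shape, float(probs.sum()))
```

The key point the abstract makes is visible in the loop: each new hidden state depends on both the current feature vector and the previous hidden state, which is what lets the network model long-term context across the gesture. In practice, gated variants such as LSTMs are typically used to capture longer dependencies.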