A convolutional neural network to classify American Sign Language fingerspelling from depth and colour images

Article ID: iaor20172016
Volume: 34
Issue: 3
Publication Date: Jun 2017
Journal: Expert Systems
Authors:
Keywords: neural networks, artificial intelligence, artificial intelligence: decision support
Abstract:

Sign language is used by approximately 70 million people worldwide (http://wfdeaf.org/human-rights/crpd/sign-language), and an automatic tool for interpreting it could have a major impact on communication between those who use it and those who do not understand it. However, computer interpretation of sign language is very difficult given the variability in the size, shape, and position of the fingers or hands in an image. Hence, this paper explores the applicability of deep learning to interpreting sign language and develops a convolutional neural network for classifying fingerspelling images using both image intensity and depth data. The network is evaluated on the problem of fingerspelling recognition for American Sign Language. The evaluation shows that it outperforms approaches from previous studies, achieving a precision of 82% and a recall of 80%. Analysis of the resulting confusion matrix reveals the underlying difficulties of classifying certain signs, which are discussed in the paper.
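This record does not include the paper's architecture details, so the following is only a minimal sketch of the general technique the abstract describes: a CNN that fuses intensity (colour) and depth by stacking them as input channels. Everything here is an assumption for illustration, not the authors' design: the PyTorch framework, 32x32 inputs, 24 output classes (the static ASL letters, excluding the motion-based J and Z), and all layer sizes.

```python
import torch
import torch.nn as nn

class FingerspellingCNN(nn.Module):
    """Illustrative CNN over stacked intensity + depth channels.

    A hypothetical sketch, not the architecture from the paper.
    """
    def __init__(self, num_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            # Two input channels: image intensity and depth.
            nn.Conv2d(2, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),  # one logit per letter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of 8 intensity+depth image pairs.
model = FingerspellingCNN()
batch = torch.randn(8, 2, 32, 32)  # (batch, channels, height, width)
logits = model(batch)              # shape: (8, 24)
```

Stacking depth as a second input channel is only one possible fusion strategy; the paper may instead process the two modalities in separate convolutional streams and merge them at a later layer.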
