Abstract

Gesture recognition techniques based on Wi-Fi signal sensing have attracted growing research interest because they work with commercial off-the-shelf Wi-Fi devices and require no additional equipment. In this thesis, we use a well-known public American Sign Language (ASL) dataset based on Channel State Information (CSI) and collected in different environments. To recognize these sign words, a deep learning-based sign language recognition system is proposed. To build a unique pattern for each sign word, the Wi-Fi CSI amplitude and phase information are used as input to the proposed model. The model is studied with three deep learning architectures, CNN, LSTM, and ABLSTM, together with a complete study of the impact of the optimizer, the use of CSI amplitude and phase, and the preprocessing stage. Accuracy, F-score, precision, and recall are used as performance metrics to evaluate the proposed model. The proposed model achieves average recognition accuracies of 99.855%, 99.674%, 99.734%, and 93.84% for the lab, home, lab + home, and 5 different users in a lab environment, respectively. Experimental results show that the proposed model detects sign gestures in complex environments more effectively than several existing deep learning recognition models. In addition, a new sign language recognition system that combines an attention mechanism with a convolutional neural network and a bidirectional long short-term memory network (CNN-BiLSTM) is proposed; it achieves 95.643%, 98.025%, 98.804%, and 91.12% recognition accuracy for the home, lab, lab + home, and 5 different users in a lab environment, respectively.
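The abstract does not give implementation details, so the PyTorch sketch below illustrates only one plausible reading of the CNN-BiLSTM-with-attention pipeline it describes: CSI amplitude and phase are stacked as two input channels, a small CNN extracts per-packet features over the subcarrier axis, a bidirectional LSTM models the packet sequence, and additive attention pools the sequence before classification. All layer sizes, the subcarrier count (30), and the class count (50) are illustrative assumptions, not values taken from the thesis.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """Hypothetical sketch of a CNN-BiLSTM model with attention for
    CSI-based sign word recognition (amplitude + phase as 2 channels)."""

    def __init__(self, n_subcarriers=30, n_classes=50, hidden=128):
        super().__init__()
        # Per-timestep convolutional feature extractor over the subcarrier axis.
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2),  # channels: amplitude, phase
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # -> (batch*time, 64, 1)
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)             # additive attention score per step
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, 2, n_subcarriers) -- one amplitude/phase pair per CSI packet.
        b, t, c, s = x.shape
        feats = self.cnn(x.reshape(b * t, c, s)).reshape(b, t, 64)
        out, _ = self.bilstm(feats)                      # (b, t, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)   # attention weights over time
        context = (weights * out).sum(dim=1)             # weighted temporal summary
        return self.classifier(context)                  # class logits per sign word

model = CNNBiLSTMAttention()
logits = model(torch.randn(4, 100, 2, 30))  # 4 samples, 100 packets, 30 subcarriers
print(logits.shape)                          # torch.Size([4, 50])
```

Under this reading, the attention layer lets the classifier weight the packets where the gesture actually perturbs the channel, rather than averaging over the whole capture window.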