Aim: Over the years, joystick-controlled wheelchairs have supported the mobility of non-ambulant people. However, existing wheelchairs may not be fully usable by people with additional medical conditions such as dysarthria, cerebral palsy, or diplegia. The current work aims to develop a multimodal, speech- and gesture-controlled wheelchair system for people with multiple disabilities. The motivation behind this work is to cater to their needs by customizing their wheelchairs for ease of movement and to instill a sense of independence when they commute.
Methods: The speech recognition module of the proposed wheelchair is developed to cater to the needs of both normal and dysarthric speakers. The current work employs convolutional and residual convolutional neural architectures to model and recognise the speech commands. Further, a range-based gesture recognition system is developed in the current work using an MPU 6050 inertial measurement unit.
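As an illustration of the residual convolutional modelling referred to above, the following is a minimal sketch of a residual CNN that classifies spoken wheelchair commands from log-mel spectrograms. The layer sizes, number of residual blocks, command set, and input dimensions are assumptions for illustration only; the abstract does not specify the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch norm and an identity skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection

class SpeechCommandNet(nn.Module):
    """Residual CNN over a log-mel spectrogram, ending in a command classifier."""
    def __init__(self, n_commands: int = 6, channels: int = 32, n_blocks: int = 3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.fc = nn.Linear(channels, n_commands)

    def forward(self, x):  # x: (batch, 1, n_mels, n_frames)
        x = self.stem(x)
        x = self.blocks(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)  # logits over wheelchair commands

# Example: a batch of 40-mel, 101-frame spectrograms for a hypothetical
# six-command vocabulary (e.g. forward, backward, left, right, start, stop).
model = SpeechCommandNet(n_commands=6)
logits = model(torch.randn(4, 1, 40, 101))
print(logits.shape)  # torch.Size([4, 6])
```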
Results: The proposed speech recognition module of the wheelchair achieves an accuracy of up to 98% and 92% for normal and dysarthric speakers, respectively. Further, the gesture recognition module achieves 100% accuracy for all users, as it is completely threshold based.
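To illustrate why a purely threshold-based scheme behaves deterministically, the sketch below maps MPU 6050 tilt angles to motion commands using fixed angle ranges. The thresholds, axis conventions, and command names are assumptions for illustration; the actual gesture module described in the paper runs on an Arduino Uno.

```python
import math

def tilt_angles(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Convert raw MPU 6050 accelerometer readings (in g) to pitch/roll in degrees."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

def classify_gesture(pitch: float, roll: float, threshold: float = 25.0) -> str:
    """Map tilt ranges to wheelchair commands; anything inside the dead zone stops."""
    if pitch > threshold:
        return "FORWARD"
    if pitch < -threshold:
        return "BACKWARD"
    if roll > threshold:
        return "RIGHT"
    if roll < -threshold:
        return "LEFT"
    return "STOP"

# Example: hand tilted forward by roughly 40 degrees.
pitch, roll = tilt_angles(ax=0.64, ay=0.02, az=0.77)
print(classify_gesture(pitch, roll))  # FORWARD
```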
Conclusion: The proposed speech module is ported onto a Raspberry Pi 4B+ board, and the gesture module is implemented on an Arduino Uno. These modules are integrated with the control system (i.e., the joystick) of the wheelchair to drive its movement. The current work ensures a small footprint and easy mobility of the wheelchair.
Key words: multimodal, speech command recognition, residual convolutional neural network, gesture recognition, Raspberry Pi, Arduino Uno, MPU 6050