This project is based on hand-gesture recognition, developed primarily for people who are unable to speak. The aim is to create a model that can speak to smart assistants on the user's behalf, so that these users can also communicate with them and take advantage of modern technology such as home automation. The model uses `deeplearn-knn-image-classifier` to classify the images and `knn.predictClass` to predict the signs. Model accuracy during testing was 90.25%.
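The snippet below is a minimal sketch of how such a classifier is typically wired up with the `deeplearn-knn-image-classifier` API; the class count, the `TOPK` value, and the video element lookup are illustrative assumptions, not values taken from this project.

```js
import * as dl from 'deeplearn';
import {KNNImageClassifier} from 'deeplearn-knn-image-classifier';

const NUM_GESTURES = 5; // illustrative: one class per user-defined gesture
const TOPK = 10;        // illustrative: neighbours consulted per prediction

const knn = new KNNImageClassifier(NUM_GESTURES, TOPK);

async function classifyFrame(videoElement) {
  // Read the current camera frame as a pixel tensor.
  const image = dl.fromPixels(videoElement);
  // Returns the best-matching gesture class and per-class confidences.
  const {classIndex, confidences} = await knn.predictClass(image);
  image.dispose();
  return {classIndex, confidences};
}

// The underlying feature extractor must be loaded before classifying.
knn.load().then(() => classifyFrame(document.querySelector('video')));
```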
In the application, the user first saves a desired keyword for each gesture, i.e. the phrase he or she wants the smart assistant to hear when that gesture is made. The model is then trained on the gestures saved by the user. During detection, the camera captures a picture of the user performing the gesture; the image is converted into pixels, compared against the trained examples, and the output for that particular gesture is predicted. The output is converted into both text and speech, so that smart assistants can hear the spoken command while the text form keeps the output accessible to deaf users as well. A sketch of this train-then-speak flow follows.
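This is a hedged sketch of the two steps under the assumptions above; the helper names (`addTrainingExample`, `detectAndSpeak`, `gesturePhrases`) are hypothetical, and speech output uses the standard Web Speech API (`window.speechSynthesis`), which the project description implies but does not name.

```js
import * as dl from 'deeplearn';

// Training: while the user holds a gesture in front of the camera,
// each captured frame is added as an example for that gesture's class.
function addTrainingExample(knn, videoElement, classIndex) {
  const image = dl.fromPixels(videoElement);
  knn.addImage(image, classIndex);
  image.dispose();
}

// Detection: predict the gesture in the current frame, then speak the
// keyword the user saved for it so a nearby smart assistant can hear it.
async function detectAndSpeak(knn, videoElement, gesturePhrases) {
  const image = dl.fromPixels(videoElement);
  const {classIndex} = await knn.predictClass(image);
  image.dispose();

  const phrase = gesturePhrases[classIndex]; // e.g. "turn on the lights"
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(phrase));
  return phrase; // the text form can also be shown on screen
}
```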
- Ayush Gupta
- Shivanshu Bajpai
- Piyush Kumar
- Vrati Pandey
- Aryendra Prakash Singh
- Aryan