This project is a web application that uses a machine learning model to interpret sign language and provides speech-to-text functionality, helping deaf individuals communicate more easily. The web app also includes tutorials, quizzes, and more. The frontend is built with React, the backend is a Node.js server, and the machine learning model is implemented in a Jupyter notebook.
You can access a live demo of the project's frontend at Demo Link.
- Node.js
- Python
- npm or yarn
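Before installing, you can quickly confirm that the prerequisites above are available on your PATH. This is a minimal check sketch; the tool names come from the list above, and missing tools are reported rather than aborting the script:

```shell
# Report the installed version of each prerequisite, or note that it is missing.
for tool in node npm python; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>&1 | head -n 1)"
  else
    echo "$tool: not found"
  fi
done
```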
- Navigate to the `Backend` directory.
- Install the necessary packages with `npm install`.
- Start the server with `npm run dev`.
- Navigate to the `Frontend` directory.
- Install the necessary packages with `npm install`.
- Start the development server with `npm run dev`.
- Navigate to the `ML_Backend` directory.
- Open the Jupyter notebook with `jupyter notebook testing3-py.ipynb`.
- Run the cells in the notebook.
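The three setup sections above can be collected into one sketch of a launch script. Directory and script names are taken from this README; it assumes you run it from the repository root, and each part is started as a background job in its own subshell:

```shell
# Start all three parts of the project; each runs in its own subshell so a
# failure in one does not change the working directory of the others.
(cd Backend    && npm install && npm run dev) &                 # Node.js server
(cd Frontend   && npm install && npm run dev) &                 # React dev server
(cd ML_Backend && jupyter notebook testing3-py.ipynb) &         # ML notebook
wait    # keep the script alive while the background jobs run
```

Equivalently, run each parenthesised command in its own terminal, which makes the individual logs easier to follow.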
With all three parts running, you should be able to interact with the project through the frontend at http://localhost:5173.
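As a quick smoke test, you can probe the frontend's port. This sketch assumes `curl` is installed; the port number comes from the URL above:

```shell
# Probe the frontend dev server; report status instead of failing outright.
if command -v curl >/dev/null 2>&1 \
   && curl -fsS http://localhost:5173 >/dev/null 2>&1; then
  echo "frontend is up on :5173"
else
  echo "frontend not reachable on :5173"
fi
```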
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.