This is an official PyTorch implementation of *Feature Fusion of sEMG and Ultrasound Signals in Hand Gesture Recognition*. Please cite the paper if you find this repo helpful.
EUNet has two versions: one-stream and two-stream.
🚩 The one-stream EUNet is designed for hand gesture recognition from a single modality, either sEMG or A-mode ultrasound signals;
🚩 The two-stream EUNet is designed for hand gesture recognition from the fusion of sEMG and A-mode ultrasound signals.
- EUNet (one-stream): the shared CNN architecture for feature extraction and classification from a single sEMG or ultrasound signal.
- EUNet (two-stream): the two-stream CNN architecture for sEMG and ultrasound feature extraction, feature fusion, and classification.
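To make the two-stream idea concrete, below is a minimal PyTorch sketch (not the repo's actual EUNet definition): two modality-specific 1-D CNN branches, concatenation-based feature fusion, and a shared classification head. The layer sizes, channel counts, and the choice of concatenation as the fusion operation are illustrative assumptions only.

```python
# Illustrative two-stream sketch; all shapes and layer sizes are placeholders.
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """One modality branch: a small 1-D CNN feature extractor."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):          # x: (batch, channels, samples)
        f = self.features(x).flatten(1)
        return self.proj(f)        # (batch, feat_dim)

class TwoStreamNet(nn.Module):
    """Two-stream network: per-modality branches + fused classifier."""
    def __init__(self, emg_channels, us_channels, num_classes, feat_dim=128):
        super().__init__()
        self.emg_branch = ConvBranch(emg_channels, feat_dim)
        self.us_branch = ConvBranch(us_channels, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, emg, us):
        # Concatenate the per-modality features, then classify the fused vector.
        fused = torch.cat([self.emg_branch(emg), self.us_branch(us)], dim=1)
        return self.classifier(fused)

# Example forward pass with made-up window sizes: 8-channel sEMG, 4-channel A-mode US.
logits = TwoStreamNet(emg_channels=8, us_channels=4, num_classes=10)(
    torch.randn(2, 8, 256), torch.randn(2, 4, 1000))
```

The one-stream variant corresponds to using a single `ConvBranch` followed by a classifier, shared across whichever modality is being trained.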
The code is developed with Python 3.7 on Ubuntu 18.04. NVIDIA GPUs are required.
The complete hybrid sEMG/US dataset has not been released yet. We provide sEMG/US data collected from one subject for code testing, which can be downloaded from Baidu Disk (code: h99k).
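If you want to consume the single-subject data in your own code, the sketch below shows one way to wrap paired sEMG/US windows in a PyTorch `Dataset`. The file names, array shapes, and label format are assumptions for illustration only; the released files may be organised differently.

```python
# Hypothetical loader for paired sEMG/US windows stored as NumPy arrays.
import numpy as np
import torch
from torch.utils.data import Dataset

class EmgUsDataset(Dataset):
    def __init__(self, emg_path, us_path, label_path):
        self.emg = np.load(emg_path)       # assumed shape: (N, emg_channels, samples)
        self.us = np.load(us_path)         # assumed shape: (N, us_channels, samples)
        self.labels = np.load(label_path)  # assumed shape: (N,)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return (torch.from_numpy(self.emg[idx]).float(),
                torch.from_numpy(self.us[idx]).float(),
                int(self.labels[idx]))
```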
- Clone this repo
- Install dependencies:
pip install -r requirements.txt
- Create a soft link to the directory where you saved the dataset:
ln -s **datadir_save** data
- Train EUNet for sEMG modality
sh scripts/train_emg_EUNet.sh
- Train EUNet for A-mode ultrasound modality
sh scripts/train_us_EUNet.sh
- Validate EUNet for sEMG modality
sh scripts/test_emg_EUNet.sh
- Validate EUNet for A-mode ultrasound modality
sh scripts/test_us_EUNet.sh
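If you prefer to evaluate a trained model outside the provided scripts, the following sketch shows one possible validation loop. The checkpoint path, the assumption that training saves a plain `state_dict` via `torch.save`, and the reuse of the illustrative `TwoStreamNet`/`EmgUsDataset` classes from above are placeholders, not the repo's actual API.

```python
# Hypothetical evaluation loop for a trained two-stream checkpoint.
import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TwoStreamNet(emg_channels=8, us_channels=4, num_classes=10).to(device)
# Checkpoint path and format are assumptions for this sketch.
model.load_state_dict(torch.load("checkpoints/eunet_two_stream.pth", map_location=device))
model.eval()

loader = DataLoader(EmgUsDataset("data/emg.npy", "data/us.npy", "data/labels.npy"),
                    batch_size=64)

correct = total = 0
with torch.no_grad():
    for emg, us, labels in loader:
        preds = model(emg.to(device), us.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
print(f"accuracy: {correct / total:.4f}")
```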
If you have any questions, feel free to contact me through GitHub issues.