This repository contains the PyTorch implementation of the audio-visual automatic group affect analysis method.
For access to the Video-level Group AFfect (VGAF) dataset, contact [email protected] or [email protected].
python VGAFNet_fusion.py
In this file, update the paths to the pre-processed features used as input. For the holistic channel, frames are sampled from the original video. For the face-level channel, VGGFace features are extracted. Please refer to the paper for more details on data pre-processing.
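As a rough illustration of the frame-sampling step for the holistic channel (the function name and the number of sampled frames are assumptions for this sketch, not taken from the repository), evenly spaced frame indices can be computed as:

```python
def uniform_frame_indices(total_frames, num_samples=16):
    """Pick num_samples evenly spaced frame indices from a video with
    total_frames frames (hypothetical helper, not part of this repo)."""
    if total_frames <= 0:
        return []
    num_samples = min(num_samples, total_frames)
    step = total_frames / num_samples
    # take the middle frame of each equal-length segment
    return [int(step * i + step / 2) for i in range(num_samples)]
```

The returned indices can then be used to read the corresponding frames with any video reader (e.g. OpenCV) before feeding them to the holistic channel.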
If you find the code useful for your research, please consider citing our work:
@article{sharma2021audio,
  title={Audio-visual automatic group affect analysis},
  author={Sharma, Garima and Dhall, Abhinav and Cai, Jianfei},
  journal={IEEE Transactions on Affective Computing},
  year={2021},
  publisher={IEEE}
}
For any questions, please contact [email protected].