Official Code Release of the paper "Understanding the Perceived Quality of Video Predictions". The database can be downloaded from the project webpage.
- Copy the videos to the `Data/PVQA/Predicted_Videos` directory.
- Copy the `MOS.csv` file to `Data/PVQA/MOS.csv`.
- Run `src/feature_extractors/FeatureExtractor.py` to extract the features from the videos.
- Run the `demo1()` method in `src/Trainer.py` to train the model. The trained model will be saved in `Trained_Models/PVQA`.
- Additionally, the `demo2()` method in `src/Trainer.py` can be used to evaluate the model on 100 splits and compute the median PLCC, SROCC and RMSE scores.
- To train on a different database, organize the videos and MOS similarly, write a new data-loader for the new database (similar to `src/data_loaders/PVQA.py`), and change the training configs to use the new data-loader.
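For reference, the snippet below sketches one way to drive these steps from the repository root. It is only a sketch: it assumes the modules under `src/` are importable and that `FeatureExtractor.py` exposes a `main()` entry point; check the source for the actual signatures of `demo1()` and `demo2()`.

```python
# Sketch only: assumes the repository root is the working directory and that
# the modules under src/ expose the entry points named below.
import sys

sys.path.append('src')

import Trainer
from feature_extractors import FeatureExtractor  # assumed import path

if __name__ == '__main__':
    # Step 1: extract features from the videos in Data/PVQA/Predicted_Videos
    # (FeatureExtractor.py can also simply be run as a script; main() is an
    # assumed entry point).
    FeatureExtractor.main()

    # Step 2: train the PVQA model; the trained model is saved in
    # Trained_Models/PVQA.
    Trainer.demo1()

    # Optional: evaluate on 100 splits and compute the median PLCC, SROCC
    # and RMSE.
    Trainer.demo2()
```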
Our model and baseline models pretrained on our database are available here.
- To compute the quality score of a single video, use the `demo1()` method in `src/Tester.py`, specifying the path to the video.
- To compute the quality scores of multiple videos, place all the videos in a single directory and use the `demo2()` method in `src/Tester.py`.
- To compute the quality score of a video whose features have already been computed, use the `demo3()` method in `src/Tester.py`.
- Since TensorFlow updates the ResNet-50/VGG-19/Inception-v3 pretrained model weights with newer versions, if you use a different version of TensorFlow in your setup, please retrain the PVQA model instead of using the pretrained models.
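As a quick reference, the sketch below shows how these tester entry points might be invoked; all paths are placeholders and the actual argument lists should be checked in `src/Tester.py`.

```python
# Sketch only: the argument lists of demo1()/demo2()/demo3() in src/Tester.py
# may differ from what is shown here; all paths below are placeholders.
import sys

sys.path.append('src')

import Tester

if __name__ == '__main__':
    # Quality score of a single video, given the path to the video file.
    Tester.demo1('Data/PVQA/Predicted_Videos/example_video.mp4')

    # Quality scores of all videos placed in a single directory.
    Tester.demo2('Data/PVQA/Predicted_Videos')

    # Quality score of a video whose features have already been computed.
    Tester.demo3('Data/PVQA/Features/example_video')
```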
If you use our PVQA model in your publication, please specify the version you are using. The current version is 1.3.1.
Copyright 2020 Nagabhushan Somraj, Manoj Surya Kashi, S P Arun, Rajiv Soundararajan
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this code except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
If you use this code for your research, please cite our paper:

```bibtex
@article{somraj2020pvqa,
    title = {Understanding the Perceived Quality of Video Predictions},
    author = {Somraj, Nagabhushan and Kashi, Manoj Surya and Arun, S. P. and Soundararajan, Rajiv},
    journal = {Signal Processing: Image Communication},
    volume = {102},
    pages = {116626},
    issn = {0923-5965},
    year = {2022},
    doi = {10.1016/j.image.2021.116626}
}
```