{"id":13543564,"url":"https://github.com/SSL92/hyperIQA","last_synced_at":"2025-04-02T13:30:40.831Z","repository":{"id":41365465,"uuid":"245560005","full_name":"SSL92/hyperIQA","owner":"SSL92","description":"Source code for the CVPR'20 paper \"Blindly Assess Image Quality in the Wild Guided by A Self-Adaptive Hyper Network\"","archived":false,"fork":false,"pushed_at":"2023-12-14T09:29:58.000Z","size":2144,"stargazers_count":366,"open_issues_count":34,"forks_count":52,"subscribers_count":7,"default_branch":"master","last_synced_at":"2024-11-03T10:33:01.608Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SSL92.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-03-07T03:17:17.000Z","updated_at":"2024-10-30T12:01:15.000Z","dependencies_parsed_at":"2024-01-14T02:38:53.227Z","dependency_job_id":"382aa129-37d7-4327-9e47-4550a5bfb0f7","html_url":"https://github.com/SSL92/hyperIQA","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SSL92%2FhyperIQA","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SSL92%2FhyperIQA/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SSL92%2FhyperIQA/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SSL92%2FhyperIQA/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SSL92","download_url":"https://codeload.github.com/SSL92/hyperIQA/tar.
gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246823549,"owners_count":20839745,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T11:00:33.073Z","updated_at":"2025-04-02T13:30:39.089Z","avatar_url":"https://github.com/SSL92.png","language":"Python","readme":"# HyperIQA\n\nThis is the source code for the CVPR'20 paper \"[Blindly Assess Image Quality in the Wild Guided by A Self-Adaptive Hyper Network](https://openaccess.thecvf.com/content_CVPR_2020/papers/Su_Blindly_Assess_Image_Quality_in_the_Wild_Guided_by_a_CVPR_2020_paper.pdf)\".\n\n## Dependencies\n\n- Python 3.6+\n- PyTorch 0.4+\n- TorchVision\n- scipy\n\n(optional, for loading specific IQA datasets)\n- csv (KonIQ-10k Dataset)\n- openpyxl (BID Dataset)\n\n## Usage\n\n### Testing a single image\n\nPredict image quality with our model trained on the KonIQ-10k Dataset.\n\nTo run the demo, please download the pre-trained model at [Google drive](https://drive.google.com/file/d/1OOUmnbvpGea0LIGpIWEbOyxfWx6UCiiE/view?usp=sharing) or [Baidu cloud](https://pan.baidu.com/s/1yY3O8DbfTTtUwXn14Mtr8Q) (password: 1ty8), put it in the 'pretrained' folder, then run:\n\n```\npython demo.py\n```\n\nYou will get a quality score ranging from 0 to 100; a higher value indicates better image quality.\n\n### Training \u0026 Testing on IQA databases\n\nTrain and test our model on the LIVE Challenge Dataset:\n\n```\npython train_test_IQA.py\n```\n\nSome available options:\n* `--dataset`: Training and testing dataset; supported datasets: livec | koniq-10k 
| bid | live | csiq | tid2013.\n* `--train_patch_num`: Number of image patches sampled per training image.\n* `--test_patch_num`: Number of image patches sampled per testing image.\n* `--batch_size`: Batch size.\n\nWhen training or testing on the CSIQ dataset, please put 'csiq_label.txt' in your own CSIQ folder.\n\n## Citation\nIf you find this work useful for your research, please cite our paper:\n```\n@InProceedings{Su_2020_CVPR,\nauthor = {Su, Shaolin and Yan, Qingsen and Zhu, Yu and Zhang, Cheng and Ge, Xin and Sun, Jinqiu and Zhang, Yanning},\ntitle = {Blindly Assess Image Quality in the Wild Guided by a Self-Adaptive Hyper Network},\nbooktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\nmonth = {June},\nyear = {2020}\n}\n```\n","funding_links":[],"categories":["Denoising Algorithms","Face Manipulation"],"sub_categories":["Face IQA"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FSSL92%2FhyperIQA","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FSSL92%2FhyperIQA","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FSSL92%2FhyperIQA/lists"}