{"id":17349610,"url":"https://github.com/researchmm/TTSR","last_synced_at":"2025-02-26T02:31:57.860Z","repository":{"id":41068958,"uuid":"269604701","full_name":"researchmm/TTSR","owner":"researchmm","description":"[CVPR'20] TTSR: Learning Texture Transformer Network for Image Super-Resolution","archived":false,"fork":false,"pushed_at":"2022-07-24T05:04:00.000Z","size":3262,"stargazers_count":765,"open_issues_count":3,"forks_count":115,"subscribers_count":14,"default_branch":"master","last_synced_at":"2024-10-16T18:18:05.771Z","etag":null,"topics":["image-restoration","image-super-resolution"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/researchmm.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-06-05T10:45:10.000Z","updated_at":"2024-10-02T21:15:01.000Z","dependencies_parsed_at":"2022-07-14T08:08:57.188Z","dependency_job_id":null,"html_url":"https://github.com/researchmm/TTSR","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/researchmm%2FTTSR","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/researchmm%2FTTSR/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/researchmm%2FTTSR/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/researchmm%2FTTSR/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/researchmm","download_url":"https://codeload.github.com/researchmm/TTSR/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"
https://github.com","kind":"github","repositories_count":240780769,"owners_count":19856422,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["image-restoration","image-super-resolution"],"created_at":"2024-10-15T16:56:23.260Z","updated_at":"2025-02-26T02:31:57.836Z","avatar_url":"https://github.com/researchmm.png","language":"Python","readme":"# TTSR (CVPR2020)\nOfficial PyTorch implementation of the paper [Learning Texture Transformer Network for Image Super-Resolution](https://arxiv.org/abs/2006.04139), accepted at CVPR 2020.\n\n## Contents\n- [Introduction](#introduction)\n  - [Contribution](#contribution)\n  - [Approach overview](#approach-overview)\n  - [Main results](#main-results)\n- [Requirements and dependencies](#requirements-and-dependencies)\n- [Model](#model)\n- [Quick test](#quick-test)\n- [Dataset prepare](#dataset-prepare)\n- [Evaluation](#evaluation)\n- [Train](#train)\n- [Related projects](#related-projects)\n- [Citation](#citation)\n- [Contact](#contact)\n\n## Introduction\nWe propose an approach named TTSR for the RefSR task. Compared to SISR, RefSR has an extra high-resolution reference image whose textures can be utilized to help super-resolve the low-resolution input.\n\n### Contribution\n1. We are among the first to introduce the transformer architecture into image generation tasks. More specifically, we propose a texture transformer with four closely related modules for image SR, which achieves significant improvements over SOTA approaches.\n2. 
We propose a novel cross-scale feature integration module for image generation tasks, which enables our approach to learn a more powerful feature representation by stacking multiple texture transformers.\n\n### Approach overview\n\u003cimg src=\"https://github.com/FuzhiYang/TTSR/blob/master/IMG/TT.png\" width=40%\u003e\u003cimg src=\"https://github.com/FuzhiYang/TTSR/blob/master/IMG/CSFI.png\" width=60%\u003e\n\n### Main results\n\u003cimg src=\"https://github.com/FuzhiYang/TTSR/blob/master/IMG/results.png\" width=80%\u003e\n\n## Requirements and dependencies\n* python 3.7 (we recommend [Anaconda](https://www.anaconda.com/))\n* python packages: `pip install opencv-python imageio`\n* pytorch \u003e= 1.1.0\n* torchvision \u003e= 0.4.0\n\n## Model\nPre-trained models can be downloaded from [onedrive](https://1drv.ms/u/s!Ajav6U_IU-1gmHZstHQxOTn9MLPh?e=e06Q7A), [baidu cloud](https://pan.baidu.com/s/1j9swBtz14WneuMYgTLkWtA)(0u6i), or [google drive](https://drive.google.com/drive/folders/1CTm-r3hSbdYVCySuQ27GsrqXhhVOS-qh?usp=sharing).\n* *TTSR-rec.pt*: trained with the reconstruction loss only\n* *TTSR.pt*: trained with all losses\n\n## Quick test\n1. Clone this GitHub repo\n```\ngit clone https://github.com/FuzhiYang/TTSR.git\ncd TTSR\n```\n2. Download the pre-trained models and modify \"model_path\" in test.sh\n3. Run the test\n```\nsh test.sh\n```\n4. The results are in \"save_dir\" (default: `./test/demo/output`)\n\n## Dataset prepare\n1. Download the [CUFED train set](https://drive.google.com/drive/folders/1hGHy36XcmSZ1LtARWmGL5OK1IUdWJi3I) and [CUFED test set](https://drive.google.com/file/d/1Fa1mopExA9YGG1RxrCZZn7QFTYXLx6ph/view)\n2. Arrange the dataset structure as:\n- CUFED\n    - train\n        - input\n        - ref\n    - test\n        - CUFED5\n\n## Evaluation\n1. Prepare the CUFED dataset and modify \"dataset_dir\" in eval.sh\n2. Download the pre-trained models and modify \"model_path\" in eval.sh\n3. Run the evaluation\n```\nsh eval.sh\n```\n4. 
The results are in \"save_dir\" (default: `./eval/CUFED/TTSR`)\n\n## Train\n1. Prepare the CUFED dataset and modify \"dataset_dir\" in train.sh\n2. Run the training\n```\nsh train.sh\n```\n3. The training results are in \"save_dir\" (default: `./train/CUFED/TTSR`)\n\n## Related projects\nWe also sincerely recommend some other excellent related works. :sparkles: \n* [FTVSR: Learning Spatiotemporal Frequency-Transformer for Compressed Video Super-Resolution](https://github.com/researchmm/FTVSR)\n* [TTVSR: Learning Trajectory-Aware Transformer for Video Super-Resolution](https://github.com/researchmm/TTVSR)\n* [CKDN: Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment](https://github.com/researchmm/CKDN)\n\n## Citation\n```\n@InProceedings{yang2020learning,\nauthor = {Yang, Fuzhi and Yang, Huan and Fu, Jianlong and Lu, Hongtao and Guo, Baining},\ntitle = {Learning Texture Transformer Network for Image Super-Resolution},\nbooktitle = {CVPR},\nyear = {2020},\nmonth = {June}\n}\n```\n\n## Contact\nIf you encounter any problems, please describe them in an issue or contact:\n* Fuzhi Yang: \u003cyfzcopy0702@sjtu.edu.cn\u003e\n\n","funding_links":[],"categories":["Table of Contents"],"sub_categories":["DETR variants"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fresearchmm%2FTTSR","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fresearchmm%2FTTSR","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fresearchmm%2FTTSR/lists"}