https://github.com/wuer5/OMGSR
- Host: GitHub
- URL: https://github.com/wuer5/OMGSR
- Owner: wuer5
- Created: 2025-08-05T10:27:20.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-08-12T03:12:31.000Z (about 2 months ago)
- Last Synced: 2025-08-12T04:22:35.223Z (about 2 months ago)
- Size: 15.6 KB
- Stars: 2
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# OMGSR: You Only Need One Mid-timestep Guidance for Real-World Image Super-Resolution
Zhiqiang Wu<sup>1,2*</sup> | Zhaomang Sun<sup>2</sup> | Tong Zhou<sup>2</sup> | Bingtao Fu<sup>2</sup> | Ji Cong<sup>2</sup> | Yitong Dong<sup>2</sup> | Huaqi Zhang<sup>2</sup> | Xuan Tang<sup>1</sup> | Mingsong Chen<sup>1</sup> | Xian Wei<sup>1†</sup>

<sup>1</sup>Software Engineering Institute, East China Normal University | <sup>2</sup>vivo Mobile Communication Co. Ltd, Hangzhou, China

<sup>*</sup>Work done during internship at vivo | <sup>†</sup>Corresponding author

## :boom: News
- **2025.8.12**: The arXiv paper is released.
- **2025.8.6**: This repo is released.

## :runner: TODO
- [ ] Release the code in a week
- [ ] Release the weights in two weeks
- [ ] Develop OMGSR-Q (Qwen-Image)...

## Framework

## Environment
```
# clone this repository
git clone https://github.com/wuer5/OMGSR.git
cd OMGSR
# create an environment
conda create -n OMGSR python=3.10
conda activate OMGSR
pip install --upgrade pip
pip install -r requirements.txt
```

## Quick Start
1. Download the pre-trained models from Hugging Face:
   - Download SD-Turbo for OMGSR-S.
   - Download FLUX.1-dev for OMGSR-F.
2. Download the OMGSR LoRA adapter weights:
   - Download the OMGSR-S-LoRA-adapter to the folder ```adapters```.
   - Download the OMGSR-F-LoRA-adapter to the folder ```adapters```.
3. Prepare your testing data:
   Put your testing images (```.png```, ```.jpg```, ```.jpeg``` formats) into the folder ```tests```.
4. Start to infer (an illustrative sketch of this flow follows the list):
   For OMGSR-S:
   ```
   # OMGSR-S inference command (to be released; see TODO)
   ```
   For OMGSR-F:
   ```
   # OMGSR-F inference command (to be released; see TODO)
   ```
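The inference scripts are not released yet (see the TODO above). As a rough end-to-end sketch of the OMGSR-S quick-start flow, the snippet below downloads SD-Turbo from Hugging Face, attaches a LoRA adapter from the ```adapters``` folder, and runs a single image-to-image pass on a test image. The adapter folder name, the test file name, and the sampler settings are assumptions; the official script may differ.
```
# Illustrative sketch only (NOT the official OMGSR inference script).
# Assumed names: adapters/OMGSR-S-LoRA-adapter and tests/example.png.
import torch
from PIL import Image
from huggingface_hub import snapshot_download
from diffusers import AutoPipelineForImage2Image

# 1. Download the SD-Turbo base model from Hugging Face.
base_dir = snapshot_download("stabilityai/sd-turbo")

# 2. Point to the LoRA adapter placed under ./adapters (hypothetical folder name).
lora_dir = "adapters/OMGSR-S-LoRA-adapter"

# 3. Build an image-to-image pipeline and attach the LoRA weights.
pipe = AutoPipelineForImage2Image.from_pretrained(
    base_dir, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(lora_dir)

# 4. Run one pass on a test image from ./tests.
lr = Image.open("tests/example.png").convert("RGB").resize((512, 512))
sr = pipe(prompt="", image=lr, strength=0.5, num_inference_steps=2).images[0]
sr.save("tests/example_sr.png")
```
The OMGSR-F path would be analogous, using ```black-forest-labs/FLUX.1-dev``` and the corresponding FLUX image-to-image pipeline.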
## Training
1. Prepare your training datasets
Download the training datasets ```LSDIR``` and ```FFHQ``` (first 10k images) following the settings in our paper, or use your own custom datasets.
Then generate the path list of the training datasets in ```training_dataset.txt``` by running:
```
# All LSDIR images
python gen_txt_path.py --path [YOUR LSDIR DATASET PATH] --nums_sample all
# The first 10k FFHQ images
python gen_txt_path.py --path [YOUR FFHQ DATASET PATH] --nums_sample 10000
```
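```gen_txt_path.py``` will ship with the code release; the sketch below only illustrates its assumed behavior: walk the dataset folder, keep the first ```--nums_sample``` image paths (or all of them), and append one path per line to ```training_dataset.txt```.
```
# Sketch of the assumed behavior of gen_txt_path.py (not the released script).
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--path", required=True, help="dataset root folder")
parser.add_argument("--nums_sample", default="all", help="'all' or an integer")
parser.add_argument("--out", default="training_dataset.txt")
args = parser.parse_args()

# Collect image paths recursively in a deterministic order.
exts = {".png", ".jpg", ".jpeg"}
paths = sorted(p for p in Path(args.path).rglob("*") if p.suffix.lower() in exts)
if args.nums_sample != "all":
    paths = paths[: int(args.nums_sample)]

# Append one absolute path per line so LSDIR and FFHQ can share the same file.
with open(args.out, "a") as f:
    f.writelines(str(p.resolve()) + "\n" for p in paths)
print(f"wrote {len(paths)} paths to {args.out}")
```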
You will then get a file like:
```
xxx.png
xxx.png
...
```

Start to train OMGSR-S at 512-resolution:
```
# OMGSR-S 512-resolution training command (to be released; see TODO)
```
Start to train OMGSR-F at 512-resolution:
```
# OMGSR-F 512-resolution training command (to be released; see TODO)
```
Start to train OMGSR-F at 1k-resolution:
```
# OMGSR-F 1k-resolution training command (to be released; see TODO)
```
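Until the training code is released, here is a minimal sketch of a dataset that consumes ```training_dataset.txt``` (one HQ image path per line) at the 512-resolution setting. The LQ degradation pipeline and all other training details are omitted because they are defined by the paper and the upcoming code.
```
# Minimal sketch of a dataset reading training_dataset.txt (one HQ path per line).
# The degradation pipeline that produces LQ inputs is intentionally omitted.
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PathListDataset(Dataset):
    def __init__(self, txt_path="training_dataset.txt", size=512):
        with open(txt_path) as f:
            self.paths = [line.strip() for line in f if line.strip()]
        self.transform = transforms.Compose([
            transforms.RandomCrop(size, pad_if_needed=True),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        hq = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(hq)  # HQ crop only; LQ generation is not sketched here
```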
