{"id":19143820,"url":"https://github.com/tom-doerr/tecogan-docker","last_synced_at":"2025-05-07T01:12:14.267Z","repository":{"id":106088287,"uuid":"271435005","full_name":"tom-doerr/TecoGAN-Docker","owner":"tom-doerr","description":"This is a fork of the TecoGAN project (https://github.com/thunil/TecoGAN) that adds support for docker.","archived":false,"fork":false,"pushed_at":"2021-03-04T21:01:44.000Z","size":20368,"stargazers_count":105,"open_issues_count":7,"forks_count":21,"subscribers_count":6,"default_branch":"master","last_synced_at":"2025-05-05T04:53:01.340Z","etag":null,"topics":["docker-container","tecogan-docker","tecogan-model","video"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tom-doerr.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-06-11T02:41:56.000Z","updated_at":"2024-11-06T11:27:48.000Z","dependencies_parsed_at":null,"dependency_job_id":"03f439da-c096-40c2-808a-547d95c89fc5","html_url":"https://github.com/tom-doerr/TecoGAN-Docker","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tom-doerr%2FTecoGAN-Docker","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tom-doerr%2FTecoGAN-Docker/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tom-doerr%2FTecoGAN-Docker/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tom-doerr%2FTecoGAN-D
ocker/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tom-doerr","download_url":"https://codeload.github.com/tom-doerr/TecoGAN-Docker/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252793653,"owners_count":21805058,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["docker-container","tecogan-docker","tecogan-model","video"],"created_at":"2024-11-09T07:32:56.845Z","updated_at":"2025-05-07T01:12:14.239Z","avatar_url":"https://github.com/tom-doerr.png","language":"Python","readme":"# TecoGAN Docker\nThis repository contains source code and materials for the TecoGAN project, i.e. code for a TEmporally COherent GAN for video super-resolution.\n_Authors: Mengyu Chu, You Xie, Laura Leal-Taixe, Nils Thuerey. Technical University of Munich._\n\nThis repository so far contains the code for the TecoGAN _inference_ \nand _training_. Data generation, i.e., download, will follow soon.\nPre-trained models are also available; you can find download links and instructions below.\nThe video and pre-print of our paper can be found here:\n\nVideo: \u003chttps://www.youtube.com/watch?v=pZXFXtfd-Ak\u003e\nPreprint: \u003chttps://arxiv.org/pdf/1811.09393.pdf\u003e\n\n![TecoGAN teaser image](resources/teaser.jpg)\n\n### Additional Generated Outputs\n\nOur method generates fine details that \npersist over the course of long generated video sequences. 
E.g., the mesh structures of the armor,\nthe scale patterns of the lizard, and the dots on the back of the spider highlight the capabilities of our method.\nOur spatio-temporal discriminator plays a key role in guiding the generator network towards producing coherent detail.\n\n\u003cimg src=\"resources/tecoGAN-lizard.gif\" alt=\"Lizard\" width=\"900\"/\u003e\u003cbr\u003e\n\n\u003cimg src=\"resources/tecoGAN-armour.gif\" alt=\"Armor\" width=\"900\"/\u003e\u003cbr\u003e\n\n\u003cimg src=\"resources/tecoGAN-spider.gif\" alt=\"Spider\" width=\"600\" hspace=\"150\"/\u003e\u003cbr\u003e\n\n### Running the TecoGAN Model\n\nBelow you can find a quick start guide for running a trained TecoGAN model.\nFor further explanations of the parameters, take a look at the runGan.py file.  \nNote: evaluation (test case 2) currently requires an Nvidia GPU with `CUDA` and Linux. \n\n#### 1. Install docker\nOn Ubuntu/Debian/Linux-Mint etc.:\n```bash\nsudo apt-get install docker.io\nsudo systemctl enable --now docker\n```\nInstructions for other platforms:\nhttps://docs.docker.com/install/\n\n\n#### 2. Install the NVIDIA Container Toolkit\nThis step will only work on Linux and is only necessary if you want GPU support.\nAs far as I know, it's not possible to use the GPU with docker under Windows/macOS.\n\nOn Ubuntu/Debian/Linux-Mint etc.:\n```sh\n# Add the package repositories\ndistribution=$(. /etc/os-release;echo $ID$VERSION_ID)\ncurl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -\ncurl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list\n\nsudo apt-get update \u0026\u0026 sudo apt-get install -y nvidia-container-toolkit\nsudo systemctl restart docker\n```\n\nInstructions for other platforms:\nhttps://github.com/NVIDIA/nvidia-docker\n\n#### 3. Build the docker images\n```bash\nfor e in docker/*\ndo\n        dockerfile_name=$(basename $e)\n        docker build --file $e . 
-t \"$dockerfile_name\"_image \ndone\n```\n\n#### 4. Start the docker container we just built\nYou only need to start either the CPU or the GPU docker container.\n\nCPU version:\n```bash\ndocker run -it --mount src=$(pwd),target=/TecoGAN,type=bind -w /TecoGAN tecogan_cpu_image bash\n```\nGPU version:\n```bash\ndocker run --gpus all -it --mount src=$(pwd),target=/TecoGAN,type=bind -w /TecoGAN tecogan_gpu_image bash\n```\n\n#### 5. Run the model\nRun the following inside the docker container.\n```bash\n# Download our TecoGAN model, the _Vid4_ and _TOS_ scenes shown in our paper and video.\npython3 runGan.py 0\n\n# Run the inference mode on the calendar scene.\n# You can find the parameter explanations in runGan.py; feel free to try other scenes!\npython3 runGan.py 1 \n\n# Evaluate the results with four metrics: PSNR, LPIPS[1], and our temporal metrics tOF and tLP (computed with PyTorch).\n# Take a look at the paper for more details! \npython3 runGan.py 2\n\n```\n\n### Train the TecoGAN Model\n\n#### 1. Prepare the Training Data\n\nThe training and validation datasets can be downloaded into a chosen directory `TrainingDataPath` with the following commands.  Note: online video downloading requires youtube-dl.  
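The youtube-dl requirement above is easy to overlook; the snippet below is a hypothetical pre-flight check (not part of the project scripts) you can run inside the container before starting the download:\n\n```bash\n# Hypothetical sanity check: verify youtube-dl is on PATH before running dataPrepare.py\nif command -v youtube-dl >/dev/null 2>&1; then\n    echo youtube-dl found\nelse\n    echo youtube-dl missing: install it first, e.g. pip3 install youtube-dl\nfi\n```\n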
\n\n```bash\n# take a look at the parameters first:\npython3 dataPrepare.py --help\n\n# To be on the safe side, if you just want to see what will happen, the following line won't download anything;\n# it will only save information into a log file.\n# TrainingDataPath is still important; it is the directory where logs are saved: TrainingDataPath/log/logfile_mmddHHMM.txt\npython3 dataPrepare.py --start_id 2000 --duration 120 --disk_path TrainingDataPath --TEST\n\n# This will create 308 subfolders under TrainingDataPath, each with 120 frames, from 28 online videos.\n# It takes a long time.\npython3 dataPrepare.py --start_id 2000 --duration 120 --REMOVE --disk_path TrainingDataPath\n\n\n```\n\nOnce ready, please update the parameter TrainingDataPath in runGan.py (for case 3 and case 4), and then you can start training with the downloaded data! \n\nNote: most of the data (272 out of 308 sequences) are the same as the ones we used for the published models, but some (36 out of 308) are not online anymore. Hence the script downloads suitable replacements.\n\n\n#### 2. Train the Model  \nThis section gives the commands to train a new TecoGAN model. Details and additional parameters can be found in the runGan.py file. Note: the tensorboard gif summary requires ffmpeg.\n\n```bash\n# Train the TecoGAN model, based on our FRVSR model\n# Please check and update the following parameters: \n# - VGGPath: it uses ./model/ by default. The VGG model is ca. 500MB\n# - TrainingDataPath (see above)\n# - in main.py you can also adjust the output directory of the testWhileTrain() function if you like (it will write into a train/ subdirectory by default)\npython3 runGan.py 3\n\n# Train without Dst (i.e. 
an FRVSR model)\npython3 runGan.py 4\n```\n\nRun the following outside of the docker container (you need to replace the logdir path):\n```bash\n# View log via tensorboard\ntensorboard --logdir='ex_TecoGANmm-dd-hh/log'\n\n```\n\n### Tensorboard GIF Summary Example\n\u003cimg src=\"resources/gif_summary_example.gif\" alt=\"gif_summary_example\" width=\"600\" hspace=\"150\"/\u003e\u003cbr\u003e\n\n### Acknowledgements\nThis work was funded by the ERC Starting Grant realFlow (ERC StG-2015-637014).  \nPart of the code is based on LPIPS[1], Photo-Realistic SISR[2] and gif_summary[3].\n\n### Reference\n[1] [The Unreasonable Effectiveness of Deep Features as a Perceptual Metric (LPIPS)](https://github.com/richzhang/PerceptualSimilarity)  \n[2] [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https://github.com/brade31919/SRGAN-tensorflow.git)  \n[3] [gif_summary](https://colab.research.google.com/drive/1vgD2HML7Cea_z5c3kPBcsHUIxaEVDiIc)\n\nTUM I15 \u003chttps://ge.in.tum.de/\u003e , TUM \u003chttps://www.tum.de/\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftom-doerr%2Ftecogan-docker","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftom-doerr%2Ftecogan-docker","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftom-doerr%2Ftecogan-docker/lists"}