Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/bcmi/Composite-Image-Evaluation
- Host: GitHub
- URL: https://github.com/bcmi/Composite-Image-Evaluation
- Owner: bcmi
- Created: 2023-08-15T13:08:57.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-02-14T14:09:01.000Z (11 months ago)
- Last Synced: 2024-08-03T17:10:06.628Z (5 months ago)
- Topics: image-composition
- Homepage:
- Size: 5.86 KB
- Stars: 19
- Watchers: 4
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- Awesome-Generative-Image-Composition - Composite-Image-Evaluation
README
# Composite-Image-Evaluation
Here are some possible metrics to evaluate the quality of composite images from different aspects.
+ Evaluate whether the foreground is harmonious with the background.
  + Harmony score: use the [illumination encoder](https://github.com/bcmi/BargainNet-Image-Harmonization) to extract illumination codes from the foreground and background, and measure their similarity.
  + Inharmony hit: use the [inharmonious region localization model](https://github.com/bcmi/MadisNet-Inharmonious-Region-Localization) to detect the inharmonious region, and calculate the overlap (e.g., IoU) between the detected region and the foreground region.
+ Evaluate whether the foreground object placement is reasonable.
  + OPA score: use the [object placement assessment model](https://github.com/bcmi/Object-Placement-Assessment-Dataset-OPA) to predict the plausibility of the object placement.
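The inharmony hit above compares the detected inharmonious region against the foreground region via IoU. A minimal sketch of that overlap computation, assuming both regions are already available as boolean masks (the mask names and the hit threshold are illustrative, not from the linked repo):

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def inharmony_hit(detected: np.ndarray, foreground: np.ndarray,
                  threshold: float = 0.5) -> bool:
    """Count a 'hit' when the detected inharmonious region overlaps
    the composite foreground region beyond a chosen threshold."""
    return mask_iou(detected, foreground) >= threshold
```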
+ Evaluate whether the foreground is compatible with the background in terms of geometry and semantics.
  + FOS score: use the [foreground object search model](https://github.com/bcmi/Foreground-Object-Search-Dataset-FOSD) to calculate the compatibility score between the foreground and background in terms of geometry and semantics.
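Several metrics in this list (harmony score, CLIP score, DINO score) reduce to a similarity between two embedding vectors, for which cosine similarity is the usual choice. A minimal NumPy sketch, assuming the embeddings have already been extracted by the relevant encoder:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors in [-1, 1]."""
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For a whole test set, the per-image similarities are typically averaged into a single score.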
+ Evaluate the fidelity of the foreground, i.e., whether the synthesized foreground is similar to the input foreground.
  + CLIP score: use [CLIP](https://github.com/openai/CLIP) to extract embeddings from the input foreground image and the generated foreground patch, and measure their similarity.
  + DINO score: use [DINO](https://github.com/facebookresearch/dino) to measure the average cosine similarity between the input and generated foregrounds.
+ Evaluate the overall quality of the foreground or the whole composite image.
  + FID: use a pretrained image encoder (*e.g.*, InceptionNet, CLIP) to extract embeddings from real and generated images, and measure their [Fréchet Inception Distance](https://github.com/mseitzer/pytorch-fid).
  + QS: use the [quality score](https://github.com/cientgu/GIQA) to measure the quality of each generated image, and compute the average score.
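FID models the real and generated embedding sets as Gaussians and measures the Fréchet distance between them. A sketch of that final step, assuming the means and covariances have already been computed from encoder embeddings (it mirrors the Gaussian-statistics formula used by the linked pytorch-fid implementation, and needs SciPy for the matrix square root):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1: np.ndarray, sigma1: np.ndarray,
                     mu2: np.ndarray, sigma2: np.ndarray) -> float:
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

For identical distributions the distance is zero; shifting the mean of one unit-covariance Gaussian by a vector `d` yields exactly `||d||^2`.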