Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/deshanadesai/vqa-dataaugmentation
Data Augmentation for Visual Question Answering
question-answering visual
JSON representation
- Host: GitHub
- URL: https://github.com/deshanadesai/vqa-dataaugmentation
- Owner: deshanadesai
- Created: 2018-03-31T21:10:53.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2022-12-08T00:52:07.000Z (about 2 years ago)
- Last Synced: 2024-04-21T17:05:42.343Z (9 months ago)
- Topics: question-answering, visual
- Language: Jupyter Notebook
- Homepage:
- Size: 1.74 MB
- Stars: 7
- Watchers: 3
- Forks: 1
- Open Issues: 13
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
### VQA Data Augmentation
Baseline: Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering
(paper: https://arxiv.org/pdf/1704.03162.pdf)
(code: https://github.com/Cyanogenoid/pytorch-vqa)

Datasets: VQA 2.0 http://www.visualqa.org/download.html
Evaluation: https://github.com/GT-Vision-Lab/VQA
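The augmentation methods listed below operate on the raw VQA 2.0 question strings. A minimal sketch of loading them (the file name and JSON keys follow the standard VQA 2.0 release; the path is a placeholder, not from this repo):

```python
import json

# Placeholder path; the questions file comes from http://www.visualqa.org/download.html
QUESTIONS_PATH = "data/v2_OpenEnded_mscoco_train2014_questions.json"

with open(QUESTIONS_PATH) as f:
    questions = json.load(f)["questions"]

# Each entry carries the fields the augmentation methods work with:
# "question_id", "image_id", and the raw "question" string.
print(len(questions))
print(questions[0]["question_id"], questions[0]["question"])
```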
Experiments To Do:
To run today:
* Add converse substitution to Language Only augmentation
* Multiple word substitutions (see the word-substitution sketch after this list).
* Do some paraphrasing for known question types.
* "How many" / "Color of" question substitution with hypernym doubt.
* Apply language augmentation to the other methods.
* Change all augmentation methods to fit the same vocab.
* Filter ConceptNet results based on question repetition.
* Add all working methods together for data augmentation.
* Make a custom test set from Places365? Places365 has adjectives as well as scene understanding.
* Add augmentation on image based on wrong answer or image type?
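Most of the language-only items above reduce to swapping content words in a question for related terms. A minimal sketch of that idea, using NLTK's WordNet as a stand-in for the ConceptNet lookups mentioned in the list (the function name and single-substitution default are illustrative, not from this repo):

```python
import random

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # one-time fetch of the WordNet data


def synonym_substitute(question, max_subs=1, rng=None):
    """Replace up to `max_subs` words in `question` with a WordNet synonym.

    A rough stand-in for the ConceptNet-based substitutions in the to-do
    list; names and defaults here are illustrative only.
    """
    rng = rng or random.Random(0)
    tokens = question.rstrip("?").split()
    candidates = []
    for i, tok in enumerate(tokens):
        synonyms = {
            lemma.name().replace("_", " ")
            for syn in wn.synsets(tok.lower())
            for lemma in syn.lemmas()
            if lemma.name().lower() != tok.lower()
        }
        if synonyms:
            candidates.append((i, sorted(synonyms)))
    rng.shuffle(candidates)
    for i, syns in candidates[:max_subs]:
        tokens[i] = rng.choice(syns)
    return " ".join(tokens) + "?"


print(synonym_substitute("What color is the large dog?"))
```

Presumably each augmented question would keep its original `image_id` and answer annotation so it can simply be appended to the training questions loaded above.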