{"id":13638698,"url":"https://github.com/feizc/Visual-LLaMA","last_synced_at":"2025-04-19T18:30:54.481Z","repository":{"id":150040292,"uuid":"622880703","full_name":"feizc/Visual-LLaMA","owner":"feizc","description":"Open LLaMA Eyes to See the World","archived":false,"fork":false,"pushed_at":"2023-04-16T07:20:49.000Z","size":175,"stargazers_count":175,"open_issues_count":2,"forks_count":10,"subscribers_count":6,"default_branch":"main","last_synced_at":"2024-11-09T08:40:11.687Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/feizc.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2023-04-03T08:52:30.000Z","updated_at":"2024-11-07T08:41:46.000Z","dependencies_parsed_at":"2024-01-14T09:14:11.265Z","dependency_job_id":"70abfed7-3aba-48ae-9688-adb26e2c4de1","html_url":"https://github.com/feizc/Visual-LLaMA","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/feizc%2FVisual-LLaMA","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/feizc%2FVisual-LLaMA/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/feizc%2FVisual-LLaMA/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/feizc%2FVisual-LLaMA/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/feizc","download_url":"https://codeload.github.com/feizc/Visual-LLaMA/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249764730,"owners_count":21322290,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-02T01:00:51.899Z","updated_at":"2025-04-19T18:30:54.178Z","avatar_url":"https://github.com/feizc.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n     \u003cimg src=\"figures/logo.png\" alt=\"logo\" width = \"600\"\u003e\n     \u003cbr/\u003e\n\u003c/p\u003e\n\n\n\n## Open LLaMA Eyes to See the World\n\nThis project aims to optimize LLaMA model for visual information understanding like GPT-4 and further explore the potentional of large language model. \n\nGenerally, we use CLIP vision encoder to extract image features, then image features are projected with MLP-based or Transformer-based connection network into text embedding dimensionality. Then, visual representation (including additional special tokens [boi] and [eoi]) is concatenated with text representation to learn in a autoregressive manner. 
- [X] Code adjustment to support multi-modal generation. Download the [CLIP](https://huggingface.co/openai/clip-vit-large-patch14) and [LLaMA](https://huggingface.co/decapoda-research/llama-7b-hf) models from Hugging Face. We have also verified that the scripts are compatible with other LLaMA model sizes. Use the script ```preprocess.py``` to prepare the data.

- [X] Supervised training stage: freeze the LLaMA and CLIP encoder models and only optimize the connection network (a minimal freezing sketch is given at the end of this README). In this stage, we use the COCO, CC-3M, and COYO-700M datasets with the training script ```train.py```.
     We provide the training hyper-parameters used in our experiments on an A100 GPU (80GB). We also evaluate the image captioning performance on the COCO test set.

     | Argument | Values |
     |------|------|
     | `batch size` | 1 * 8 * 8 |
     | `epochs` | 3 |
     | `cut length` | 256 |
     | `learning rate` | 4e-3 |
     | `image sequence length` | 10 |

- [X] Instruction tuning stage: fine-tune the full model with mixed VQA and language-only instruction datasets. We use the LoRA strategy to optimize the entire model with the fine-tuning script ```finetune.py``` (see the illustrative LoRA sketch at the end of this README).

     | Argument | Values |
     |------|------|
     | `batch size` | 1024 |
     | `epochs` | 3 |
     | `cut length` | 256 |
     | `learning rate` | 2e-5 |
     | `image sequence length` | 10 |

- [ ] Open-source the trained checkpoint on Hugging Face and a Gradio interface for multi-modal generation.


## Reference

[1] https://github.com/facebookresearch/llama

[2] https://github.com/tloen/alpaca-lora
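For the supervised stage above, here is a minimal sketch, assuming the Hugging Face `transformers` classes, of how freezing LLaMA and the CLIP encoder while training only the connection network can be wired up. Variable names and the optimizer choice are illustrative, not taken from ```train.py```.

```python
# Illustrative sketch of the supervised-stage freezing setup.
import torch
from transformers import CLIPVisionModel, LlamaForCausalLM

clip_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
llama = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
connector = ConnectionNetwork()  # from the first sketch above

# Freeze the two pretrained backbones.
for param in clip_encoder.parameters():
    param.requires_grad = False
for param in llama.parameters():
    param.requires_grad = False

# Only the connection network is optimized, with the learning
# rate from the supervised-stage table (4e-3).
optimizer = torch.optim.AdamW(connector.parameters(), lr=4e-3)
```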
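For the instruction tuning stage, here is a minimal sketch using the `peft` library's LoRA, in the spirit of alpaca-lora (reference [2]). The rank, alpha, and target modules are illustrative defaults; ```finetune.py``` may differ.

```python
# Illustrative LoRA setup for the instruction tuning stage.
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

llama = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # illustrative rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections, as in alpaca-lora
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(llama, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters require gradients

# Training then proceeds with the instruction-stage learning rate (2e-5).
```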