{"id":16517364,"url":"https://github.com/vietanhdev/samexporter","last_synced_at":"2025-04-04T15:08:11.030Z","repository":{"id":163952615,"uuid":"639311442","full_name":"vietanhdev/samexporter","owner":"vietanhdev","description":"Export Segment Anything Models to ONNX","archived":false,"fork":false,"pushed_at":"2024-08-03T04:35:28.000Z","size":8720,"stargazers_count":326,"open_issues_count":18,"forks_count":39,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-03-28T14:06:45.727Z","etag":null,"topics":["onnx","segment-anything","segment-anything-model"],"latest_commit_sha":null,"homepage":"https://pypi.org/project/samexporter/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vietanhdev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":"vietanhdev"}},"created_at":"2023-05-11T07:59:31.000Z","updated_at":"2025-03-28T08:18:22.000Z","dependencies_parsed_at":"2024-05-09T01:50:34.324Z","dependency_job_id":"4e6b2953-fc46-4f7e-b801-29ce724af923","html_url":"https://github.com/vietanhdev/samexporter","commit_stats":{"total_commits":17,"total_committers":2,"mean_commits":8.5,"dds":0.05882352941176472,"last_synced_commit":"79ebdf2caed6b22d93bae003f766a068d09e5fce"},"previous_names":[],"tags_count":6,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vietanhdev%2Fsamexporter","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vietanhdev%2Fsamexporter/tags","releases_url":"https://repos.ecosyste.ms/ap
i/v1/hosts/GitHub/repositories/vietanhdev%2Fsamexporter/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vietanhdev%2Fsamexporter/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vietanhdev","download_url":"https://codeload.github.com/vietanhdev/samexporter/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247198459,"owners_count":20900080,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["onnx","segment-anything","segment-anything-model"],"created_at":"2024-10-11T16:30:06.452Z","updated_at":"2025-04-04T15:08:11.014Z","avatar_url":"https://github.com/vietanhdev.png","language":"Python","readme":"# SAM Exporter - Now with Segment Anything 2!\n\nExporting [Segment Anything](https://github.com/facebookresearch/segment-anything), [MobileSAM](https://github.com/ChaoningZhang/MobileSAM), and [Segment Anything 2](https://github.com/facebookresearch/segment-anything-2) into ONNX format for easy deployment.\n\n[![PyPI version](https://badge.fury.io/py/samexporter.svg)](https://badge.fury.io/py/samexporter)\n[![Downloads](https://pepy.tech/badge/samexporter)](https://pepy.tech/project/samexporter)\n[![Downloads](https://pepy.tech/badge/samexporter/month)](https://pepy.tech/project/samexporter)\n[![Downloads](https://pepy.tech/badge/samexporter/week)](https://pepy.tech/project/samexporter)\n\n**Supported models:**\n\n- Segment Anything 2 (Tiny, Small, Base, Large) - **Note:** Experimental. 
Only image input is supported for now.\n- Segment Anything (SAM ViT-B, SAM ViT-L, SAM ViT-H)\n- MobileSAM\n\n## Installation\n\nRequirements:\n\n- Python 3.10+\n\nFrom PyPI:\n\n```bash\npip install torch==2.4.0 torchvision --index-url https://download.pytorch.org/whl/cpu\npip install samexporter\n```\n\nFrom source:\n\n```bash\npip install torch==2.4.0 torchvision --index-url https://download.pytorch.org/whl/cpu\ngit clone https://github.com/vietanhdev/samexporter\ncd samexporter\npip install -e .\n```\n\n## Convert Segment Anything, MobileSAM to ONNX\n\n- Download Segment Anything from [https://github.com/facebookresearch/segment-anything](https://github.com/facebookresearch/segment-anything).\n- Download MobileSAM from [https://github.com/ChaoningZhang/MobileSAM](https://github.com/ChaoningZhang/MobileSAM).\n\n```text\noriginal_models\n   + sam_vit_b_01ec64.pth\n   + sam_vit_h_4b8939.pth\n   + sam_vit_l_0b3195.pth\n   + mobile_sam.pt\n   ...\n```\n\n- Convert the SAM-H encoder to ONNX format:\n\n```bash\npython -m samexporter.export_encoder --checkpoint original_models/sam_vit_h_4b8939.pth \\\n    --output output_models/sam_vit_h_4b8939.encoder.onnx \\\n    --model-type vit_h \\\n    --quantize-out output_models/sam_vit_h_4b8939.encoder.quant.onnx \\\n    --use-preprocess\n```\n\n- Convert the SAM-H decoder to ONNX format:\n\n```bash\npython -m samexporter.export_decoder --checkpoint original_models/sam_vit_h_4b8939.pth \\\n    --output output_models/sam_vit_h_4b8939.decoder.onnx \\\n    --model-type vit_h \\\n    --quantize-out output_models/sam_vit_h_4b8939.decoder.quant.onnx \\\n    --return-single-mask\n```\n\nRemove `--return-single-mask` if you want to return multiple masks.\n\n- Inference using the exported ONNX model:\n\n```bash\npython -m samexporter.inference \\\n    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \\\n    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \\\n    --image images/truck.jpg \\\n    --prompt 
images/truck_prompt.json \\\n    --output output_images/truck.png \\\n    --show\n```\n\n![truck](https://raw.githubusercontent.com/vietanhdev/samexporter/main/sample_outputs/truck.png)\n\n```bash\npython -m samexporter.inference \\\n    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \\\n    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \\\n    --image images/plants.png \\\n    --prompt images/plants_prompt1.json \\\n    --output output_images/plants_01.png \\\n    --show\n```\n\n![plants_01](https://raw.githubusercontent.com/vietanhdev/samexporter/main/sample_outputs/plants_01.png)\n\n```bash\npython -m samexporter.inference \\\n    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \\\n    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \\\n    --image images/plants.png \\\n    --prompt images/plants_prompt2.json \\\n    --output output_images/plants_02.png \\\n    --show\n```\n\n![plants_02](https://raw.githubusercontent.com/vietanhdev/samexporter/main/sample_outputs/plants_02.png)\n\n**Shortcut scripts:**\n\n- Convert all Segment Anything models to ONNX format:\n\n```bash\nbash convert_all_meta_sam.sh\n```\n\n- Convert MobileSAM to ONNX format:\n\n```bash\nbash convert_mobile_sam.sh\n```\n\n## Convert Segment Anything 2 to ONNX\n\n- Download Segment Anything 2 from [https://github.com/facebookresearch/segment-anything-2.git](https://github.com/facebookresearch/segment-anything-2.git). 
You can do it by:\n\n```bash\ncd original_models\nbash download_sam2.sh\n```\n\nThe models will be downloaded to the `original_models` folder:\n\n```text\noriginal_models\n    + sam2_hiera_tiny.pt\n    + sam2_hiera_small.pt\n    + sam2_hiera_base_plus.pt\n    + sam2_hiera_large.pt\n   ...\n```\n\n- Install dependencies:\n\n```bash\npip install git+https://github.com/facebookresearch/segment-anything-2.git\n```\n\n- Convert all Segment Anything 2 models to ONNX format:\n\n```bash\nbash convert_all_meta_sam2.sh\n```\n\n- Inference using the exported ONNX model (only image input is supported for now):\n\n```bash\npython -m samexporter.inference \\\n    --encoder_model output_models/sam2_hiera_tiny.encoder.onnx \\\n    --decoder_model output_models/sam2_hiera_tiny.decoder.onnx \\\n    --image images/plants.png \\\n    --prompt images/truck_prompt_2.json \\\n    --output output_images/plants_prompt_2_sam2.png \\\n    --sam_variant sam2 \\\n    --show\n```\n\n![truck_sam2](https://raw.githubusercontent.com/vietanhdev/samexporter/main/sample_outputs/sam2_truck.png)\n\n## Tips\n\n- Use \"quantized\" models for faster inference and smaller model size. However, their accuracy may be lower than that of the original models.\n- SAM-B is the most lightweight model, but it has the lowest accuracy. SAM-H is the most accurate model, but it has the largest model size. SAM-L is a good trade-off between accuracy and model size.\n\n## AnyLabeling\n\nThis package was originally developed for the auto-labeling feature of the [AnyLabeling](https://github.com/vietanhdev/anylabeling) project. 
However, you can use it for other purposes.\n\n[![AnyLabeling](https://user-images.githubusercontent.com/18329471/236625792-07f01838-3f69-48b0-a12e-30bad27bd921.gif)](https://youtu.be/5qVJiYNX5Kk)\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## References\n\n- ONNX-SAM2-Segment-Anything: [https://github.com/ibaiGorordo/ONNX-SAM2-Segment-Anything](https://github.com/ibaiGorordo/ONNX-SAM2-Segment-Anything).\n","funding_links":["https://github.com/sponsors/vietanhdev"],"categories":["🛠️ Production Deployment \u0026 Tools"],"sub_categories":["☁️ Deployment Platforms"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvietanhdev%2Fsamexporter","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvietanhdev%2Fsamexporter","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvietanhdev%2Fsamexporter/lists"}
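The `--use-preprocess` flag folds image preprocessing into the exported encoder graph. As a rough illustration of what SAM-style preprocessing involves, here is a minimal NumPy sketch assuming SAM's published convention: resize so the longest side is 1024, normalize with SAM's pixel mean/std, and zero-pad to a 1024x1024 square. The `preprocess` helper and the nearest-neighbour resize are illustrative only; the exact ops baked into samexporter's graph may differ.

```python
import numpy as np

TARGET = 1024
# SAM's published normalization constants (RGB order).
PIXEL_MEAN = np.array([123.675, 116.28, 103.53], dtype=np.float32)
PIXEL_STD = np.array([58.395, 57.12, 57.375], dtype=np.float32)

def preprocess(image: np.ndarray) -> np.ndarray:
    """HxWx3 uint8 RGB image -> 1x3x1024x1024 float32 tensor."""
    h, w = image.shape[:2]
    scale = TARGET / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index maps (SAM itself uses bilinear).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols].astype(np.float32)
    normalized = (resized - PIXEL_MEAN) / PIXEL_STD
    # Pad the short side with zeros to a square canvas.
    padded = np.zeros((TARGET, TARGET, 3), dtype=np.float32)
    padded[:new_h, :new_w] = normalized
    return padded.transpose(2, 0, 1)[None]  # NCHW

x = preprocess(np.zeros((600, 800, 3), dtype=np.uint8))
print(x.shape)  # (1, 3, 1024, 1024)
```

Without `--use-preprocess`, an equivalent step has to run on the host side before feeding the encoder.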
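When `--return-single-mask` is omitted, the decoder returns several candidate masks together with predicted-IoU scores. The common SAM convention is to keep the highest-scoring candidate and threshold its logits at 0.0 to get a binary mask. A hedged NumPy sketch of that selection step, with illustrative variable names (not samexporter's actual output tensor names):

```python
import numpy as np

def select_mask(masks: np.ndarray, iou_preds: np.ndarray) -> np.ndarray:
    """masks: (N, H, W) logits; iou_preds: (N,) scores -> (H, W) bool mask."""
    best = int(np.argmax(iou_preds))      # pick the highest predicted IoU
    return masks[best] > 0.0              # SAM's mask-logit threshold

# Toy example: the second candidate scores higher and is all-positive.
masks = np.stack([np.full((4, 4), -1.0), np.full((4, 4), 2.0)])
iou_preds = np.array([0.3, 0.9])
mask = select_mask(masks, iou_preds)
print(mask.all())  # True
```

With `--return-single-mask` the exported decoder effectively performs this selection inside the graph and emits one mask.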