{"id":28999842,"url":"https://github.com/meyerls/fruitnerfpp","last_synced_at":"2026-02-07T00:04:23.987Z","repository":{"id":295690502,"uuid":"857733665","full_name":"meyerls/FruitNeRFpp","owner":"meyerls","description":null,"archived":false,"fork":false,"pushed_at":"2025-06-13T07:41:43.000Z","size":6,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-07-22T09:54:04.300Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/meyerls.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-09-15T13:21:37.000Z","updated_at":"2025-06-13T07:41:46.000Z","dependencies_parsed_at":"2025-07-22T09:35:08.392Z","dependency_job_id":"5f91ded8-cdfb-4476-a723-8a8c103bb8ef","html_url":"https://github.com/meyerls/FruitNeRFpp","commit_stats":null,"previous_names":["meyerls/fruitnerfpp"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/meyerls/FruitNeRFpp","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRFpp","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRFpp/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRFpp/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRFpp/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/meyerls","download_url":"https://codeload.g
ithub.com/meyerls/FruitNeRFpp/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRFpp/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29181265,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-06T23:15:33.022Z","status":"ssl_error","status_checked_at":"2026-02-06T23:15:09.128Z","response_time":59,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-06-25T08:08:58.972Z","updated_at":"2026-02-07T00:04:23.981Z","avatar_url":"https://github.com/meyerls.png","language":null,"readme":"\u003ch1 style=\"text-align: center;\"\u003e:apple: :pear: FruitNeRF++: A Generalized Multi-Fruit Counting Method Utilizing Contrastive Learning and Neural Radiance Fields :peach: :lemon:\u003c/h1\u003e\n\nLukas Meyer, Andrei-Timotei Ardelean, Tim Weyrich, Marc Stamminger,\n\u003cbr\u003e\n\n\u003cp align=\"center\"\u003e\n\u003ca href=\"https://meyerls.github.io/fruit_nerfpp\"\u003e🌐[Project Page]\u003c/a\u003e\n\u003ca href=\"https://arxiv.org/abs/2505.19863\"\u003e📄[Paper]\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp style=\"align:justify\"\u003e\u003cb\u003eAbstract\u003c/b\u003e: We introduce FruitNeRF++, a novel fruit-counting approach that combines contrastive learning with neural radiance fields to count fruits from unstructured input photographs of orchards. 
Our work is based on FruitNeRF, which employs a neural semantic field combined with a fruit-specific clustering approach. The requirement to adapt the method for each fruit type limits its applicability and makes it difficult to use in practice. To lift this limitation, we design a shape-agnostic multi-fruit counting framework that complements the RGB and semantic data with instance masks predicted by a vision foundation model. The masks are used to encode the identity of each fruit as instance embeddings into a neural instance field. By volumetrically sampling the neural fields, we extract a point cloud embedded with the instance features, which can be clustered in a fruit-agnostic manner to obtain the fruit count. We evaluate our approach using a synthetic dataset containing apples, plums, lemons, pears, peaches, and mangoes, as well as a real-world benchmark apple dataset. Our results demonstrate that FruitNeRF++ is easier to control and compares favorably to other state-of-the-art methods. \u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"images/cf_nerf_pipeline.png\"/\u003e\n\u003c/p\u003e\n\n\n# News\n\n* Soon the [Dataset](https://zenodo.org/records/10869455) will be released.\n* 14.12: Code release :rocket:\n* 26.05.25: Released [Paper](https://arxiv.org/abs/2505.19863) on arXiv\n* 15.09.24: [Project Page](https://meyerls.github.io/fruit_nerfpp) released\n\n# Installation\n\n### Install Nerfstudio\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for guide\u003c/summary\u003e\n\n#### 0. 
Install Nerfstudio dependencies\n\n[Follow these instructions](https://docs.nerf.studio/quickstart/installation.html) up to and including \"tinycudann\" to install dependencies and create an environment.\n\n**Important**: In Section *Install nerfstudio*, please install version **1.1.5** via `pip install nerfstudio==1.1.5`, NOT\nthe latest one!\n\nInstall additional dependencies:\n```bash\npip install --upgrade pip setuptools wheel\npip install nerfstudio==1.1.5 # Important!!!\npip install pyntcloud==0.3.1\npip install hdbscan\npip install numba\npip install hausdorff\nconda install docutils\n```\n\n#### 1. Clone this repo\n\n`git clone https://github.com/meyerls/FruitNeRFpp.git`\n\n#### 2. Install this repo as a python package\n\nNavigate to this folder and run `python -m pip install -e .`\n\n#### 3. Run `ns-install-cli`\n\n#### Checking the install\n\nRun `ns-train -h`: you should see a list of subcommands with cf-nerf included among them.\n\u003c/details\u003e\n\n### Install Grounding-SAM\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for guide\u003c/summary\u003e\n\nPlease install Grounding-SAM into the cf_nerf/segmentation folder. More details can be found\nin [install segment anything](https://github.com/facebookresearch/segment-anything#installation)\nand [install GroundingDINO](https://github.com/IDEA-Research/GroundingDINO#install). 
A copied variant is listed below.\n\n```bash\n# Start from the FruitNeRF++ root folder.\ncd cf_nerf/segmentation\n\n# Clone the Grounded-Segment-Anything repository and rename the folder\ngit clone https://github.com/IDEA-Research/Grounded-Segment-Anything.git groundedSAM\ncd groundedSAM\n\n# Checkout version compatible with FruitNeRFpp\ngit checkout fe24\n```\n\nIf you want to build a local GPU environment for Grounded-SAM, set the environment variables manually as follows:\n\n```bash\nexport AM_I_DOCKER=False\nexport BUILD_WITH_CUDA=True\nexport CUDA_HOME=/path/to/cuda-11.3/\n```\n\nInstall Segment Anything:\n\n```bash\npython -m pip install -e segment_anything\n```\n\nInstall Grounding DINO:\n\n```bash\npip install --no-build-isolation -e GroundingDINO\n```\n\nInstall diffusers and misc:\n\n```bash\npip install --upgrade diffusers[torch]\n\npip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel\n```\n\nDownload pretrained weights:\n\n```bash\n# Download into the groundedSAM folder\nwget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth\nwget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth\n```\n\nInstall SAM-HQ:\n\n```bash\npip install segment-anything-hq\n```\n\nDownload the SAM-HQ checkpoint from [here](https://github.com/SysCV/sam-hq#model-checkpoints) (we recommend ViT-H HQ-SAM)\ninto the Grounded-Segment-Anything folder.\n\n**Done!**\n\n\u003c/details\u003e\n\n### Install Detic\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for guide\u003c/summary\u003e\n\nPlease install Detic into the cf_nerf/segmentation folder. More details can be found\nin [install DETIC](https://github.com/facebookresearch/Detic/blob/main/docs/INSTALL.md). 
A copied variant is listed\nbelow:\n\n```bash\ncd cf_nerf/segmentation\n\ngit clone https://github.com/facebookresearch/detectron2.git\ncd detectron2\npip install -e .\n```\n\n```bash\n# Go back to the cf_nerf/segmentation folder\ncd ..\n\n# Clone the Detic repository with its submodules\ngit clone https://github.com/facebookresearch/Detic.git --recurse-submodules\ncd Detic\npip install -r requirements.txt\n```\n\n\u003c/details\u003e\n\n### Troubleshooting\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for guide\u003c/summary\u003e\n\nNo module `cog`:\n\n```bash\npip install cog\n```\n\nNo module `fvcore`:\n\n```bash\nconda install -c fvcore -c iopath -c conda-forge fvcore\n```\n\nError: name '_C' is not defined, UserWarning: Failed to load custom C++ ops. Running on CPU mode only!\nSee this [GitHub Issue](https://github.com/IDEA-Research/Grounded-Segment-Anything/issues/436).\n\n\u003c/details\u003e\n\n# 🍎 Using FruitNeRF++\n\n\u003e **Note**  \n\u003e The original working title of this project was **Contrastive-FruitNeRF (CF-NeRF)**.  \n\u003e Throughout the codebase, the project is referred to **exclusively as `cf-nerf`**.\n\nOnce FruitNeRF++ is installed, you are ready to start counting fruits 🚀  \nYou can train and evaluate the model using:\n\n- **Your own dataset**\n- Our **real or synthetic FruitNeRF Dataset**  \n  👉 https://zenodo.org/records/10869455\n- The **Fuji Dataset**  \n  👉 https://zenodo.org/records/3712808\n\nIf you use **our FruitNeRF dataset**, you can **skip the data preparation step** and proceed directly to **Training**.\n\n---\n\n## 🗂️ Preparing Your Data\n\nYour input data should consist of:\n\n- An **image directory**\n- A corresponding **`transforms.json`** file (NeRF camera poses)\n\nIf you do **not** already have a `transforms.json`, you can estimate camera poses using **COLMAP**.  
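\nIf you already have camera poses, `transforms.json` follows the standard Nerfstudio/instant-ngp camera format. A minimal sketch is shown below (the field names follow that convention; the values are illustrative and not taken from this repository):\n\n```json\n{\n  \"fl_x\": 1400.0,\n  \"fl_y\": 1400.0,\n  \"cx\": 960.0,\n  \"cy\": 540.0,\n  \"w\": 1920,\n  \"h\": 1080,\n  \"frames\": [\n    {\n      \"file_path\": \"images/frame_00001.png\",\n      \"transform_matrix\": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 2], [0, 0, 0, 1]]\n    }\n  ]\n}\n```\nEach `transform_matrix` is a 4x4 camera-to-world pose; one entry is required per image.\n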
\nTo enable automatic pose estimation, run the pipeline with the flag:\n\n```bash\n--use-colmap\n```\n\n```bash\n# Define your input parameters\nINPUT_PATH=\"path/to/processed/folder\" # Folder must contain an *images* folder! Image files must be [\".jpg\", \".jpeg\", \".png\", \".tif\", \".tiff\"]\nDATA_PATH=\"path/to/output/folder\"\nSEMANTIC_CLASS='apple' # a string or a list of strings\n\n# Run processor\nns-process-fruit-data cf-nerf-dataset --data $INPUT_PATH --output-dir $DATA_PATH --num_downscales 2 --instance_model SAM --segmentation_class $SEMANTIC_CLASS --text_threshold 0.35 --box_threshold 0.35 --nms_threshold 0.2\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for more options\u003c/summary\u003e\n\n```bash\nusage: ns-process-fruit-data cf-nerf-dataset [-h] [CF-NERF-DATASET OPTIONS]\n\n╭─ Some options ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│ -h, --help              show this help message and exit                                                                                                                     │\n│ --data PATH             Path to the data, either a video file or a directory of images. (required)                                                                          │\n│ --output-dir PATH       Path to the output directory. (required)                                                                                                            │\n│ --verbose, --no-verbose If True, print extra logging. (default: False)                                                                                                      │\n│ --num-downscales INT    Number of times to downscale the images. Downscales by 2 each time. 
For example a value of 3 will downscale the                                     │\n│                         images by 2x, 4x, and 8x. (default: 1)                                                                                                              │\n│ --crop-factor FLOAT FLOAT FLOAT FLOAT                                                                                                                                       │\n│                         Portion of the image to crop. All values should be in [0,1]. (top, bottom, left, right) (default: 0.0 0.0 0.0 0.0)                                  │\n│ --same-dimensions, --no-same-dimensions                                                                                                                                     │\n│                         Whether to assume all images are same dimensions and so to use fast downscaling with no autorotation. (default: True)                               │\n│ --compute-instance-mask, --no-compute-instance-mask                                                                                                                         │\n│                         Compute instance mask. (default: True)                                                                                                              │\n│ --instance-model {SAM,DETIC,sam,detic}                                                                                                                                      │\n│                         Which model to use. SAM or DETIC. 
(default: sam)                                                                                                    │\n│ --segmentation-class {None}|STR|{[STR [STR ...]]}                                                                                                                           │\n│                         Segmentation class prompt(s) for DINO/SAM (default: fruit apple pomegranate peach)                                                                  │\n│ --text-threshold FLOAT  Text threshold for DINO/SAM (default: 0.25)                                                                                                         │\n│ --box-threshold FLOAT   Box threshold for DINO/SAM (default: 0.3)                                                                                                           │\n│ --nms-threshold FLOAT   NMS threshold for fusing boxes (default: 0.3)                                                                                                       │\n│ --semantics-gt {None}|STR (default: None)                                                                                                                                   │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n```\n\u003c/details\u003e\n\nThe dataset should look like this:\n```bash\napple_dataset\n├── images\n│   ├── frame_00001.png\n│   ├── ...\n│   └── frame_00XXX.png\n├── images_2\n│   ├── frame_00001.png\n│   ├── ...\n│   └── frame_00XXX.png\n├── semantics\n│   ├── frame_00001.png\n│   ├── ...\n│   └── frame_00XXX.png\n├── semantics_2\n│   ├── frame_00001.png\n│   ├── ...\n│   └── frame_00XXX.png\n└── transforms.json\n```\n\n## 🚀 Training\n\nTo start training, use a dataset that follows the structure described in the previous section.  
\nNote that **cf-nerf** is available in two model sizes with different GPU memory requirements.\n\n```bash\nRESULT_PATH=\"./results\"\nns-train cf-nerf-small \\\n  --data $DATA_PATH \\\n  --output-dir $RESULT_PATH \\\n  --viewer.camera-frustum-scale 0.2 \\\n  --pipeline.model.temperature 0.1\n```\n\n**Model variants:**\n- `cf-nerf-small` → ~8 GB VRAM  \n- `cf-nerf` → ~12 GB VRAM  \n\n---\n\n## 📦 Export Point Cloud\n\nAdjust the parameters below according to your GPU and desired point cloud density:\n\n- `--num_rays_per_batch`: depends on GPU VRAM  \n- `--num_points_per_side`: controls point cloud density  \n- `--bounding-box-min / --bounding-box-max`: adapt to your scene geometry  \n\n```bash\nCONFIG_PATH=\"./results/[MODEL/RUN_FOLDER]/config.yml\"\nPCD_OUTPUT_PATH=\"./results/[MODEL/RUN_FOLDER]\"\n\nns-export-semantics instance-pointcloud \\\n  --load-config $CONFIG_PATH \\\n  --output-dir $PCD_OUTPUT_PATH \\\n  --use-bounding-box True \\\n  --bounding-box-min -1 -1 -1 \\\n  --bounding-box-max  1  1  1 \\\n  --num_rays_per_batch 2000 \\\n  --num_points_per_side 1000\n```\n\n---\n\n## 🔢 Count Fruits\n\nTo count fruits, the extracted point cloud—containing **Euclidean coordinates** and **feature vectors**—is clustered to identify individual fruit instances.\n\n```bash\nns-count \\\n  --load_pcd $PCD_OUTPUT_PATH \\\n  --output_dir $PCD_OUTPUT_PATH \\\n  --lambda-eucl-dist 1.2 \\\n  --lambda-cosine 0.5\n```\n\n**Parameters:**\n- `--lambda-eucl-dist`: weight for spatial (Euclidean) distance  \n- `--lambda-cosine`: weight for feature similarity (cosine distance)  \n\nAdjust these weights to balance geometric proximity and semantic similarity for your dataset.\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for more options\u003c/summary\u003e\n\n```bash\nusage: ns-count [-h] [OPTIONS]\n\nCount instance point cloud.\n\n╭─ options ────────────────────────────────────────────────────────────────────────────────╮\n│ -h, --help              show this help message and 
exit                                  │\n│ --load-pcd PATH         Path to the point cloud files. (required)                        │\n│ --output-dir PATH       Path to the output directory. (required)                         │\n│ --gt-pcd-file {None}|PATH|STR                                                            │\n│                         Name of the gt fruit file. (default: None)                       │\n│ --lambda-eucl-dist FLOAT                                                                 │\n│                         euclidean term for distance metric. (default: 1.2)               │\n│ --lambda-cosine FLOAT   cosine term for distance metric. (default: 0.2)                  │\n│ --distance-threshold FLOAT                                                               │\n│                         Distance (non metric) to assign to gt fruit. (default: 0.05)     │\n│ --staged-max-points INT                                                                  │\n│                         Maximum number of points for staged clustering (default: 600000) │\n│ --clustering-variant STR                                                                 │\n│                         (default: staged)                                                │\n│ --staged-num-clusters INT                                                                │\n│                         (default: 30)                                                    │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\n```\n\u003c/details\u003e\n\n# Download Data\nTo reproduce our counting results, you can download the extracted point clouds for every training run. 
Download can be \nfound here: tbd.\n\n## Synthetic Dataset\n\n\u003cp align=\"center\" \u003e\n    \u003cimg src=\"images/apple.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/lemon.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/mango.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/peach.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/pear.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/plum.gif\" style=\" width: 128px\"/\u003e\n\u003c/p\u003e\n\nLink: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10869455.svg)](https://doi.org/10.5281/zenodo.10869455)\n\n## Real Dataset\n\n\u003cimg src=\"images/row2.jpg\" style=\"display: block; margin-left: auto; margin-right: auto; width: 512px\"/\u003e\n\nLink: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10869455.svg)](https://doi.org/10.5281/zenodo.10869455)\n\n\n\n\n## Bibtex\n\nIf you find this useful, please cite the paper!\n\u003cpre id=\"codecell0\"\u003e\n@inproceedings{fruitnerfpp2025,\n  author    = {Meyer, Lukas and Ardelean, Andrei-Timotei and Weyrich, Tim and Stamminger, Marc},\n  title     = {FruitNeRF++: A Generalized Multi-Fruit Counting Method Utilizing Contrastive Learning and Neural Radiance Fields},\n  booktitle = {2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  year      = {2025},\n  doi       = {10.1109/IROS60139.2025.11247341},\n  url       = {https://meyerls.github.io/fruit_nerfpp/}\n}\n \u003c/pre\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmeyerls%2Ffruitnerfpp","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmeyerls%2Ffruitnerfpp","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmeyerls%2Ffruitnerfpp/lists"}