{"id":16320251,"url":"https://github.com/meyerls/fruitnerf","last_synced_at":"2025-04-12T15:32:35.566Z","repository":{"id":229614611,"uuid":"777183463","full_name":"meyerls/FruitNeRF","owner":"meyerls","description":"[IROS24] Official Code for \"FruitNeRF: A Unified Neural Radiance Field based Fruit Counting Framework\" - Integrated into Nerfstudio","archived":false,"fork":false,"pushed_at":"2025-01-07T15:59:12.000Z","size":58287,"stargazers_count":294,"open_issues_count":13,"forks_count":37,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-04-03T15:09:18.101Z","etag":null,"topics":["agriculture","digital-horticulture","farming","fruit","fruit-count","fruit-counting","nerf","nerfstudio","precision-agriculture","precision-farming"],"latest_commit_sha":null,"homepage":"https://meyerls.github.io/fruit_nerf","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/meyerls.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-25T11:19:39.000Z","updated_at":"2025-03-24T13:14:42.000Z","dependencies_parsed_at":"2024-08-01T14:39:52.841Z","dependency_job_id":"4184df05-6de2-4e59-af63-73d0041b8250","html_url":"https://github.com/meyerls/FruitNeRF","commit_stats":null,"previous_names":["meyerls/fruitnerf"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRF","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRF/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/Gi
tHub/repositories/meyerls%2FFruitNeRF/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meyerls%2FFruitNeRF/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/meyerls","download_url":"https://codeload.github.com/meyerls/FruitNeRF/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248589839,"owners_count":21129689,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agriculture","digital-horticulture","farming","fruit","fruit-count","fruit-counting","nerf","nerfstudio","precision-agriculture","precision-farming"],"created_at":"2024-10-10T22:43:41.633Z","updated_at":"2025-04-12T15:32:35.526Z","avatar_url":"https://github.com/meyerls.png","language":"Python","readme":"\u003ch1 style=\"text-align: center;\"\u003e:apple: :pear: FruitNeRF: A Generalized Framework for Counting Fruits in Neural Radiance Fields :peach: :lemon:\u003c/h1\u003e\n\n\n\u003cp align=\"center\"\u003e\n\u003ca href=\"https://meyerls.github.io/\"\u003eLukas Meyer, \u003c/a\u003e\n\u003ca href=\"https://scholar.google.com/citations?user=h84gW6QAAAAJ\u0026hl=de\"\u003eAndreas Gilson, \u003c/a\u003e\n\u003ca href=\"https://scholar.google.com/citations?user=fJXIVKsAAAAJ\u0026hl=en\"\u003eUte Schmid, \u003c/a\u003e\n\u003ca href=\"https://scholar.google.com/citations?hl=de\u0026user=cx4AaqoAAAAJ\"\u003eMarc Stamminger\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n\u003ca href=\"https://meyerls.github.io/fruit_nerf/\"\u003e🌐[Project Page]\u003c/a\u003e\n\u003ca 
href=\"https://meyerls.github.io/fruit_nerf/\"\u003e📄[Paper]\u003c/a\u003e\n\u003ca href=\"https://zenodo.org/records/10869455\"\u003e📁[Dataset]\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp style=\"align:justify\"\u003e\u003cb\u003eAbstract\u003c/b\u003e: We introduce FruitNeRF, a novel, unified fruit counting framework that leverages state-of-the-art view synthesis methods\nto count any fruit type directly in 3D. Our framework takes an unordered set of posed images captured by a monocular\ncamera and segments fruit in each image. To make our system independent of the fruit type, we employ a foundation model\nthat generates binary segmentation masks for any fruit. Utilizing both modalities, RGB and semantic, we train a semantic\nneural radiance field. Through uniform volume sampling of the implicit Fruit Field, we obtain fruit-only point clouds.\nBy applying cascaded clustering on the extracted point cloud, our approach achieves a precise fruit count. The use of\nneural radiance fields provides significant advantages over conventional methods such as object tracking or optical\nflow, as the counting itself is lifted into 3D. Our method prevents double-counting fruit and avoids counting irrelevant\nfruit. We evaluate our methodology using both real-world and synthetic datasets. The real-world dataset consists of\nthree apple trees with manually counted ground truths and a benchmark apple dataset with one row and ground-truth fruit\nlocations, while the synthetic dataset comprises various fruit types including apple, plum, lemon, pear, peach, and\nmango. 
Additionally, we assess the performance of fruit counting using the foundation model compared to a U-Net.\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n\u003ca href=\"https://www.fau.eu/\"\u003e\u003cimg style=\"padding: 10px\" height=\"100px\" src=\"images/FAU.png\"\u003e \u003c/a\u003e\n\u003ca href=\"https://www.lgdv.tf.fau.de/\"\u003e\u003cimg style=\"padding: 10px\" height=\"100px\" src=\"images/vce.svg\"\u003e \u003c/a\u003e\n\u003ca href=\"https://unit.aist.go.jp/icps/icps-am/en/\"\u003e\u003cimg style=\"padding: 10px\" height=\"100px\" src=\"images/fraunhofer_iis.svg\"\u003e \u003c/a\u003e\n\u003ca href=\"https://unit.aist.go.jp/icps/icps-am/en/\"\u003e\u003cimg style=\"padding: 10px\" height=\"100px\" src=\"images/bamberg.png\"\u003e \u003c/a\u003e\n\u003c/p\u003e\n\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"images/fruitnerfsvg_real.png\"/\u003e\n\u003c/p\u003e\n\n\u003c!--p align=\"center\"\u003e\n    \u003cimg src=\"images/teaser.gif\" style=\"width: 512px\"/\u003e\n\u003c/p--\u003e\n\n# :loudspeaker: News\n\n* 24.09.2024: Thanks to [johnnynunez](https://github.com/johnnynunez) for deploying [FruitNeRF on an NVIDIA Jetson](https://www.jetson-ai-lab.com/nerf.html)\n* 12.08.2024: The paper is available on [arXiv](https://arxiv.org/abs/2408.06190).\n* 30.06.2024: The paper was accepted at IROS.\n\n# Installation\n\n### Install Nerfstudio\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for guide\u003c/summary\u003e\n\n#### 0. Install Nerfstudio dependencies\n\n[Follow these instructions](https://docs.nerf.studio/quickstart/installation.html) up to and including \"tinycudann\" to install dependencies and create an environment.\n\n**Important**: In the section *Install nerfstudio*, please install version **0.3.2** via `pip install nerfstudio==0.3.2`, not\nthe latest one!\n\n#### 1. Clone this repo\n\n`git clone https://github.com/meyerls/FruitNeRF.git`\n\n#### 2. 
Install this repo as a Python package\n\nNavigate to this folder and run `python -m pip install -e .`\n\n#### 3. Run `ns-install-cli`\n\n#### Checking the install\n\nRun `ns-train -h`: you should see a list of subcommands with `fruit_nerf` included among them.\n\u003c/details\u003e\n\n### Install Grounded-SAM\n\n\u003cdetails\u003e\n  \u003csummary\u003eExpand for guide\u003c/summary\u003e\n\nPlease install Grounded-SAM into the segmentation folder. More details can be found\nin [install segment anything](https://github.com/facebookresearch/segment-anything#installation)\nand [install GroundingDINO](https://github.com/IDEA-Research/GroundingDINO#install). A copy of the relevant steps is listed below.\n\n```bash\n# Start from the FruitNeRF root folder.\ncd segmentation\n\n# Clone the Grounded-SAM repository and rename the folder\ngit clone https://github.com/IDEA-Research/Grounded-Segment-Anything.git grounded_sam\ncd grounded_sam\n\n# Check out the version compatible with FruitNeRF\ngit checkout fe24\n```\n\nYou should set the following environment variables manually if you want to build a local GPU environment for\nGrounded-SAM:\n\n```bash\nexport AM_I_DOCKER=False\nexport BUILD_WITH_CUDA=True\nexport CUDA_HOME=/path/to/cuda-11.3/\n```\n\nInstall Segment Anything:\n\n```bash\npython -m pip install -e segment_anything\n```\n\nInstall Grounding DINO:\n\n```bash\npip install --no-build-isolation -e GroundingDINO\n```\n\nInstall diffusers and misc:\n\n```bash\npip install --upgrade diffusers[torch]\n\npip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel\n```\n\nDownload the pretrained weights:\n\n```bash\ncd .. 
# Download into grounded_sam\nwget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth\nwget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth\n```\n\nInstall SAM-HQ:\n\n```bash\npip install segment-anything-hq\n```\n\nDownload the SAM-HQ checkpoint from [here](https://github.com/SysCV/sam-hq#model-checkpoints) (we recommend ViT-H HQ-SAM)\ninto the Grounded-Segment-Anything folder.\n\n**Done!**\n\n\u003c/details\u003e\n\n# Using FruitNeRF\n\nNow that FruitNeRF is installed, you can start counting fruits! You can use your own data, our real or\nsynthetic [FruitNeRF Dataset](https://zenodo.org/records/10869455), or\nthe [Fuji Dataset](https://zenodo.org/records/3712808).\nIf you use our FruitNeRF dataset, you can skip the preparation step and jump to **Training**.\n\n## Prepare Your Own Data\n\nFor your own data and the Fuji dataset, you first have to compute the intrinsic and extrinsic camera parameters and segment\nthe images using Grounded-SAM:\n\n```bash\nns-process-fruit-data --data {path/to/image-dir} --output-dir {path/to/output-dir} --segmentation-class [Str+Str+Str]\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eExpand for more options\u003c/b\u003e\u003c/summary\u003e\n\n- ```--data [PATH]```: Path to the data, either a video file or a directory of images.\n- ```--output-dir [PATH]```: Path to the output directory.\n- ```--segmentation-class [Str+Str+...]```: Text prompt for segmentation with Grounded-SAM. Multiple arguments are\n  also valid.\n- ```--num_downscales [INT]```: Number of times to downscale the images. Default is 3.\n- ```--text_threshold [FLOAT]```: Threshold for the text prompt/class used to segment the images. Default value is 0.15.\n- ```--box_threshold [FLOAT]```: Threshold for bounding box prediction. 
Default value is 0.15.\n- ```--data_semantic [PATH]```: Predefined path to precomputed masks.\n- ```--skip-colmap```: Skips COLMAP and generates a transforms.json if possible.\n- ```--skip_image_processing```: Skips copying and downscaling of the images and only runs COLMAP, if possible and enabled.\n- ```--flag_segmentation_image_debug```: Saves the mask overlays on the RGB images.\n\n\u003c/details\u003e\n\nIf you already have **binary** segmentation masks, please pass the image and mask folders:\n\n```bash\nns-prepocess-fruit-data --data {path/to/image-dir} --output-dir {path/to/output-dir} --data_semantic {path/to/seg-dir}\n```\n\n## Training\n\n#### FruitNeRF (~15 min)\n\n```bash\nns-train fruit_nerf --data {path/to/workspace-dir} --output-dir {path/to/output-dir}\n```\n\n#### FruitNeRF Big (~3 h)\n\n```bash\nns-train fruit_nerf_big --data {path/to/workspace-dir} --output-dir {path/to/output-dir}\n```\n\n## Volumetric Sampling\n\n```bash\nns-export-semantics semantic-pointcloud --load-config {path/to/config.yaml} --output-dir {path/to/export/dir} --use-bounding-box True --bounding-box-min -1 -1 -1 --bounding-box-max 1 1 1 --num_rays_per_batch 2000 --num_points_per_side 2000\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eExpand for more options\u003c/b\u003e\u003c/summary\u003e\n\n- `--load-config {path/to/config.yaml}`: The config.yaml can be found in the output dir specified during `ns-train`.\n- `--bounding-box-min` and `--bounding-box-max`: Values for the bounding box. To find the best parameters, please try out the Crop Viewport in the nerfstudio viewer.\n- `--num_rays_per_batch`: Number of rays per batch. This depends on the capability of your GPU.\n- `--num_points_per_side`: We sample a volume with NxNxN points. 
The more points, the better the resolution, but also the longer the compute time.\n\n\u003cp align=\"center\" \u003e\n    \u003cimg src=\"images/export_image.png\" style=\"width: 200px\"/\u003e\n\u003c/p\u003e\n\n\u003c/details\u003e\n\n## Point Cloud Clustering / Fruit Counting\n\nClustering is not integrated into the nerfstudio pipeline. Therefore, we have created a dedicated clustering\nscript (```clustering/run_clustering.py```).\n\nIf you want to use it for your own data, you have to create a config profile first:\n\n```python\nApple_GT_1024x1024_300 = {\n    \"path\": \"/path/2/extracted/pcd/semantic_colormap.ply\",\n    \"remove_outliers_nb_points\": 200,  # Clean pcd\n    \"remove_outliers_radius\": 0.01,  # Clean pcd\n    \"down_sample\": 0.001,  # Voxel downsampling for faster computation / clustering\n    \"eps\": 0.01,\n    \"cluster_merge_distance\": 0.04,  # Merge distance for small clusters\n    \"minimum_size_factor\": 0.3,\n    \"min_samples\": 100,  # Min cluster point size\n    'template_path': './clustering/apple_template.ply',  # Template apple/fruit\n    'apple_template_size': 0.7,  # Scale of the apple template if no GT size is available\n    'gt_cluster': \"/path/2/gt/mesh/fruits.obj\",  # or None\n    \"gt_count\": 283  # or None\n}\n```\n\nAfterwards, perform the clustering (see ```clustering/run_clustering.py``` for more information):\n\n```python\nBaum = Apple_GT_1024x1024_300\nclustering = Clustering(remove_outliers_nb_points=Baum['remove_outliers_nb_points'],\n                        remove_outliers_radius=Baum['remove_outliers_radius'],\n                        voxel_size_down_sample=Baum['down_sample'],\n                        template_path=Baum['template_path'],\n                        min_samples=Baum['min_samples'],\n                        apple_template_size=Baum['apple_template_size'],\n                        gt_cluster=Baum['gt_cluster'],\n                        cluster_merge_distance=Baum['cluster_merge_distance'],\n                        
gt_count=Baum['gt_count']\n                        )\ncount = clustering.count(pcd=Baum[\"path\"], eps=Baum['eps'])\n```\n\nFor reproducibility, we provide the extracted point clouds for our synthetic and real-world data from Table I and\nFig. 8. The data can be downloaded from [here]().\n\n# Download Data\n\nTo reproduce our counting results, you can download the extracted point clouds for every training run. The download can be\nfound here: tbd.\n\n## Synthetic Dataset\n\n\u003cp align=\"center\" \u003e\n    \u003cimg src=\"images/apple.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/lemon.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/mango.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/peach.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/pear.gif\" style=\" width: 128px\"/\u003e\n    \u003cimg src=\"images/plum.gif\" style=\" width: 128px\"/\u003e\n\u003c/p\u003e\n\nLink: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10869455.svg)](https://doi.org/10.5281/zenodo.10869455)\n\n## Real Dataset\n\n\u003cimg src=\"images/row2.jpg\" style=\"display: block; margin-left: auto; margin-right: auto; width: 512px\"/\u003e\n\nLink: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10869455.svg)](https://doi.org/10.5281/zenodo.10869455)\n\n## BibTeX\n\nIf you find this useful, please cite the paper!\n\u003cpre id=\"codecell0\"\u003e\n@inproceedings{fruitnerf2024,\n\u0026nbsp;author     = {Lukas Meyer and Andreas Gilson and Ute Schmid and Marc Stamminger},\n\u0026nbsp;title      = {FruitNeRF: A Unified Neural Radiance Field based Fruit Counting Framework},\n\u0026nbsp;booktitle  = {IROS},\n\u0026nbsp;year       = {2024},\n\u0026nbsp;url        = {https://meyerls.github.io/fruit_nerf}\n} 
\u003c/pre\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmeyerls%2Ffruitnerf","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmeyerls%2Ffruitnerf","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmeyerls%2Ffruitnerf/lists"}