{"id":19855142,"url":"https://github.com/leggedrobotics/wild_visual_navigation","last_synced_at":"2025-04-04T21:06:03.037Z","repository":{"id":232742650,"uuid":"499416007","full_name":"leggedrobotics/wild_visual_navigation","owner":"leggedrobotics","description":"Wild Visual Navigation: A system for fast traversability learning via pre-trained models and online self-supervision","archived":false,"fork":false,"pushed_at":"2024-12-22T16:02:54.000Z","size":55074,"stargazers_count":165,"open_issues_count":11,"forks_count":16,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-03-28T20:05:46.148Z","etag":null,"topics":["computer-vision","field-robotics","legged-robots","machine-learning","navigation","online-learning","robot-navigation","robotics","self-supervised-learning","traversability-estimation"],"latest_commit_sha":null,"homepage":"https://sites.google.com/leggedrobotics.com/wild-visual-navigation","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/leggedrobotics.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-06-03T07:15:12.000Z","updated_at":"2025-03-26T07:43:25.000Z","dependencies_parsed_at":"2025-01-23T00:05:56.975Z","dependency_job_id":"587cc7c4-445c-4d7e-9850-5c40c4c7d1ad","html_url":"https://github.com/leggedrobotics/wild_visual_navigation","commit_stats":null,"previous_names":["leggedrobotics/wild_visual_navigation"],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leggedrobotics%2Fwild_visual_navigation","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leggedrobotics%2Fwild_visual_navigation/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leggedrobotics%2Fwild_visual_navigation/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leggedrobotics%2Fwild_visual_navigation/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/leggedrobotics","download_url":"https://codeload.github.com/leggedrobotics/wild_visual_navigation/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247249524,"owners_count":20908212,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","field-robotics","legged-robots","machine-learning","navigation","online-learning","robot-navigation","robotics","self-supervised-learning","traversability-estimation"],"created_at":"2024-11-12T14:11:49.317Z","updated_at":"2025-04-04T21:06:03.009Z","avatar_url":"https://github.com/leggedrobotics.png","language":"Python","readme":"\n\u003ch1 align=\"center\"\u003e\n  \u003cbr\u003e\n  Wild Visual 
<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Citing this work
```bibtex
@INPROCEEDINGS{frey23fast,
  AUTHOR    = {Jonas Frey AND Matias Mattamala AND Nived Chebrolu AND Cesar Cadena AND Maurice Fallon AND Marco Hutter},
  TITLE     = {{Fast Traversability Estimation for Wild Visual Navigation}},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2023},
  ADDRESS   = {Daegu, Republic of Korea},
  MONTH     = {July},
  DOI       = {10.15607/RSS.2023.XIX.054}
}
```

If you are also building on the STEGO integration or using the pre-trained models for comparison, please cite:
```bibtex
@INPROCEEDINGS{mattamala24wild,
  AUTHOR    = {Jonas Frey AND Matias Mattamala AND Piotr Libera AND Nived Chebrolu AND Cesar Cadena AND Georg Martius AND Marco Hutter AND Maurice Fallon},
  TITLE     = {{Wild Visual Navigation: Fast Traversability Learning via Pre-Trained Models and Online Self-Supervision}},
  BOOKTITLE = {under review for Autonomous Robots},
  YEAR      = {2024}
}
```

If you are using the `elevation_mapping_cupy` integration:
```bibtex
@INPROCEEDINGS{erni23mem,
  AUTHOR={Erni, Gian and Frey, Jonas and Miki, Takahiro and Mattamala, Matias and Hutter, Marco},
  TITLE={{MEM: Multi-Modal Elevation Mapping for Robotics and Learning}},
  BOOKTITLE={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  YEAR={2023},
  PAGES={11011-11018},
  DOI={10.1109/IROS55552.2023.10342108}
}
```

<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Quick start
We prepared a quick-start demo using a simulated Jackal robot. The demo runs in Docker, so no system dependencies are required. Please check the full instructions [here](docker/README.md).

![Overview](./assets/images/sim_demo.jpg)

<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Setup
We recommend following the aforementioned [Docker instructions](docker/README.md), as well as inspecting the [Dockerfile](docker/Dockerfile), for a clean, system-independent setup.

Otherwise, the next steps provide specific instructions to set up WVN on different systems.

### Requirements
The next steps assume the following hardware & software setup:
- ROS 1 Noetic
- CUDA-enabled GPU
- CUDA drivers (we use 12.0)

### Minimal setup
These are the minimum requirements to use the WVN scripts (no robot integration).

#### Installation
First, clone WVN and our STEGO reimplementation, and download the pre-trained STEGO models:
```shell
mkdir -p ~/git && cd ~/git
git clone git@github.com:leggedrobotics/wild_visual_navigation.git
git clone git@github.com:leggedrobotics/self_supervised_segmentation.git
./self_supervised_segmentation/models/download_pretrained.sh
```

(Recommended) Create a new virtual environment:
```shell
mkdir -p ~/venv
python3 -m venv ~/venv/wvn
source ~/venv/wvn/bin/activate
```

Install the `wild_visual_navigation` package:
```shell
cd ~/git
pip3 install -e ./wild_visual_navigation
pip3 install -e ./self_supervised_segmentation
```
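As an optional sanity check after installation, you can verify that the environment sees the CUDA-enabled GPU. This is a minimal sketch assuming PyTorch is pulled in as a dependency of the editable installs above (the pre-trained `.pt` checkpoints are PyTorch models).

```python
# Optional sanity check: confirm the virtual environment sees the CUDA GPU.
# Assumes PyTorch is installed as a dependency of the packages above.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```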
#### Execution
Please refer to the [Demos](#inference-of-pre-trained-model) section below.

### ROS setup
The following steps are required for a full installation, including the deployment tools. This enables replaying the ANYmal rosbags, visualizing the robot model, and using the other deployment tools.

#### Installation
```shell
# Create a new catkin workspace
source /opt/ros/noetic/setup.bash
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws
catkin init
catkin config --extend /opt/ros/noetic
catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo

# Clone repos
cd ~/catkin_ws/src
git clone git@github.com:ANYbotics/anymal_d_simple_description.git
git clone git@github.com:ori-drs/procman_ros.git

# Symlink the WVN repository
ln -s ~/git/wild_visual_navigation ~/catkin_ws/src

# Dependencies
rosdep install -ryi --from-paths . --ignore-src

# Build
cd ~/catkin_ws
catkin build anymal_d_simple_description
catkin build wild_visual_navigation_ros

# Source
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
```
#### Execution
After successfully building the ROS workspace, you can run the entire pipeline either with the launch file or by running the nodes individually.
Open multiple terminals and run the following commands:

- Run wild_visual_navigation:
```shell
roslaunch wild_visual_navigation_ros wild_visual_navigation.launch
```

- (ANYmal replay only) Load the ANYmal description for RViz:
```shell
roslaunch anymal_d_simple_description load.launch
```

- (ANYmal replay only) Replay the rosbag:
```shell
rosbag play --clock path_to_mission/*.bag
```

- RViz:
```shell
roslaunch wild_visual_navigation_ros view.launch
```

- Debugging (sometimes it is desirable to run the two nodes separately):
```shell
python wild_visual_navigation_ros/scripts/wvn_feature_extractor_node.py
```
```shell
python wild_visual_navigation_ros/scripts/wvn_learning_node.py
```

Configuration notes (a sketch of how the ROS parameters are consumed follows this list):
- The general configuration file can be found under [`wild_visual_navigation/cfg/experiment_params.py`](wild_visual_navigation/cfg/experiment_params.py). It is used both in the `offline-model-training` and in the `online-ros` mode.
- When running in `online-ros` mode, additional configurations for the individual nodes are defined in [`wild_visual_navigation/cfg/ros_params.py`](wild_visual_navigation/cfg/ros_params.py).
- These configuration files are filled from the ROS parameter server at runtime; the default values can be found under [`wild_visual_navigation_ros/config/wild_visual_navigation/default.yaml`](wild_visual_navigation_ros/config/wild_visual_navigation/default.yaml).
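To illustrate the last point, here is a minimal, hypothetical sketch of the general pattern by which values from the ROS parameter server can populate a Python config object at runtime. This is not the actual `ros_params.py`; the field names and the private (`~`) namespace are placeholder assumptions.

```python
# Hypothetical sketch (NOT the actual wild_visual_navigation code): the general pattern
# of filling a dataclass-style config from the ROS parameter server at node start-up.
# The field names below are placeholders, not the real WVN parameters.
from dataclasses import dataclass, fields

import rospy


@dataclass
class NodeParams:
    camera_topic: str = "/camera/image_raw"  # placeholder default
    image_height: int = 224                  # placeholder default
    device: str = "cuda"                     # placeholder default


def load_params(namespace: str = "~") -> NodeParams:
    params = NodeParams()
    for f in fields(params):
        # Use the value from the parameter server if set, otherwise keep the default.
        setattr(params, f.name, rospy.get_param(namespace + f.name, getattr(params, f.name)))
    return params
```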
<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Demos

### Inference of pre-trained model
We provide the [`quick_start.py`](quick_start.py) script to infer traversability for the images in the input folder ([`assets/demo_data/*.png`](assets/demo_data)), given a pre-trained model checkpoint (`assets/checkpoints/model_name.pt`; the checkpoints can be obtained from [Google Drive](https://drive.google.com/drive/folders/1v18a95u_s8s0870o3UZ8T-9xizsIZwSp?usp=share_link)).
The script stores the results in the output folder (`results/demo_data/*.png`).

```shell
python3 quick_start.py

# python3 quick_start.py --help for more CLI information
# usage: quick_start.py [-h] [--model_name MODEL_NAME] [--input_image_folder INPUT_IMAGE_FOLDER]
#        [--output_folder_name OUTPUT_FOLDER_NAME] [--network_input_image_height NETWORK_INPUT_IMAGE_HEIGHT]
#        [--network_input_image_width NETWORK_INPUT_IMAGE_WIDTH] [--segmentation_type {slic,grid,random,stego}]
#        [--feature_type {dino,dinov2,stego}] [--dino_patch_size {8,16}] [--dino_backbone {vit_small}]
#        [--slic_num_components SLIC_NUM_COMPONENTS] [--compute_confidence] [--no-compute_confidence]
#        [--prediction_per_pixel] [--no-prediction_per_pixel]
```
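To sweep several checkpoints or input folders, a small wrapper around this CLI can help. The sketch below only uses flags listed in the `--help` output; the checkpoint name is an example, and whether `--model_name` expects the `.pt` suffix is an assumption you should verify against `quick_start.py`.

```python
# Sketch: batch-run the inference demo over several pre-trained checkpoints.
# Only flags documented above are used; the checkpoint names and the --model_name
# convention (with or without ".pt") are assumptions to check against quick_start.py.
import subprocess

checkpoints = ["mountain_bike_trail_v2"]  # example checkpoint from assets/checkpoints/
for ckpt in checkpoints:
    subprocess.run(
        [
            "python3", "quick_start.py",
            "--model_name", ckpt,
            "--input_image_folder", "assets/demo_data",
            "--output_folder_name", f"demo_data_{ckpt}",
            "--prediction_per_pixel",
        ],
        check=True,
    )
```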
### Online adaptation from rosbags

To quickly test the online training and adaptation, we provide example rosbags ([GDrive](https://drive.google.com/drive/folders/1Rf2TRPT6auFxOpnV9-ZfVMjmsvdsrSD3?usp=sharing)) collected with our ANYmal D robot. These can be replayed following the [ROS instructions](#execution) above.

Here we provide some examples for the different sequences:
<div align="center">

| MPI Outdoor | MPI Indoor | Bahnhofstrasse | Bike Trail |
|-------------|------------|----------------|------------|
| <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mpi_outdoor_trav.png" alt="MPI Outdoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mpi_indoor_trav.png" alt="MPI Indoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/bahnhofstrasse_trav.png" alt="Bahnhofstrasse"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mountain_bike_trail_trav.png" alt="Mountain Bike"> |
| <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demo_data/mpi_outdoor_raw.png" alt="MPI Outdoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demo_data/mpi_indoor_raw.png" alt="MPI Indoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demo_data/bahnhofstrasse_raw.png" alt="Bahnhofstrasse"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demo_data/mountain_bike_trail_raw.png" alt="Mountain Bike"> |

</div>

<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Development
Lastly, we provide some general guidelines for development. These may be useful if you want to test WVN on your own robot platform.

### Repository structure
The WVN repository is organized into the following folders:

```sh
📦wild_visual_navigation
 ┣ 📂assets
     ┣ 📂demo_data                            # Example images
        ┣ 🖼 example_images.png
        ┗ ....
     ┗ 📂checkpoints                          # Pre-trained model checkpoints (must be downloaded from Google Drive)
        ┣ 📜 mountain_bike_trail_v2.pt
        ┗ ....
 ┣ 📂docker                                   # Quick-start Docker container
 ┣ 📂results
 ┣ 📂test
 ┣ 📂wild_visual_navigation                   # Core, ROS-independent implementation of WVN
 ┣ 📂wild_visual_navigation_anymal            # ROS1 ANYmal helper package
 ┣ 📂wild_visual_navigation_jackal            # ROS1 Jackal simulation example
 ┣ 📂wild_visual_navigation_msgs              # ROS1 message definitions
 ┣ 📂wild_visual_navigation_ros               # ROS1 nodes for running WVN
    ┗ 📂scripts
       ┗ 📜 wvn_feature_extractor_node.py     # Main process for feature extraction and inference
       ┗ 📜 wvn_learning_node.py              # Main process that generates supervision signals and runs the online training loop
 ┗ 📜 quick_start.py                          # Inference demo from pre-trained checkpoints
```

### Adapting WVN for your own robot
We recommend creating a new ROS package that implements the overlays needed to run WVN on your own robot platform, using [wild_visual_navigation_jackal](wild_visual_navigation_jackal) as a reference.

In a nutshell, you need to configure the following (a converter-node sketch follows this list):

- [wild_visual_navigation_jackal/config/wild_visual_navigation/camera.yaml](wild_visual_navigation_jackal/config/wild_visual_navigation/camera.yaml): This specifies the main parameters of the cameras available on your robot, in particular whether each camera is used for training and inference or for inference only. You can also specify a weight that controls the time allocation of the camera scheduler.
- [wild_visual_navigation_jackal/config/wild_visual_navigation/jackal.yaml](wild_visual_navigation_jackal/config/wild_visual_navigation/jackal.yaml): This specifies the WVN parameters, as well as the robot input signals and frames. We recommend changing the frames accordingly, but implementing a _converter_ node for the input signals (robot velocity and human command velocity, see below).
- [wild_visual_navigation_jackal/scripts/jackal_state_converter_node.py](wild_visual_navigation_jackal/scripts/jackal_state_converter_node.py): This script implements a node that re-maps the velocity estimates and velocity commands onto custom WVN messages. We adopted this approach to avoid installing custom robot messages on the GPU computer that runs WVN; the converter instead runs on the robot computer and provides the signals we require.
- [wild_visual_navigation_jackal/launch/wild_visual_navigation.launch](wild_visual_navigation_jackal/launch/wild_visual_navigation.launch): A launch file that loads the custom parameters of the package and launches the WVN nodes.
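As a starting point for your own converter, here is a minimal, hypothetical rospy sketch. It is not the actual `jackal_state_converter_node.py`: the topic names are placeholders, and it republishes standard `TwistStamped` messages rather than the custom types in `wild_visual_navigation_msgs` that the real node uses.

```python
#!/usr/bin/env python3
# Illustrative sketch of a "state converter" node for a custom robot (NOT the actual
# jackal_state_converter_node.py). It republishes the robot's velocity estimate and the
# human command velocity on topics that WVN can consume. Topic names and the use of
# TwistStamped instead of the custom wild_visual_navigation_msgs types are assumptions.
import rospy
from geometry_msgs.msg import Twist, TwistStamped
from nav_msgs.msg import Odometry


class StateConverter:
    def __init__(self):
        # Placeholder output topics; match them to your WVN config.
        self.state_pub = rospy.Publisher("/wvn/robot_velocity", TwistStamped, queue_size=1)
        self.cmd_pub = rospy.Publisher("/wvn/reference_velocity", TwistStamped, queue_size=1)
        # Placeholder input topics from your robot's driver stack.
        rospy.Subscriber("/odom", Odometry, self.odom_cb, queue_size=1)
        rospy.Subscriber("/cmd_vel", Twist, self.cmd_cb, queue_size=1)

    def odom_cb(self, msg: Odometry):
        # Forward the estimated robot velocity.
        out = TwistStamped()
        out.header = msg.header
        out.twist = msg.twist.twist
        self.state_pub.publish(out)

    def cmd_cb(self, msg: Twist):
        # Forward the human command velocity, stamping it on arrival.
        out = TwistStamped()
        out.header.stamp = rospy.Time.now()
        out.twist = msg
        self.cmd_pub.publish(out)


if __name__ == "__main__":
    rospy.init_node("my_robot_state_converter")
    StateConverter()
    rospy.spin()
```

Such a node would run on the robot computer; the republished topics are then referenced from your robot's WVN configuration, analogous to the Jackal example.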
### Further notes
<details>
<summary>Here we provide additional details if you want to contribute.</summary>

#### Install pre-commit
```shell
pip3 install pre-commit
cd wild_visual_navigation && python3 -m pre_commit install
cd wild_visual_navigation && python3 -m pre_commit run
```

#### Code formatting
```shell
# for formatting
pip install black
black --line-length 120 .
# for checking lints
pip install flake8
flake8 .
```
Code format is checked on push.

#### Testing
Introduction to [pytest](https://github.com/pluralsight/intro-to-pytest).
```shell
pytest
```

#### Open-sourcing
Generating headers:
```shell
pip3 install addheader

# Keep the escaped globs (\*) if you are using zsh; otherwise the backslashes can be removed
addheader wild_visual_navigation -t header.txt -p \*.py --sep-len 79 --comment='#' --sep=' '
addheader wild_visual_navigation_ros -t header.txt -p \*.py --sep-len 79 --comment='#' --sep=' '
addheader wild_visual_navigation_anymal -t header.txt -p \*.py --sep-len 79 --comment='#' --sep=' '

addheader wild_visual_navigation_ros -t header.txt -p \*CMakeLists.txt --sep-len 79 --comment='#' --sep=' '
addheader wild_visual_navigation_anymal -t header.txt -p \*.py -p \*CMakeLists.txt --sep-len 79 --comment='#' --sep=' '
```

#### Releasing ANYmal data
```shell
rosrun procman_ros sheriff -l ~/git/wild_visual_navigation/wild_visual_navigation_anymal/config/procman/record_rosbags.pmd --start-roscore
```

```shell
rosbag_play --tf --sem --flp --wvn mission/*.bag
```
</details>