{"id":20516363,"url":"https://github.com/nianticlabs/rectified-features","last_synced_at":"2026-01-28T05:21:26.917Z","repository":{"id":90552509,"uuid":"284974048","full_name":"nianticlabs/rectified-features","owner":"nianticlabs","description":"[ECCV 2020] Single image depth prediction allows us to rectify planar surfaces in images and extract view-invariant local features for better feature matching","archived":false,"fork":false,"pushed_at":"2021-05-11T12:36:39.000Z","size":2824,"stargazers_count":64,"open_issues_count":7,"forks_count":7,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-06-03T15:29:14.862Z","etag":null,"topics":["depth-estimation","feature-matching","image-matching","local-features","monocular-depth-estimation","rectification","robotcar","single-image-depth-prediction"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nianticlabs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2020-08-04T12:33:10.000Z","updated_at":"2024-10-03T01:35:02.000Z","dependencies_parsed_at":"2023-07-18T11:00:58.048Z","dependency_job_id":null,"html_url":"https://github.com/nianticlabs/rectified-features","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/nianticlabs/rectified-features","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Frectified-features","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Frectified-features/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/nianticlabs%2Frectified-features/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Frectified-features/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nianticlabs","download_url":"https://codeload.github.com/nianticlabs/rectified-features/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Frectified-features/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28840088,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-28T02:10:51.810Z","status":"ssl_error","status_checked_at":"2026-01-28T02:10:50.806Z","response_time":57,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["depth-estimation","feature-matching","image-matching","local-features","monocular-depth-estimation","rectification","robotcar","single-image-depth-prediction"],"created_at":"2024-11-15T21:28:31.584Z","updated_at":"2026-01-28T05:21:26.890Z","avatar_url":"https://github.com/nianticlabs.png","language":null,"readme":"# [Single-Image Depth Prediction Makes Feature Matching Easier](https://arxiv.org/abs/2008.09497)\n\n**[Carl Toft](https://scholar.google.com/citations?hl=en\u0026user=vvgmWA0AAAAJ\u0026view_op=list_works\u0026sortby=pubdate), [Daniyar 
Turmukhambetov](http://dantkz.github.io/about), [Torsten Sattler](https://scholar.google.com/citations?user=jzx6_ZIAAAAJ\u0026hl=en), [Fredrik Kahl](http://www.maths.lth.se/matematiklth/personal/fredrik/) and [Gabriel J. Brostow](http://www0.cs.ucl.ac.uk/staff/g.brostow/) – ECCV 2020**\n\n\n[Link to paper](https://arxiv.org/abs/2008.09497)  \n[Link to supplementary pdf](https://storage.googleapis.com/niantic-lon-static/research/rectified-features/rectified-features-supplementary.pdf)\n\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://storage.googleapis.com/niantic-lon-static/research/rectified-features/short-video.mp4\"\u003e\n  \u003cimg src=\"assets/1min.png\" alt=\"1 minute ECCV presentation video link\" width=\"400\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://storage.googleapis.com/niantic-lon-static/research/rectified-features/long-video.mp4\"\u003e\n  \u003cimg src=\"assets/10min.png\" alt=\"10 minute ECCV presentation video link\" width=\"400\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n\n**Code is coming soon...**  \n\n\nGood local features improve the robustness of many 3D relocalization and multi-view reconstruction pipelines. The problem is that viewing angle and distance severely impact the recognizability of a local feature. Attempts to improve appearance invariance by choosing better local feature points or by leveraging outside information have come with prerequisites that made some of them impractical. In this paper, we propose a surprisingly effective enhancement to local feature extraction, which improves matching.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/teaser.png\" alt=\"We use single-image depth estimation to account for perspective distortion when extracting local features\" width=\"600\" /\u003e\n\u003c/p\u003e\n\nWe show that CNN-based depths inferred from single RGB images are quite helpful, despite their flaws. 
They allow us to pre-warp images and rectify perspective distortions, to significantly enhance SIFT and BRISK features, enabling more good matches, even when cameras are looking at the same scene but in opposite directions.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/pipeline.png\" alt=\"Our pipeline\" width=\"600\" /\u003e\n\u003c/p\u003e\n\nOur pipeline finds planar patches according to estimated depth, and extracts features from rectified views of these patches. Non-rectified features are also extracted from regions that do not belong to planar patches.\n\n## 💾 📸 Dataset\n\n[Dataset README](https://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/README.txt)\n\nThe \"Strong Viewpoint Changes Dataset\" is published as part of ECCV 2020 \"Single-Image Depth Prediction Makes Feature Matching Easier\" paper by \nCarl Toft, Daniyar Turmukhambetov, Torsten Sattler, Fredrik Kahl and Gabriel J. Brostow.\n\nPlease cite the paper if you are using this dataset.\n\nThe images, file pairs for evaluation and ground truth poses for the 8 scenes are\navailable 
at:\n```\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene1.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene2.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene3.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene4.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene5.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene6.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene7.zip\n\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/scene8.zip\n```\n\nThe dataset is published with Attribution 4.0 International (CC BY 4.0) License, see:\nhttps://storage.googleapis.com/niantic-lon-static/research/rectified-features/StrongViewpointChangesDataset/LICENSE.txt\n\n\n## ✏️ 📄 Citation\n\nIf you find our work useful or interesting, please consider citing [our paper](https://arxiv.org/abs/2008.09497):\n\n```\n@inproceedings{toft-2020-rectified-features,\n title   = {Single-Image Depth Prediction Makes Feature Matching Easier},\n author  = {Carl Toft and\n            Daniyar Turmukhambetov and\n            Torsten Sattler and\n            Fredrik Kahl and\n            Gabriel J. Brostow\n           },\n booktitle = {European Conference on Computer Vision ({ECCV})},\n year = {2020}\n}\n```\n\n\n# 👩‍⚖️ License\nCopyright © Niantic, Inc. 2020. Patent Pending. All rights reserved. 
Please see the license file for terms.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnianticlabs%2Frectified-features","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnianticlabs%2Frectified-features","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnianticlabs%2Frectified-features/lists"}