{"id":19068702,"url":"https://github.com/machine-learning-tokyo/tactile_patterns","last_synced_at":"2025-09-21T19:04:50.804Z","repository":{"id":37594119,"uuid":"174066398","full_name":"Machine-Learning-Tokyo/tactile_patterns","owner":"Machine-Learning-Tokyo","description":"Convert photo to tactile image to assist visually impaired","archived":false,"fork":false,"pushed_at":"2023-02-02T06:26:33.000Z","size":6376,"stargazers_count":16,"open_issues_count":5,"forks_count":3,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-04-18T16:25:53.556Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Machine-Learning-Tokyo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-03-06T03:44:44.000Z","updated_at":"2024-09-27T17:33:44.000Z","dependencies_parsed_at":"2022-09-08T20:20:35.167Z","dependency_job_id":"b50255c3-1afb-4840-a983-5b961794c2a1","html_url":"https://github.com/Machine-Learning-Tokyo/tactile_patterns","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Machine-Learning-Tokyo%2Ftactile_patterns","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Machine-Learning-Tokyo%2Ftactile_patterns/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Machine-Learning-Tokyo%2Ftactile_patterns/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Machine-Learning-Tokyo%2Ftactile_patterns/manifests","owner_url":"https
://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Machine-Learning-Tokyo","download_url":"https://codeload.github.com/Machine-Learning-Tokyo/tactile_patterns/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251321175,"owners_count":21570689,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-09T01:11:24.410Z","updated_at":"2025-09-21T19:04:50.682Z","avatar_url":"https://github.com/Machine-Learning-Tokyo.png","language":"Python","readme":"# Photo to Tactile Image Converter\nThis is a collaborative project between MLT (Suzana Ilic, John Lau) and Prof. Tune Kamae, a former physics professor (University of Tokyo, Stanford University) who has for years been engaged in supporting, teaching, and creating learning resources for visually impaired and blind people. MLT is supporting his efforts.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/Machine-Learning-Tokyo/tactile_patterns/blob/master/learning_resources.jpg\" width=\"500\"\u003e\n\u003c/p\u003e\n\n\nThe goal of this project is to convert a photo into a [Tactile Image](https://en.wikipedia.org/wiki/Tactile_graphic) that can be printed with a tactile printer on swell paper, helping a visually impaired person perceive the photo.\n\nThis project is part of the [Machine Learning Tokyo](https://mltokyo.ai) **AI for Social Good** activities. 
We have built a simple prototype; deploying it as a web application is the next step.\n\n## Example: Input image and output patterns\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/Machine-Learning-Tokyo/tactile_patterns/blob/master/photos/Tokyo-Tower-2-1024x683.jpg\" width=\"500\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/Machine-Learning-Tokyo/tactile_patterns/blob/master/output/tokyo_tower.png\" width=\"500\"\u003e\n\u003c/p\u003e\n\n\n## After thermo-printing\nBlind people testing the tactile patterns that represent the original image.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/Machine-Learning-Tokyo/tactile_patterns/blob/master/tactile_patterns.jpg\" width=\"500\"\u003e\n\u003c/p\u003e\n\n\n\n### Photo Credits\n* Jessica A Paje, https://en.japantravel.com/tokyo/tokyo-station-celebrates-100-years/17770.\n* https://travel.gaijinpot.com/tokyo-tower/\n* https://commons.wikimedia.org/wiki/File:Tokyo_Skytree_2014_%E2%85%A2.jpg\n* https://www.jinjyagoshuin.com/entry/wakabayashiinari\n* http://www.studio-mario.jp/event/family/\n* https://bit.ly/2Z6HokG\n\n### References\n* [Tactile Images Give Visually Impaired Access To Earth From The Air](https://www.culture24.org.uk/sector-info/art17622).\n* [Ford's Feel the View smart window lets blind passengers enjoy the landscape](https://www.dezeen.com/2018/05/06/fords-feel-the-view-smart-window-blind-passengers-technology/).\n* [3D-printed display lets blind people explore images by touch](https://www.newscientist.com/article/2076693-3d-printed-display-lets-blind-people-explore-images-by-touch/).\n* [Tactile Graphics](http://www.pathstoliteracy.org/tactile-graphics).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmachine-learning-tokyo%2Ftactile_patterns","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmachine-learning-tokyo%2Ftactile_patterns","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmachine-learning-tokyo%2Ftactile_patterns/lists"}