https://github.com/code-yeongyu/youeye
YouEye - An AI-powered, problem breaker for blind individuals, providing accessibility to no-voice supported kiosks
- Host: GitHub
- URL: https://github.com/code-yeongyu/youeye
- Owner: code-yeongyu
- License: mit
- Created: 2020-01-31T15:20:24.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2023-02-16T00:35:59.000Z (about 2 years ago)
- Last Synced: 2025-03-18T17:57:22.047Z (about 1 month ago)
- Topics: ai, ocr, python, python3
- Language: Python
- Homepage:
- Size: 45.2 MB
- Stars: 5
- Watchers: 2
- Forks: 2
- Open Issues: 8
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# YouEye - kiosk machine helper solution for blind people
[한국어 README](https://github.com/code-yeongyu/YouEye/blob/master/README_ko.md)
Using a kiosk with no voice support is extremely difficult for blind people.
Therefore, if you reach out your hand and capture the kiosk with your phone camera, YouEye will read out what you are about to touch.
Currently, only Python demos are available, along with a web-server wrapper.
A word-coordinate detector is used for this project.
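The idea behind it can be sketched in a few lines: given word-level OCR results (each word with a bounding box) and the position of the user's finger in the photo, pick the word whose box centre is closest to that point and read it aloud. The helper below is only an illustrative sketch with made-up names (`Word`, `nearest_word`), not the project's actual code:

```python
# Illustrative sketch only: given OCR'd words with bounding boxes and a
# fingertip position, pick the word the user is about to touch.
from dataclasses import dataclass


@dataclass
class Word:
    text: str
    left: int
    top: int
    width: int
    height: int


def nearest_word(words: list[Word], finger_x: int, finger_y: int) -> str:
    """Return the text of the word whose box centre is closest to the finger."""
    def distance_sq(word: Word) -> float:
        cx = word.left + word.width / 2
        cy = word.top + word.height / 2
        return (cx - finger_x) ** 2 + (cy - finger_y) ** 2

    return min(words, key=distance_sq).text


# Example: two detected words; the finger is near the second one.
words = [Word("Americano", 100, 200, 180, 40), Word("Latte", 100, 300, 120, 40)]
print(nearest_word(words, 150, 310))  # -> "Latte", which is then read aloud
```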
## Current works
### Demonstration Video
[Watch the demonstration video](https://youtu.be/GAdjqtUidms)
### Demonstration Pictures
*(Demonstration pictures omitted.)*
All of these OCR results were generated by the Naver OCR API; both the Naver OCR API and Tesseract are available.
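For the Tesseract option, word-level coordinates can be pulled out with pytesseract's `image_to_data`; the Naver OCR API returns comparable word boxes in its JSON response. Below is a minimal sketch, assuming a local photo `kiosk.jpg`, and not the project's own OCR wrapper:

```python
# Minimal Tesseract sketch: extract each word together with its bounding box.
import pytesseract
from PIL import Image

data = pytesseract.image_to_data(
    Image.open("kiosk.jpg"), output_type=pytesseract.Output.DICT
)
for text, left, top, width, height in zip(
    data["text"], data["left"], data["top"], data["width"], data["height"]
):
    if text.strip():  # skip the empty entries Tesseract emits for layout blocks
        print(text, (left, top, width, height))
```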
Because of the lack of a training set, I recommend using the Naver OCR API.

## Run demonstration
Clone this project, move to the `python-demo` directory, and install the required pip modules by executing the following command:
```bash
pip install -r requirements.txt
```

Then execute:
```bash
python demonstration.py
```

For the web-interface demonstration:
```bash
python web-wrapper.py
```

### About web interface
Send a POST request to `/i_am_iron_man` with a form-data body, attaching your image under the key `image`.
The server will then respond with the recognized text.
Currently, the web interface is set to use the Naver OCR API, so you have to get API access (or switch to Tesseract) for OCR to work properly.
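For example, a client could call the endpoint like the sketch below. This assumes the server from `web-wrapper.py` is listening on `http://localhost:5000`; the actual host and port depend on how you run it, and the file name `kiosk.jpg` is just a placeholder.

```python
# Hypothetical client sketch for the web interface; adjust the URL to match
# how web-wrapper.py is actually served on your machine.
import requests

with open("kiosk.jpg", "rb") as image_file:
    response = requests.post(
        "http://localhost:5000/i_am_iron_man",  # assumed local address and port
        files={"image": image_file},            # form-data body, key "image"
    )

print(response.text)  # the recognized text returned by the server
```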