https://github.com/pupil-labs/real-time-screen-gaze
- Host: GitHub
- URL: https://github.com/pupil-labs/real-time-screen-gaze
- Owner: pupil-labs
- License: mit
- Created: 2023-05-11T12:07:07.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-12-09T11:36:08.000Z (5 months ago)
- Last Synced: 2025-04-13T05:45:22.224Z (12 days ago)
- Language: Python
- Size: 99.6 KB
- Stars: 7
- Watchers: 7
- Forks: 2
- Open Issues: 2
Metadata Files:
- Readme: README.rst
- Changelog: CHANGES.rst
- License: LICENSE
README
=====================
Real-time Screen Gaze
=====================
This package is designed to allow users of Pupil Labs eye tracking hardware, especially `Neon <https://pupil-labs.com/products/neon/>`_, to acquire screen-based gaze coordinates in real time without relying on `Pupil Core software <https://github.com/pupil-labs/pupil>`_.

This works by identifying the image of the display as it appears in the scene camera. We accomplish this with `AprilTags <https://april.eecs.umich.edu/software/apriltag>`_, 2D barcodes similar to QR codes. This package provides a ``marker_generator`` module to create AprilTag image data.
.. code-block:: python

    from pupil_labs.real_time_screen_gaze import marker_generator

    marker_pixels = marker_generator.generate_marker(marker_id=0)
More markers will yield higher accuracy, and we recommend a minimum of four. Each marker must be unique, and the ``marker_id`` parameter is provided for this purpose.
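For example, here's one way to generate four unique markers and save them as images for later display. This is a sketch rather than part of the package's documented workflow: it assumes ``generate_marker`` returns an 8-bit grayscale pixel array, and it uses Pillow (not a dependency of this package) to save the result:

.. code-block:: python

    from PIL import Image

    from pupil_labs.real_time_screen_gaze import marker_generator

    for marker_id in range(4):
        # Each on-screen marker needs a unique ID
        marker_pixels = marker_generator.generate_marker(marker_id=marker_id)

        # Scale the marker up with nearest-neighbor resampling so the tag's
        # edges stay sharp, then save it for display in your GUI toolkit
        image = Image.fromarray(marker_pixels)
        image = image.resize((128, 128), Image.NEAREST)
        image.save(f"marker_{marker_id}.png")

Note that AprilTags generally need a light margin around them to be detected reliably, so leave some white space around each marker when you draw it.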
Once you've drawn the markers to the screen using your GUI toolkit of choice, you'll next need to set up a ``GazeMapper`` object. This requires calibration data for the scene camera. For Neon, this is very simple:
.. code-block:: python

    from pupil_labs.realtime_api.simple import discover_one_device
    from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

    device = discover_one_device()
    calibration = device.get_calibration()
    gaze_mapper = GazeMapper(calibration)

For Pupil Invisible, you'll need to extract the ``scene_camera.json`` file from the Time Series Data of a recording that has been uploaded to Pupil Cloud. This method will also work with Neon recordings in a non-realtime context.
.. code-block:: python

    import json

    from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

    with open("scene_camera.json") as calibration_file:
        calibration_data = json.load(calibration_file)

    if "dist_coefs" in calibration_data:
        calibration_data["distortion_coefficients"] = calibration_data["dist_coefs"]

    calibration = {
        "scene_camera_matrix": [calibration_data["camera_matrix"]],
        "scene_distortion_coefficients": [calibration_data["distortion_coefficients"]],
    }

    gaze_mapper = GazeMapper(calibration)
Now that we have a ``GazeMapper`` object, we need to specify which AprilTag markers we're using and where they appear on the screen.
.. code-block:: python

    marker_verts = {
        0: [ # marker id 0
            (32, 32), # Top left marker corner
            (96, 32), # Top right
            (96, 96), # Bottom right
            (32, 96), # Bottom left
        ],
        ...
    }

    screen_size = (1920, 1080)

    screen_surface = gaze_mapper.add_surface(
        marker_verts,
        screen_size
    )

Here, ``marker_verts`` is a dictionary whose keys are the IDs of the markers we'll be drawing to the screen. The value for each key is a list of the 2D coordinates of the four corners of the marker, starting with the top left and going clockwise.
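If your markers are all the same size, a small helper can build these corner lists for you. This is an illustrative convenience, not part of the package; ``build_marker_verts`` and its arguments are hypothetical names:

.. code-block:: python

    def build_marker_verts(marker_positions, marker_size):
        """Build corner lists (clockwise from top left) for same-sized markers.

        marker_positions maps marker IDs to (x, y) top-left screen coordinates.
        """
        return {
            marker_id: [
                (x, y),                              # top left
                (x + marker_size, y),                # top right
                (x + marker_size, y + marker_size),  # bottom right
                (x, y + marker_size),                # bottom left
            ]
            for marker_id, (x, y) in marker_positions.items()
        }

    # Example: four 96px markers, one near each corner of a 1920x1080 screen
    marker_verts = build_marker_verts(
        {
            0: (32, 32),
            1: (1792, 32),
            2: (1792, 952),
            3: (32, 952),
        },
        marker_size=96,
    )

The corner order matters: as described above, ``GazeMapper`` expects the top left corner first, followed by the rest in clockwise order.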
With that, setup is complete and we're ready to start mapping gaze to the screen! On each iteration of our main loop we'll grab a video frame from the scene camera and gaze data from the Realtime API. We pass those along to our ``GazeMapper`` instance for processing, and it returns our gaze positions mapped to screen coordinates.
.. code-block:: python

    from pupil_labs.realtime_api.simple import discover_one_device

    device = discover_one_device(max_search_duration_seconds=10)

    while True:
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        result = gaze_mapper.process_frame(frame, gaze)

        for surface_gaze in result.mapped_gaze[screen_surface.uid]:
            print(f"Gaze at {surface_gaze.x}, {surface_gaze.y}")