{"id":23123743,"url":"https://github.com/nishimura5/behavior_senpai","last_synced_at":"2025-08-17T03:31:37.627Z","repository":{"id":197903648,"uuid":"699616844","full_name":"nishimura5/behavior_senpai","owner":"nishimura5","description":"A collection of Python scripts and applications for supporting quantitative behavioral observation using videos.","archived":false,"fork":false,"pushed_at":"2025-06-10T00:59:28.000Z","size":8046,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-06-10T01:35:15.566Z","etag":null,"topics":["behavior-coding","behavioral-coding","ethology","markerless-tracking","mediapipe","mmpose","observation","pose-estimation","python-application","ultralytics","yolov8"],"latest_commit_sha":null,"homepage":"https://doi.org/10.48708/7160651","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"agpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nishimura5.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-10-03T01:43:51.000Z","updated_at":"2025-06-09T08:31:04.000Z","dependencies_parsed_at":"2023-12-02T02:29:47.123Z","dependency_job_id":"4d25c57b-9969-45a0-a30b-3ac4f546a12a","html_url":"https://github.com/nishimura5/behavior_senpai","commit_stats":null,"previous_names":["nishimura5/python_senpai","nishimura5/behavior_senpai"],"tags_count":14,"template":false,"template_full_name":null,"purl":"pkg:github/nishimura5/behavior_senpai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nishimura5%2Fbehavior_senpai","tags_url":"https://r
epos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nishimura5%2Fbehavior_senpai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nishimura5%2Fbehavior_senpai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nishimura5%2Fbehavior_senpai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nishimura5","download_url":"https://codeload.github.com/nishimura5/behavior_senpai/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nishimura5%2Fbehavior_senpai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":270802912,"owners_count":24648668,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-17T02:00:09.016Z","response_time":129,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["behavior-coding","behavioral-coding","ethology","markerless-tracking","mediapipe","mmpose","observation","pose-estimation","python-application","ultralytics","yolov8"],"created_at":"2024-12-17T07:35:40.350Z","updated_at":"2025-08-17T03:31:37.604Z","avatar_url":"https://github.com/nishimura5.png","language":"Python","readme":"# Behavior Senpai v.1.5.1\n\n[pyproject]: https://github.com/nishimura5/behavior_senpai/blob/master/pyproject.toml\n[app_detect]: 
https://github.com/nishimura5/behavior_senpai/blob/master/src/app_detect.py\n[app_track_list]: https://github.com/nishimura5/behavior_senpai/blob/master/src/app_track_list.py\n[app_trajplot]: https://github.com/nishimura5/behavior_senpai/blob/master/src/app_trajplot.py\n[app_points_calc]: https://github.com/nishimura5/behavior_senpai/blob/master/src/app_points_calc.py\n[app_feat_mix]: https://github.com/nishimura5/behavior_senpai/blob/master/src/app_feat_mix.py\n[app_dimredu]: https://github.com/nishimura5/behavior_senpai/blob/master/src/app_dimredu.py\n[gui_parts]: https://github.com/nishimura5/behavior_senpai/blob/master/src/gui_parts.py\n[detector_proc]: https://github.com/nishimura5/behavior_senpai/blob/master/src/detector_proc.py\n[keypoint_toml]: https://github.com/nishimura5/behavior_senpai/tree/master/src/keypoint\n\n![ScreenShot](https://www.design.kyushu-u.ac.jp/~eigo/behavior_senpai_files/bs_capture_120.jpg)\n\nBehavior Senpai is an application that supports quantitative behavior observation in video-based observation methods. It converts video files into time-series coordinate data using keypoint detection AI, enabling quantitative analysis and visualization of human behavior.\nA distinctive feature of Behavior Senpai is that it lets users work with multiple AI models without writing any code.
\n\n The following AI image processing frameworks/models are supported by Behavior Senpai:\n- [YOLO11 Pose](https://docs.ultralytics.com/tasks/pose/)\n- [YOLOv8 Pose](https://github.com/ultralytics/ultralytics/issues/1915)\n- [MediaPipe Holistic](https://github.com/google/mediapipe/blob/master/docs/solutions/holistic.md)\n- [RTMPose Halpe26 (MMPose)](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose#26-keypoints)\n- [RTMPose WholeBody133 (MMPose)](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose#wholebody-2d-133-keypoints)\n\nBehavior Senpai performs pose estimation of a person in a video using an AI model selected by the user, and outputs time-series coordinate data.\n(These are variously referred to as \"pose estimation\", \"markerless motion capture\", \"landmark detection\", and so forth, depending on the intended purpose and application.)\n\nBehavior Senpai can import inference results (with .h5 extension) from [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut).\n\nBehavior Senpai is an open source software developed at [Faculty of Design, Kyushu University](https://www.design.kyushu-u.ac.jp/en/home/).\n\n## Requirement\n\nIn order to use Behavior Senpai, you need a PC that meets the following performance requirements. 
The functionality has been confirmed on Windows 11 (23H2).\n\n### When using CUDA\n\n - Disk space: 12GB or more\n - RAM: 16GB or more\n - Screen resolution: 1920x1080 or higher\n - GPU: RTX 3060 or better (and its [drivers](https://www.nvidia.com/download/index.aspx))\n\n### Without CUDA\n\nIf you do not have a CUDA-compatible GPU, only MediaPipe Holistic can be used.\n\n - Disk space: 8GB or more\n - RAM: 16GB or more\n - Screen resolution: 1920x1080 or higher\n\n## Usage\n\n### Download\n\nDownload [BehaviorSenpai151.zip](https://github.com/nishimura5/behavior_senpai/releases/download/v1.5.1/BehaviorSenpai151.zip)\n\n### Install\n\nRunning BehaviorSenpai.exe will start the application; if you want to use CUDA, check the \"Enable features using CUDA\" checkbox the first time you start the application and click the \"OK\" button.\n\nBehaviorSenpai.exe automates both building the Python environment with [uv](https://docs.astral.sh/uv/) and launching Behavior Senpai itself.\nThe initial setup by BehaviorSenpai.exe takes some time.\n
Please wait until the terminal (black screen) closes automatically.\n\n\u003cp align=\"center\"\u003e\n \u003ca href=\"https://youtu.be/0k8GA1DscKQ\"\u003e\n   \u003cimg width=\"30%\" alt=\"How to install Behavior Senpai\" src=\"https://img.youtube.com/vi/0k8GA1DscKQ/0.jpg\"\u003e\n \u003c/a\u003e\n\u003c/p\u003e\n\nTo uninstall Behavior Senpai or replace it with the latest version, delete the entire folder containing BehaviorSenpai.exe.\n\n## Keypoints\n\n### YOLO11 and YOLOv8\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"24%\" alt=\"Keypoints of body in YOLO\" src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/body_coco.png\"\u003e\n\u003c/p\u003e\n\n### RTMPose Halpe26\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"24%\" alt=\"Keypoints of body in RTMPose Halpe26\" src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/body_halpe26.png\"\u003e\n\u003c/p\u003e\n\n### RTMPose WholeBody133\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"50%\" alt=\"Keypoints of body and hands in RTMPose WholeBody133\" src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/body_wholebody133.png\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"60%\" alt=\"Keypoints of face in RTMPose WholeBody133\" src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/face_wholebody133.png\"\u003e\n\u003c/p\u003e\n\n### MediaPipe Holistic\n\nSee [here](https://storage.googleapis.com/mediapipe-assets/documentation/mediapipe_face_landmark_fullsize.png) for a document with all IDs.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"60%\" alt=\"Keypoints of face in Mediapipe Holistic\" src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/facemesh.png\"\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"60%\" alt=\"Keypoints of face in Mediapipe Holistic\" 
src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/facemesh2.png\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"50%\" alt=\"Keypoints of hands in Mediapipe Holistic\" src=\"https://github.com/nishimura5/behavior_senpai/blob/master/src/img/hands.png\"\u003e\n\u003c/p\u003e\n\n## Interface\n\n### Folder structure of data\n\n```\nobservation_jan31     \u003c------- Root directory\n├── ABC_cond1.MP4\n├── ABC_cond2.MP4\n├── XYZ_cond1.MOV\n├── XYZ_cond2.MOV\n├── calc\n│   ├── case1         \u003c------- calc_case subdirectory\n│   │   ├── ABC_cond1.feat\n│   │   └── XYZ_cond1.feat\n│   └── case2\n│       └── XYZ_cond_1.feat\n└── trk\n    ├── ABC_cond1.pkl\n    ├── ABC_cond2.pkl\n    ├── XYZ_cond1.pkl\n    ├── XYZ_cond2.pkl\n    └── backup\n        └── ABC_cond1.pkl\n```\n\n#### Root directory\n\nEach root directory (e.g., \"observation_jan31\") represents a single series of experiments and serves as the base level of the project's file organization. You can create multiple root directories for different experimental series, and they can be located anywhere on your computer where you have appropriate access permissions. When Behavior Senpai processes videos in a root directory, it automatically creates two essential subdirectories within it: trk/ and calc/. These subdirectory names are fixed and must not be modified under any circumstances.\n\n#### Generated files\n\nBehavior Senpai first generates tracking files (.pkl) from the video files and stores them in the trk directory. These files preserve the video's base name. After tracking files are generated, users can define calc_case names (e.g., \"case1\", \"case2\") within Behavior Senpai. Feature files are then stored in these user-defined calc_case subdirectories, inheriting the base name of their source video.\n\n#### Operational guidelines\n\nAll videos from a given experimental series should be placed directly in their corresponding root directory. 
Creating subdirectories for different experimental conditions or participants is not recommended. When naming video files, we recommend either keeping the original camera-generated file names (such as GX010001.MP4) or using custom names that are meaningful for your project management. It is crucial not to rename video files after processing has begun, as all generated files inherit the base name of the source video. Renaming source videos after processing will break these file relationships.\n\n### Track file\n\nThe time-series coordinate data resulting from keypoint detection in app_detect.py is stored in a [Pickled Pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_pickle.html). Behavior Senpai refers to this data as a \"Track file\". The Track file is saved in the \"trk\" folder, which is created in the same directory as the video file on which keypoint detection was performed.\nThe Track file holds time-series coordinate data in a 3-level multi-index format. The indexes are designated as \"frame\", \"member\", and \"keypoint\", starting from level 0. \"Frame\" is an integer, starting from 0, corresponding to the frame number of the video. \"Member\" and \"keypoint\" are the identifiers of keypoints detected by the model. The Track file always contains three columns: \"x,\" \"y,\" and \"timestamp.\" \"X\" and \"y\" are in pixels, while \"timestamp\" is in milliseconds.\n\nAn illustrative example of a DataFrame stored in a Track file is presented below. Note that the DataFrame may include additional columns such as 'z' and 'conf', depending on the specifications of the AI model.\n\n|  |  |  | x | y | timestamp |\n| - | - | - | - | - | - |\n| frame | member | keypoint |  |  |  |\n| 0 | 1 | 0 | 1365.023560 | 634.258484 | 0.0 |\n|  |  | 1 | 1383.346191 | 610.686951 | 0.0 |\n|  |  | 2 | 1342.362061 | 621.434998 | 0.0 |\n|  |  | ... | ... | ... | ... 
|\n|  |  | 16 | 1417.897583 | 893.739258 | 0.0 |\n|  | 2 | 0 | 2201.367920 | 846.174194 | 0.0 |\n|  |  | 1 | 2270.834473 | 1034.986328 | 0.0 |\n|  |  | ... | ... | ... | ... |\n|  |  | 16 | 2328.100098 | 653.919312 | 0.0 |\n| 1 | 1 | 0 | 1365.023560 | 634.258484 | 33.333333 |\n|  |  | 1 | 1383.346191 | 610.686951 | 33.333333 |\n|  |  | ... | ... | ... | ... |\n\n### Feature file\n\nBehaviorSenpai saves calculated features based on Track file data to Feature files. Feature files are created in HDF5 format with the .feat extension.\nFeature files store data calculated by [app_points_calc.py][app_points_calc], [app_trajplot.py][app_trajplot], and [app_feat_mix.py][app_feat_mix] in the format shown in the table below:\n\n|       |        | feat_1   | feat_2   | timestamp |\n| ----- | ------ | -------- | -------- | --------- |\n| frame | member |          |          |           |\n| 0     | 1      | NaN      | 0.050946 | 0.000000  |\n| 0     | 2      | 0.065052 | 0.049657 | 0.000000  |\n| 1     | 1      | NaN      | 0.064225 | 16.683333 |\n| 1     | 2      | 0.050946 | 0.050946 | 16.683333 |\n| 2     | 1      | NaN      | 0.065145 | 33.366667 |\n| 2     | 2      | 0.061077 | 0.068058 | 33.366667 |\n| 3     | 1      | NaN      | 0.049712 | 50.050000 |\n| 3     | 2      | 0.052715 | 0.055282 | 50.050000 |\n|       | ...    | ...      | ...      | ...       |\n\nAdditionally, data calculated by [app_dimredu.py][app_dimredu] is stored in the format shown in the table below. 
All of this data is handled as Pandas DataFrames, with the time-series data in a 2-level multi-index format: the indices are designated \"frame\" and \"member\", and the columns include a \"timestamp\".\n\n|       |        | class | cat_1 | cat_2 | timestamp |\n| ----- | ------ | ----- | ----- | ----- | --------- |\n| frame | member |       |       |       |           |\n| 0     | 1      | 0.0   | True  | False | 0.000000  |\n| 0     | 2      | 1.0   | False | True  | 0.000000  |\n| 1     | 1      | 0.0   | True  | False | 16.683333 |\n| 1     | 2      | 1.0   | False | True  | 16.683333 |\n| 2     | 1      | 0.0   | True  | False | 33.366667 |\n| 2     | 2      | 1.0   | False | True  | 33.366667 |\n| 3     | 1      | 0.0   | True  | False | 50.050000 |\n| 3     | 2      | 1.0   | False | True  | 50.050000 |\n|       | ...    | ...   | ...   | ...   | ...       |\n\n### Security considerations\n\nAs mentioned above, Behavior Senpai handles pickle-format files. Because of the security risks associated with the pickle format, please only open files that you trust (for example, do not open files of unknown origin published on the Internet). See [here](https://docs.python.org/3/library/pickle.html) for more information.\n\n### Keypoint definition file\n\nThis section describes the TOML configuration file that is generated in the [keypoint folder][keypoint_toml] when importing DeepLabCut's keypoint detection results (.h5 files) into the Behavior Senpai application.\n\n#### [keypoints] Section\nDefines the mapping between keypoint names and their IDs, along with display colors for visualization. While keypoint names are used in the configuration file, Behavior Senpai's GUI displays only the numeric IDs.\n
When importing results from DeepLabCut, keypoint names in the .h5 file are automatically converted to corresponding IDs within Behavior Senpai.\n\n#### [draw] Section\nSpecifies the rules for drawing keypoints when displaying on screen or generating video output. This section controls which elements are visible and how they are rendered in the visualization.\n\n#### [bones] Section\nDefines the rules for drawing lines between keypoints when displaying on screen or generating video output. These lines typically represent connections between detected keypoints to visualize the overall structure.\n\n### Annotated Video file\n\nBehavior Senpai can output videos in mp4 format with detected keypoints drawn on them.\n\n### Temporary file\n\nThe application's settings and the path of the most recently loaded Track file are saved as a Pickled dictionary. The file name is \"temp.pkl\". If this file does not exist, the application automatically generates it (using default values). To reset the settings, delete the \"temp.pkl\" file. The Temporary file is managed by [gui_parts.py][gui_parts].\n\n## Citation\n\nPlease acknowledge and cite the use of this software and its authors when results are used in publications or published elsewhere.\n\n```\nNishimura, E. (2025). Behavior Senpai (Version 1.5) [Computer software]. Kyushu University, https://doi.org/10.48708/7160651\n```\n\n```\n@misc{behavior-senpai-software,\n  title = {Behavior Senpai},\n  author = {Nishimura, Eigo},\n  year = {2025},\n  publisher = {Kyushu University},\n  doi = {10.48708/7160651},\n  note = {Available at: \\url{https://hdl.handle.net/2324/7160651}},\n}\n```\n\n### Related Documents\n[Nishimura, E. Feature-based behavior coding for efficient exploratory analysis using pose estimation. Behav Res 57, 167 (2025). 
https://doi.org/10.3758/s13428-025-02702-6](https://link.springer.com/article/10.3758/s13428-025-02702-6)\n\n[Sample Videos for Behavioral Observation Using Keypoint Detection Technology](https://hdl.handle.net/2324/7172619)\n\n[Quantitative Behavioral Observation Using Keypoint Detection Technology:\n Towards the Development of a New Behavioral Observation Method through Video Imagery (Japanese article)](https://hdl.handle.net/2324/7170833)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnishimura5%2Fbehavior_senpai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnishimura5%2Fbehavior_senpai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnishimura5%2Fbehavior_senpai/lists"}