{"id":18496610,"url":"https://github.com/mouseland/facemap","last_synced_at":"2025-04-08T11:09:01.186Z","repository":{"id":30713094,"uuid":"68824635","full_name":"MouseLand/facemap","owner":"MouseLand","description":"Framework for predicting neural activity from mouse orofacial movements tracked using a pose estimation model. Package also includes singular value decomposition (SVD) of behavioral videos.","archived":false,"fork":false,"pushed_at":"2024-12-05T16:58:06.000Z","size":127272,"stargazers_count":165,"open_issues_count":12,"forks_count":63,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-04-01T10:09:46.185Z","etag":null,"topics":["behavior","deep-learning","gui","matlab-gui","movie","neuroscience","pose-estimation","pupil","python","pytorch","rodents"],"latest_commit_sha":null,"homepage":"https://facemap.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MouseLand.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-09-21T14:21:25.000Z","updated_at":"2025-03-31T14:03:59.000Z","dependencies_parsed_at":"2025-01-05T12:03:08.833Z","dependency_job_id":"4966db3d-2ed4-44a3-bf04-46d1d63c3d2b","html_url":"https://github.com/MouseLand/facemap","commit_stats":null,"previous_names":[],"tags_count":10,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MouseLand%2Ffacemap","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MouseLand%2Ffacemap/tags","releases_url":"https://repos.ecosyste.ms/api/
v1/hosts/GitHub/repositories/MouseLand%2Ffacemap/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MouseLand%2Ffacemap/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MouseLand","download_url":"https://codeload.github.com/MouseLand/facemap/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247829491,"owners_count":21002995,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["behavior","deep-learning","gui","matlab-gui","movie","neuroscience","pose-estimation","pupil","python","pytorch","rodents"],"created_at":"2024-11-06T13:30:15.018Z","updated_at":"2025-04-08T11:09:01.177Z","avatar_url":"https://github.com/MouseLand.png","language":"Python","readme":"[![Downloads](https://static.pepy.tech/badge/facemap)](https://pepy.tech/project/facemap)\n[![Downloads](https://static.pepy.tech/badge/facemap/month)](https://pepy.tech/project/facemap)\n[![GitHub stars](https://badgen.net/github/stars/Mouseland/facemap)](https://github.com/MouseLand/facemap/stargazers)\n[![GitHub forks](https://badgen.net/github/forks/Mouseland/facemap)](https://github.com/MouseLand/facemap/network/members)\n[![](https://img.shields.io/github/license/MouseLand/facemap)](https://github.com/MouseLand/facemap/blob/main/LICENSE)\n[![PyPI version](https://badge.fury.io/py/facemap.svg)](https://badge.fury.io/py/facemap)\n[![Documentation Status](https://readthedocs.org/projects/ansicolortags/badge/?version=latest)](https://pypi.org/project/facemap/)\n[![GitHub open 
issues](https://badgen.net/github/open-issues/Mouseland/facemap)](https://github.com/MouseLand/facemap/issues)\n\n# Facemap \u003cimg src=\"https://raw.githubusercontent.com/MouseLand/facemap/main/facemap/mouse.png\" width=\"300\" title=\"facemap\" alt=\"facemap\" align=\"right\" vspace = \"50\"\u003e\n\nFacemap is a framework for predicting neural activity from mouse orofacial movements. It includes a pose estimation model for tracking distinct keypoints on the mouse face, a neural network model for predicting neural activity using the pose estimates, and can also be used to compute the singular value decomposition (SVD) of behavioral videos.\n\nPlease find the detailed documentation at **[facemap.readthedocs.io](https://facemap.readthedocs.io/en/latest/index.html)**.\n\nTo learn about Facemap, read the [paper](https://www.nature.com/articles/s41593-023-01490-6) or check out the tweet [thread](https://twitter.com/Atika_Ibrahim/status/1588885329951367168?s=20\u0026t=AhE3vBTnCvW36QiTyhu0qQ). For support, please open an [issue](https://github.com/MouseLand/facemap/issues).\n\n- To install the latest released version (from PyPI), which includes SVD processing only, run `pip install facemap` for the headless version or `pip install facemap[gui]` for the GUI. Note: the PyPI release does not yet include the latest tracker and neural model; to use those, install with `pip install git+https://github.com/mouseland/facemap.git`\n\n### CITATION\n\n**If you use Facemap, please cite the Facemap [paper](https://www.nature.com/articles/s41593-023-01490-6):**   \nSyeda, A., Zhong, L., Tung, R., Long, W., Pachitariu, M.\\*, \u0026 Stringer, C.\\* (2024). Facemap: a framework for modeling neural activity based on orofacial tracking. 
\u003cem\u003eNature Neuroscience\u003c/em\u003e, 27(1), 187-195.\n[[bibtex](https://scholar.googleusercontent.com/scholar.bib?q=info:ckbIvC5D_FsJ:scholar.google.com/\u0026output=citation\u0026scisdr=ClF-mOb-EMjM4mZZ21s:AFWwaeYAAAAAZcpfw1vc6bUrQR0LDQdzaTPXbO8\u0026scisig=AFWwaeYAAAAAZcpfw5Aeocyxj1cWqLJIgPajziE\u0026scisf=4\u0026ct=citation\u0026cd=-1\u0026hl=en)]\n\n**If you use the SVD computation or pupil tracking components, please also cite our previous [paper](https://www.nature.com/articles/s41592-022-01663-4):**  \nStringer, C.\\*, Pachitariu, M.\\*, Steinmetz, N., Reddy, C. B., Carandini, M., \u0026 Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. \u003cem\u003eScience, 364\u003c/em\u003e(6437), eaav7893.\n[[bibtex](https://scholar.googleusercontent.com/scholar.bib?q=info:DNVOkEas4K8J:scholar.google.com/\u0026output=citation\u0026scisdr=CgXHFLYtEMb9qP1Bt0Q:AAGBfm0AAAAAY3JHr0TJourtY6W2vbjy7opKXX2jOX9Z\u0026scisig=AAGBfm0AAAAAY3JHryiZnvgWM1ySwd_xQ9brvQxH71UM\u0026scisf=4\u0026ct=citation\u0026cd=-1\u0026hl=en\u0026scfhb=1)]\n\nThe MATLAB version of the GUI is no longer supported (see old [documentation](https://github.com/MouseLand/facemap/blob/main/docs/svd_matlab_tutorial.md)).\n\n### Disclaimer\nThe outputs of Facemap have only been validated on macos-12 and earlier versions; newer macOS versions may give different or incorrect output, so we advise using macos-12 for Facemap until the issue is resolved.\n\n### Logo\nThe logo was designed by Atika Syeda and [Tzuhsuan Ma](https://github.com/tzhma).\n\n### Video tutorial \nPlease follow the [video tutorial](https://www.youtube.com/watch?v=aO_kXkOuadg) for instructions on how to use Facemap or read the instructions below. \n\n## Installation\n\nIf you have an older `facemap` environment you can remove it with `conda env remove -n facemap` before creating a new one.\n\nIf you are using a GPU, make sure its drivers and the CUDA libraries are correctly installed.\n\n1. 
Install an [Anaconda](https://www.anaconda.com/products/distribution) distribution of Python. Note that you might need to use an Anaconda prompt if you did not add Anaconda to the path.\n2. Open an Anaconda prompt / command prompt which has `conda` for **python 3** in the path\n3. Create a new environment with `conda create --name facemap python=3.8`. We recommend python 3.8, but python 3.9 and 3.10 will likely work as well.\n4. To activate this new environment, run `conda activate facemap`\n5. To install the minimal version of facemap, run `python -m pip install facemap`.  \n6. To install facemap and the GUI, run `python -m pip install facemap[gui]`. If you are using the zsh shell, you may need to use quotes around the facemap[gui] call: `python -m pip install 'facemap[gui]'`.\n\nTo upgrade facemap (package [here](https://pypi.org/project/facemap/)), run the following in the environment:\n\n~~~sh\npython -m pip install facemap --upgrade\n~~~\n\nNote that you will always have to run `conda activate facemap` before you run facemap. If you want to run jupyter notebooks in this environment, then also run `pip install notebook` and `python -m pip install matplotlib`.\n\nYou can also try to install facemap and the GUI dependencies from your base environment using the command\n\n~~~~sh\npython -m pip install facemap[gui]\n~~~~\n\nIf you have **issues** with installation, see the [docs](https://github.com/MouseLand/facemap/blob/dev/docs/installation.md) for more details. 
You can also use the facemap environment file included in the repository and create a facemap environment with `conda env create -f environment.yml` which may solve certain dependency issues.\n\nIf these suggestions fail, open an issue.\n\n### GPU version (CUDA) on Windows or Linux\n\nIf you plan on processing many videos, you may want to install a GPU version of *torch* (if it isn't already installed).\n\nBefore installing the GPU version, remove the CPU version:\n~~~\npip uninstall torch\n~~~\n\nFollow the instructions [here](https://pytorch.org/get-started/locally/) to determine which version to install. The Anaconda install is strongly recommended, and then choose the CUDA version that is supported by your GPU (newer GPUs may need newer CUDA versions \u003e 10.2). For instance, this command will install the 11.3 version on Linux and Windows (note that `torchvision` and `torchaudio` are omitted because facemap doesn't require them):\n\n~~~\nconda install pytorch==1.12.1 cudatoolkit=11.3 -c pytorch\n~~~\n\nand this will install the 11.7 toolkit:\n\n~~~\nconda install pytorch pytorch-cuda=11.7 -c pytorch\n~~~\n\n## Supported videos\nFacemap supports grayscale and RGB movies. The software can process multi-camera videos for pose tracking and SVD analysis. Please see [example movies](https://drive.google.com/open?id=1cRWCDl8jxWToz50dCX1Op-dHcAC-ttto) for testing the GUI. Supported movie file extensions include:\n\n'.mj2','.mp4','.mkv','.avi','.mpeg','.mpg','.asf'\n\nFor more details, please refer to the [data acquisition page](https://github.com/MouseLand/facemap/blob/main/docs/data_acquisition.md).\n\n## Support\n\nFor any issues or questions about Facemap, please [open an issue](https://github.com/MouseLand/facemap/issues). Please find solutions to some common issues below:\n\n### Download of pretrained models\nThe models will be downloaded automatically from our website when you first run Facemap for processing keypoints. 
If download of the pretrained models fails, please try the following:\n\n- to resolve a certificate error, try: ```pip install --upgrade certifi```, or\n- download the pretrained model files: [model_params](https://osf.io/download/67f00beaba4331d9888b7f36/), [model_state](https://osf.io/download/67f00be8959068ade6cf70f1/) and place them in the `models` subfolder of the hidden `facemap` folder located in your home directory. The path to the hidden folder is: `C:\\Users\\your_username\\.facemap\\models` on Windows and `/home/your_username/.facemap/models` on Linux and Mac. \n\n# Running Facemap\n\nTo get started, run the following command in a terminal to open the GUI:\n\n```\npython -m facemap\n```\n\nClick \"File\" and load a single video file (\"Load video\"), or click \"Load multiple videos\" to choose a folder from which you can select movies to run. The video(s) will pop up on the left side of the GUI. You can zoom in and out with the mouse wheel, and you can drag by holding down the mouse. Double-click to return to the original, full view.\n\nNext, you can extract information from the videos, such as tracking keypoints, computing movie SVDs, or tracking pupil size. You can also load in neural activity and predict it from these extracted features.\n\n## I. Pose tracking\n\n\u003cimg src=\"https://raw.githubusercontent.com/MouseLand/facemap/main/figs/facemap.gif\" width=\"100%\" height=\"470\" title=\"Tracker\" alt=\"tracker\" align=\"middle\" vspace = \"10\"\u003e\n\nFacemap provides a trained network for tracking distinct keypoints on the mouse face from different camera views (some examples shown below). Check the `keypoints` box then click `process`. Next, a bounding box will appear -- focus this on the face as shown below. The processed keypoints `*.h5` file will then be saved in the output folder along with the corresponding metadata file `*.pkl`.\n\nKeypoints will be predicted in the selected bounding box region, so please ensure the bounding box focuses on the face. 
See example frames [here](figs/mouse_views.png). \n\nFor more details on using the tracker, please refer to the [GUI Instructions](https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md). Check out the [notebook](https://github.com/MouseLand/facemap/blob/main/docs/notebooks/process_keypoints.ipynb) for processing keypoints in Colab.\n\n\u003cp float=\"middle\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/MouseLand/facemap/main/figs/mouse_face1_keypoints.png\"  width=\"310\" height=\"290\" title=\"View 1\" alt=\"view1\" align=\"left\" vspace = \"10\" hspace=\"30\" style=\"border: 0.5px solid white\"  /\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/MouseLand/facemap/main/figs/mouse_face0_keypoints.png\" width=\"310\" height=\"290\" title=\"View 2\" alt=\"view2\" align=\"right\" vspace = \"10\" style=\"border: 0.5px solid white\"\u003e\n\u003c/p\u003e\n\n### 📢 User contributions 📹 📷\nFacemap aims to provide a simple and easy-to-use tool for tracking mouse orofacial movements. The tracker's performance on new datasets could be further improved by expanding our training set. You can contribute to the model by sharing videos/frames at the following email address(es): `asyeda1[at]jh.edu` or `stringerc[at]janelia.hhmi.org`.\n\n## II. ROI and SVD processing\n\nFacemap allows pupil tracking, blink tracking and running estimation, see more details **here**. Facemap can also compute the singular value decomposition (SVD) of ROIs on single and multi-camera videos. SVD analysis can be performed across static frames, called movie SVD (`movSVD`), to extract the spatial components of the video, or over the difference between consecutive frames, called motion SVD (`motSVD`), to extract its temporal components. The first 500 principal components from the SVD analysis are saved as output along with other variables.\n\nYou can draw ROIs to compute the motion/movie SVD within the ROI, and/or compute the full video SVD by checking `multivideo`. 
Then check `motSVD` and/or `movSVD` and click `process`. The processed SVD `*_proc.npy` (and optionally `*_proc.mat`) file will be saved in the selected output folder.\n\nFor more details, see the [SVD python tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md) or the [SVD MATLAB tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_matlab_tutorial.md).\n\n([video](https://www.youtube.com/watch?v=Rq8fEQ-DOm4) with old install instructions)\n\n\u003cimg src=\"https://github.com/MouseLand/facemap/raw/main/figs/face_fast.gif\" width=\"100%\" alt=\"face gif\"\u003e\n\n## III. Neural activity prediction\n\nFacemap includes a deep neural network encoding model for predicting neural activity, or principal components of neural activity, from mouse orofacial pose estimates extracted using the tracker or from SVDs. \n\nThe encoding model used for prediction is illustrated below:\n\u003cp float=\"middle\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/MouseLand/facemap/main/figs/encoding_model.png\"  width=\"70%\" height=\"300\" title=\"neural model\" alt=\"neural model\" align=\"center\" vspace = \"10\" hspace=\"30\" style=\"border: 0.5px solid white\"  /\u003e\n\u003c/p\u003e\n\nPlease see the neural activity prediction [tutorial](https://github.com/MouseLand/facemap/blob/main/docs/neural_activity_prediction_tutorial.md) for more details.\n","funding_links":[],"categories":["End-User Applications"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmouseland%2Ffacemap","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmouseland%2Ffacemap","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmouseland%2Ffacemap/lists"}