<head>
    <link rel="icon" type="image/png" href="https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/icon.svg">
</head>

<div style="text-align:center;">
    <img src="https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/stone-logo.png" alt="stone logo">
    <img
src="https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/illustration.svg" alt="model illustration">
</div>

![PyPI - Python Version](https://img.shields.io/pypi/pyversions/skin-tone-classifier)
[![PyPI](https://img.shields.io/pypi/v/skin-tone-classifier)](https://pypi.org/project/skin-tone-classifier/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/skin-tone-classifier)](https://pypi.org/project/skin-tone-classifier/)
[![PyPI Downloads](https://static.pepy.tech/badge/skin-tone-classifier)](https://pepy.tech/projects/skin-tone-classifier)
[![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ChenglongMa/SkinToneClassifier?include_prereleases)](https://github.com/ChenglongMa/SkinToneClassifier/releases/latest)
[![GitHub License](https://img.shields.io/github/license/ChenglongMa/SkinToneClassifier)](https://github.com/ChenglongMa/SkinToneClassifier/blob/main/LICENSE)
[![youtube](https://img.shields.io/badge/YouTube-Skin_Tone_Classifier-FF0000?logo=youtube)](https://youtube.com/playlist?list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&si=ezxUR7McUbZa4clT)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1k-cryEZ9PInJRXWIi17ib66ufYV2Ikwe?usp=sharing)
[![GitHub Repo stars](https://img.shields.io/github/stars/ChenglongMa/SkinToneClassifier)](https://github.com/ChenglongMa/SkinToneClassifier)

An easy-to-use library for skin tone classification.

It can be used to detect the **face** or **skin area** in the specified images.
The detected skin tones are then classified into the specified color categories.
Finally, the library generates results reporting the detected faces (if any),
their dominant skin tones, and the matching color categories.

Check out the [Changelog](https://github.com/ChenglongMa/SkinToneClassifier/blob/main/CHANGELOG.md) for the latest updates.

*If you find this project helpful, please
consider [giving it a star](https://github.com/ChenglongMa/SkinToneClassifier)* ⭐. *It would be a great encouragement
for me!*

---

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**

- [Showcases](#showcases)
  - [PERLA Palette (default)](#perla-palette-default)
  - [YADON-OSTFELD Palette](#yadon-ostfeld-palette)
  - [PRODER Palette](#proder-palette)
- [Video tutorials](#video-tutorials)
  - [Playlist](#playlist)
  - [1. How to install Python and `stone`](#1-how-to-install-python-and-stone)
  - [2. Use `stone` in GUI mode](#2-use-stone-in-gui-mode)
  - [3. Use `stone` in CLI mode](#3-use-stone-in-cli-mode)
  - [4. Use `stone` in Python scripts](#4-use-stone-in-python-scripts)
- [Installation](#installation)
  - [Install from pip](#install-from-pip)
    - [Install the CLI mode only](#install-the-cli-mode-only)
    - [Install the CLI mode and the GUI mode](#install-the-cli-mode-and-the-gui-mode)
  - [Install from source](#install-from-source)
- [HOW TO USE](#how-to-use)
  - [Quick Start](#quick-start)
    - [Use `stone` in a GUI](#use-stone-in-a-gui)
    - [Use `stone` in command line interface (CLI)](#use-stone-in-command-line-interface-cli)
    - [Interpretation of the table](#interpretation-of-the-table)
  - [Detailed Usage](#detailed-usage)
    - [Use Cases](#use-cases)
      - [1. Process multiple images](#1-process-multiple-images)
      - [2. Specify color palette](#2-specify-color-palette)
      - [3. Specify category labels](#3-specify-category-labels)
      - [4. Specify output folder](#4-specify-output-folder)
      - [5. Store report images for debugging](#5-store-report-images-for-debugging)
      - [6. Specify the types of the input image(s)](#6-specify-the-types-of-the-input-images)
      - [7.
Convert the `color` images to `black/white` images](#7-convert-the-color-images-to-blackwhite-images)
      - [8. Tune parameters of face detection](#8-tune-parameters-of-face-detection)
      - [9. Multiprocessing settings](#9-multiprocessing-settings)
      - [10. Used as a library by importing into other projects](#10-used-as-a-library-by-importing-into-other-projects)
      - [11. Used in a FAST API project](#11-used-in-a-fast-api-project)
- [Citation](#citation)
- [Contributing](#contributing)
- [Disclaimer](#disclaimer)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

# Showcases

The following are some examples of the classification results using different palettes.

## PERLA Palette (default)

![PERLA Palette](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-perla.svg)

![PERLA](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo-perla.png)

## YADON-OSTFELD Palette

![YADON-OSTFELD Palette](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-yadon-ostfeld.svg)

![YADON-OSTFELD](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo-yadon-ostfeld.png)

## PRODER Palette

![PRODER Palette](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-proder.svg)

![PRODER](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo-proder.png)

# Video tutorials

[![youtube](https://img.shields.io/badge/YouTube-Skin_Tone_Classifier-FF0000?logo=youtube)](https://youtube.com/playlist?list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&si=ezxUR7McUbZa4clT)

Please see the following video tutorials if you have no programming background or are unfamiliar with Python and this library 💖

## Playlist

[![playlist](https://img.youtube.com/vi/vu6whI0qcmU/0.jpg)](https://youtube.com/playlist?list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&si=ezxUR7McUbZa4clT)

<details markdown="1">
  <summary><i>Click here to show more.</i></summary>

## 1. How to install Python and `stone`

[![YouTube Video Views](https://img.shields.io/youtube/views/vu6whI0qcmU)](https://www.youtube.com/watch?v=vu6whI0qcmU&list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&index=1)

[![installation](https://img.youtube.com/vi/vu6whI0qcmU/0.jpg)](https://www.youtube.com/watch?v=vu6whI0qcmU&list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&index=1)

## 2. Use `stone` in GUI mode

[![YouTube Video Views](https://img.shields.io/youtube/views/08apMEogZgs)](https://www.youtube.com/watch?v=08apMEogZgs&list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&index=2)

[![use gui mode](https://img.youtube.com/vi/08apMEogZgs/0.jpg)](https://www.youtube.com/watch?v=08apMEogZgs&list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&index=2)

## 3. Use `stone` in CLI mode

[![YouTube Video Views](https://img.shields.io/youtube/views/rqJ62DijQaw)](https://www.youtube.com/watch?v=rqJ62DijQaw&list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&index=3)

[![use cli mode](https://img.youtube.com/vi/rqJ62DijQaw/0.jpg)](https://www.youtube.com/watch?v=rqJ62DijQaw&list=PLYRpHlp-9V_E5ZLhW1hbNaVjS5Zg6b6kQ&index=3)

## 4. Use `stone` in Python scripts

Please refer to this notebook [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1k-cryEZ9PInJRXWIi17ib66ufYV2Ikwe?usp=sharing) for more information.

_More videos are coming soon..._

</details>

# Installation

> [!TIP]
>
> Since v1.2.3, we have made the GUI mode **optional**.

## Install from pip

### Install the CLI mode only

```shell
pip install skin-tone-classifier --upgrade
```

This is useful for users who want to use the library in non-GUI environments, e.g., servers or [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1k-cryEZ9PInJRXWIi17ib66ufYV2Ikwe?usp=sharing).

### Install the CLI mode and the GUI mode

```shell
pip install skin-tone-classifier[all] --upgrade
```

This is useful for users who are not familiar with the command line interface and want to use the GUI mode.

## Install from source

```shell
git clone git@github.com:ChenglongMa/SkinToneClassifier.git
cd SkinToneClassifier
pip install -e . --verbose
```

> [!TIP]
>
> If you encounter the following problem:
>
> [`ImportError: DLL load failed while importing _core: The specified module could not be found`](https://stackoverflow.com/q/52306805/8860079)
>
> please download and install the **Visual C++ Redistributable** from [here](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170#visual-studio-2015-2017-2019-and-2022).
> This should resolve the error.

# HOW TO USE

> [!TIP]
>
> You can combine the following documents, [the video tutorials above](#video-tutorials),
> and the running examples [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1k-cryEZ9PInJRXWIi17ib66ufYV2Ikwe?usp=sharing)
> to understand the usage of this library more intuitively.

## Quick Start

### Use `stone` in a GUI

✨ Since v1.2.0, we have provided a GUI version of `stone` for users who are not familiar with the command line
interface.

![stone GUI](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/gui.png)

Instead of typing commands in the terminal, you can use the config GUI of `stone` to process the images.

Steps:

1. Open a terminal that can run `stone` (e.g., `PowerShell` on Windows or `Terminal` on macOS).
2. Type `stone` (without any parameters) or `stone --gui` and press <kbd>Enter</kbd> to open the GUI.
3. Specify the parameters in each tab.
4. Click the `Start` button to start processing the images.

Hopefully, this makes it easier for you to use `stone` 🍻!

> [!TIP]
>
> 1. It is recommended to install v1.2.3+, which supports Python 3.9+.
>
>    If you have installed v1.2.0, please upgrade to v1.2.3+ by running
>
>    `pip install skin-tone-classifier[all] --upgrade`
>
> 2. If you encounter the following problem:
>    > This program needs access to the screen. Please run with a Framework
>    > build of python, and only when you are logged in on the main display
>    > of your Mac.
>
>    Please launch the GUI by running `pythonw -m stone` in the terminal.
>    References:
>       * [stackoverflow](https://stackoverflow.com/a/52732858/8860079)
>       * [python-using-mac](https://docs.python.org/3/using/mac.html)

### Use `stone` in command line interface (CLI)

To detect the skin tone in a portrait, e.g.,

<div align="center">
   <img src="https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo.png"  alt="Demo picture" style="display: block; margin: auto"/>
</div>

just run:

```shell
stone -i /path/to/demo.png --debug
```

Then, you can find the processed image in the `./debug/color/faces_1` folder, e.g.,

<div align="center">
   <img src="https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo-1.png"  alt="processed demo picture" style="display: block; margin: auto"/>
</div>

In this image, from left to right, you can find the following information:

1. the detected face with a label (*Face 1*), enclosed by a rectangle.
2. the dominant colors.
    1. _The number of colors depends on the settings (default is 2), and their sizes depend on their proportions._
3. the specified color palette, with the target label enclosed by a rectangle.
4. a summary text at the bottom.

Furthermore, there will be a report file named `result.csv` which contains more detailed information, e.g.,

| file     | image type | face id | dominant 1 | percent 1 | dominant 2 | percent 2 | skin tone | tone label | accuracy(0-100) |
|----------|------------|---------|------------|-----------|------------|-----------|-----------|------------|-----------------|
| demo.png | color      | 1       | #C99676    | 0.67      | #805341    | 0.33      | #9D7A54   | CF         | 86.27           |

### Interpretation of the table

1. `file`: the filename of the processed image.
    * **NB: The filename pattern of a report image is `<file>-<face id>.<extension>`**
2. `image type`: the type of the processed image, i.e., `color` or `bw` (black/white).
3. `face id`: the id of the detected face, which matches the report image. `NA` means no face has been detected.
4. `dominant n`: the `n`-th dominant color of the detected face.
5. `percent n`: the percentage of the `n`-th dominant color, (0~1.0).
6. `skin tone`: the skin tone category of the detected face.
7. `tone label`: the **label** of the skin tone category of the detected face.
8. `accuracy`: the accuracy of the skin tone category of the detected face, (0~100).
The larger, the better.

## Detailed Usage

To see the usage and parameters, run:

```shell
stone -h (or --help)
```

Detailed usage:

| Short Option | Long Option | Definition |
|--------------|-------------|------------|
| -h | --help | Show this help message and exit. |
| -i | --images | Image filename(s) or URL(s) to process. <br>Supports multiple values separated by **space**, e.g., `a.jpg b.png`. <br>Supports directory or file name(s), e.g., `./path/to/images/ a.jpg`. <br>Supports URL(s), e.g., `https://example.com/images/pic.jpg`, since v1.1.0+. <br>If you don't specify this option, the app will search all images in the current directory by default. |
| -r | --recursive | Whether to search images **recursively** in the specified directory. |
| -t | --image_type | Specify whether the input image(s) is/are **colored** or **black/white**. <br>Valid choices are: `auto`, `color`, or `bw`. <br>Defaults to `auto`, i.e., the type is detected **automatically**. |
| -p | --palette | Skin tone palette. <br>Valid choices are `perla`, `yadon-ostfeld`, `proder`; <br>you can also input RGB **hex** values starting with `#`, <br>or **RGB** values separated by **commas**, <br>e.g., `-p #373028 #422811` or `-p 255,255,255 100,100,100`. |
| -l | --labels | Skin tone labels. <br>Default values are the **UPPERCASE** alphabet list prefixed by the image type <br>(`C` for `color`; `B` for `Black&White`), <br>e.g., `['CA', 'CB', ..., 'CZ']` or `['BA', 'BB', ..., 'BZ']`. |
| -d | --debug | Whether to generate report images, used for debugging and verification. <br>The report images will be saved in the `./debug` directory. |
| -bw | --black_white | Whether to convert the input to **black/white** image(s). <br>If `true`, the app will use a **black/white palette** to classify the image. |
| -o | --output | The path of the output file, defaults to **the current directory**. |
|  | --n_workers | The number of workers to process the images, <br>defaults to **the number of CPUs** in the system. |
|  | --n_colors | CONFIG: the number of dominant colors to be extracted, defaults to 2. |
|  | --new_width | CONFIG: resize the images to the specified width. <br>**Negative values will be ignored**, defaults to 250. |
|  | --scale | CONFIG: how much the image size is reduced at each image scale, defaults to 1.1. |
|  | --min_nbrs | CONFIG: how many neighbors each candidate rectangle should have to retain it. <br>**A higher value results in fewer detections but with higher quality**, defaults to 5. |
|  | --min_size | CONFIG: minimum possible face size. **Faces smaller than this are ignored**. <br>Valid format: `width height`, defaults to `90 90`. |
|  | --threshold | CONFIG: what percentage of the skin area is required to identify the face, <br>defaults to 0.15. |
| -v | --version | Show the version number and exit. |

### Use Cases

#### 1. Process multiple images

1.1 Multiple filenames

```shell
stone -i (or --images) a.jpg b.png https://example.com/images/pic.jpg
```

1.2 Images in some folder(s)

```shell
stone -i ./path/to/images/
```

NB: Supported image formats: `.jpg, .gif, .png, .jpeg, .webp, .tif`.

By default (i.e., `stone` without the `-i` option), the app searches all images in the current folder.

#### 2. Specify color palette

2.1 Use the built-in palettes

The built-in palettes are: `perla`, `yadon-ostfeld`, `proder`, and `bw`.

NB: The `bw` palette is used to classify **black/white** images only.

For example:

```shell
stone -p (or --palette) perla
```

The HEX values of each palette are:

![PERLA](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-perla.svg)

* `perla`:
    * `#373028`, `#422811`, `#513B2E`, `#6F503C`, `#81654F`, `#9D7A54`, `#BEA07E`, `#E5C8A6`, `#E7C1B8`, `#F3DAD6`, `#FBF2F3`
    * Citation: Rejón Piña, R. A., & Ma, C. (2021). Classification Algorithm for Skin Color (CASCo): A new tool to measure skin color in social science research.

![YADON-OSTFELD](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-yadon-ostfeld.svg)

* `yadon-ostfeld`:
    * `#36251d`, `#48352c`, `#614539`, `#755848`, `#886958`, `#9b7966`, `#b18972`, `#c29c88`, `#d4afa3`, `#e6c6bf`
    * Citation: Ostfeld, M. C., & Yadon, N. (2022). Skin color, power, and politics in America. Russell Sage Foundation.

![PRODER](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-proder.svg)

* `proder`:
    * `#654d3e`, `#775741`, `#876249`, `#946c51`, `#a0765a`, `#a87f64`, `#b1886c`, `#b69279`, `#be9d86`, `#c5a691`, `#c8ac99`
    * Citation: Proyecto sobre discriminación étnico-racial en México (PRODER).
El Colegio de México. https://discriminacion.colmex.mx/encuesta-proder/

![BW](https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/palette-bw.svg)

* `bw`:
    * `#FFFFFF`, `#F0F0F0`, `#E0E0E0`, `#D0D0D0`, `#C0C0C0`, `#B0B0B0`, `#A0A0A0`, `#909090`, `#808080`, `#707070`, `#606060`, `#505050`, `#404040`, `#303030`, `#202020`, `#101010`, `#000000`
    * Citation: Leigh, A., & Susilo, T. (2009). Is voting skin-deep? Estimating the effect of candidate ballot photographs on election outcomes. Journal of Economic Psychology, 30(1), 61-70.

2.2 Use HEX values

```shell
stone -p #373028 #422811 #513B2E
```

NB: Values start with **'#'** and are separated by **space**.

2.3 Use RGB tuple values

```shell
stone -p 55,48,40 66,40,17 251,242,243
```

NB: Each color's values are split by **comma ','**; multiple colors are still separated by **space**.

#### 3. Specify category labels

You can assign the labels for the skin tone categories, for example:

```text
"CA": "#373028",
"CB": "#422811",
"CC": "#513B2E",
...
```

To achieve this, you can use the `-l` (or `--labels`) option:

3.1 Specify the labels directly, using __spaces__ as delimiters, e.g.,

```shell
stone -l A B C D E F G H
```

3.2 Specify a range of labels based on this pattern: `<start><sep><end><sep><step>`.

Specifically,

* `<start>`: the **start** label, can be a letter (e.g., `A`) or a number (e.g., `1`);
* `<end>`: the **end** label, can be a letter (e.g., `H`) or a number (e.g., `8`);
* `<step>`: the **step** to generate the label sequence, can be a number (e.g., `2` or `-1`), **defaults to `1`**;
* `<sep>`: the **separator** between `<start>` and `<end>`, can be one of these symbols: `-`, `,`, `~`, `:`, `;`, `_`.

Examples:

```shell
stone -l A-H-1
```

which is equivalent to `stone -l A-H` and `stone -l A B C D E F
G H`.\n\n```shell\nstone -l A-H-2\n```\n\nwhich is equivalent to `stone -l A C E G`.\n\n```shell\nstone -l 1-8\n```\n\nwhich is equivalent to `stone -l 1 2 3 4 5 6 7 8`.\n\n```shell\nstone -l 1-8-3\n```\n\nwhich is equivalent to `stone -l 1 4 7`.\n\n\n\u003e [!IMPORTANT]\n\u003e\n\u003e Please make sure the number of labels is equal to the number of colors in the palette.\n\n#### 4. Specify output folder\n\nThe app puts the final report (`result.csv`) in current folder in default.\n\nTo change the output folder:\n\n```shell\nstone -o (or --output) ./path/to/output/\n```\n\nThe output folder will be created if it does not exist.\n\nIn `result.csv`, each row is showing the color information of each detected face.\nIf more than one faces are detected, there will be multiple rows for that image.\n\n#### 5. Store report images for debugging\n\n```shell\nstone -d (or --debug)\n```\n\nThis option will store the report image (like the demo portrait above) in\n`./path/to/output/debug/\u003cimage type\u003e/faces_\u003cn\u003e` folder,\nwhere `\u003cimage type\u003e` indicates if the image is `color` or `bw` (black/white);\n`\u003cn\u003e` is the number of faces detected in the image.\n\n**By default, to save storage space, the app does not store report images.**\n\nLike in the `result.csv` file, there will be more than one report images if 2 or more faces were detected.\n\n#### 6. Specify the types of the input image(s)\n\n6.1 The input are color images\n\n```shell\nstone -t (or --image_type) color\n```\n\n6.2 The input are black/white images\n\n```shell\nstone -t (or --image_type) bw\n```\n\n6.3 **In default**, the app will detect the image type automatically, i.e.,\n\n```shell\nstone -t (or --image_type) auto\n```\n\n#### 7. 
Convert the `color` images to `black/white` images\n\nand then do the classification using `bw` palette\n\n```shell\nstone -bw (or --black_white)\n```\n\nFor example:\n\n\u003cdiv style=\"display: flex; justify-content: center;align-items:flex-end;\"\u003e\n    \u003cdiv style=\"text-align: center;flex:1; margin:10px;\"\u003e\n        \u003cimg src=\"https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo.png\"  alt=\"Demo picture\" /\u003e\n        \u003cp\u003e1. Input\u003c/p\u003e\n    \u003c/div\u003e\n    \u003cdiv style=\"text-align: center;flex:1; margin:10px;\"\u003e\n        \u003cimg src=\"https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo_bw.png\"  alt=\"Black/white Demo picture\" /\u003e\n        \u003cp\u003e2. Convert to black/white image\u003c/p\u003e\n    \u003c/div\u003e\n    \u003cdiv style=\"text-align: center;flex:1; margin:10px;\"\u003e\n        \u003cimg src=\"https://raw.githubusercontent.com/ChenglongMa/SkinToneClassifier/main/docs/demo_bw-1.png\"  alt=\"Report image\" /\u003e\n        \u003cp\u003e3. The final report image\u003c/p\u003e\n    \u003c/div\u003e\n\u003c/div\u003e\n\nNB: we did not do the opposite, i.e., convert `black/white` images to `color` images\nbecause the current AI models cannot accurately \"guess\" the color of the skin from a `black/white` image.\nIt can further bias the analysis results.\n\n#### 8. Tune parameters of face detection\n\nThe rest parameters of `CONFIG` are used to detect face.\nPlease refer to https://stackoverflow.com/a/20805153/8860079 for detailed information.\n\n#### 9. Multiprocessing settings\n\n```shell\nstone --n_workers \u003cAny Positive Integer\u003e\n```\n\nUse `--n_workers` to specify the number of workers to process images in parallel, defaults to the number of CPUs in your\nsystem.\n\n#### 10. 
#### 10. Used as a library by importing into other projects

You can refer to [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1k-cryEZ9PInJRXWIi17ib66ufYV2Ikwe?usp=sharing) or the following code snippet:

```python
import stone
from json import dumps

# process the image
# (`image_path`, `image_type`, `palette`, etc. are placeholders for your own values)
result = stone.process(image_path, image_type, palette, *other_args, return_report_image=True)
# show the report image
report_images = result.pop("report_images")  # obtain and remove the report image from the `result`
face_id = 1
stone.show(report_images[face_id])

# convert the result to json
result_json = dumps(result)
```

`stone.process` is the main function to process the image.
It takes the same parameters as the command-line version.

It returns a `dict` containing the processing result and, if requested
(i.e., `return_report_image=True`), the report image(s).

You can then use `stone.show` to display the report image(s)
and convert the result to `json` format.

The `result_json` will look like:

```json
{
  "basename": "demo",
  "extension": ".png",
  "image_type": "color",
  "faces": [
    {
      "face_id": 1,
      "dominant_colors": [
        {
          "color": "#C99676",
          "percent": "0.67"
        },
        {
          "color": "#805341",
          "percent": "0.33"
        }
      ],
      "skin_tone": "#9D7A54",
      "tone_label": "CF",
      "accuracy": 86.27
    }
  ]
}
```
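The result is a plain `dict`, so downstream code can consume it directly. For example, a small sketch that loads the sample result shown above and prints each face's tone label together with its most dominant color:

```python
from json import loads

# Sample result in the format documented above.
result_json = """
{
  "basename": "demo",
  "extension": ".png",
  "image_type": "color",
  "faces": [
    {
      "face_id": 1,
      "dominant_colors": [
        {"color": "#C99676", "percent": "0.67"},
        {"color": "#805341", "percent": "0.33"}
      ],
      "skin_tone": "#9D7A54",
      "tone_label": "CF",
      "accuracy": 86.27
    }
  ]
}
"""

result = loads(result_json)
for face in result["faces"]:
    # The most dominant color is the entry with the highest percent.
    top = max(face["dominant_colors"], key=lambda c: float(c["percent"]))
    print(face["face_id"], face["tone_label"], face["skin_tone"], top["color"])
    # -> 1 CF #9D7A54 #C99676
```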
#### 11. Used in a FastAPI project

`stone` can be used in a [FastAPI](https://fastapi.tiangolo.com/) project to classify the skin tone of the uploaded image(s) via the `POST` method.

Please refer to the following code snippet:

```python
# Description: This is a simple FastAPI server that receives an image file
# and processes it using the skin-tone-classifier library.

# requirements.txt:
#     fastapi
#     uvicorn
#     skin-tone-classifier
#     python-multipart

# Run the server:
#     uvicorn main:app --reload

import tempfile
from typing import Literal, Optional

import stone
from fastapi import FastAPI, UploadFile, HTTPException
from fastapi.responses import JSONResponse

app = FastAPI()


@app.post("/stone")
async def process_image(
        image_file: UploadFile,
        image_type: Literal["auto", "color", "bw"] = "auto",
        tone_palette: Optional[list] = None,
        tone_labels: Optional[list] = None,
        # other parameters...
):
    image_data = await image_file.read()
    # Write the upload to a temporary file so that `stone.process` can read it by path.
    # (A unique temporary file avoids clashes between concurrent requests.)
    with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as temp_file:
        temp_file.write(image_data)
        temp_file_path = temp_file.name
    try:
        result = stone.process(
            temp_file_path,
            image_type=image_type,
            tone_palette=tone_palette,
            tone_labels=tone_labels,
            # other parameters...
        )
        result = JSONResponse(content=result)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
    return result
```

To run the demo, please follow these steps:

1. Install the required packages:
    * skin-tone-classifier
    * [fastapi](https://fastapi.tiangolo.com/)
    * [uvicorn](https://www.uvicorn.org/)
    * [python-multipart](https://pypi.org/project/python-multipart/)
2. Run the server:
   `uvicorn main:app --reload`
3. You can refine the implementation according to your project requirements.
4. Finally, you can use [Postman](https://www.postman.com/) or other HTTP clients to test the API.

# Citation

If you are interested in our work, please cite:

```bibtex
@article{https://doi.org/10.1111/ssqu.13242,
    author = {Rej{\'o}n Pi{\~n}a, Ren{\'e} Alejandro and Ma, Chenglong},
    title = {Classification Algorithm for Skin Color (CASCo): A new tool to measure skin color in social science research},
    journal = {Social Science Quarterly},
    volume = {n/a},
    number = {n/a},
    pages = {},
    keywords = {colorism, measurement, photo elicitation, racism, skin color, spectrometers},
    doi = {https://doi.org/10.1111/ssqu.13242},
    url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/ssqu.13242},
    eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/ssqu.13242},
    abstract = {Abstract Objective A growing body of literature reveals that skin color has significant effects on people's income, health, education, and employment. However, the ways in which skin color has been measured in empirical research have been criticized for being inaccurate, if not subjective and biased. Objective Introduce an objective, automatic, accessible and customizable Classification Algorithm for Skin Color (CASCo). Methods We review the methods traditionally used to measure skin color (verbal scales, visual aids or color palettes, photo elicitation, spectrometers and image-based algorithms), noting their shortcomings. We highlight the need for a different tool to measure skin color. Results We present CASCo, a (social researcher-friendly) Python library that uses face detection, skin segmentation and k-means clustering algorithms to determine the skin tone category of portraits. Conclusion After assessing the merits and shortcomings of all the methods available, we argue CASCo is well equipped to overcome most challenges and objections posed against its alternatives. While acknowledging its limitations, we contend that CASCo should complement researchers' toolkit in this area.}
}
```

# Contributing

👋 Welcome to **SkinToneClassifier**! We're excited to have your contributions. Here's how you can get involved:

1. 💡 **Discuss New Ideas**: Have a creative idea or suggestion? Start a discussion in
   the [Discussions](https://github.com/ChenglongMa/SkinToneClassifier/discussions) tab to share your thoughts and
   gather feedback from the community.

2. ❓ **Ask Questions**: Got questions or need clarification on something in the repository? Feel free to open
   an [Issue](https://github.com/ChenglongMa/SkinToneClassifier/issues) labeled as a "question" or participate
   in [Discussions](https://github.com/ChenglongMa/SkinToneClassifier/discussions).

3. 🐛 **Report a Bug**: If you've identified a bug or an issue with the code, please open a
   new [Issue](https://github.com/ChenglongMa/SkinToneClassifier/issues) with a clear description of the problem, steps
   to reproduce it, and your environment details.

4. ✨ **Introduce New Features**: Want to add a new feature or enhancement to the project? Fork the repository, create a
   new branch, and submit a [Pull Request](https://github.com/ChenglongMa/SkinToneClassifier/pulls) with your changes.
   Make sure to follow our contribution guidelines.

5. 💖 **Funding**: If you'd like to financially support the project, you can do so
   by [sponsoring the repository on GitHub](https://github.com/sponsors/ChenglongMa). Your contributions help us
   maintain and improve the project.

# Disclaimer

The images used in this project are from [Flickr-Faces-HQ Dataset (FFHQ)](https://github.com/NVlabs/ffhq-dataset),
which is licensed under the [Creative Commons BY-NC-SA 4.0 license](https://github.com/NVlabs/ffhq-dataset/blob/master/LICENSE.txt).

Thank you for considering contributing to **SkinToneClassifier**.
We value your input and look forward to collaborating with you!