https://github.com/avatechai/avatar-graph-comfyui
A custom nodes module for creating real-time interactive avatars, powered by the Blender bpy mesh API and the Avatech Shape Flow runtime.
- Host: GitHub
- URL: https://github.com/avatechai/avatar-graph-comfyui
- Owner: avatechai
- Created: 2023-09-01T11:30:00.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-02-06T08:56:30.000Z (about 1 year ago)
- Last Synced: 2024-02-06T09:47:06.241Z (about 1 year ago)
- Topics: comfyui, hacktoberfest
- Language: Python
- Homepage: https://docs.avatech.ai
- Size: 482 KB
- Stars: 161
- Watchers: 7
- Forks: 10
- Open Issues: 6
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-comfyui - **Avatar Graph**
README
# avatar-graph-comfyui

Want to animate, or have a question? Join our [Discord](https://discord.gg/Xp6mZ4Ez5P)!

A custom nodes module for **creating real-time interactive avatars**, powered by the Blender bpy mesh API and the Avatech Shape Flow runtime.
> **WARNING**
> We are still making changes to the nodes and demo templates; please stay tuned.

# Demo
| [Interact](https://editor.avatech.ai/viewer?avatarId=cce15b92-6d1c-4966-91b9-362d7833cb5d) | [Interact](https://editor.avatech.ai/viewer?avatarId=42a8182f-b140-48c0-a556-35cddf0f76f7) | [Interact](https://editor.avatech.ai/viewer?avatarId=7c23b8d6-d1a5-41c7-a084-250461dbef22) | [Interact](https://editor.avatech.ai/viewer?avatarId=268b32c4-f9b9-4db8-a27c-a7e974f0f0ac) |
| :--: | :--: | :--: | :--: |
| [Interact](https://editor.avatech.ai/viewer?avatarId=f97fc5bb-93b0-4b02-bbc0-327dd41d0fc5) | [Interact](https://editor.avatech.ai/viewer?avatarId=4d50aa03-26e4-47e7-97b6-c3fe9d8fc96e) | [Interact](https://editor.avatech.ai/viewer?avatarId=791014cb-7836-4641-afdb-ac331064b682) | [Interact](https://editor.avatech.ai/viewer?avatarId=f97fc5bb-93b0-4b02-bbc0-327dd41d0fc5) |

# How to?
- [Basic Rigging Workflow Template](#basic-rigging-workflow-template)
- [Best Practices for image input](#best-practices-for-image-input)
- [Custom Nodes List](#custom-nodes)
- [Shape Flow](#shape-flow)
- [Installation](#installation)
- [Development](#development)
- [Join Discord 💬](https://discord.gg/WNtBYksDwF)

# Basic Rigging Workflow Template
### 1. Creating an eye blink and lipsync avatar

For optimal results, input a character image with an open mouth and a minimum resolution of 768x768. The higher resolution enables the tool to accurately recognize and work with facial features.

Download: save the image and drag it into ComfyUI, or use the [Simple Shape Flow](https://github.com/avatechai/avatar-graph-comfyui/blob/main/workflow_templates/SimpleEye+MouthMovement.json) template.
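The resolution guideline above can be expressed as a small check before running the workflow. This is a hypothetical helper for illustration only (`meets_min_resolution` is not part of this repo):

```python
def meets_min_resolution(size, minimum=(768, 768)):
    """Return True if an image of dimensions `size` (width, height)
    meets the recommended minimum resolution for facial-feature recognition."""
    width, height = size
    min_w, min_h = minimum
    return width >= min_w and height >= min_h

print(meets_min_resolution((1024, 1024)))  # True: comfortably above 768x768
print(meets_min_resolution((512, 768)))    # False: width below the recommendation
```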
### 2. Creating an eye blink and lipsync emoji avatar
Download: save the image and drag it into ComfyUI.

Download: save the image and drag it into ComfyUI, or use the [Dog Workflow](https://github.com/avatechai/avatar-graph-comfyui/blob/main/workflow_templates/Dog_workflow.json) template.
# Best practices for image input
### 1. Generate a new character image

We need a character image with an open mouth so the tool can easily recognize facial features, so add the following to the prompt:

`looking at viewer, detailed face, open mouth, [smile], solo, eye-level angle`

Download: [Character Gen Template](https://github.com/avatechai/avatar-graph-comfyui/blob/main/workflow_templates/SimpleCharacterGen.json)
### 2. Make existing character image mouth open (Inpaint)

To maintain consistency with the base image, it is recommended to utilize a checkpoint model that aligns with its style.
Download: [Mouth Open Inpaint Template]()
Inpaint Demonstration
### 3. Pose Constraints (ControlNet)

Place the normal and OpenPose images, using the reference images as a guide.
Download: [ControlNet Gen](https://github.com/avatechai/avatar-graph-comfyui/tree/main/workflow_templates/TemplateGen01)
# Recommended Checkpoint Model List
##### Anime Style SD1.5
- https://civitai.com/models/35960/flat-2d-animerge
- https://civitai.com/models/24149/mistoonanime

##### Realistic Style SD1.5
- https://civitai.com/models/4201/realistic-vision-v51
- https://civitai.com/models/49463/am-i-real
- https://civitai.com/models/43331/majicmix-realistic

# Custom Nodes
Expand to see descriptions of all the available nodes.
All Custom Nodes
| Name | Description | Preview |
| -------------------- | ---------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Segmentation (SAM)` | Integrative SAM node that lets you directly select and create multiple image segment outputs. | |
| Name | Description | Preview |
| ---------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Create Mesh Layer` | Create a mesh object from the input images (usually a segmented part of the entire image) | |
| `Join Meshes` | Combine multiple meshes into a single mesh object | |
| `Match Texture Aspect Ratio` | Since the mesh is created with a 1:1 aspect ratio, a re-scale is needed at the end of the operation | |
| `Plane Texture Unwrap` | Performs mesh face fill and UV cube projection on the target plane mesh, scaled to bounds | |
| Name | Description | Preview |
| ----------------------- | -------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Mesh Modify Shape Key` | Given a shape key name & target vertex_group, modify the transform of one vertex or all vertices | |
| `Create Shape Flow` | Create a runtime shape flow graph, allowing interactive inputs to affect shape key values at runtime | |
| Name | Description | Preview |
| -------------------- | ----------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Avatar Main Output` | The primary output of the .ava file. The embedded Avatar View will auto-update with this node's output | |
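The shape-key nodes above build on Blender's standard shape-key model: each key stores per-vertex offsets from the basis mesh, scaled by the key's blend value. The following is a pure-Python sketch of that math for illustration only; the actual nodes use the bpy API and work on 3D meshes:

```python
# Illustrative sketch of shape-key blending (2D vertices for brevity).
def apply_shape_keys(basis, shape_keys, values):
    """basis: list of (x, y) vertex positions.
    shape_keys: {key_name: list of per-vertex (dx, dy) offsets}.
    values: {key_name: blend weight, typically in [0, 1]}."""
    result = []
    for i, (x, y) in enumerate(basis):
        # Accumulate each key's weighted offset for this vertex.
        for name, offsets in shape_keys.items():
            w = values.get(name, 0.0)
            dx, dy = offsets[i]
            x += w * dx
            y += w * dy
        result.append((x, y))
    return result

basis = [(0.0, 0.0), (1.0, 0.0)]
keys = {"mouth_open": [(0.0, 0.0), (0.0, -0.5)]}  # second vertex moves down
print(apply_shape_keys(basis, keys, {"mouth_open": 1.0}))
# -> [(0.0, 0.0), (1.0, -0.5)]
```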
# Shape Flow
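Shape Flow drives shape-key values from runtime inputs (for example audio volume or a blink signal) each frame. The following is a minimal sketch of that idea with hypothetical input and key names; the actual runtime graph format is defined by the Avatech editor:

```python
# Hypothetical Shape Flow evaluation: runtime inputs flow through small
# transforms into shape-key values, re-evaluated every frame.
def clamp01(x):
    return max(0.0, min(1.0, x))

def evaluate_shape_flow(inputs):
    """Map runtime inputs to shape-key values (clamped to [0, 1])."""
    return {
        # Louder audio opens the mouth further.
        "mouth_open": clamp01(inputs.get("audio_volume", 0.0) * 2.0),
        # A blink signal closes the eyes.
        "eye_close": clamp01(inputs.get("blink", 0.0)),
    }

print(evaluate_shape_flow({"audio_volume": 0.3, "blink": 1.0}))
# -> {'mouth_open': 0.6, 'eye_close': 1.0}
```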

# Installation
## Method 1 - Windows
1. Download Python environment from [here](https://avatech-avatar-dev1.nyc3.digitaloceanspaces.com/comfyui/ComfyUI_3.10.7z)
2. Unzip it to ComfyUI directory
3. Run the `run_cpu_3.10.bat` or `run_nvidia_gpu_3.10.bat`
4. Install avatar-graph-comfyui from [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager)
## Method 2 - macOS/Linux
Make sure your Python environment is `3.10.x` as required by the [bpy](https://pypi.org/project/bpy/) package. Then go to the [ComfyUI](https://github.com/comfyanonymous/ComfyUI) directory and run:
> We suggest using conda for your ComfyUI Python environment:
>
> `conda create --name comfyui python=3.10`
>
> `conda activate comfyui`
>
> `pip install -r requirements.txt`

1. `cd custom_nodes`
2. `git clone https://github.com/avatechgg/avatar-graph-comfyui.git`
3. `cd avatar-graph-comfyui && python -m pip install -r requirements.txt`
4. Restart ComfyUI with the CORS header enabled: `python main.py --enable-cors-header`, or (for macOS) `python main.py --force-fp16 --enable-cors-header`
# Development
If you are interested in contributing:

The ComfyUI frontend extension's JavaScript is located in `avatar-graph-comfyui/js`.

Web stack used: [vanjs](https://github.com/vanjs-org/van), [tailwindcss](https://github.com/tailwindlabs/tailwindcss)
## Install deps
```
pnpm i
```

Run the dev command to start the tailwindcss watcher:
```
pnpm dev
```

For each change, simply refresh the ComfyUI page to see it.
P.S. For Tailwind autocomplete, add the following to your VS Code settings.json:
```json
{
  "tailwindCSS.experimental.classRegex": [
    ["class\\s?:\\s?([\\s\\S]*)", "(?:\"|')([^\"']*)(?:\"|')"]
  ]
}
```

## Update blender node types
To update blender operations input and output types (stored in `blender/input_types.txt`), run:
```bash
python generate_blender_types.py
```

# FAQ
## What is `--enable-cors-header` used for?
It is used to enable communication between ComfyUI and our editor (https://editor.avatech.ai), which is in charge of animating static characters. The only messages exchanged between them are character data (such as the meshes of the eyes and mouth) and the JSON representation of our editor graph.

When you execute the ComfyUI graph, it sends the character data and the JSON graph to our editor for animating. When you modify and save the graph in our editor, it sends the modified graph back to ComfyUI. To verify this, open `js/index.js` and log the message in `window.addEventListener("message", ...)` and `postMessage(message)`.
You can also run ComfyUI *without* `--enable-cors-header`: execute the ComfyUI workflow, then download the .GLB or .GLTF file by right-clicking the Avatar Main Output node and choosing the Save File option. However, this disables the real-time character preview in the top-right corner of ComfyUI. Feel free to view the file in other software such as Blender.