https://github.com/0xCrunchyy/10x
Optimized inference and fine-tuning framework for diffusion (image & video) models. Up to 3x faster & 80% less VRAM.
- Host: GitHub
- URL: https://github.com/0xCrunchyy/10x
- Owner: ntegrals
- License: mit
- Created: 2023-11-24T02:31:09.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2025-10-20T04:24:34.000Z (3 months ago)
- Last Synced: 2025-10-20T04:30:56.270Z (3 months ago)
- Topics: artificial-inteligence, diffusion, diffusion-models, fine-tuning, flux, gpt, inference, lora, pytorch, sdxl
- Language: Python
- Homepage: https://hyper.julian.sc
- Size: 6.22 MB
- Stars: 1,243
- Watchers: 20
- Forks: 114
- Open Issues: 23
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- Code of conduct: CODE_OF_CONDUCT.md
README
# 10x
Up to 20x faster coding - with Superpowers.
---
## Quick Start
```bash
npm install -g 10x-cli
10x
```
## Why 10x?
| Feature | 10x | Claude Code | Cursor | GitHub Copilot |
| -------------------------------------- | ---------------------------------- | ----------------- | ----------------- | ----------------- |
| **Superpowers (Multi-Step Pipelines)** | Chain models for complex workflows | No | No | No |
| **Smart Model Routing** | Auto-picks fastest model per task | Single model | Single model | Single model |
| **Speed** | Up to 20x faster | 1x | ~1x | ~1x |
| **Open Source** | MIT Licensed | Closed source | Closed source | Closed source |
| **BYOK (Bring Your Own Key)** | Full control over costs | Subscription only | Subscription only | Subscription only |
## Superpowers
Multi-step AI workflows that chain different models together. Each step can use a different model tier, automatically routing to the fastest model that can handle it.
| Command | Description |
| ------------------ | ---------------------------------------------------------- |
| `/review <file>`     | Code review with security, performance, and style analysis |
| `/pr`                | Generate PR description from staged/committed changes      |
| `/refactor <target>` | Guided refactoring with analysis and implementation        |
| `/debug <error>`     | Step-by-step debugging: reproduce, analyze, fix            |
| `/explain <topic>`   | Deep dive explanation of code architecture                 |
| `/test <file>`       | Generate comprehensive test suite                          |
### Custom Superpowers
Define workflows in `.10x/superpowers/` or `~/.config/10x/superpowers/`:
```markdown
---
name: debug
trigger: /debug
---
## Step 1: Understand (model: fast)
{{input}} - Find and read the relevant code.
## Step 2: Fix (model: smart)
Based on {{previous}}, implement a fix.
```
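The same format extends to longer pipelines. As a sketch of chaining all three tiers in one workflow (a hypothetical superpower; the name, trigger, and step prompts are illustrative, with tier names taken from the `--model` flag and the `{{input}}`/`{{previous}}` placeholders shown above):

```markdown
---
name: review-security
trigger: /review-security
---
## Step 1: Map (model: superfast)
{{input}} - List the files and entry points involved.
## Step 2: Scan (model: fast)
Based on {{previous}}, flag risky patterns (injection, auth handling, hardcoded secrets).
## Step 3: Report (model: smart)
Based on {{previous}}, write a prioritized findings report with suggested fixes.
```

Each step's output feeds the next via `{{previous}}`, so cheap, fast models handle the mechanical steps and the smart tier is reserved for the final synthesis.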
## Model Tiers
| Tier | Model | Speed | Best For |
| -------------- | ------------- | ----- | ------------------------------- |
| ⚡⚡ Superfast | GPT OSS 20B | 20x | Simple queries, explanations |
| ⚡ Fast | Kimi K2 1T | 4x | Code generation, refactoring |
| ◆ Smart | Claude Opus 4 | 1x | Complex reasoning, architecture |
## Configuration
### Project Context
Create `10X.md` in your project root:
```markdown
# Project: MyApp
Tech: TypeScript, React, PostgreSQL
Conventions: Functional components, named exports
```
### Custom Skills
Create prompts in `.10x/skills/` or `~/.config/10x/skills/`:
```markdown
---
name: commit
---
Analyze staged changes and generate a conventional commit message.
```
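Skills are single-step prompts, so the same frontmatter format covers many small utilities. For example (a hypothetical skill, following the structure shown above):

```markdown
---
name: changelog
---
Summarize the commits since the last tag as changelog entries grouped by type (features, fixes, chores).
```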
Invoke with `/<name>`.
## CLI
```
10x                  Start interactive session
10x --byok           Use your own OpenRouter API key
10x --model <tier>   Set model tier (superfast, fast, smart)
10x --resume         Resume a session
10x -x "<prompt>"    Execute prompt and exit
```
## License
[MIT](LICENSE)