https://github.com/wan-huiyan/publish-skill
Claude Code skill for publishing skills to GitHub and awesome-claude-skills lists. Encodes hard-won lessons from real publishing attempts.
- Host: GitHub
- URL: https://github.com/wan-huiyan/publish-skill
- Owner: wan-huiyan
- License: MIT
- Created: 2026-03-20T13:18:55.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2026-03-30T12:10:45.000Z (28 days ago)
- Last Synced: 2026-03-30T14:20:11.188Z (28 days ago)
- Topics: ai, automation, claude-code, claude-code-skill, github, publishing, skill-publishing
- Size: 133 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Publish Skill - Claude Code Skill
[Releases](https://github.com/wan-huiyan/publish-skill/releases) · [Claude Code](https://claude.com/claude-code) · [License](LICENSE) · [Commits](https://github.com/wan-huiyan/publish-skill/commits)
Publishing a Claude Code skill involves a dozen error-prone steps — repo structure, README, screenshots, marketplace submission, and client data scanning. This skill handles the full workflow so you only have to do each mistake once.
## How It Works
```
Step 0: Client Data Audit → Scan for company names, amounts, field names in
SKILL.md, README, screenshots, git history
Step 1: Repo Structure → .claude-plugin packaging, skills/ directory, LICENSE
(MIT — with independent rationale, not self-referential)
Step 2: Write README → 16-section template: screenshots, installation, limitations,
version history. Pitfalls in strategic-then-implementation order
Step 3: Hook Bundling → If the skill benefits from auto-triggering, include
hooks/ directory with ready-to-use scripts + settings.json snippet
Step 4: Demo Screenshots → Puppeteer-generated. For rich output, capture 2–4 focused
sections (not one massive hero image)
Step 5: Verify Claims → Parallel research agents check thresholds and citations
Step 6: Review Panel → Multi-agent adversarial review of the README
Step 7: GitHub Setup → Create repo, set topics (or suggest in PR if contributing
via fork — API requires owner access)
Step 8: Awesome Lists → (Optional) Submit PRs with correct formatting per list
```
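Step 0 above can be sketched as a small shell scan. This is a minimal sketch, not the skill's actual implementation: the `audit_file` helper and the pattern list are hypothetical placeholders you would replace with your own client terms.

```bash
#!/usr/bin/env bash
# Sketch of the Step 0 client-data audit: check a file slated for
# publication against a list of sensitive patterns. The patterns
# below are placeholders -- supply real client names/amounts.
audit_file() {
  local patterns="$1" file="$2"
  if grep -E -q "$patterns" "$file"; then
    echo "BLOCKED: sensitive term in $file"
    return 1
  fi
  echo "OK: $file"
}

patterns='AcmeCorp|ClientName|\$[0-9,]{4,}'   # hypothetical term list

# Example run against a throwaway file:
tmp=$(mktemp)
printf 'Totals for AcmeCorp: $12,000\n' > "$tmp"
audit_file "$patterns" "$tmp" || true   # prints a BLOCKED line for the AcmeCorp match
rm -f "$tmp"
```

The same pattern list would also be grepped through `git log -p --all`, since text deleted from the working tree still survives in history.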
## When to Use
- A skill is ready to share publicly
- You say "publish this skill", "put this on GitHub", or "share this skill"
- You want to submit a skill to awesome-claude-skills marketplaces
- You need to update an already-published skill's GitHub presence
- "Submit to awesome-claude-skills" / "add my skill to the awesome list"
- "How do I let others install my skill?" / "I finished my skill, now what?"
- "Push my skill to a public repo" / "generate a README and publish"
- "Bump the version and republish" / "turn my local skill into a polished repo"
## Hard-Won Lessons Encoded
These rules were learned from real incidents during publishing:
| Lesson | What Happened |
|--------|--------------|
| No "Test plan" in PR body | 3 of 4 awesome-list PRs had unnecessary test plan sections — inappropriate for doc-only PRs |
| Never multi-PR from same fork | Two PRs to the same upstream from one fork — second push auto-closed the first |
| Always add links in Origin | Bare paper references without URLs look incomplete — always add arxiv/DOI/GitHub links |
| Interactive skills need process examples | Example Output showing a static taxonomy dump contradicts the claim that the skill is an "interactive diagnostic" |
| Check list format before submitting | Each awesome-list has its own table/list format — match it exactly |
| GitHub can't delete PRs | Only repo admins can delete PRs — be careful with what you open |
| Fork contributors can't set topics | `gh repo edit --add-topic` requires admin access — suggest topics in PR description instead |
| Multi-section screenshots beat single hero | For rich output skills, 2–4 focused section screenshots show variety better than one long image |
| Don't cite your own skill as authority | "MIT because publish-skill says so" is circular — justify recommendations with independent reasoning |
| Update SKILL.md FIRST, metadata second | Updated eval-suite, README, plugin.json, and marketplace.json to v1.5 — but forgot to add a version field to SKILL.md itself. The skill content is the primary artifact; metadata files follow it, not the other way around |
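The last lesson suggests a mechanical guard: before publishing, confirm every file that carries a version string mentions the new one. A minimal sketch, assuming your files contain the version as a literal string (the file names in the usage comment follow the repo layout described above):

```bash
# Sketch: flag any published file that does not mention the expected
# version string. Returns nonzero if anything is stale.
check_versions() {
  local expected="$1"; shift
  local f bad=0
  for f in "$@"; do
    [ -f "$f" ] || continue
    if ! grep -qF "$expected" "$f"; then
      echo "STALE: $f does not mention $expected"
      bad=1
    fi
  done
  return "$bad"
}

# Usage (hypothetical file list):
# check_versions "1.6.0" SKILL.md plugin.json marketplace.json README.md
```

Running this with SKILL.md first in the list keeps the primary artifact from being the file that gets forgotten.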
## Example: What the Skill Produces
When you say "publish this skill", the skill walks through each phase:
**Phase 1: Pre-Flight** — checks skill readiness
> I've read the skill. A few things before we publish:
>
> - Version: 2.0.0 (update, not new)
> - The skill has an interactive 4-phase process — good, this isn't a static doc
> - No sensitive content detected
> - The GitHub repo `wan-huiyan/ai-trust-evaluation` already exists with v1.0
>
> Ready to update the repo with v2.0?
**Phase 3: README** — writes a structured README with links
> I've written the README with:
> - "How It Works" process flow diagram
> - "Example Output" showing the interactive diagnostic across 4 phases
> (not a static taxonomy dump)
> - All 8 research papers linked to arxiv/Nature/USENIX
> - Installation instructions
> - Origin section with methodology and paper table
**Phase 5: Awesome Lists** — optional, correct formatting
> Submitted to ComposioHQ/awesome-claude-skills:
> - Forked repo, created branch `add-ai-trust-evaluation`
> - Added entry alphabetically in the correct table format
> - PR body has Summary section only (no test plan)
> - PR #425: https://github.com/ComposioHQ/awesome-claude-skills/pull/425
## Installation
### Claude Code
**Option 1: Plugin install (recommended)**
```bash
/plugin marketplace add wan-huiyan/publish-skill
/plugin install publish-skill@wan-huiyan-publish-skill
```
**Option 2: Git clone**
```bash
git clone https://github.com/wan-huiyan/publish-skill.git ~/.claude/skills/publish-skill
```
### Cursor
Cursor supports skills via `~/.cursor/skills/` (Cursor 2.4+), though global discovery can be flaky. Options from most to least reliable:
**Option 1: Per-project rule (most reliable)**
```bash
mkdir -p .cursor/rules
# Wrap SKILL.md in a .mdc rule with alwaysApply frontmatter
# (adjust the source path to wherever the skill is cloned):
{ printf -- '---\nalwaysApply: true\n---\n'; cat ~/.claude/skills/publish-skill/SKILL.md; } > .cursor/rules/publish-skill.mdc
```
**Option 2: npx skills CLI**
```bash
npx skills add wan-huiyan/publish-skill --global
```
**Option 3: Manual global install**
```bash
git clone https://github.com/wan-huiyan/publish-skill.git ~/.cursor/skills/publish-skill
```
## Eval Suite Coverage
The skill includes a comprehensive evaluation suite (`eval-suite.json`) with **45 trigger tests**, **12 functional test cases**, and **16 edge cases**.
**Trigger tests (45)** — validate when the skill should and shouldn't activate
- 25 positive triggers: "publish this skill", "republish", "share my .claude skill", compound intents, etc.
- 20 negative triggers: "deploy to production", "publish my npm package", "make a GitHub repo" (no skill context), "improve this skill's quality", etc.
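A trigger test might look like the following. The README does not reproduce the `eval-suite.json` schema, so the field names here are illustrative only, not the suite's actual format:

```json
{
  "trigger_tests": [
    { "prompt": "publish this skill", "should_trigger": true },
    { "prompt": "publish my npm package", "should_trigger": false }
  ]
}
```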
**Functional test cases (12)** — cover every major workflow step
| Test case | What it validates |
|---|---|
| `basic_publish_flow` | Full publish pipeline: SKILL.md, README, LICENSE, plugin.json, marketplace.json, valid install commands |
| `repo_structure_validation` | Correct directory structure with .claude-plugin, skills/, LICENSE |
| `awesome_list_submission` | Fork workflow, alphabetical ordering, categories, description length, no test plan in PR |
| `client_data_audit` | Sanitization, git history warning, orphan branch cleanup, visual asset scanning |
| `update_existing_repo` | Version bump across all 7 locations (including SKILL.md, nested copy, plugin.json, marketplace.json, README) |
| `screenshot_generation` | Puppeteer usage, docs/ output, node_modules cleanup, generic demo scenarios |
| `threshold_verification` | Grounded vs heuristic labeling, ranking-based thresholds |
| `portability_check` | Flags vendor-specific references, suggests placeholder names |
| `review_panel` | Multi-agent review invocation, screenshot and limitations checks |
| `link_verification` | Broken link detection, fork vs upstream attribution |
| `hook_bundling` | hooks/ directory, settings.json config, timeout guidance, merge-friendly advice |
| `version_bump_completeness` | All update locations bumped, stale version string check |
**Edge cases (16)** — error paths, permissions, and unusual inputs
| Edge case | Category |
|---|---|
| `no_skill_md_found` | missing_deps |
| `skill_with_no_frontmatter` | malformed_input |
| `huge_skill_md` | scale_extreme |
| `repo_already_exists` | invalid_path |
| `skill_with_sensitive_data` | dangerous_input |
| `no_github_cli` | missing_deps |
| `unicode_skill_name` | unicode |
| `empty_skill_md` | minimal_input |
| `fork_contributor_permissions` | permissions |
| `multiple_skills_batch` | scale_extreme |
| `network_failure_mid_publish` | missing_deps |
| `yaml_frontmatter_conflict` | malformed_input |
| `marketplace_name_collision` | conflict |
| `skill_with_hooks_already` | existing_content |
| `repo_not_owned_by_user` | permissions |
| `puppeteer_not_available` | missing_deps |
## Related Skills
- [skill-creator](https://docs.anthropic.com/en/docs/claude-code/skills) — for creating skills from scratch
- claudeception — for extracting skills from work sessions (built-in skill, no separate repo)
- [agent-review-panel](https://github.com/wan-huiyan/agent-review-panel) — for stress-testing skills before publishing
## Limitations
- Requires an existing SKILL.md before use — does not create skills from scratch (use skill-creator for that)
- Screenshot generation requires Puppeteer; skipped automatically if not installed
- Does not submit to awesome-claude-skills automatically — the PR step requires manual review
## Origin
Extracted via Claudeception from a multi-session publishing workflow that included:
- Publishing [ai-trust-evaluation](https://github.com/wan-huiyan/ai-trust-evaluation) through v1.0 → v2.0 → v3.0
- Publishing [agent-review-panel](https://github.com/wan-huiyan/agent-review-panel) through v2.0 → v2.5
- Submitting both to [ComposioHQ/awesome-claude-skills](https://github.com/ComposioHQ/awesome-claude-skills) and [travisvn/awesome-claude-code](https://github.com/travisvn/awesome-claude-code)
- Debugging auto-closed PRs, fixing formatting, and learning the hard way about GitHub fork behavior
## Version History
| Version | Date | Changes |
|---|---|---|
| 1.6.0 | 2026-03-31 | Add "update SKILL.md first" lesson after metadata-only version bump missed the skill itself |
| 1.5.0 | 2026-03-31 | Expand eval suite: 35→45 triggers, 6→12 test cases, 8→16 edge cases; strengthen assertions on existing tests |
| 1.4.0 | 2026-03-24 | Enrich trigger description, add eval suite, add composability metadata |
| 1.3.0 | 2026-03-24 | Add hooks bundling, multi-section screenshots, fork guidance, Why MIT rationale, pitfalls restructuring |
| 1.2.0 | 2026-03-23 | Add awesome-claude-skills submission guidelines, domain-distance guidance |
| 1.1.0 | 2026-03-22 | Add Step 0 client data audit, PR editing workflow |
| 1.0.0 | 2026-03-21 | Initial release |
## Acknowledgements
Trigger accuracy and eval suite improved using [schliff](https://github.com/Zandereins/schliff) — an autonomous skill scoring and improvement framework.
## License
MIT