# Meta Prompt MCP

This project is an implementation of the **Meta-Prompting** technique from the paper "[Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding](https://arxiv.org/abs/2401.12954)".

At its core, this MCP transforms a standard Language Model (LM) into a dynamic, multi-agent system without the complex setup such systems usually require. It works by having the LM adopt two key roles:

1. **The Conductor**: A high-level project manager that analyzes a complex problem, breaks it down into smaller, logical subtasks, and delegates them.
2. **The Expert**: Specialized agents (e.g., "Python Programmer," "Code Reviewer," "Creative Writer") that are "consulted" by the Conductor to execute each subtask.

The magic is that this entire collaborative workflow is simulated within a *single LM*. The Conductor and Experts are different modes of operation guided by a sophisticated system prompt, allowing the model to reason, act, and self-critique its way to a more robust and accurate solution. It's like having an automated team of AI specialists at your disposal, all powered by one model.
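As a rough illustration (not this server's actual code), the sketch below shows how one model can alternate between the two roles; `complete()` is a hypothetical placeholder for a single LLM completion call.

```python
# Rough illustration of the Conductor/Expert loop, simulated with one model.
# `complete()` is a hypothetical placeholder for a call to a single LLM.

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to one underlying language model."""
    raise NotImplementedError("wire this up to the LLM of your choice")

CONDUCTOR_SYSTEM = (
    "You are the Conductor. Break the problem into subtasks. For each subtask, "
    "name an expert and write instructions for them. When you have enough "
    "expert answers, reply with 'FINAL:' followed by the solution."
)

def meta_prompt(problem: str, max_rounds: int = 5) -> str:
    transcript = f"{CONDUCTOR_SYSTEM}\n\nProblem: {problem}"
    for _ in range(max_rounds):
        step = complete(transcript)  # the model acts as the Conductor
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        # The same model now answers *as* the expert it just described.
        expert_answer = complete(f"Follow these instructions exactly:\n\n{step}")
        transcript += f"\n\nConductor:\n{step}\n\nExpert:\n{expert_answer}"
    return complete(transcript + "\n\nGive your best final answer now.")
```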



## Demo

[![Demo Video](/public/demo.png)](https://youtu.be/KATOgaj2upI)

## Getting Started

### 1. Clone the Repository

First, clone this repository to your local machine.

```sh
git clone https://github.com/tisu19021997/meta-prompt-mcp-server.git
cd meta-prompt-mcp-server
```

### 2. Install `uv`

This project uses `uv`, an extremely fast Python package manager from Astral. If you don't have it installed, you can do so with one of the following commands.

**Note**: after installing, run `which uv` to find the path of your `uv` installation; you will need this path in the client configuration below.

**macOS / Linux:**
```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

**Windows (PowerShell):**
```powershell
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

For more details, see the [official `uv` installation guide](https://docs.astral.sh/uv/getting-started/installation/).

## Usage

To use this Meta Prompt MCP server, you need to configure your client (e.g., Cursor, Claude Desktop) to connect to it. Make sure to replace the placeholder paths with the actual paths on your machine.

### Cursor

Add the following configuration to your `mcp.json` settings:

```json
{
  "mcpServers": {
    "meta-prompting": {
      "command": "path/to/your/uv",
      "args": [
        "--directory",
        "path/to/your/meta-prompt-mcp",
        "run",
        "mcp-meta-prompt"
      ]
    }
  }
}
```

### Claude Desktop

Add the following configuration to your `claude_desktop_config.json` settings:

```json
{
  "mcpServers": {
    "meta-prompting": {
      "command": "path/to/your/uv",
      "args": [
        "--directory",
        "path/to/your/meta-prompt-mcp",
        "run",
        "mcp-meta-prompt"
      ]
    }
  }
}
```

### Activating the Meta-Prompt Workflow

**Important**: To leverage the full power of this MCP, always start your request by invoking the `meta_model_prompt` from the `meta-prompting` server and filling in the query with your prompt (see the demo video). This is the official entry point that activates the Conductor/Expert workflow. Once the prompt is added, simply provide your problem statement.
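For context on what that entry point looks like on the server side, here is a minimal, hypothetical sketch of how such a prompt can be exposed with FastMCP; the actual prompt text, parameter names, and scaffold in this repository will differ.

```python
# Hypothetical sketch only: how an MCP entry-point prompt can be exposed
# with FastMCP. The real meta_model_prompt scaffold in this repo differs.
from fastmcp import FastMCP

mcp = FastMCP("meta-prompting")

@mcp.prompt()
def meta_model_prompt(query: str) -> str:
    """Wrap the user's problem in a Conductor-style scaffold."""
    return (
        "You are the Conductor. Break the problem into subtasks, consult "
        "experts via the expert_model tool, and synthesize a final answer.\n\n"
        f"Problem statement: {query}"
    )

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```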

## How it Differs from the Paper

The core methodology in the [original paper](https://arxiv.org/abs/2401.12954) involves a two-step process for expert consultation:
1. The "conductor" model generates instructions for an expert.
2. A separate, independent LM instance (the "expert") is invoked with only those instructions to provide a response. This ensures the expert has "fresh eyes."

This implementation simplifies the process into a single LLM call. The conductor model generates the expert's name, instructions, and **the expert's complete output** within a single tool call. This is a significant difference that makes the process faster and less expensive, but it deviates from the "fresh eyes" principle of the original research.

## Limitations

The `expert_model` tool in this MCP server is designed to use the `ctx.sample()` function to properly simulate a second, independent expert model call as described in the paper. However, sampling is not yet supported by most MCP clients (such as Cursor and Claude Desktop).

Due to this limitation, the server includes a fallback mechanism. When `ctx.sample()` is unavailable, the `expert_model` tool simply returns the `output` content that was generated by the conductor model in the tool call. This means the expert's response is part of the conductor's single generation, rather than a true, independent consultation.
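Below is a hedged sketch of that fallback, assuming a FastMCP-style tool and `Context.sample()` API; the parameter names are illustrative, not necessarily the repository's exact signature.

```python
# Illustrative sketch, not the repository's exact code.
from fastmcp import Context, FastMCP

mcp = FastMCP("meta-prompting")

@mcp.tool()
async def expert_model(
    expert_name: str,   # e.g. "Expert Python Programmer"
    instructions: str,  # written by the conductor for this expert
    output: str,        # the conductor's own draft of the expert's answer
    ctx: Context,
) -> str:
    """Consult an expert, preferring a real, independent model call."""
    try:
        # "Fresh eyes": ask the client to run a separate completion that sees
        # only the expert persona and its instructions.
        result = await ctx.sample(
            instructions,
            system_prompt=f"You are {expert_name}. Follow the instructions exactly.",
        )
        return result.text
    except Exception:
        # Most clients (e.g. Cursor, Claude Desktop) don't support sampling yet,
        # so fall back to the output the conductor generated in this tool call.
        return output
```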

## Comparison

Below are two conversations in which I asked Claude to implement the Meta Prompt MCP Server itself, once with and once without the Meta Prompt MCP.

Some artifacts are missing from the conversations, but you can see that the implementation with the Meta Prompt MCP is much better; it also performed a kind of self-review by consulting a QA expert.

- **Claude Conversations**:
  - [Implementation with Meta Prompt MCP](https://claude.ai/share/9f91768a-1c38-46d5-a812-365799ec23f0)
  - [Implementation without Meta Prompt MCP](https://claude.ai/share/7f4ffe49-49f0-43b0-b142-f9285aefb9c3)

## References

- **Paper**: [Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding](https://arxiv.org/abs/2401.12954)