# xs-command-llm-anthropic
A [cross.stream](https://github.com/cablehead/xs)
[command](https://cablehead.github.io/xs/reference/commands/) and Nushell
module for interacting with Anthropic's Claude AI models. This add-on leverages
cross.stream's event-sourced architecture to provide persistent, stateful
conversations with Claude that can be integrated into your terminal workflow.
## Requirements
- [cross.stream](https://github.com/cablehead/xs)
- [anthropic-text-editor](https://github.com/cablehead/anthropic-text-editor) (for the `--with-tools` option):
  a micro-CLI that applies tool calls from Anthropic's built-in
  [text_editor_20250124](https://docs.anthropic.com/en/docs/agents-and-tools/computer-use)
  computer-use tool

## Onboarding
Quick start with the `llm` module:
1. **Load the module overlay**:
```nushell
overlay use -p ./llm
help llm
```

2. **Initialize your API key and register the [cross.stream](https://github.com/cablehead/xs) [command](https://cablehead.github.io/xs/reference/commands/)**:
```nushell
$env.ANTHROPIC_API_KEY | llm init-store
```

3. **Make a test call**:
```nushell
llm call
```

```
Enter prompt: hola
Text:
¡Hola! ¿En qué puedo ayudarte hoy?
```

You're ready to go!
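To continue the same thread, the `--respond` flag documented below picks up from
the last response; a minimal sketch (the prompt text is illustrative):

```nushell
# continue from the most recent llm.response instead of starting fresh
"and in English, please" | llm call --respond
```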
## Features
- An interactive harness for processing [Claude's built-in `bash_20250124` and
  `text_editor_20250124`
  tools](https://docs.anthropic.com/en/docs/agents-and-tools/computer-use) (see
  the sketch after this list)
- Rich documents, e.g. PDFs
- Message caching: control which messages are cached using the `--cache` flag
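For example, the tool harness is enabled per call with `--with-tools`; a sketch
(the prompt is illustrative), with each proposed tool call surfaced for
confirmation before it runs:

```nushell
# let Claude propose bash / text-editor tool calls; the harness
# asks you to confirm each one before applying it
"rename foo.nu to bar.nu in this directory" | llm call --with-tools
```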
## To document

- Document how to run `llm.call` without registering it:
```
# source the command definition and invoke its `process` closure by hand
let c = source xs-command-llm.call-anthropic.nu ; do $c.process ("hi" | .append go)
```

- Working with the response:
```
# fetch the latest llm.response frame and read its body from the CAS
.head llm.response | .cas | from json
```

Adhoc request: translate the current clipboard to English
```
[
  (bp) # our current clipboard: but really you want to "pin" a
       # snippet of content
  "please translate to english" # tool selection
]
# we should be able to pipe a list of strings directly into llm.call
| str join "\n\n---\n\n"
| (.append
    -c 03dg9w21nbjwon13m0iu6ek0a # the context which has llm.define and is generally considered adhoc
    llm.call
  )
```

Using the cache flag with large documents or inputs:
```
# Load a large document and process it with caching enabled
open large_document.pdf | llm call --cache
llm call "Summarize the key points from the document"# The document content is marked as ephemeral in Claude's context
# This reduces token usage in subsequent exchanges
# while still allowing Claude to reference the semantic content
```

View outstanding calls:
```
.cat | where topic in ["llm.call" "llm.error" "llm.response"] | reduce --fold {} {|frame, acc|
  if $frame.topic == "llm.call" {
    return ($acc | insert $frame.id "pending")
  }
  $acc | upsert $frame.meta.frame_id ($frame | reject meta)
}
```
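The reduce yields a record keyed by the originating `llm.call` frame id. As a
sketch, plain Nushell `transpose` flattens the same pipeline into one row per
call:

```nushell
.cat
| where topic in ["llm.call" "llm.error" "llm.response"]
| reduce --fold {} {|frame, acc|
    if $frame.topic == "llm.call" {
      return ($acc | insert $frame.id "pending")
    }
    $acc | upsert $frame.meta.frame_id ($frame | reject meta)
  }
| transpose call_id result
```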
## Reference

### Command Options
The `llm call` command supports the following options:
- `--with-tools`: Enable Claude to use bash and text editor tools
- `--cache`: Mark messages as ephemeral using Anthropic's `cache_control` prompt-caching hint. This avoids re-tokenizing context-heavy content (like large documents) at full cost in future exchanges while still letting Claude draw on the semantic content of those messages.
- `--respond (-r)`: Continue from the last response
- `--json (-j)`: Treat input as JSON formatted content
- `--separator (-s)`: Specify a custom separator when joining lists of strings (default: "\n\n---\n\n")
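For instance, the input-shaping flags compose with piped data; a sketch (file
and prompt contents are illustrative):

```nushell
# treat piped input as JSON formatted content
open --raw payload.json | llm call --json

# join a list of inputs with a custom separator before sending
["context snippet" "please summarize the above"] | llm call --separator "\n\n===\n\n"
```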
```mermaid
sequenceDiagram
participant User
participant CLI as llm-anthropic.nu CLI
participant Store as cross.stream Store
participant Command as llm.call Command
participant API as Anthropic API
User->>CLI: "Hello Claude" | .llm
CLI->>Store: .append llm.call
Store-->>Command: Executes Command
Command->>Store: .head ANTHROPIC_API_KEY
Store-->>Command: API Key
Command->>Store: traverse-thread
Store-->>Command: Previous messages
Command->>API: HTTP POST /v1/messages
API-->>Command: SSE Stream (text chunks)
loop For each response chunk
Command->>Store: .append llm.recv
Store-->>CLI: Stream response chunk
CLI-->>User: Display streaming text
end
Command->>Store: .append llm.response
alt Tool Use Request
CLI->>User: Display tool use request
User->>CLI: Confirm execution
CLI->>Store: .append with tool results
Store-->>Command: Continue with results
end
```
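Any consumer can observe the streaming leg of this diagram; a sketch, assuming
`.cat` exposes the same `--follow` behavior as the xs CLI:

```nushell
# tail llm.recv frames as Claude streams a response
.cat --follow | where topic == "llm.recv"
```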
## Why Use This Approach

The [cross.stream](https://github.com/cablehead/xs) framework offers significant advantages over traditional AI
integration approaches:

### Event-Sourced Architecture
This system stores all interactions as a linked chain of events, creating
powerful capabilities:

- **Streaming Responses:** Any UI (terminal, web, desktop) can subscribe to see
Claude's responses as they arrive
- **Temporal Navigation:** Browse conversation history at any point, fork
discussions from previous messages
- **Resilience:** Interrupted responses retain all partial data
- **Asynchronous Processing:** LLM calls run independently in the background,
  managed by the [cross.stream](https://github.com/cablehead/xs) process
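Temporal navigation is concrete: because frames are append-only, earlier turns
can be re-read at any point; a sketch reusing the `.cas` pattern from above:

```nushell
# list recorded call/response bodies in order
.cat | where topic in ["llm.call" "llm.response"] | each {|frame| $frame | .cas }
```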
### Command-Based Integration

By registering `llm.call` as a [cross.stream command](https://cablehead.github.io/xs/reference/commands/):
- Operations run independently of client processes
- State is managed through the event stream rather than memory
- Multiple consumers can observe the same operation
- Persistence is maintained across client restarts
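`llm init-store` performs this registration; conceptually it amounts to
appending the command's source to the `llm.define` topic. A sketch, assuming
the source file referenced above and that registration is a plain append:

```nushell
# re-register the llm.call command by hand (normally llm init-store does this)
open xs-command-llm.call-anthropic.nu | .append llm.define
```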
### Terminal-Native Workflow

- Seamlessly integrates with developer command-line workflows
- Leverages Nushell's powerful data manipulation capabilities
- Creates composable pipelines between AI outputs and other tools
- Provides a foundation for custom tooling built around LLM interactions
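As one example of composition, the latest response can feed any downstream
pipeline; a sketch, assuming the body is Anthropic's standard messages-response
JSON:

```nushell
# pull just the text blocks out of the most recent response
.head llm.response | .cas | from json
| get content
| where type == "text"
| get text
| str join "\n"
```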
This approach creates a clean separation between API mechanisms and clients,
making it easier to build specialized interfaces while maintaining a centralized
conversation store.