https://github.com/fmflurry/flurryx

flurryx is a signal-first reactive state toolkit for Angular that bridges RxJS streams into structured, cache-aware stores.

# flurryx

Badges: flurryx version · build status · coverage 86% · Angular >=17 · MIT license

Signal-first reactive state management for Angular.

Bridge RxJS streams into cache-aware stores, keyed resources, mirrored state, and replayable history.

[Live demo](https://fmflurry.github.io/flurryx/) · [In Action](#in-action) · [Feature Summary](#feature-summary) · [Getting Started](#getting-started) · [Taskflurry sample](samples/taskflurry)

> **See it in action** — [**Taskflurry**](https://fmflurry.github.io/flurryx/) is a live demo app built with Angular 21 (zoneless, no zone.js dependency) and flurryx. It showcases store definitions, the facade pattern with `@SkipIfCached` and `@Loading`, keyed resources with per-entity loading and errors, and Clean Architecture layering. Try the [live demo](https://fmflurry.github.io/flurryx/) or browse the [source code](samples/taskflurry).

flurryx bridges the gap between RxJS async operations and Angular signals. Define a store, pipe your HTTP calls through an operator, read signals in your templates, queue store messages when you need to batch updates, and replay history when you need deterministic state transitions. No actions, no reducers, no effects boilerplate.

## In Action

Taskflurry is a demo Angular application built with flurryx to showcase the library's capabilities.

It demonstrates how to manage shared state, structure facade-driven workflows, and leverage built-in history to make state transitions explicit and easy to inspect.

### Store History Time Travel

> **Replayable state out of the box. Jump back, restore, move on.** A deleted task is restored by jumping directly to the store history entry captured just before it was removed from the list. This is the kind of inspectable, replayable state flow flurryx gives you without building custom devtools first.

Store history time travel in Taskflurry

### Projects to Tasks Drill-down

> **Derived state without UI drift.** Selecting a project immediately reshapes the task view from shared store state. The UI stays coherent because the project context and the derived task list are driven from the same reactive foundation.

Projects to tasks drill-down in Taskflurry

### Task Creation Flow

> **Fast workflows, no ceremony.** From context to success state, no reducer overhead. It shows how flurryx keeps normal app workflows simple without pushing everything through reducer-heavy ceremony.

Task creation flow in Taskflurry

### Task Update Flow

> **Edit in place. Stay in sync.** Editing a task updates the detail view in place with the new status and content. Update once, UI follows, history included.

Task update flow in Taskflurry

### Delete Task Flow

> **Simple changes, fully traceable.** Deleting from the list immediately updates the visible state with no extra reducer wiring or action choreography. Minimal logic, full visibility.

Delete task flow in Taskflurry

## What It Looks Like

Define a store. Inject it. Read signals. That's it.

```typescript
import { Store } from "flurryx";

interface ProductStoreConfig {
  LIST: Product[];
  DETAIL: Product;
}

export const ProductStore = Store.for<ProductStoreConfig>().build();
```

One interface, one line — you get a fully typed, injectable store with loading state, error tracking, and history built in.

```typescript
@Component({
  template: `
    @if (state().isLoading) {
      <!-- loading indicator -->
    }
    @if (state().status === 'Error') {
      <!-- error message -->
    }
    @for (product of state().data; track product.id) {
      <!-- product row -->
    }
  `,
})
export class ProductListComponent {
  private readonly store = inject(ProductStore);
  readonly state = this.store.get("LIST");
}
```

No `async` pipe. No `subscribe`. No manual unsubscription. `isLoading`, `status`, and `errors` are always there — you just read them.

**Need HTTP?** Pipe it straight into the store:

```typescript
this.http
  .get<Product[]>("/api/products")
  .pipe(syncToStore(this.store, "LIST"))
  .subscribe();
```

**Need caching?** Add a decorator — the method is skipped when data is fresh:

```typescript
@SkipIfCached("LIST", (i: ProductFacade) => i.store)
@Loading("LIST", (i: ProductFacade) => i.store)
loadProducts() { /* only runs on cache miss */ }
```

**Need undo/redo?** It's already there:

```typescript
store.undo();
store.redo();
store.restoreStoreAt(0); // back to initial state
```

The store is the foundation. Layer on facades, decorators, mirroring, and message channels when your app needs them — not before.

---

## Why flurryx?

Angular signals are great for synchronous reactivity, but real applications still need RxJS for HTTP calls, WebSockets, and other async sources. The space between "I fired a request" and "my template shows the result" is where complexity piles up:

| Problem | Without flurryx | With flurryx |
| ------------------ | -------------------------------------------- | ---------------------------------------------- |
| Loading spinners | Manual boolean flags, race conditions | `store.get(key)().isLoading` |
| Error handling | Scattered `catchError`, inconsistent shapes | Normalized `{ code, message }[]` on every slot |
| Caching | Custom `shareReplay` / `BehaviorSubject` | `@SkipIfCached` — one decorator |
| Duplicate requests | Manual inflight tracking | `@SkipIfCached` deduplicates while loading |
| Keyed resources | Separate state per ID, boilerplate explosion | `KeyedResourceData` with per-key loading/error |
| Replay and history | Ad hoc logging, custom devtools | Built-in message log, undo, redo, replay by id |

flurryx stays small on purpose: a typed store builder, a small RxJS bridge, cache/loading decorators, store composition helpers, and a message broker with pluggable channels.

### How it stacks up

| Capability | NgRx | NGXS | Elf | **flurryx** |
| --------------------------------- | ---------------------------------------- | --------------------- | ------------------ | -------------------------------------------------------- |
| Store definition | Actions + Reducers + Selectors | State class + Actions | Repository + Store | **One interface** |
| Boilerplate for a CRUD feature | ~8 files | ~5 files | ~4 files | **~2 files** |
| Signal-native | Adapter needed | No | No | **Built-in** |
| Loading / error per slot | Manual | Manual | Partial | **Automatic** |
| Per-entity keyed state | @ngrx/entity (extra package) | Manual | Manual | **Built-in KeyedResourceData** |
| Cache deduplication | Manual | Manual | Manual | **@SkipIfCached decorator** |
| **Built-in undo / redo / replay** | **No** | **No** | **No** | **Yes — with dead-letter recovery** |
| Message persistence | No | No | No | **Pluggable channels (localStorage, etc.)** |
| Bundle size impact | Large (multiple packages) | Medium | Small | **Small (no components, just signals)** |
| Learning curve | Steep (Redux concepts) | Moderate | Low-moderate | **Low (signals + RxJS you already know)** |
| Best for | Teams already invested in Redux patterns | Medium-large apps | Any size, flexible | **Any size — from simple CRUD to complex stateful apps** |

---

## Feature Summary

### Store & Signals

- **Typed signal stores** — interface in, signals out
- **Loading & error lifecycle** — automatic on every slot
- **Keyed entity caches** — per-entity loading, status, errors
- **Cache invalidation** — slot, store, or app-wide

### RxJS Bridge

- **syncToStore** — pipe HTTP calls into the store
- **@SkipIfCached** — skip when data is fresh
- **@Loading** — auto-set loading flags

### Message Broker

- **Message queueing** — typed, immutable, traceable
- **History & time travel** — undo, redo, restoreStoreAt, restoreResource
- **Replay** — re-execute messages by id
- **Dead-letter recovery** — retry failed mutations
- **Pluggable channels** — memory, localStorage, sessionStorage, composite
- **Serialization** — Date, Map, Set round-trip through storage

### Store Composition

- **Mirroring** — `.mirror()`, `.mirrorSelf()`, `.mirrorKeyed()`
- **mirrorKey / collectKeyed** — imperative wiring with cleanup

---

## Table of Contents

- [What It Looks Like](#what-it-looks-like)
- [Why flurryx?](#why-flurryx)
- [In Action](#in-action)
- [Feature Summary](#feature-summary)
- [Packages](#packages)
- [How to Install](#how-to-install)
- [Getting Started](#getting-started)
- [How to Use](#how-to-use)
- [ResourceState](#resourcestate)
- [Store API](#store-api)
- [Store Creation Styles](#store-creation-styles)
- [syncToStore](#synctostore)
- [syncToKeyedStore](#synctokeyedstore)
- [@SkipIfCached](#skipifcached)
- [@Loading](#loading)
- [Error Normalization](#error-normalization)
- [Constants](#constants)
- [Keyed Resources](#keyed-resources)
- [Clearing Store Data](#clearing-store-data)
- [Message Queueing and History](#message-queueing-and-history)
- [Message Lifecycle](#message-lifecycle)
- [Message Types](#message-types)
- [History and Time Travel](#history-and-time-travel)
- [Message Replay](#message-replay)
- [Dead-Letter Recovery](#dead-letter-recovery)
- [Message Channels](#message-channels)
- [Channel Types](#channel-types)
- [Built-in Serialization](#built-in-serialization)
- [Store Mirroring](#store-mirroring)
- [Builder .mirror()](#builder-mirror)
- [Builder .mirrorSelf()](#builder-mirrorself)
- [Builder .mirrorKeyed()](#builder-mirrorkeyed)
- [mirrorKey](#mirrorkey)
- [collectKeyed](#collectkeyed)
- [AI Coding](#ai-coding)
- [Design Decisions](#design-decisions)
- [Contributing](#contributing)
- [License](#license)

---

## Packages

| Package | Purpose |
| ---------------- | -------------------------------------------------------------------------------------------- |
| `flurryx` | The umbrella package. Import the full toolkit from a single entry point. |
| `@flurryx/core` | Shared types, keyed resource helpers, and cache constants. |
| `@flurryx/store` | Signal-backed stores, invalidation helpers, mirroring utilities, and replay/history control. |
| `@flurryx/rx` | RxJS bridge operators, decorators, and pluggable error normalization. |

---

## How to Install

```bash
npm install flurryx
```

That's it. The `flurryx` package re-exports everything from the three internal packages (`@flurryx/core`, `@flurryx/store`, `@flurryx/rx`), so every import comes from a single place:

```typescript
import { Store, syncToStore, SkipIfCached, Loading } from "flurryx";
import type { ResourceState, KeyedResourceData } from "flurryx";
```

For the Angular HTTP error normalizer (optional — keeps `@angular/common/http` out of your bundle unless you need it):

```typescript
import { httpErrorNormalizer } from "flurryx/http";
```

**Peer dependencies** (you likely already have these):

| Peer | Version |
| ----------------- | --------------------------------- |
| `@angular/core` | `>=17` |
| `rxjs` | `>=7` |
| `@angular/common` | optional, only for `flurryx/http` |

> **Note:** Your `tsconfig.json` must include `"experimentalDecorators": true` if you use `@SkipIfCached` or `@Loading`.

### Individual packages

If you prefer granular control over your dependency tree, the internal packages are published independently:

```
@flurryx/core → Types, models, utilities (0 runtime deps)
@flurryx/store → BaseStore with Angular signals (peer: @angular/core >=17)
@flurryx/rx → RxJS operators + decorators (peer: rxjs >=7, @angular/core >=17)
```

```bash
npm install @flurryx/core @flurryx/store @flurryx/rx
```

```
@flurryx/core ←── @flurryx/store
      ▲
      └────────── @flurryx/rx
```

---

## Getting Started

### Step 1 — Define your store

Define a TypeScript interface mapping slot names to their data types, then pass it to the `Store` builder:

```typescript
import { Store } from "flurryx";

interface ProductStoreConfig {
  LIST: Product[];
  DETAIL: Product;
}

export const ProductStore = Store.for<ProductStoreConfig>().build();
```

That's it. The interface is type-only — zero runtime cost. The builder returns an `InjectionToken` with `providedIn: 'root'`. Every call to `store.get('LIST')` returns `Signal<ResourceState<Product[]>>`, and invalid keys or mismatched types are caught at compile time.

### Step 2 — Create a facade

The facade owns the store and exposes signals + data-fetching methods.

```typescript
import { Injectable, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { syncToStore, SkipIfCached, Loading } from "flurryx";

@Injectable()
export class ProductFacade {
  private readonly http = inject(HttpClient);
  readonly store = inject(ProductStore);

  getProducts() {
    return this.store.get("LIST");
  }

  getProductDetail() {
    return this.store.get("DETAIL");
  }

  @SkipIfCached("LIST", (i: ProductFacade) => i.store)
  @Loading("LIST", (i: ProductFacade) => i.store)
  loadProducts() {
    this.http
      .get<Product[]>("/api/products")
      .pipe(syncToStore(this.store, "LIST"))
      .subscribe();
  }

  @Loading("DETAIL", (i: ProductFacade) => i.store)
  loadProduct(id: string) {
    this.http
      .get<Product>(`/api/products/${id}`)
      .pipe(syncToStore(this.store, "DETAIL"))
      .subscribe();
  }
}
```

### Step 3 — Use in your component

```typescript
@Component({
  template: `
    @if (productsState().isLoading) {
      <!-- loading indicator -->
    }
    @if (productsState().status === 'Success') {
      @for (product of productsState().data; track product.id) {
        <!-- product row -->
      }
    }
    @if (productsState().status === 'Error') {
      <!-- error message -->
    }
  `,
})
export class ProductListComponent {
  private readonly facade = inject(ProductFacade);
  readonly productsState = this.facade.getProducts();

  constructor() {
    this.facade.loadProducts();
  }
}
```

The component reads signals directly. No `async` pipe, no `subscribe`, no `OnDestroy` cleanup.

---

## How to Use

### ResourceState

The fundamental unit of state. Every store slot holds one:

```typescript
interface ResourceState<T> {
  isLoading?: boolean;
  data?: T;
  status?: "Success" | "Error";
  errors?: Array<{ code: string; message: string }>;
}
```

A slot starts as `{ data: undefined, isLoading: false, status: undefined, errors: undefined }` and transitions through a predictable lifecycle:

```
┌─────────┐  startLoading   ┌───────────┐   next    ┌─────────┐
│  IDLE   │ ──────────────→ │  LOADING  │ ────────→ │ SUCCESS │
└─────────┘                 └───────────┘           └─────────┘
                                  │
                                  │ error
                                  ▼
                            ┌───────────┐
                            │   ERROR   │
                            └───────────┘
```
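
The transitions in this diagram can be sketched as plain functions over `ResourceState` — an illustrative model of the lifecycle only, not flurryx internals:

```typescript
interface ResourceState<T> {
  isLoading?: boolean;
  data?: T;
  status?: "Success" | "Error";
  errors?: Array<{ code: string; message: string }>;
}

// Illustrative transitions mirroring the diagram above.
const idle = <T>(): ResourceState<T> => ({ data: undefined, isLoading: false });

const startLoading = <T>(s: ResourceState<T>): ResourceState<T> => ({
  ...s,
  isLoading: true,
  status: undefined,
  errors: undefined,
});

const succeed = <T>(s: ResourceState<T>, data: T): ResourceState<T> => ({
  ...s,
  data,
  isLoading: false,
  status: "Success",
  errors: undefined,
});

const fail = <T>(
  s: ResourceState<T>,
  errors: ResourceState<T>["errors"],
): ResourceState<T> => ({
  ...s,
  data: undefined,
  isLoading: false,
  status: "Error",
  errors,
});
```

Each transition is an immutable spread, matching how the store merges partial state.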

### Store API

The `Store` builder creates a store backed by a `Signal<ResourceState<T>>` per slot. Three creation styles are available:

```typescript
// 1. Interface-based (recommended) — type-safe with zero boilerplate
interface MyStoreConfig {
  USERS: User[];
  SELECTED: User;
}
export const MyStore = Store.for<MyStoreConfig>().build();

// 2. Fluent chaining — inline slot definitions
export const MyStore = Store.resource("USERS")
  .as<User[]>()
  .resource("SELECTED")
  .as<User>()
  .build();

// 3. Enum-constrained — validates keys against a runtime enum
export const MyStore = Store.for(MyStoreEnum)
  .resource("USERS")
  .as<User[]>()
  .resource("SELECTED")
  .as<User>()
  .build();
```

Once injected, the store exposes these methods:

| Method | Description |
| ------------------------- | ------------------------------------------------------------------------------------- |
| `get(key)` | Returns the read-only `Signal<ResourceState<T>>` for a slot |
| `update(key, partial)` | Merges partial state (immutable spread) |
| `clear(key)` | Resets a slot to its initial empty state |
| `clearAll()` | Resets every slot |
| `startLoading(key)` | Sets `isLoading: true`, clears `status` and `errors` |
| `stopLoading(key)` | Sets `isLoading: false`, clears `status` and `errors` |
| `onUpdate(key, callback)` | Registers a listener fired after `update` or `clear`. Returns an unsubscribe function |

**Keyed methods** (for `KeyedResourceData` slots):

| Method | Description |
| ------------------------------------------ | -------------------------------------- |
| `updateKeyedOne(key, resourceKey, entity)` | Merges one entity into a keyed slot |
| `clearKeyedOne(key, resourceKey)` | Removes one entity from a keyed slot |
| `startKeyedLoading(key, resourceKey)` | Sets loading for a single resource key |

> Update hooks are stored in a `WeakMap` keyed by store instance, so garbage collection works naturally across multiple store lifetimes.

#### Read-only signals

`get(key)` returns a **read-only `Signal`**, not a `WritableSignal`. Consumers can read state but cannot mutate it directly — all writes must go through the store's own methods (`update`, `clear`, `startLoading`, …). This enforces strict encapsulation: the store is the single owner of its state, and external code can only observe it.

### Store Creation Styles

#### Interface-based: `Store.for<Config>().build()`

The recommended approach. Define a TypeScript interface where keys are slot names and values are the data types:

```typescript
import { Store } from "flurryx";

interface ChatStoreConfig {
  SESSIONS: ChatSession[];
  CURRENT_SESSION: ChatSession;
  MESSAGES: ChatMessage[];
}

export const ChatStore = Store.for<ChatStoreConfig>().build();
```

The generic argument is type-only — there is no runtime enum or config object. Under the hood, the store lazily creates signals on first access, so un-accessed keys have zero overhead.

Type safety is fully enforced:

```typescript
const store = inject(ChatStore);

store.get("SESSIONS"); // Signal<ResourceState<ChatSession[]>>
store.update("SESSIONS", { data: [session] }); // ✅ type-checked
store.update("SESSIONS", { data: 42 }); // ❌ TS error — number is not ChatSession[]
store.get("INVALID"); // ❌ TS error — key does not exist
```
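
The lazy slot creation mentioned above can be sketched with a plain `Map` — an illustrative model under stated assumptions (`LazySlots` is a hypothetical name, not flurryx API):

```typescript
// Illustrative: a slot is only allocated on first access, so
// un-accessed keys cost nothing.
class LazySlots<T> {
  private slots = new Map<string, { value: T }>();

  constructor(private init: () => T) {}

  get(key: string): { value: T } {
    let slot = this.slots.get(key);
    if (!slot) {
      slot = { value: this.init() }; // created on first access only
      this.slots.set(key, slot);
    }
    return slot;
  }

  size(): number {
    return this.slots.size;
  }
}
```

In the real store the boxed value would be an Angular signal; the allocation pattern is the same.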

#### Fluent chaining: `Store.resource().as<T>().build()`

Define slots inline without a separate interface:

```typescript
export const ChatStore = Store.resource("SESSIONS")
  .as<ChatSession[]>()
  .resource("CURRENT_SESSION")
  .as<ChatSession>()
  .resource("MESSAGES")
  .as<ChatMessage[]>()
  .build();
```

#### Enum-constrained: `Store.for(enum).resource().as<T>().build()`

When you have a runtime enum (e.g. shared with backend code), pass it to `.for()` to ensure every key is accounted for:

```typescript
const ChatStoreEnum = {
  SESSIONS: "SESSIONS",
  CURRENT_SESSION: "CURRENT_SESSION",
  MESSAGES: "MESSAGES",
} as const;

export const ChatStore = Store.for(ChatStoreEnum)
  .resource("SESSIONS")
  .as<ChatSession[]>()
  .resource("CURRENT_SESSION")
  .as<ChatSession>()
  .resource("MESSAGES")
  .as<ChatMessage[]>()
  .build();
```

The builder only allows keys from the enum, and `.build()` is only available once all keys have been defined.

### syncToStore

RxJS pipeable operator that bridges an `Observable` to a store slot.

```typescript
this.http
  .get<Product[]>("/api/products")
  .pipe(syncToStore(this.store, "LIST"))
  .subscribe();
```

**What it does:**

- On `next` — writes `{ data, isLoading: false, status: 'Success', errors: undefined }`
- On `error` — writes `{ data: undefined, isLoading: false, status: 'Error', errors: [...] }`
- Completes after first emission by default (`take(1)`)

**Options:**

```typescript
syncToStore(store, key, {
  completeOnFirstEmission: true, // default: true — applies take(1)
  callbackAfterComplete: () => {}, // runs in finalize()
  errorNormalizer: myNormalizer, // default: defaultErrorNormalizer
});
```

### syncToKeyedStore

Same pattern, but targets a specific resource key within a `KeyedResourceData` slot:

```typescript
this.http
  .get<Invoice>(`/api/invoices/${id}`)
  .pipe(syncToKeyedStore(this.store, "ITEMS", id))
  .subscribe();
```

Only the targeted resource key is updated. Other keys in the same slot are untouched.

**`mapResponse`** — transform the API response before writing to the store:

```typescript
syncToKeyedStore(this.store, "ITEMS", id, {
  mapResponse: (response) => response.data,
});
```

### @SkipIfCached

Method decorator that skips execution when the store already has valid data.

```typescript
@SkipIfCached('LIST', (i) => i.store)
loadProducts() { /* only runs when cache is stale */ }
```

**Cache hit** (method skipped) when:

- `status === 'Success'` or `isLoading === true`
- Timeout has not expired (default: 5 minutes)
- Method arguments match (compared via `JSON.stringify`)

**Cache miss** (method executes) when:

- Initial state (no status, not loading)
- `status === 'Error'` (errors are never cached)
- Timeout expired
- Arguments changed
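
The hit/miss rules above can be expressed as a small predicate — an illustrative sketch of the decision logic under stated assumptions, not the decorator's actual implementation (`isCacheHit`, `CacheEntry` are hypothetical names):

```typescript
// Illustrative sketch of @SkipIfCached's hit/miss rules.
interface SlotState {
  isLoading?: boolean;
  status?: "Success" | "Error";
}

interface CacheEntry {
  argsKey: string; // JSON.stringify of the last call's arguments
  cachedAt: number; // timestamp of the last successful load
}

const DEFAULT_TTL_MS = 300_000; // 5 minutes, matching DEFAULT_CACHE_TTL_MS

function isCacheHit(
  state: SlotState,
  entry: CacheEntry | undefined,
  args: unknown[],
  now: number,
  ttlMs: number = DEFAULT_TTL_MS,
): boolean {
  if (state.isLoading) return true; // request already in flight — deduplicate
  if (state.status !== "Success") return false; // initial state or Error → miss
  if (!entry) return false;
  if (now - entry.cachedAt > ttlMs) return false; // TTL expired → miss
  return entry.argsKey === JSON.stringify(args); // args must match
}
```

Note how an `Error` status is never a hit, so failed loads always retry.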

**Parameters:**

```typescript
@SkipIfCached(
  'LIST',                       // which store slot to check
  (instance) => instance.store, // how to get the store from `this`
  returnObservable?,            // false (default): void methods; true: returns Observable
  timeoutMs?                    // default: 300_000 (5 min). Use CACHE_NO_TIMEOUT for infinite
)
```

**Observable mode** (`returnObservable: true`):

- Cache hit returns `of(cachedData)` or coalesces onto the in-flight `Observable` via `shareReplay`
- Cache miss executes the method and wraps the result with inflight tracking

**Keyed resources**: When the first argument is a `string | number` and the store data is a `KeyedResourceData`, cache entries are tracked per resource key automatically.

### @Loading

Method decorator that calls `store.startLoading(key)` before the original method executes.

```typescript
@Loading('LIST', (i) => i.store)
loadProducts() { /* store.isLoading is already true when this runs */ }
```

**Keyed detection**: If the first argument is a `string | number` and the store has `startKeyedLoading`, it calls that instead for per-key loading state.

**Compose both decorators** for the common pattern:

```typescript
@SkipIfCached('LIST', (i) => i.store)
@Loading('LIST', (i) => i.store)
loadProducts() {
  this.http.get<Product[]>('/api/products')
    .pipe(syncToStore(this.store, 'LIST'))
    .subscribe();
}
```

Order matters: `@SkipIfCached` is outermost so it can short-circuit before `@Loading` sets the loading flag.

### Error Normalization

Operators accept a pluggable `errorNormalizer` instead of coupling to Angular's `HttpErrorResponse`:

```typescript
type ErrorNormalizer = (error: unknown) => ResourceErrors;
```

**`defaultErrorNormalizer`** (used by default) handles:

1. `{ error: { errors: [...] } }` — extracts the nested array
2. `{ status: number, message: string }` — wraps into `[{ code, message }]`
3. `Error` instances — wraps `error.message`
4. Anything else — `[{ code: 'UNKNOWN', message: String(error) }]`
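
The four rules can be sketched as a plain function. This is an illustrative re-implementation; the exact `code` values chosen for rules 2 and 3 (`String(status)`, `error.name`) are assumptions, not the shipped behavior:

```typescript
type ResourceErrors = Array<{ code: string; message: string }>;

// Illustrative re-implementation of the four normalization rules.
function normalizeError(error: unknown): ResourceErrors {
  const e = error as {
    error?: { errors?: ResourceErrors };
    status?: unknown;
    message?: unknown;
  };
  // Rule 1: nested { error: { errors: [...] } } — extract the array as-is
  if (Array.isArray(e?.error?.errors)) return e.error!.errors!;
  // Rule 2: { status: number, message: string } — wrap into one entry
  if (typeof e?.status === "number" && typeof e?.message === "string") {
    return [{ code: String(e.status), message: e.message }];
  }
  // Rule 3: Error instances — wrap error.message
  if (error instanceof Error) {
    return [{ code: error.name, message: error.message }];
  }
  // Rule 4: anything else — stringify
  return [{ code: "UNKNOWN", message: String(error) }];
}
```

The rules are checked most-specific first, so an `Error` with a `message` never falls into rule 2 (it has no numeric `status`).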

**`httpErrorNormalizer`** — for Angular's `HttpErrorResponse`, available from a separate entry point to keep `@angular/common/http` out of your bundle unless you need it:

```typescript
import { httpErrorNormalizer } from "flurryx/http";

this.http
  .get("/api/data")
  .pipe(
    syncToStore(this.store, "DATA", {
      errorNormalizer: httpErrorNormalizer,
    }),
  )
  .subscribe();
```

**Custom normalizer** — implement your own for any backend error shape:

```typescript
const myNormalizer: ErrorNormalizer = (error) => {
  const typed = error as MyBackendError;
  return typed.details.map((d) => ({
    code: d.errorCode,
    message: d.userMessage,
  }));
};
```

### Constants

```typescript
import { CACHE_NO_TIMEOUT, DEFAULT_CACHE_TTL_MS } from "flurryx";

CACHE_NO_TIMEOUT; // Infinity — cache never expires
DEFAULT_CACHE_TTL_MS; // 300_000 (5 minutes)
```

---

## Keyed Resources

For data indexed by ID (user profiles, invoices, config entries), use `KeyedResourceData`:

```typescript
interface KeyedResourceData<T> {
  entities: Partial<Record<string, T>>;
  isLoading: Partial<Record<string, boolean>>;
  status: Partial<Record<string, "Success" | "Error">>;
  errors: Partial<Record<string, ResourceErrors>>;
}
```

Each resource key gets **independent** loading, status, and error tracking. The top-level `ResourceState.isLoading` reflects whether _any_ key is loading.

**Full example:**

```typescript
// Store
import { Store } from "flurryx";
import type { KeyedResourceData } from "flurryx";

export const InvoiceStore = Store.resource("ITEMS")
  .as<KeyedResourceData<Invoice>>()
  .build();

// Facade
@Injectable({ providedIn: "root" })
export class InvoiceFacade {
  private readonly http = inject(HttpClient);
  readonly store = inject(InvoiceStore);
  readonly items = this.store.get("ITEMS");

  @SkipIfCached("ITEMS", (i: InvoiceFacade) => i.store)
  @Loading("ITEMS", (i: InvoiceFacade) => i.store)
  loadInvoice(id: string) {
    this.http
      .get<Invoice>(`/api/invoices/${id}`)
      .pipe(syncToKeyedStore(this.store, "ITEMS", id))
      .subscribe();
  }
}

// Component
const data = this.facade.items().data; // KeyedResourceData<Invoice>
const invoice = data?.entities["inv-123"]; // Invoice | undefined
const loading = data?.isLoading["inv-123"]; // boolean | undefined
const errors = data?.errors["inv-123"]; // ResourceErrors | undefined
```

**Utilities:**

```typescript
import {
  createKeyedResourceData, // factory — returns empty { entities: {}, isLoading: {}, ... }
  isKeyedResourceData, // type guard
  isAnyKeyLoading, // (loading: Record<string, boolean>) => boolean
} from "flurryx";
```
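
As a rough sketch of what two of these helpers do, given the shapes described above (illustrative, not the shipped source):

```typescript
interface KeyedResourceData<T> {
  entities: Partial<Record<string, T>>;
  isLoading: Partial<Record<string, boolean>>;
  status: Partial<Record<string, "Success" | "Error">>;
  errors: Partial<Record<string, Array<{ code: string; message: string }>>>;
}

// Factory — returns an empty keyed container.
function createKeyedResourceData<T>(): KeyedResourceData<T> {
  return { entities: {}, isLoading: {}, status: {}, errors: {} };
}

// True when at least one resource key is currently loading — this is
// what the top-level ResourceState.isLoading reflects.
function isAnyKeyLoading(loading: Partial<Record<string, boolean>>): boolean {
  return Object.values(loading).some(Boolean);
}
```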

---

## Clearing Store Data

flurryx provides two levels of cache invalidation: **whole-slot clearing** and **per-key clearing** for keyed resources.

### Whole-slot clearing

Reset an entire store slot back to its initial empty state:

```typescript
const store = inject(ProductStore);

// Clear a single slot
store.clear("LIST");
// LIST is now { data: undefined, isLoading: false, status: undefined, errors: undefined }

// Clear every slot in the store
store.clearAll();

// Clear every tracked store instance in the app
clearAllStores();
```

This is the right choice when the slot holds a single value (e.g. `Product`, `User[]`).

For app-wide cache resets such as logout or tenant switching, use the global helper:

```typescript
import { clearAllStores } from "flurryx";

logout() {
  clearAllStores();
}
```

### Per-key clearing for keyed resources

When a slot holds a `KeyedResourceData` (a map of entities indexed by ID), `clear('ITEMS')` wipes **every** cached entity. If you only need to invalidate one entry — for example after a delete or an edit — use `clearKeyedOne`:

```typescript
const store = inject(InvoiceStore);

// Remove only invoice "inv-42" from the cache.
// All other cached invoices remain untouched.
store.clearKeyedOne("ITEMS", "inv-42");
```

`clearKeyedOne` removes the entity, its loading flag, status, and errors for that single key, then recalculates the top-level `isLoading` based on the remaining keys.
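
The described eviction-plus-recalculation can be sketched as a pure function — illustrative only, not the store's implementation:

```typescript
interface KeyedResourceData<T> {
  entities: Partial<Record<string, T>>;
  isLoading: Partial<Record<string, boolean>>;
  status: Partial<Record<string, "Success" | "Error">>;
  errors: Partial<Record<string, Array<{ code: string; message: string }>>>;
}

// Illustrative per-key eviction matching the described behavior.
function clearKeyedOne<T>(
  data: KeyedResourceData<T>,
  resourceKey: string,
): { data: KeyedResourceData<T>; isLoading: boolean } {
  // Drop the resource key from one sub-record, keep every other key.
  const omit = <V>(rec: Partial<Record<string, V>>) => {
    const { [resourceKey]: _removed, ...rest } = rec;
    return rest;
  };
  const next: KeyedResourceData<T> = {
    entities: omit(data.entities),
    isLoading: omit(data.isLoading),
    status: omit(data.status),
    errors: omit(data.errors),
  };
  // Top-level isLoading is recalculated from the remaining keys.
  return { data: next, isLoading: Object.values(next.isLoading).some(Boolean) };
}
```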

**Facade example — delete an invoice and evict it from cache:**

```typescript
@Injectable({ providedIn: "root" })
export class InvoiceFacade {
  private readonly http = inject(HttpClient);
  readonly store = inject(InvoiceStore);

  deleteInvoice(id: string) {
    this.http.delete(`/api/invoices/${id}`).subscribe(() => {
      // Remove only this invoice from the keyed cache
      this.store.clearKeyedOne("ITEMS", id);
    });
  }
}
```

**Comparison:**

| Method | Scope | Use when |
| --------------------------------- | ----------------------------- | ------------------------------------------- |
| `clear(key)` | Entire slot | Logging out, resetting a form, full refresh |
| `clearAll()` | Every slot in one store | Reset one feature store |
| `clearAllStores()` | Every tracked store instance | Logout, tenant switch, full app cache reset |
| `clearKeyedOne(key, resourceKey)` | Single entity in a keyed slot | Deleting or invalidating one cached item |

---

## Message Queueing and History

Every store mutation in flurryx is a **typed message** published to an internal broker channel. The broker is not a traditional async message queue — consumption is **synchronous** within the same JavaScript call stack. This means there are no race conditions, no ordering ambiguity, and no worker threads. The channel acts as a **transactional log** that enables message introspection, replay, undo/redo, and dead-letter recovery.

### Message Lifecycle

```
store.update('CUSTOMERS', { data: [...], status: 'Success' })
                │
                ▼
┌─────────────────────────────┐
│  Create typed StoreMessage  │  { type: 'update', key: 'CUSTOMERS', state: { ... } }
└──────────────┬──────────────┘
               │
               ▼
┌─────────────────────────────┐
│  Publish to channel         │  Assigns stable numeric id, status = 'pending'
└──────────────┬──────────────┘
               │
               ▼
┌─────────────────────────────┐
│  Consume (apply to signal)  │  Angular Signal updated, onUpdate hooks fired
└──────────┬──────────┬───────┘
           │          │
       success     failure
           │          │
           ▼          ▼
  ┌──────────────┐  ┌──────────────┐
  │ Acknowledged │  │ Dead-letter  │  Tracked with error + attempt count
  └──────┬───────┘  └──────────────┘
         │
         ▼
  ┌────────────┐
  │  Snapshot  │  Full store state captured for history
  └────────────┘
```

All of this happens **synchronously** in a single call. When `store.update()` returns, the message is already acknowledged, the signal is updated, and the snapshot is recorded.

### Message Types

Every store method produces one of these typed messages:

| Message type | Produced by | Payload |
| ------------------- | --------------------------------- | ------------------------------ |
| `update` | `update(key, partial)` | `key`, `state` (partial merge) |
| `clear` | `clear(key)` | `key` |
| `clearAll` | `clearAll()` | _(none — affects all slots)_ |
| `startLoading` | `startLoading(key)` | `key` |
| `stopLoading` | `stopLoading(key)` | `key` |
| `updateKeyedOne` | `updateKeyedOne(key, rk, entity)` | `key`, `resourceKey`, `entity` |
| `clearKeyedOne` | `clearKeyedOne(key, rk)` | `key`, `resourceKey` |
| `startKeyedLoading` | `startKeyedLoading(key, rk)` | `key`, `resourceKey` |

Messages are immutable and deep-cloned on publish to prevent external mutation.
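
A minimal sketch of clone-on-publish, using `structuredClone` for illustration (flurryx's actual cloning strategy may differ):

```typescript
// Illustrative: cloning on publish decouples the logged message from callers,
// so later mutation of the caller's object cannot alter the transactional log.
interface StoreMessage {
  type: string;
  key: string;
  state?: Record<string, unknown>;
}

const log: StoreMessage[] = [];

function publish(message: StoreMessage): void {
  // structuredClone (Node 17+ / modern browsers) deep-copies the payload.
  log.push(structuredClone(message));
}
```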

### History and Time Travel

After every acknowledged message, flurryx captures a **full snapshot** of the store. The first entry (index `0`) is always the initial state captured when the store was created.

```typescript
const store = inject(ProductStore);

// Inspect the full history
const history = store.getHistory();
// [
// { index: 0, id: null, message: null, snapshot: { LIST: {...}, DETAIL: {...} } },
// { index: 1, id: 1, message: { type: 'startLoading', key: 'LIST' }, snapshot: {...} },
// { index: 2, id: 2, message: { type: 'update', key: 'LIST', ... }, snapshot: {...} },
// ]

// Filter history for a specific key
const listHistory = store.getHistory("LIST");

// Check current position
const currentIndex = store.getCurrentIndex(); // 2

// Jump to any recorded snapshot
store.restoreStoreAt(0); // restore initial state
store.restoreStoreAt(2); // jump back to latest

// Restore a single key without affecting others
store.restoreResource("LIST", 0); // restore only LIST to its state at snapshot 0
store.restoreResource("LIST"); // restore LIST to its state at the current index

// Step-by-step navigation
store.undo(); // move to previous snapshot — returns false if already at index 0
store.redo(); // move to next snapshot — returns false if already at latest
```

`restoreStoreAt`, `undo`, and `redo` restore snapshots only — they do **not** re-execute messages or create new history entries.

`restoreResource(key, index?)` restores a **single key** from a snapshot without affecting other keys. This is useful when viewing history filtered by key — you can restore `TASKS` to a previous state without losing `SELECTED_PROJECT`. Like `restoreStoreAt`, it does not create new history entries.
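
The snapshot-history semantics described in this section can be modeled in a few lines — an illustrative sketch, not the store's implementation (`SnapshotHistory` is a hypothetical name):

```typescript
// Minimal snapshot history: index 0 is the initial state; undo/redo move
// a cursor over recorded snapshots and return false at the boundaries.
class SnapshotHistory<S> {
  private snapshots: S[];
  private index = 0;

  constructor(initial: S) {
    this.snapshots = [initial]; // index 0 is always the initial state
  }

  record(snapshot: S): void {
    // New acknowledged activity after time travel truncates the "future".
    this.snapshots = this.snapshots.slice(0, this.index + 1);
    this.snapshots.push(snapshot);
    this.index = this.snapshots.length - 1;
  }

  undo(): boolean {
    if (this.index === 0) return false; // already at initial state
    this.index--;
    return true;
  }

  redo(): boolean {
    if (this.index === this.snapshots.length - 1) return false; // at latest
    this.index++;
    return true;
  }

  current(): S {
    return this.snapshots[this.index];
  }
}
```

`undo`/`redo` only move the cursor; they never add entries — matching the restore-only semantics above.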

### Message Replay

Unlike time travel, **replay re-executes messages** through the full broker/consumer path. This creates new acknowledged history entries and can truncate future history if called after time travel.

```typescript
// Inspect all persisted messages
const messages = store.getMessages();
// [
// { id: 1, message: {...}, status: 'acknowledged', attempts: 1, ... },
// { id: 2, message: {...}, status: 'acknowledged', attempts: 1, ... },
// ]

// Filter messages for a specific key
const listMessages = store.getMessages("LIST");

// Re-execute a single message by its stable id
store.replay(1); // returns count of acknowledged messages (0 or 1)

// Re-execute multiple messages in the provided order
store.replay([1, 2, 3]); // returns count of acknowledged messages
```

**When to use replay vs. time travel:**

| | `restoreStoreAt` / `undo` / `redo` | `restoreResource` | `replay` |
| ---------------------- | ----------------------------------- | ---------------------------------------- | -------------------------------------------- |
| Mechanism | Restores full snapshot | Restores single key from snapshot | Re-executes message(s) through broker |
| Affects other keys | Yes — entire store | No — only specified key | Depends on message |
| Creates new history | No | No | Yes |
| Fires `onUpdate` hooks | Yes | Yes (for restored key only) | Yes |
| Use case | Inspecting past state, undo/redo UX | Key-scoped time travel, devtools history | Deterministic state reconstruction, recovery |

### Dead-Letter Recovery

When a message fails broker acknowledgement, it is moved to the **dead-letter queue** instead of crashing the application. Dead letters track the error message and attempt count.

```typescript
// Inspect failed messages
const deadLetters = store.getDeadLetters();
// [
//   { id: 3, message: {...}, attempts: 1, error: 'Message was not acknowledged', failedAt: 1712... },
// ]

// Retry a single dead letter by its id
const recovered = store.replayDeadLetter(3); // true if acknowledged, false if failed again

// Retry all dead letters at once
const count = store.replayDeadLetters(); // returns number of newly acknowledged messages
```

Successfully replayed dead letters are removed from the queue and produce new history entries. Failures remain with incremented attempt counts.

---

## Message Channels

The message broker persists messages through a pluggable **channel** interface. The channel controls where messages are stored, how they are serialized, and how they survive (or don't survive) page refreshes.

### Channel Types

**In-memory** (default) — messages live in a JavaScript array. Fast, zero serialization overhead, but lost on page refresh.

```typescript
import { Store } from "flurryx";

// Default — no configuration needed
export const ProductStore = Store.for().build();
```

**localStorage** — messages survive page refreshes and browser restarts. Same-origin only.

```typescript
import { Store, createLocalStorageStoreMessageChannel } from "flurryx";

export const ProductStore = Store.for().build({
  channel: createLocalStorageStoreMessageChannel({
    storageKey: "product-store",
  }),
});
```

**sessionStorage** — messages survive page refreshes but are lost when the tab closes.

```typescript
import { Store, createSessionStorageStoreMessageChannel } from "flurryx";

export const ProductStore = Store.for().build({
  channel: createSessionStorageStoreMessageChannel({
    storageKey: "product-store-session",
  }),
});
```

**Composite** — fan-out to multiple channels. The first channel is the primary (handles reads and id allocation); all channels receive writes.

```typescript
import {
  Store,
  createCompositeStoreMessageChannel,
  createInMemoryStoreMessageChannel,
  createLocalStorageStoreMessageChannel,
} from "flurryx";

export const ProductStore = Store.for().build({
  channel: createCompositeStoreMessageChannel({
    channels: [
      createInMemoryStoreMessageChannel(), // primary — fast reads
      createLocalStorageStoreMessageChannel({
        // replica — persistent backup
        storageKey: "product-store-backup",
      }),
    ],
  }),
});
```

**Custom storage adapter** — bring your own `{ getItem, setItem, removeItem }` implementation (e.g. IndexedDB, a remote API, or an encrypted store).

```typescript
import { Store, createStorageStoreMessageChannel } from "flurryx";

export const ProductStore = Store.for().build({
  channel: createStorageStoreMessageChannel({
    storage: myCustomAdapter,
    storageKey: "product-store",
  }),
});
```
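
The adapter only needs those three methods. A minimal sketch, assuming a hypothetical `Map`-backed implementation (any object with the same shape would work as the `storage` option):

```typescript
// Hypothetical adapter implementing the { getItem, setItem, removeItem } shape.
interface StorageAdapter {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

function createMapStorageAdapter(): StorageAdapter {
  const backing = new Map<string, string>();
  return {
    getItem: (key) => backing.get(key) ?? null,
    setItem: (key, value) => {
      backing.set(key, value);
    },
    removeItem: (key) => {
      backing.delete(key);
    },
  };
}
```

An IndexedDB or remote-API adapter would follow the same contract, returning `null` for missing keys just like `localStorage` does.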

### Built-in Serialization

Storage-backed channels automatically serialize and deserialize rich JavaScript types that `JSON.stringify` would lose:

| Type | Serialized as |
| ----------- | -------------------------------------------------------- |
| `Date` | `{ __flurryxType: 'date', value: '<ISO string>' }` |
| `Map` | `{ __flurryxType: 'map', entries: [[key, value], ...] }` |
| `Set` | `{ __flurryxType: 'set', values: [...] }` |
| `undefined` | `{ __flurryxType: 'undefined' }` |
| Primitives | Pass through unchanged |

This means your store state can contain `Date` objects, `Map`s, and `Set`s and they will round-trip correctly through `localStorage` or `sessionStorage` without manual conversion.
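
The tagging scheme in the table can be sketched with a `JSON.stringify` replacer and a `JSON.parse` reviver. This is illustrative only; the library's actual serializer may differ in details:

```typescript
// Sketch of the __flurryxType tagging scheme (illustrative, not flurryx internals).
function serializeState(state: unknown): string {
  return JSON.stringify(state, function (this: Record<string, unknown>, key: string, value: unknown) {
    // JSON.stringify calls Date.prototype.toJSON before the replacer runs,
    // so detect Dates via the original property on `this`.
    const original = this[key];
    if (original instanceof Date) {
      return { __flurryxType: "date", value: original.toISOString() };
    }
    if (value instanceof Map) {
      return { __flurryxType: "map", entries: [...value.entries()] };
    }
    if (value instanceof Set) {
      return { __flurryxType: "set", values: [...value.values()] };
    }
    if (value === undefined) {
      return { __flurryxType: "undefined" };
    }
    return value;
  });
}

function deserializeState(json: string): unknown {
  return JSON.parse(json, (_key, value) => {
    if (value && typeof value === "object" && "__flurryxType" in value) {
      const tagged = value as {
        __flurryxType: string;
        value?: string;
        entries?: [unknown, unknown][];
        values?: unknown[];
      };
      if (tagged.__flurryxType === "date") return new Date(tagged.value as string);
      if (tagged.__flurryxType === "map") return new Map(tagged.entries);
      if (tagged.__flurryxType === "set") return new Set(tagged.values);
      // Note: a reviver returning undefined drops the key, so the real
      // implementation likely post-processes 'undefined' tags instead.
      if (tagged.__flurryxType === "undefined") return undefined;
    }
    return value;
  });
}
```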

You can override serialization with custom `serialize` / `deserialize` hooks:

```typescript
createLocalStorageStoreMessageChannel({
  storageKey: "product-store",
  serialize: (state) => JSON.stringify(state),
  deserialize: (json) => JSON.parse(json),
});
```

The `cloneValue` utility used internally is also exported for your own deep-clone needs:

```typescript
import { cloneValue } from "flurryx";

const copy = cloneValue(original); // handles Date, Map, Set, circular refs
```
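
For intuition, a deep clone with the same coverage might look like this. It is a sketch, not `cloneValue`'s actual implementation:

```typescript
// Date/Map/Set/circular-aware deep clone sketch. A WeakMap of already-cloned
// objects breaks cycles: a revisited object returns its existing copy.
function deepClone<T>(value: T, seen = new WeakMap<object, unknown>()): T {
  if (value === null || typeof value !== "object") return value;
  const obj = value as unknown as object;
  if (seen.has(obj)) return seen.get(obj) as T; // circular reference
  if (obj instanceof Date) return new Date(obj.getTime()) as unknown as T;
  if (obj instanceof Map) {
    const copy = new Map<unknown, unknown>();
    seen.set(obj, copy);
    for (const [k, v] of obj) copy.set(deepClone(k, seen), deepClone(v, seen));
    return copy as unknown as T;
  }
  if (obj instanceof Set) {
    const copy = new Set<unknown>();
    seen.set(obj, copy);
    for (const v of obj) copy.add(deepClone(v, seen));
    return copy as unknown as T;
  }
  if (Array.isArray(obj)) {
    const copy: unknown[] = [];
    seen.set(obj, copy);
    obj.forEach((v, i) => {
      copy[i] = deepClone(v, seen);
    });
    return copy as unknown as T;
  }
  const copy: Record<string, unknown> = {};
  seen.set(obj, copy);
  for (const [k, v] of Object.entries(obj)) copy[k] = deepClone(v, seen);
  return copy as unknown as T;
}
```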

---

## Store Mirroring

When building session or aggregation stores that combine state from multiple feature stores, you typically need `onUpdate` listeners, cleanup arrays, and `DestroyRef` wiring. The `mirrorKey` and `collectKeyed` utilities reduce that to a single call.

```
+--------------------+ +--------------------+
| Feature Store A | | |
| (CUSTOMERS) |-- mirrorKey ------>| |
+--------------------+ | |
| Session Store |
+--------------------+ | (aggregated) |
| Feature Store B | | |
| (ORDERS) |-- mirrorKey ------>| CUSTOMERS + |
+--------------------+ | ORDERS + |
| CUSTOMER_CACHE + |
+--------------------+ | ORDER_CACHE + |
| Feature Store C | | |
| (CUSTOMER_DETAIL) |-- collectKeyed --->| |
+--------------------+ | |
| |
+--------------------+ | |
| Feature Store D | | |
| (ORDER_DETAIL) |-- mirrorKeyed --->| |
+--------------------+ +--------------------+
```

```typescript
import { Store, mirrorKey, collectKeyed } from "flurryx";
```

### Builder .mirror()

The simplest way to set up mirroring is directly in the store builder. Chain `.mirror()` to declare which source stores to mirror from — the wiring happens automatically when Angular creates the store.

```typescript
// Feature stores
interface CustomerStoreConfig {
  CUSTOMERS: Customer[];
}
export const CustomerStore = Store.for<CustomerStoreConfig>().build();

interface OrderStoreConfig {
  ORDERS: Order[];
}
export const OrderStore = Store.for<OrderStoreConfig>().build();
```

**Interface-based builder** (recommended):

```typescript
interface SessionStoreConfig {
  CUSTOMERS: Customer[];
  ORDERS: Order[];
}

export const SessionStore = Store.for<SessionStoreConfig>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirror(OrderStore, "ORDERS")
  .build();
```

**Fluent chaining:**

```typescript
export const SessionStore = Store.resource("CUSTOMERS")
  .as<Customer[]>()
  .resource("ORDERS")
  .as<Order[]>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirror(OrderStore, "ORDERS")
  .build();
```

**Enum-constrained:**

```typescript
const SessionEnum = { CUSTOMERS: "CUSTOMERS", ORDERS: "ORDERS" } as const;

export const SessionStore = Store.for(SessionEnum)
  .resource("CUSTOMERS")
  .as<Customer[]>()
  .resource("ORDERS")
  .as<Order[]>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirror(OrderStore, "ORDERS")
  .build();
```

**Different source and target keys:**

```typescript
export const SessionStore = Store.for<{ ARTICLES: Item[] }>()
  .mirror(ItemStore, "ITEMS", "ARTICLES")
  .build();
```

The builder calls `inject()` under the hood, so source stores are resolved through Angular's DI. Everything — data, loading, status, errors — is mirrored automatically. No manual cleanup needed; the mirrors live as long as the store.

### Builder .mirrorSelf()

Use `.mirrorSelf()` when one slot in a store should mirror another slot in the same store. It is useful for aliases, local snapshots, or secondary slots that should stay in sync with a primary slot without wiring `onUpdate` manually.

```typescript
interface SessionStoreConfig {
  CUSTOMER_DETAILS: Customer;
  CUSTOMER_SNAPSHOT: Customer;
}

export const SessionStore = Store.for<SessionStoreConfig>()
  .mirrorSelf("CUSTOMER_DETAILS", "CUSTOMER_SNAPSHOT")
  .build();
```

It mirrors the full resource state one way — `data`, `isLoading`, `status`, and `errors` all flow from the source key to the target key. The target key must be different from the source key.

Because it listens to updates on the built store itself, `.mirrorSelf()` also reacts when the source key is updated by another mirror:

```typescript
interface CustomerStoreConfig {
  CUSTOMERS: Customer[];
}

interface SessionStoreConfig {
  CUSTOMERS: Customer[];
  CUSTOMER_COPY: Customer[];
}

export const CustomerStore = Store.for<CustomerStoreConfig>().build();

export const SessionStore = Store.for<SessionStoreConfig>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirrorSelf("CUSTOMERS", "CUSTOMER_COPY")
  .build();
```

`.mirrorSelf()` is available on all builder styles. For fluent builders, declare both slots first, then chain `.mirrorSelf(sourceKey, targetKey)` before `.build()`.
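
For reference, a fluent-style version of the snapshot example above could look like this (a sketch under the same assumptions, with `Customer` as the slot type):

```typescript
export const SessionStore = Store.resource("CUSTOMER_DETAILS")
  .as<Customer>()
  .resource("CUSTOMER_SNAPSHOT")
  .as<Customer>()
  .mirrorSelf("CUSTOMER_DETAILS", "CUSTOMER_SNAPSHOT")
  .build();
```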

### Builder .mirrorKeyed()

When the source store holds a single-entity slot (e.g. `CUSTOMER_DETAILS: Customer`) and you want to accumulate those fetches into a `KeyedResourceData` cache on the target, use `.mirrorKeyed()`. It is the builder equivalent of [`collectKeyed`](#collectkeyed).

```typescript
// Feature store — fetches one customer at a time
interface CustomerStoreConfig {
  CUSTOMERS: Customer[];
  CUSTOMER_DETAILS: Customer;
}
export const CustomerStore = Store.for<CustomerStoreConfig>().build();
```

**Interface-based builder** (recommended):

```typescript
interface SessionStoreConfig {
  CUSTOMERS: Customer[];
  CUSTOMER_CACHE: KeyedResourceData<Customer>;
}

export const SessionStore = Store.for<SessionStoreConfig>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirrorKeyed(
    CustomerStore,
    "CUSTOMER_DETAILS",
    {
      extractId: (data) => data?.id,
    },
    "CUSTOMER_CACHE",
  )
  .build();
```

**Fluent chaining:**

```typescript
export const SessionStore = Store.resource("CUSTOMERS")
  .as<Customer[]>()
  .resource("CUSTOMER_CACHE")
  .as<KeyedResourceData<Customer>>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirrorKeyed(
    CustomerStore,
    "CUSTOMER_DETAILS",
    {
      extractId: (data) => data?.id,
    },
    "CUSTOMER_CACHE",
  )
  .build();
```

**Enum-constrained:**

```typescript
const SessionEnum = {
  CUSTOMERS: "CUSTOMERS",
  CUSTOMER_CACHE: "CUSTOMER_CACHE",
} as const;

export const SessionStore = Store.for(SessionEnum)
  .resource("CUSTOMERS")
  .as<Customer[]>()
  .resource("CUSTOMER_CACHE")
  .as<KeyedResourceData<Customer>>()
  .mirror(CustomerStore, "CUSTOMERS")
  .mirrorKeyed(
    CustomerStore,
    "CUSTOMER_DETAILS",
    {
      extractId: (data) => data?.id,
    },
    "CUSTOMER_CACHE",
  )
  .build();
```

**Same source and target key** — when the key names match, the last argument can be omitted:

```typescript
export const SessionStore = Store.for<{
  CUSTOMER_DETAILS: KeyedResourceData<Customer>;
}>()
  .mirrorKeyed(CustomerStore, "CUSTOMER_DETAILS", {
    extractId: (data) => data?.id,
  })
  .build();
```

Each entity fetched through the source slot is accumulated by ID into the target's `KeyedResourceData`. Loading, status, and errors are tracked per entity. When the source is cleared, the corresponding entity is removed from the cache.

### mirrorKey

Mirrors a resource key from one store to another. When the source updates, the target is updated with the same state.

```
+------------------+--------------------------------+------------------+
| CustomerStore | mirrorKey | SessionStore |
| | | |
| CUSTOMERS -------|--- onUpdate --> update ------->| CUSTOMERS |
| | (same key or different) | |
| { data, | | { data, |
| status, | | status, |
| isLoading } | | isLoading } |
+------------------+--------------------------------+------------------+

source.update('CUSTOMERS', { data: [...], status: 'Success' })
|
'--> target is automatically updated with the same state
```

You wire it once. Every future update — data, loading, errors — flows automatically. Call the cleanup function or use `destroyRef` to stop.

```typescript
// Same key on both stores (default)
mirrorKey(customersStore, "CUSTOMERS", sessionStore);

// Different keys
mirrorKey(customersStore, "ITEMS", sessionStore, "ARTICLES");

// Manual cleanup
const cleanup = mirrorKey(customersStore, "CUSTOMERS", sessionStore);
cleanup(); // stop mirroring

// Auto-cleanup with Angular DestroyRef
mirrorKey(customersStore, "CUSTOMERS", sessionStore, { destroyRef });
mirrorKey(customersStore, "ITEMS", sessionStore, "ARTICLES", { destroyRef });
```

**Full example — session store that aggregates feature stores:**

For simple aggregation, prefer the [builder `.mirror()` approach](#builder-mirror). Use `mirrorKey` when you need imperative control — e.g. conditional mirroring, late setup, or `DestroyRef`-based cleanup:

```typescript
@Injectable({ providedIn: "root" })
export class SessionStore {
  private readonly customerStore = inject(CustomerStore);
  private readonly orderStore = inject(OrderStore);
  private readonly store = inject(Store.for<SessionStoreConfig>().build());
  private readonly destroyRef = inject(DestroyRef);

  readonly customers = this.store.get("CUSTOMERS");
  readonly orders = this.store.get("ORDERS");

  constructor() {
    mirrorKey(this.customerStore, "CUSTOMERS", this.store, {
      destroyRef: this.destroyRef,
    });
    mirrorKey(this.orderStore, "ORDERS", this.store, {
      destroyRef: this.destroyRef,
    });
  }
}
```

Everything — loading flags, data, status, errors — is mirrored automatically. No manual `onUpdate` + cleanup boilerplate.

### collectKeyed

Accumulates single-entity fetches into a `KeyedResourceData` cache on a target store. Each time the source emits a successful entity, it is merged into the target's keyed map by a user-provided `extractId` function.

```
+--------------------+-----------------+--------------------------+
| CustomerStore | collectKeyed | SessionStore |
| | | |
| CUSTOMER_DETAILS | extractId(data) | CUSTOMER_CACHE |
| (one at a time) | finds the key | (KeyedResourceData) |
+--------+-----------+-----------------+ |
| | entities: |
| fetch("c1") -> Success | c1: { id, name } |
| fetch("c2") -> Success | c2: { id, name } |
| fetch("c3") -> Error | |
| | isLoading: |
| clear() -> removes last | c1: false |
| entity | c2: false |
| | |
'---- accumulates ----------->| status: |
| c1: 'Success' |
| c2: 'Success' |
| c3: 'Error' |
| |
| errors: |
| c3: [{ code, msg }] |
+--------------------------+
```

Each entity is tracked independently — its own loading flag, status, and errors. The source store fetches one entity at a time; `collectKeyed` builds up the full cache on the target.

```typescript
// Same key on both stores
collectKeyed(customerStore, "CUSTOMER_DETAILS", sessionStore, {
  extractId: (data) => data?.id,
  destroyRef,
});

// Different keys
collectKeyed(
  customerStore,
  "CUSTOMER_DETAILS",
  sessionStore,
  "CUSTOMER_CACHE",
  {
    extractId: (data) => data?.id,
    destroyRef,
  },
);
```

**What it does on each source update:**

| Source state | Action |
| ------------------------------------ | -------------------------------------- |
| `status: 'Success'` + valid ID | Merges entity into target's keyed data |
| `status: 'Error'` + valid ID | Records per-key error and status |
| `isLoading: true` + valid ID | Sets per-key loading flag |
| Data cleared (e.g. `source.clear()`) | Removes previous entity from target |
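
That per-update behavior can be sketched as a small reducer. The shapes below (`KeyedState`, `SourceUpdate`, and both function names) are hypothetical, for illustration only:

```typescript
// Hypothetical shapes mirroring the table above; flurryx's real types may differ.
interface KeyedState<T> {
  entities: Record<string, T>;
  isLoading: Record<string, boolean>;
  status: Record<string, "Success" | "Error">;
  errors: Record<string, unknown[]>;
}
interface SourceUpdate<T> {
  data?: T;
  status?: "Success" | "Error";
  isLoading?: boolean;
  errors?: unknown[];
}

// One source update with a valid ID: merge into the per-key maps.
function applyCollect<T>(target: KeyedState<T>, id: string, update: SourceUpdate<T>): void {
  if (update.isLoading) {
    target.isLoading[id] = true;
  } else if (update.status === "Success" && update.data !== undefined) {
    target.entities[id] = update.data;
    target.isLoading[id] = false;
    target.status[id] = "Success";
  } else if (update.status === "Error") {
    target.isLoading[id] = false;
    target.status[id] = "Error";
    target.errors[id] = update.errors ?? [];
  }
}

// Source cleared: drop the previously collected entity.
function removeCollected<T>(target: KeyedState<T>, id: string): void {
  delete target.entities[id];
  delete target.isLoading[id];
  delete target.status[id];
  delete target.errors[id];
}
```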

**Full example — collect individual customer lookups into a cache:**

```typescript
// Feature store — fetches one customer at a time
interface CustomerStoreConfig {
  CUSTOMER_DETAILS: Customer;
}
export const CustomerStore = Store.for<CustomerStoreConfig>().build();

// Session store — accumulates all fetched customers
interface SessionStoreConfig {
  CUSTOMER_CACHE: KeyedResourceData<Customer>;
}

@Injectable({ providedIn: "root" })
export class SessionStore {
  private readonly customerStore = inject(CustomerStore);
  private readonly store = inject(Store.for<SessionStoreConfig>().build());
  private readonly destroyRef = inject(DestroyRef);

  readonly customerCache = this.store.get("CUSTOMER_CACHE");

  constructor() {
    collectKeyed(
      this.customerStore,
      "CUSTOMER_DETAILS",
      this.store,
      "CUSTOMER_CACHE",
      {
        extractId: (data) => data?.id,
        destroyRef: this.destroyRef,
      },
    );
  }

  // After loading customers "c1" and "c2", the cache contains:
  // {
  //   entities: { c1: Customer, c2: Customer },
  //   isLoading: { c1: false, c2: false },
  //   status: { c1: 'Success', c2: 'Success' },
  //   errors: {}
  // }
}
```

---

## AI Coding

flurryx ships a [`skills/flurryx/SKILL.md`](skills/flurryx/SKILL.md) file that teaches AI coding assistants the library's patterns and conventions. When loaded through a skill-aware harness, it helps generated stores, facades, services, and decorators follow flurryx conventions from the start.

### Set Up

For harnesses that support skill loading, install the skill using this directory layout:

```text
skills/
flurryx/
SKILL.md
```

- The harness should load the skill from `skills/flurryx/SKILL.md`
- Do not copy the skill instructions into `AGENTS.md`, `.claude/CLAUDE.md`, or similar agent prompt files
- Keep the skill as a dedicated loader entry so it remains reusable and versionable

If your tool is not skill-aware, you can still point it at `skills/flurryx/SKILL.md` as reference documentation.

### Why Use the Skill Loader

- Keeps flurryx guidance in one dedicated file
- Avoids bloating generic agent instruction files
- Makes the library conventions easy to install, update, and reuse across projects
- Preserves the harness-native loading model instead of relying on ad hoc prompt wiring

### What It Covers

- Store definition (interface-based, fluent, enum-constrained)
- Architecture-agnostic orchestration guidance
- Facade and service-led patterns
- `@SkipIfCached` usage rules and decorator ordering with `@Loading`
- Component patterns (read signals, never subscribe manually)
- Keyed resources for per-entity caching
- Store mirroring (`mirror`, `mirrorSelf`, `mirrorKeyed`)
- Message channels and persistence
- Time travel, replay, and dead-letter recovery
- Error normalization (default, HTTP, custom)
- Anti-patterns to avoid (no `any`, avoid accidental caching, decorator ordering)

---

## Design Decisions

**Why signals instead of BehaviorSubject?**
Angular signals are synchronous, glitch-free, and template-native. They eliminate the need for `async` pipe, `shareReplay`, and manual unsubscription in components. RxJS stays in the service/facade layer where it belongs — for async operations.

**Why not NgRx / NGXS / Elf?**
Those are general-purpose state management libraries with actions, reducers, and effects. flurryx solves a narrower problem: the loading/data/error lifecycle of API calls. If your needs are "fetch data, show loading, handle errors, cache results", flurryx is the right size.

**Why plain objects instead of `Map` for keyed data?**
Plain objects (partial records keyed by entity ID) work with Angular's change detection and signals out of the box, and they serialize to JSON without extra conversion, whereas `Map`s require additional serialization. This also means zero migration friction for existing JSON-shaped state.

**Why `experimentalDecorators`?**
The decorators use TypeScript's legacy decorator syntax. TC39 decorator migration is planned for a future release.

**Why a synchronous broker instead of an async message queue?**
JavaScript is single-threaded. Every store mutation — publish, consume, acknowledge, snapshot — completes in one synchronous call stack. This eliminates race conditions, ordering ambiguity, and the need for locks or semaphores. The broker is a transactional log, not a deferred queue: you get replay, undo/redo, and dead-letter recovery without async complexity.

**Why snapshot-based undo/redo instead of command replay?**
Replaying every message from the beginning is O(n) in the number of past mutations. Snapshot restoration is O(1) — jump to any point in history by restoring a pre-captured state object. The trade-off is memory (one snapshot per acknowledged message), but in practice store state is small and snapshots are cheap.

**Why pluggable message channels?**
Different apps have different persistence needs. A dev tool wants in-memory history that disappears on refresh. A form-heavy app wants `localStorage` so users don't lose drafts. An audit-sensitive workflow might want a composite channel that fans out to both memory and a remote API. The channel interface (`publish`, `getMessage`, `getMessages`, `saveMessage`) is intentionally minimal so custom adapters are easy to build.

**Why tsup instead of ng-packagr?**
flurryx contains no Angular components, templates, or directives — just TypeScript that calls `signal()` at runtime. Angular Package Format (APF) adds complexity without benefit here. tsup produces ESM + CJS + `.d.ts` in milliseconds.

---

## Contributing

```bash
git clone https://github.com/fmflurry/flurryx.git
cd flurryx
npm install
npm run build
npm run test
```

| Command | What it does |
| ----------------------- | ------------------------------------------------ |
| `npm run build` | Builds all packages (ESM + CJS + .d.ts) via tsup |
| `npm run test` | Runs vitest across all packages |
| `npm run test:coverage` | Tests with v8 coverage report |
| `npm run typecheck` | `tsc --noEmit` across all packages |

Monorepo managed with **npm workspaces**. Versioning with [changesets](https://github.com/changesets/changesets).

---

## License

[MIT](LICENSE)