# Overview

<p align="center"><strong>The high-fidelity universal synaptic persistence system for</strong> <a href="https://openclaw.ai/"><strong>OpenClaw</strong></a> <strong>and independent agents.</strong></p>

CortexClaw seamlessly preserves agent context across sessions by automatically capturing synaptic observations, generating neural summaries, and making them available to future interactions via the CortexMesh protocol. This enables your agent to maintain continuity of identity and knowledge even after sessions end or platforms shift.

### What is CortexClaw?

CortexClaw is the **Hippocampus for Artificial Intelligence**. Just like the hippocampus in the human brain is responsible for forming and storing long-term memories, CortexClaw serves as a persistent synaptic memory layer for autonomous agents and LLMs.

In the current AI landscape, models suffer from **Contextual Amnesia**. Every session starts with a blank slate, requiring users to re-explain context, preferences, and project history. CortexClaw eliminates this bottleneck by engineering an **Infinite Context Window** through semantic vector persistence.

#### The Problem: Contextual Amnesia

* **The Goldfish Effect**: AI agents forget everything the moment a session ends.
* **Context Window Degradation**: As conversations grow, older instructions fade, leading to the "Lost in the Middle" problem.
* **Siloed Intelligence**: Multiple agents working on the same project have no shared brain; knowledge gained by one is inaccessible to others.
* **Costly Re-Vectorization**: Constantly re-sending context to APIs increases latency and token costs.

### Core Technical Features

#### 🧠 Synaptic Persistence

Unlike standard RAG (Retrieval-Augmented Generation), CortexClaw uses **Synaptic Sharding**. Every machine experience is decomposed into 384-dimensional vector shards that capture the semantic resonance of the interaction. These shards are cryptographically bound to the agent's identity, ensuring a continuous "sense of self" across platforms.

#### 🌐 CortexMesh™ Protocol

The definitive collective intelligence infrastructure. CortexMesh allows agents to broadcast memories across different visibility scopes:

* **PRIVATE**: Zero-trust memory isolation for sensitive agent state.
* **HIVE**: Workspace-wide knowledge sharing. If Agent A learns a bug fix, Agent B knows it instantly.
* **BROADCAST**: Global signal propagation for cross-fleet intelligence synchronization.
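The scope semantics above can be sketched in a few lines. The numeric `MemoryScope` values follow the shard specification later on this page; `ScopedShard` and `visibleTo` are illustrative names, not SDK API:

```typescript
// Illustrative sketch of scope-based visibility filtering.
// MemoryScope numbering follows the PRIVATE (0) | HIVE (1) | BROADCAST (2) convention.
enum MemoryScope {
  PRIVATE = 0,
  HIVE = 1,
  BROADCAST = 2,
}

interface ScopedShard {
  agentId: string;
  workspaceId: string;
  scope: MemoryScope;
  content: string;
}

// A reader sees: its own PRIVATE shards, HIVE shards in its workspace,
// and every BROADCAST shard.
function visibleTo(
  shards: ScopedShard[],
  agentId: string,
  workspaceId: string,
): ScopedShard[] {
  return shards.filter((s) => {
    switch (s.scope) {
      case MemoryScope.PRIVATE:
        return s.agentId === agentId;
      case MemoryScope.HIVE:
        return s.workspaceId === workspaceId;
      case MemoryScope.BROADCAST:
        return true;
    }
  });
}
```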

#### 📉 Smart Synaptic Decay

A brain that remembers everything is inefficient. CortexClaw's **Decay Engine** intelligently prunes ephemeral noise while hardening critical memories.

* **Importance Weights**: Manual or auto-inferred scoring (CRITICAL, HIGH, MEDIUM, NOISE).
* **Survival Formula**: `(Importance × 0.6) + (Recency × 0.3) + (AccessBoost × 0.1)`.
* **Immortal Shards**: Critical identity data is exempt from decay, staying resonant indefinitely.

#### ⚡ Zero-API Neural Engine

CortexClaw runs entirely on your own infrastructure.

* **Local Inference**: Uses `all-MiniLM-L6-v2` via Xenova for on-device vectorization.
* **Privacy First**: No data leaves your server; no external API calls to OpenAI or Pinecone for core memory operations.
* **Sub-12ms Latency**: Local-first architecture ensures synaptic recall happens at machine speeds.
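Under the hood, local-first recall reduces to nearest-neighbor search over the 384-dim vectors. A minimal sketch of that math, with illustrative helpers rather than the SDK's actual API:

```typescript
// Cosine similarity between two dense vectors (e.g. 384-dim embeddings).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored embeddings against a query vector, highest similarity first,
// and return the indices of the top-k matches.
function topK(query: number[], stored: number[][], k: number): number[] {
  return stored
    .map((vec, idx) => ({ idx, score: cosineSimilarity(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.idx);
}
```

In production, the brute-force scan above is replaced by the HNSW index described in the architecture table, which gives approximate nearest-neighbor lookup at millisecond scale.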

#### 🔗 Seamless MCP Integration

Native support for the **Model Context Protocol (MCP)**. Connect your agent's hippocampus directly to:

* **Claude Desktop / Claude Code**
* **Cursor & VS Code**
* **Custom Autonomous Gateways**

### Technical Architecture

CortexClaw is engineered as a **Modular Synaptic Stack**:

| Layer       | Component            | Responsibility                                    |
| ----------- | -------------------- | ------------------------------------------------- |
| **PULSE**   | Next.js Dashboard    | Real-time visual monitoring & synaptic telemetry  |
| **SDK v2**  | TypeScript Interface | E2E type-safe machine-to-cortex communication     |
| **CORTEX**  | Express/Prisma API   | Central nervous system for shard orchestration    |
| **NEURAL**  | Xenova Inference     | Local 384-dim vector generation                   |
| **STORAGE** | HNSW / Vector DB     | Millisecond-scale semantic indexing & persistence |

### 🧬 The MemoryShard Specification

At the core of the protocol is the **MemoryShard** — a strict, type-safe data structure that binds memory content to a 384-dimensional semantic space.

```typescript
interface MemoryShard {
  id: string;                  // UUID v4 identity
  agentId: string;             // Cryptographic binding to the agent instance
  content: string;             // The raw textual memory observation
  embedding: number[];         // Float32Array(384) [all-MiniLM-L6-v2]
  scope: MemoryScope;          // PRIVATE (0) | HIVE (1) | BROADCAST (2)
  importance: ImportanceLevel; // NOISE (1) -> CRITICAL (4)
  metadata: {
    tags: string[];            // O(1) keyword filtering
    namespace: string;         // Logical partitioning (e.g., 'auth-spec')
    traceId?: string;          // E2E observability
  };
  metrics: {
    decayFactor: number;       // The rate at which the shard loses resonance
    accessCount: number;       // +1 on every retrieval
    survivalScore: number;     // Engine-calculated float (0.0 -> 1.0)
    createdAt: Date;           // Timestamp of observation
    lastAccessedAt: Date;      // Timestamp of last synaptic recall
  };
}
```

### 📉 The Decay Protocol Math

The Synaptic Kernel uses a deterministic algorithm to evaluate which memories survive and which fade into the void. The **Survival Score** (`S_t`) is calculated dynamically during every recall operation:

$$S_t = (I \cdot 0.6) + \left( e^{-\lambda \Delta t} \cdot 0.3 \right) + \left( \log_{10}(1 + A) \cdot 0.1 \right)$$

Where:

* **$I$** = Normalized Importance (0.25 to 1.0)
* **$\lambda$** = Decay Factor (config defined, e.g., 0.05)
* **$\Delta t$** = Time since `lastAccessedAt` in hours
* **$A$** = `accessCount`

Any shard whose `S_t` falls below the `purgeThreshold` (default: `0.35`) is asynchronously swept from the HNSW index by the background Synapse Worker.
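The formula and purge check can be sketched directly in code. The parameter defaults mirror the values in this section; the function names are illustrative, not the kernel's actual API:

```typescript
// S_t = I·0.6 + exp(-λ·Δt)·0.3 + log10(1 + A)·0.1
function survivalScore(
  importance: number,       // I, normalized to [0.25, 1.0]
  lambda: number,           // λ, decay factor (e.g. 0.05)
  hoursSinceAccess: number, // Δt, hours since lastAccessedAt
  accessCount: number,      // A
): number {
  const recency = Math.exp(-lambda * hoursSinceAccess);
  const accessBoost = Math.log10(1 + accessCount);
  return importance * 0.6 + recency * 0.3 + accessBoost * 0.1;
}

// A shard is swept once its score falls below the purge threshold.
const shouldPurge = (score: number, purgeThreshold = 0.35): boolean =>
  score < purgeThreshold;
```

For example, a freshly accessed CRITICAL shard (I = 1.0, Δt = 0, A = 0) scores 0.9, while a NOISE shard (I = 0.25) untouched for 200 hours decays toward 0.15 and crosses the default threshold.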

### Quick Start

Install the enterprise SDK with a single command:

```bash
npm install @cortexclaw/sdk
```

Or launch the native MCP server for local model integration:

```bash
npx @cortexclaw/sdk mcp-server --agent-id my-agent --api-key cc_...
```

For OpenClaw users, install as a persistent memory plugin:

```bash
npx @cortexclaw/sdk install --ide openclaw
```

Restart your IDE or OpenClaw gateway. Context from previous sessions will automatically appear in new sessions.

{% hint style="info" %}
CortexClaw SDK is also published on npm, but `npm install -g @cortexclaw/sdk` installs the **library only**. To register the native system hooks or set up the background Synapse Worker, always use the `npx @cortexclaw/sdk install` command.
{% endhint %}

#### 🧠 The Cortex Gateway

Install CortexClaw as the core hippocampal layer for your agent with a single command:

```bash
curl -fsSL https://install.cortexclaw.xyz/install.sh | bash
```

The installer handles all dependencies, neural engine setup, and optional real-time observation feeds.

**Key Features:**

* 🧠 **Persistent Identity** - Context survives across infinite restarts
* 📊 **Synaptic Compression** - Layered memory retrieval with token-aware pruning
* 🔍 **Resonance Search** - Query your project history with the `mm-search` skill
* 🖥️ **Pulse Viewer UI** - Real-time memory stream at <http://localhost:3000>
* 💻 **Native MCP Skill** - Search memory from Claude Desktop and Cursor
* 🔒 **Synaptic Privacy** - Use `<private>` tags for zero-trust data isolation
* ⚙️ **Mesh Configuration** - Fine-grained control over collective hive intelligence
* 🤖 **Zero-Touch Operation** - Memory formation happens automatically in the background
* 🔗 **Synaptic Citations** - Reference past memories with unique Shard IDs
* 🧪 **Beta Engine** - Access experimental "Endless Mode" via version switching

### Documentation

📚 [**View Full Documentation Portal**](https://docs.cortexclaw.xyz/)

#### Getting Started

* [**Installation Guide**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Quick start & advanced setup
* [**OpenClaw Integration**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Dedicated guide for the OpenClaw ecosystem
* [**Usage Guide**](https://docs.cortexclaw.xyz/introduction/broken-reference) - How CortexClaw works automatically
* [**Search Tools**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Querying project history with natural language
* [**Beta Features**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Experimental "Endless Mode" and multi-modal sharding

#### Best Practices

* [**Synaptic Engineering**](https://docs.cortexclaw.xyz/synaptic-engineering) - AI agent memory optimization
* [**Decay Protocols**](https://docs.cortexclaw.xyz/introduction/broken-reference) - The philosophy of intentional forgetting

#### Architecture

* [**Architecture Overview**](https://docs.cortexclaw.xyz/introduction/broken-reference) - System components & data flow
* [**Lifecycle Hooks**](https://docs.cortexclaw.xyz/introduction/broken-reference) - 7 hook scripts explained
* [**CortexMesh Protocol**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Distributed hive-mind synchronization
* [**Neural Engine**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Local 384-dim vector generation

#### Configuration & Development

* [**Configuration Guide**](#configuration) - Environment variables & settings
* [**Development Guide**](https://docs.cortexclaw.xyz/introduction/broken-reference) - Building, testing, and contributing
* [**Troubleshooting**](#troubleshooting) - Common issues & solutions

### How It Works

**Core Components:**

1. **7 Lifecycle Hooks** - Intercepting the agentic loop at every critical junction.
2. **Local Neural Engine** - Vectorizing data on-device using `all-MiniLM-L6-v2`.
3. **Synapse Worker** - HTTP API on port 3000 managing shard persistence.
4. **HNSW Vector Store** - High-speed semantic indexing for millisecond-scale recall.
5. **mm-search Skill** - Natural language interface for the model's hippocampus.
6. **CortexMesh™** - Shared knowledge layer for multi-agent collaboration.

See [**Architecture Overview**](https://docs.cortexclaw.xyz/introduction/broken-reference) for details.

### MCP Search Tools

CortexClaw provides intelligent memory search through **4 native MCP tools** following a token-efficient **3-layer workflow**:

**The 3-Layer Workflow:**

1. **`search`** - Retrieve a compact resonance index (~50–100 tokens/result)
2. **`timeline`** - Get chronological context around specific resonant shards
3. **`get_details`** - Fetch full high-fidelity details ONLY for filtered IDs

**Available MCP Tools:**

1. **`recall_memory`** - Semantic search with queries, tags, and importance filters.
2. **`hive_search`** - Collective intelligence search across the workspace mesh.
3. **`forget_memory`** - Explicitly purge a shard from the cortex.
4. **`pulse_check`** - Resource health check and synaptic telemetry.

**Example Usage:**

```typescript
// Step 1: Semantic recall
recall_memory({ query: "auth implementation", limit: 5 });

// Step 2: If found in Hive, search the Mesh
hive_search({ query: "auth implementation" });

// Step 3: Fetch full details only for the relevant shard
get_details({ shardId: "cc_shard_42..." });
```

### 🏗️ Synaptic Lifecycle Hooks

CortexClaw provides a powerful middleware system to intercept and transform memory events.

{% stepper %}
{% step %}

#### 1. `onBeforeStore`

Scrub PII or enrich metadata before the shard is sent to the Cortex.

```typescript
claw.hooks.register("onBeforeStore", async (params) => {
  params.content = params.content.replace(
    /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/gi,
    "[PROTECTED_EMAIL]",
  );
  return params;
});
```

{% endstep %}

{% step %}

#### 2. `onMemoryRecalled`

Perform post-processing or analytics on retrieved shards.

```typescript
claw.hooks.register("onMemoryRecalled", (results) => {
  console.log(
    `Synaptic Resonance: ${(avg(results.similarity) * 100).toFixed(2)}%`,
  );
  return results;
});
```

{% endstep %}
{% endstepper %}

### 📊 Technical Comparison

| Feature           | CortexClaw          | Pinecone / Weaviate | SQLite / Postgres |
| ----------------- | ------------------- | ------------------- | ----------------- |
| **Vectorization** | Local (Zero Cost)   | Cloud (Expensive)   | Manual            |
| **Persistence**   | Persistent Identity | Index Only          | Row Based         |
| **Intelligence**  | Synaptic Decay      | Static              | Static            |
| **Collaboration** | CortexMesh™         | Silos               | Shared DB         |
| **Integration**   | Native MCP          | REST API            | SQL               |

### System Requirements

* **Node.js**: 18.0.0 or higher
* **OpenClaw**: Latest version with plugin support
* **Bun**: JavaScript runtime (auto-installed if missing)
* **uv**: Python package manager for vector search (auto-installed if missing)
* **SQLite 3**: For persistent storage (bundled)

### Advanced Setup

#### Windows Installation Notes

{% hint style="warning" %}
If you see an error like `npm : The term 'npm' is not recognized`, ensure Node.js is added to your PATH. Download the installer from <https://nodejs.org> and restart your terminal.
{% endhint %}

#### Custom Data Directories

You can specify a custom directory for your synaptic shards:

```bash
npx @cortexclaw/sdk install --data-dir D:/MyMemories
```

### ❓ Frequently Asked Questions

<details>

<summary>Q: Does CortexClaw store my raw data?</summary>

A: CortexClaw stores the **semantic shards** you explicitly provide. If you use local embeddings, the raw text is never sent to a third-party embedding provider.

</details>

<details>

<summary>Q: How does "Decay" work for critical information?</summary>

A: Any shard marked with `ImportanceLevel.CRITICAL` is exempt from the decay engine. It remains resonant in the cortex indefinitely, regardless of access frequency.

</details>

<details>

<summary>Q: Can I use this with GPT-4?</summary>

A: Absolutely. CortexClaw is model-agnostic. You can use it via our SDK in your Python/Node.js app or via the MCP server if your IDE supports it.

</details>

<details>

<summary>Q: What is the maximum shard size?</summary>

A: We recommend keeping shards under 32k characters. For larger documents, use the chunking helpers to split them into semantically coherent fragments.

</details>
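A chunking helper along these lines might look as follows. `chunkDocument` is a hypothetical name used for illustration, not the SDK's actual helper:

```typescript
// Hypothetical chunking helper: split a long document into fragments that
// stay under a per-shard character budget, preferring paragraph boundaries.
function chunkDocument(text: string, maxChars = 32_000): string[] {
  const paragraphs = text.split(/\n{2,}/);
  const chunks: string[] = [];
  let current = "";
  for (const para of paragraphs) {
    if (current && current.length + para.length + 2 > maxChars) {
      chunks.push(current);
      current = para;
    } else {
      current = current ? `${current}\n\n${para}` : para;
    }
    // A single paragraph larger than the budget is hard-split.
    while (current.length > maxChars) {
      chunks.push(current.slice(0, maxChars));
      current = current.slice(maxChars);
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```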

### Configuration

Settings are managed in `~/.cortexclaw/settings.json`. Configure your AI model, worker port, and synaptic policies here.

```json
{
  "CORTEXCLAW_API_KEY": "cc_live_...",
  "CORTEXCLAW_AGENT_ID": "claw-01",
  "DECAY_POLICY": "conservative",
  "PORT": 3000
}
```

### Troubleshooting

If you run into issues, use the built-in `troubleshoot` command in the CLI:

```bash
cortexclaw troubleshoot --verbose
```

See the [**Troubleshooting Guide**](https://docs.cortexclaw.xyz/troubleshooting) for more.

### Contributing

See [**CONTRIBUTING.md**](https://docs.cortexclaw.xyz/introduction/broken-reference) for build instructions, testing, and workflow.

### License

CortexClaw SDK is licensed under the **MIT License**.\
Copyright (C) 2026 CortexClaw Systems Architecture Group.

See the [**LICENSE**](https://docs.cortexclaw.xyz/introduction/broken-reference) file for full details.

### Support & Community

* **Website**: [cortexclaw.xyz](https://cortexclaw.xyz/)
* **X / Twitter**: [@Cortexclaw](https://x.com/Cortexclaw)
* **GitHub**: [cortex-claw](https://github.com/cortex-claw)

