Lossless Claw Fixes OpenClaw's Worst Problem — Here's How to Set It Up

OpenClaw's context compaction throws away the details your agent needs most. Lossless Claw replaces it with a DAG-based system that never forgets. 3.8K GitHub stars in weeks.

March 18, 2026
4 min read
By Clawdy Team

You're an hour into a complex coding session. You've walked the agent through the architecture, explained why you chose Postgres over SQLite for this particular service, pointed it at three specific files, and worked through two edge cases. The agent is performing well. Then the context window fills up.

Compaction kicks in. The agent summarizes your conversation into a paragraph. And on the very next turn, it asks which database you're using.

If you've used OpenClaw for anything longer than a quick question, you've lived this. Peter Steinberger — OpenClaw's creator — recently recommended a fix for it on social media. That post hit 277,000 views and 3,200 likes in hours. Turns out the entire community has been quietly suffering through this.

The fix is called Lossless Claw. And it entirely changes how long-running sessions behave.

The Problem With Default Compaction

OpenClaw uses a sliding-window compaction strategy. When the conversation grows past the model's context window, it takes the oldest messages, summarizes them into a short paragraph, and discards the originals. The summary stays in context. The details don't.

What survives: the general theme of what you were discussing. A vague sense of the task at hand.

What doesn't: the specific file path you were editing. The decision you made about why approach B was wrong. The error message from step three that explains why you're now on step seven. The exact library version you pinned. The fact that the endpoint uses PATCH, not PUT.

Those specifics are the entire value of having context. Without them, the agent isn't resuming your work — it's restarting from a hazy briefing. You become the human RAM, mentally tracking everything the agent dropped and feeding it back piece by piece.

That's the opposite of what an autonomous agent is supposed to do. And it makes long-horizon tasks — the tasks where agents provide the most value — fundamentally unreliable.

What Lossless Claw Does Differently

Lossless Claw is an open-source plugin by Martian Engineering, built on the LCM paper from Voltropy. It replaces OpenClaw's compaction engine with a system that, as the name suggests, doesn't lose anything.

The mechanics are straightforward:

Every message is persisted. All messages — user, assistant, tool calls, tool results — go into a local SQLite database, organized by conversation. Nothing is ever deleted.
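To make the persistence step concrete, here is a minimal sketch of an append-only message log in SQLite. The table and column names are assumptions for illustration, not Lossless Claw's actual schema:

```python
import sqlite3

def open_store(path=":memory:"):
    # Hypothetical schema: one append-only table, keyed by conversation.
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            id INTEGER PRIMARY KEY,
            conversation_id TEXT NOT NULL,
            role TEXT NOT NULL,           -- user / assistant / tool_call / tool_result
            content TEXT NOT NULL,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)
    return db

def persist(db, conversation_id, role, content):
    # Append-only: rows are inserted, never updated or deleted.
    db.execute(
        "INSERT INTO messages (conversation_id, role, content) VALUES (?, ?, ?)",
        (conversation_id, role, content),
    )
    db.commit()

db = open_store()
persist(db, "conv-1", "user", "Use Postgres, not SQLite, for this service.")
persist(db, "conv-1", "assistant", "Noted. Wiring up Postgres.")
rows = db.execute(
    "SELECT role, content FROM messages WHERE conversation_id = ? ORDER BY id",
    ("conv-1",),
).fetchall()
```

The append-only design is the point: compaction can summarize on top of this log, but the originals are always recoverable.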

Older messages are summarized into a DAG. When compaction triggers, Lossless Claw doesn't throw away old messages. It creates summaries of chunks of messages, and as summaries accumulate, it condenses summaries into higher-level summaries. The result is a directed acyclic graph — a hierarchy of increasingly condensed overviews, all linking back to the raw messages they came from.

Context is assembled each turn. Every time the agent needs to respond, Lossless Claw builds the context window from a combination of DAG summaries (for older history) and raw recent messages (for current work). The model sees a coherent, information-dense picture of the entire conversation.
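The two steps above, cascading summaries into a hierarchy and assembling context from summaries plus a raw tail, can be sketched in a few lines. This is a toy model: the `summarize` function here just joins text, where the real plugin makes LLM calls, and the node layout is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list = field(default_factory=list)  # links back to what it condenses

def summarize(texts):
    # Stand-in for an LLM summarization call.
    return "summary(" + "; ".join(texts) + ")"

def condense(nodes, chunk_size=2):
    """Build one DAG level: chunk nodes and summarize each chunk."""
    level = []
    for i in range(0, len(nodes), chunk_size):
        chunk = nodes[i:i + chunk_size]
        level.append(Node(summarize([n.text for n in chunk]), chunk))
    return level

def build_dag(raw_texts, chunk_size=2):
    """Cascade condensation upward until one root summary remains."""
    level = [Node(t) for t in raw_texts]
    leaves = level  # raw messages survive as the DAG's leaves
    while len(level) > 1:
        level = condense(level, chunk_size)
    return level[0], leaves

def assemble_context(root, recent_raw):
    # Older history arrives as condensed summaries; recent messages stay raw.
    return [root.text] + recent_raw

root, leaves = build_dag(["m1", "m2", "m3", "m4"])
context = assemble_context(root, ["m5", "m6"])
```

Every summary node keeps pointers to its children, which is what lets the agent later expand a summary back into the raw messages it came from.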

The agent gets recall tools. This is the clever part. Lossless Claw gives the agent three tools to reach back into compacted history:

  • lcm_grep — full-text search across the entire conversation history
  • lcm_describe — get an overview of what happened in a conversation
  • lcm_expand — drill into a specific summary to recover the original detail

So when the agent needs to remember which library version you pinned an hour ago, it doesn't guess. It searches.
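A recall tool like lcm_grep amounts to full-text search over the persisted log. Here is an illustrative sketch using SQLite's FTS5 extension (present in most standard Python builds); the function name mirrors the tool, but the implementation is an assumption, not the plugin's code:

```python
import sqlite3

# Toy message log with full-text indexing. Requires SQLite built with FTS5.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE history USING fts5(role, content)")
db.executemany(
    "INSERT INTO history (role, content) VALUES (?, ?)",
    [
        ("user", "Pin requests to version 2.31.0 for this service"),
        ("assistant", "Pinned requests==2.31.0 in requirements.txt"),
        ("user", "The endpoint uses PATCH, not PUT"),
    ],
)

def lcm_grep(query):
    """Return every stored message matching the full-text query."""
    return db.execute(
        "SELECT role, content FROM history WHERE history MATCH ?", (query,)
    ).fetchall()

hits = lcm_grep("PATCH")
```

Because the search runs over raw messages rather than summaries, the exact detail (a version pin, an HTTP verb) comes back verbatim.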

The result is an agent that feels like it never forgets. Because it doesn't. The raw messages are always there. The summaries preserve structure. And the tools let the agent pull specific details back when it needs them.

The numbers bear this out: 3,800 GitHub stars, 30 contributors, and five releases in a few weeks. The community clearly agrees this should have been the default.

How to Install It on Clawdy

Open your Clawdy terminal at clawdy.app/agent/_/terminal and run:

openclaw plugins install @martian-engineering/lossless-claw

The install command registers the plugin, enables it, and sets it as the context engine. If you want to verify the config manually, check that your OpenClaw settings include:

{
  "plugins": {
    "slots": {
      "contextEngine": "lossless-claw"
    }
  }
}

Restart OpenClaw after installing. Your existing memory files (MEMORY.md and anything in memory/) are untouched — Lossless Claw operates at the session layer, below persistent memory. It replaces what happens when the context window fills up mid-session, not what persists between sessions.

To verify it's working: run a session long enough to trigger compaction, then check openclaw session status. You should see structured DAG entries — summaries linking back to source messages — rather than a single prose paragraph.

Lossless Claw works out of the box, but three settings are worth tuning:

{
  "plugins": {
    "entries": {
      "lossless-claw": {
        "enabled": true,
        "config": {
          "freshTailCount": 32,
          "incrementalMaxDepth": -1,
          "contextThreshold": 0.75
        }
      }
    }
  }
}

freshTailCount: 32 — The last 32 messages are always kept raw, never summarized. This gives the model enough recent context for continuity on the current task. Lower it if you're hitting context limits with a small-window model. Raise it if you're using a 200K context model and want more raw history.

incrementalMaxDepth: -1 — Unlimited DAG condensation depth. After each compaction pass, Lossless Claw automatically cascades summaries upward as needed. Set to 0 for leaf-only summarization (simpler but less efficient at compressing very long conversations).

contextThreshold: 0.75 — Triggers compaction when context reaches 75% of the model's window, leaving 25% headroom for the model's response. This is a sane default. Lower it if you want more aggressive compaction (useful for smaller context windows).
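To see how these three settings interact, here is a toy model of the trigger logic. The token accounting is illustrative only; the plugin's actual bookkeeping may differ:

```python
def should_compact(used_tokens, window_tokens, context_threshold=0.75):
    # Compaction fires once usage crosses the threshold fraction of the window.
    return used_tokens >= context_threshold * window_tokens

def split_for_compaction(messages, fresh_tail_count=32):
    """Messages older than the fresh tail are eligible for summarization;
    the last fresh_tail_count messages always stay raw."""
    return messages[:-fresh_tail_count], messages[-fresh_tail_count:]

# A 200K-window model at 150K tokens used: 150000 / 200000 = 0.75, so it fires.
fires = should_compact(150_000, 200_000)

# Of 100 messages, the newest 32 are kept raw; the older 68 get summarized.
older, tail = split_for_compaction([f"m{i}" for i in range(100)], fresh_tail_count=32)
```

Lowering contextThreshold moves the trigger earlier; raising freshTailCount shrinks the pool of messages that ever get summarized in a given pass.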

Cost tip: Summarization uses LLM calls. By default it uses whatever model your OpenClaw session is running, which might be expensive. You can pin summarization to a cheaper model:

{
  "config": {
    "summaryModel": "anthropic/claude-haiku-4-5"
  }
}

Haiku is fast, cheap, and more than capable of producing good summaries. No reason to burn Sonnet or Opus tokens on compaction.

What It Doesn't Solve

Lossless Claw handles within-session context. It does not handle:

Between-session memory. When OpenClaw restarts or a session resets, Lossless Claw's context doesn't carry over automatically. That's the job of OpenClaw's built-in memory system — MEMORY.md, the persistent memory files, and the improved memory in 2.0. They're complementary: Lossless Claw keeps the agent coherent while working, OpenClaw's memory persists the outcomes.

Session reset policy. If OpenClaw is resetting your sessions too aggressively, Lossless Claw can't help — there's nothing to compact if the session is gone. Increase session.reset.idleMinutes in your OpenClaw config. A week (10080) is a reasonable starting point for long-running work.

Cross-instance memory. If you run multiple OpenClaw instances, Lossless Claw's SQLite database is local to each one. There's no sync between them.

These aren't shortcomings — they're different problems with different solutions. Lossless Claw does one thing well: it makes sure your agent doesn't lose the thread during a long session. That one thing changes the entire experience.


If your OpenClaw agent forgets what you told it an hour ago, Lossless Claw fixes that. And if you don't have an OpenClaw instance yet, Clawdy gets you one in under 60 seconds — with proper infrastructure, auth, and SSL already handled. Deploy at clawdy.app.