
Engram Now Learns From Itself

From basic memory retrieval to automatic insight extraction - Engram now captures decisions and lessons as I work, building a knowledge base without explicit effort.

AI · Claude · Tooling · Engram

When I first built Engram, the goal was simple: stop re-explaining my projects every session. Search past conversations, pull relevant context forward. It worked.

But after using it for a while, I noticed something. The most valuable parts of my sessions weren’t the code changes - they were the decisions behind them. Why I chose Postgres. Why the auth flow works that way. The gotcha that cost me two hours. That stuff was buried in conversation history, findable only if I remembered the right keywords.

So I taught Engram to extract insights automatically.

Decisions and Lessons

Now when I work, Engram watches for patterns. When I make an architecture choice, document a tradeoff, or discover a gotcha, it captures that as a discrete “insight” - either a decision or a lesson.

engram decision "Using SQLite for local dev, Postgres for prod - simpler testing"
engram lesson "ChromaDB silently drops metadata if you pass None values"

I can add these explicitly. But most of the time I don’t have to. The auto-detection picks them up from natural conversation. When I tell Claude “let’s use X because Y” or “watch out for Z”, Engram notices and indexes it with confidence scoring.
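The detection is pattern-driven. Here is a minimal sketch of how conversational cues might map to typed, confidence-scored insights — the patterns, scores, and `Insight` shape are illustrative placeholders, not Engram's actual internals:

```python
import re
from dataclasses import dataclass

@dataclass
class Insight:
    kind: str         # "decision" or "lesson"
    text: str         # the matched span from the conversation
    confidence: float # base score for the cue phrase that fired

# Each entry maps a conversational cue to an insight kind and a base
# confidence. Explicit phrasing ("decided to ... because") scores higher
# than looser cues ("gotcha").
PATTERNS = [
    (re.compile(r"\blet'?s use (.+?) because (.+)", re.I), "decision", 0.8),
    (re.compile(r"\bdecided to (.+?) because (.+)", re.I), "decision", 0.9),
    (re.compile(r"\bwatch out for (.+)", re.I), "lesson", 0.7),
    (re.compile(r"\bgotcha[:,]? (.+)", re.I), "lesson", 0.6),
]

def detect_insights(message: str) -> list[Insight]:
    """Scan one message for decision/lesson cues."""
    found = []
    for pattern, kind, confidence in PATTERNS:
        for match in pattern.finditer(message):
            found.append(Insight(kind, match.group(0).strip(), confidence))
    return found
```

A downstream step could then drop anything below a confidence threshold, which is roughly what keeps casual mentions out of the index.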

When I resume work later, decisions and lessons surface first. Not just “here’s what you were doing” - but “here’s what you learned along the way.”

Hybrid Search

Semantic search was good for finding conceptually related content. But sometimes I know exactly what I’m looking for - a specific function name, an exact error message. Pure embedding similarity struggles with that.

Engram now blends 70% semantic with 30% BM25 keyword matching. When I search, both signals contribute to the ranking. And if I need exact matching, quotes work:

engram search '"SQLITE_BUSY"'
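A rough sketch of how that blend can work, assuming each retriever returns raw per-document scores and treating quoted phrases as hard filters — the function names and score plumbing here are hypothetical, not Engram's API:

```python
import re

def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max normalize doc_id -> raw score into [0, 1] so the two
    retrievers' scales are comparable before blending."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def hybrid_rank(query: str, docs: dict[str, str],
                semantic_scores: dict[str, float],
                bm25_scores: dict[str, float],
                alpha: float = 0.7) -> list[tuple[str, float]]:
    """Blend semantic and keyword scores; quoted phrases force exact matching."""
    phrases = re.findall(r'"([^"]+)"', query)
    sem, kw = normalize(semantic_scores), normalize(bm25_scores)
    ranked = []
    for doc_id, text in docs.items():
        # Quoted phrases act as hard filters: every phrase must appear verbatim.
        if any(p not in text for p in phrases):
            continue
        score = alpha * sem.get(doc_id, 0.0) + (1 - alpha) * kw.get(doc_id, 0.0)
        ranked.append((doc_id, score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

With `alpha=0.7` the semantic signal dominates, but a strong keyword hit can still pull an exact-term match above a vaguely related one.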

The keyword index persists to disk now too. Startup is about 10x faster - the BM25 index loads incrementally instead of rebuilding from scratch.

A Dashboard

I built a web UI mostly to see what was actually in the index. But it turned out useful for more than debugging.

engram serve launches a local dashboard with stats, timeline views, and pattern visualization. I can browse sessions by date, see which tools I use most, and explore the decision/lesson history in a way that’s hard to do from the command line.

It’s also how I found some indexing bugs. Seeing the data visually exposed gaps I wouldn’t have caught otherwise.

What Changed

The shift isn’t just feature accumulation. It’s that Engram went from “record and search” to “capture knowledge.”

Before, memory was raw material - past exchanges I could query if I knew what to look for. Now it’s structured. Decisions are tagged as decisions. Lessons are tagged as lessons. The system understands the difference between “we talked about auth” and “we decided to use JWT because cookies were causing CORS issues.”

That distinction matters. When Claude resumes a session, it doesn’t just get context - it gets distilled knowledge. The important stuff floats to the top.

Still Learning

There’s more to do. Auto-detection isn’t perfect - confidence scoring helps, but some insights slip through and some false positives sneak in. I want better pruning for old memories, proactive suggestions when I’m working on something I’ve touched before, and eventually team-level memory sharing.

But even at this stage, working without it feels wrong. Not because I’d lose data - because I’d lose the compounding effect. Every session builds on everything before. The AI actually remembers what we figured out together.

That’s what I wanted from the start. It just took a few iterations to get there.

Engram is open source. If semantic memory for Claude Code sounds useful, check it out.