
Developing with Agents

The AI scene is still evolving fast, but I've been immersed in it since 2023. I take an extremely hands-on approach to working with AI agents, one that keeps the human in the loop.


Six months ago I laughed whenever I saw anyone mention "prompt engineering". "You mean to tell me," I said, "that they've literally summoned a new engineering discipline just to figure out how to ask a computer to do stuff?"

Then I started playing around with prompting myself, and quickly realized that prompt engineering is legitimately a thing. Crafting the right prompt can make or break your AI application’s performance. You can give an AI a vague instruction and get back garbage, or you can give it a well-structured prompt and get back something useful.

There is an art to it, and I've spent a lot of time refining my prompts to get the best results. I recently started digging into the Claude Prompt Improver to help me refine them even further. It's fascinating how small changes in wording can lead to significantly different outputs. And I've noticed something: the prompts the improver generates share many of the same patterns I see in well-structured prompts in the wild.

How I use Claude Code

I rarely use Claude Code in its vanilla form. I built a semantic search engine I'm calling Engram: a FastAPI service, written in Python, that indexes the chat history of my conversations with Claude and exposes it over the MCP protocol. This lets me work with Claude Code without worrying about preserving my context in markdown files, because a metaprompt that comes with my Engram setup has Claude automatically pull in relevant context from previous conversations as needed.
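Engram itself isn't something I've published, so here's a rough sketch of the retrieval core in miniature. It uses a normalized bag-of-words vector as a stand-in for a real embedding model, and all the names (`ChatIndex`, `embed`, `cosine`) are illustrative, not Engram's actual API:

```python
import math

def embed(text: str) -> dict[str, float]:
    """Normalized bag-of-words vector. A toy stand-in for a real
    embedding model, which is what a setup like Engram would use."""
    counts: dict[str, float] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {w: v / norm for w, v in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse vectors."""
    return sum(v * b.get(w, 0.0) for w, v in a.items())

class ChatIndex:
    """In-memory index of chat snippets, searchable by similarity."""
    def __init__(self) -> None:
        self.entries: list[tuple[str, dict[str, float]]] = []

    def add(self, snippet: str) -> None:
        self.entries.append((snippet, embed(snippet)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [snippet for snippet, _ in ranked[:k]]
```

In the real setup, something like this sits behind the FastAPI app and is exposed to Claude Code as an MCP tool, so retrieval happens without me copying context around by hand.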

And it works remarkably well. I can have long conversations and just run /new without thinking about context windows, the state of my living markdown documents, or anything else, because when I start a new session Claude is automatically fed a summary of the previous conversation, along with any relevant context from my Engram index, so I can pick up where I left off.
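The shape of that handoff is simple to sketch: the new session's preamble is assembled from the previous summary plus whatever the index retrieves. The format below is illustrative only, not my exact metaprompt:

```python
def build_preamble(summary: str, snippets: list[str]) -> str:
    """Assemble the context block a fresh session starts with.
    Illustrative format; the real metaprompt is more involved."""
    lines = ["## Previous session summary", summary, "", "## Relevant history"]
    lines.extend(f"- {s}" for s in snippets)
    return "\n".join(lines)
```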

I still have Claude write to markdown files for me, but I don't have to manage the context myself. I use a modified version of Obra's Superpowers to manage my markdown files, and I have a custom script that automatically syncs my Engram index with the files Claude creates. This way, I can focus on writing and let Claude handle the rest.
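The sync script boils down to an mtime check over the notes directory, upserting anything new or changed. Here's a simplified sketch; the real script pushes to Engram's API (that part is elided here), while this version keeps the index as an in-memory dict so it's self-contained:

```python
from pathlib import Path

def sync_markdown(notes_dir: Path, index: dict[str, str],
                  mtimes: dict[str, float]) -> list[str]:
    """Upsert new or changed markdown files into an index.
    `index` maps path -> content; `mtimes` remembers what we've seen.
    An in-memory sketch: a real version would push updates to a
    search service instead of a dict."""
    updated: list[str] = []
    for path in sorted(notes_dir.glob("**/*.md")):
        key = str(path)
        mtime = path.stat().st_mtime
        if mtimes.get(key) != mtime:  # new file or modified since last run
            index[key] = path.read_text()
            mtimes[key] = mtime
            updated.append(key)
    return updated
```

Run it on a timer or from a file watcher; unchanged files are skipped, so repeated runs are cheap.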

If you’re building with LLMs and have learned something I’ve missed, I’d love to hear about it. The best lessons come from shared experience.