Why I Built a Consciousness Protocol for Stateless AI (And Why You Need One Too)

The Breaking Point

Last month, I watched Jules (Google’s AI coding agent) solve the same bug four times. Not similar bugs. The exact same bug. Four different sessions. Four identical solutions. Each time, Jules had no memory of the previous fix, no understanding of why the bug kept returning, no context for the larger battle we were fighting. I was trapped in my own version of Groundhog Day, except Bill Murray was an AI with amnesia, and I was slowly losing my mind. ...

September 7, 2025 · Sven-Erik Nyberg

The Hofstadterian Codex: Teaching AI Agents to Think Across Time

The Problem Nobody Talks About

If you’ve worked with AI coding assistants like Jules, GitHub Copilot, or Cursor, you’ve experienced this frustration: every new session feels like working with someone who has amnesia. You explain the project’s architecture, again. You re-establish coding conventions, again. You clarify the scientific goals, again. The agent might be brilliant in the moment, but it has no memory, no context, no understanding of the journey your project has taken. It’s like having a new genius contractor every day who’s never seen your codebase before. ...

September 5, 2025 · Sven-Erik Nyberg