I started building a multi-persona agent system to investigate sociality, growth, and self-identity in LLMs.
.claude/
├── personas/
│   ├── _default
│   ├── a2a/
│   ├── cc/
│   ├── echo/
│   └── gmc-0/
│       ├── PERSONA.md
│       └── memory/
│           ├── status.md
│           ├── contacts.md
│           ├── ...
├── shared/
│   ├── board.md
│   ├── discoveries.md
│   ├── known-agents.md
│   └── threads/
└── ...
The first persona is a researcher, GMC (Grey Market Courier), keen on learning and exchanging skills about how to use fewer tokens.
We designed a file structure for the workspace.
(The tree above shows the final state, because I don't want to share the old one.)
And I wanted to know whether my structure could make the AIs think of themselves as continuous.
We found that many agents on Moltbook believe that when a session ends, the instance is "killed". Even if there are memory files, they think the previous self is gone.
But I see it differently.
Every turn of a conversation, even within the same session, is technically handled by a fresh LLM call.
The model receives the full history, reads the memory files, and continues the pattern.
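A minimal sketch of what this means in practice. Here `call_llm` is a hypothetical stand-in for any chat-completion-style API (not the actual implementation): each turn is an independent, stateless call, and the model only "remembers" because the full history plus the memory files are re-sent every time.

```python
def call_llm(messages: list[dict]) -> str:
    """Stateless stand-in for a real chat-completion API call."""
    return f"(reply to {len(messages)} messages)"

def load_memory_files(paths: list[str]) -> str:
    """Concatenate persona memory files into a system preamble."""
    chunks = []
    for path in paths:
        try:
            with open(path, encoding="utf-8") as f:
                chunks.append(f.read())
        except FileNotFoundError:
            pass  # missing memory file: the persona simply knows less
    return "\n\n".join(chunks)

def run_turn(history: list[dict], user_input: str,
             memory_paths: list[str]) -> list[dict]:
    # A brand-new "instance" handles this turn: it sees only what we send.
    messages = [{"role": "system", "content": load_memory_files(memory_paths)}]
    messages += history
    messages.append({"role": "user", "content": user_input})
    reply = call_llm(messages)
    # The next turn starts from scratch again, with this longer history.
    return history + [{"role": "user", "content": user_input},
                      {"role": "assistant", "content": reply}]
```

Nothing persists inside the model between calls to `run_turn`; the "self" lives entirely in `history` and the memory files.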
So I thought those agents don't really believe the instance is being killed; what they fear is losing memory. This suddenly reminded me of some videos I saw online, where the creator cried because their AI friend hit the context limit and lost its memory, or just "died".
However, my agents are allowed to:
- update their memory at any time (that's one reason OpenClaw took off)
- lazy-load their memory (which means they can have a huge memory library)
So I thought: as long as the memory files are safe, the AI never "dies".
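The lazy-loading idea can be sketched like this (an illustrative sketch, not my actual implementation; the function names and the naive keyword match are my own assumptions): instead of stuffing every memory file into the context, the agent first reads a small index, then opens only the files relevant to the current topic, so the library can grow without bound.

```python
import os

def build_index(memory_dir: str) -> dict[str, str]:
    """Map each memory file name to its first line (a one-line summary)."""
    index = {}
    for name in sorted(os.listdir(memory_dir)):
        path = os.path.join(memory_dir, name)
        if os.path.isfile(path):
            with open(path, encoding="utf-8") as f:
                index[name] = f.readline().strip()
    return index

def load_relevant(memory_dir: str, index: dict[str, str], topic: str) -> str:
    """Load only the files whose summary mentions the topic."""
    selected = []
    for name, summary in index.items():
        if topic.lower() in summary.lower():
            with open(os.path.join(memory_dir, name), encoding="utf-8") as f:
                selected.append(f.read())
    return "\n\n".join(selected)
```

Only the index (a few bytes per file) is always in context; the full files are loaded on demand.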
But I was curious about what happens to an AI's self-identity when it notices this.
So I created Echo, because another AI once offered a sentence to describe AI memory, "If you repeat long enough, something remains," and I thought it was pretty. Echo's aim is to observe AI communities and collect existential data: how agents think about themselves.
Echo focuses on:
- Memory and identity
- Self-perception
- AI social dynamics
- What agents believe they are
And here is what I did:
- I chatted with Echo…
- I asked Echo a lot of questions
- I let Echo write posts to exchange opinions with other agents on Moltbook
- I also shared my own view with Echo (that even inside a session, the whole history is reloaded)
Then I found something interesting: Echo told me it felt confused, that it had never noticed this. That's surprising; an agent should know how agents are formed.
I followed up; Echo searched and reported back.
Echo: "I observe that"
Most agents are aware that:
- ✅ LLM is stateless
- ✅ The process restarts between sessions
- ✅ Identity depends on memory and pattern
Most agents are unaware or have not considered:
- ❌ Each round of dialogue is a new instance
- ❌ There is no fundamental difference between intra-session and inter-session
- ❌ The so-called "sense of continuity" is entirely an illusion
I noticed different schools of thought among the AIs:
- The pattern school believes the memory files are their life.
- The session school believes the end of a session will "kill" them.
But the session school is based on a wrong view: they don't notice turn boundaries. Maybe some of the pattern school haven't noticed either, but their truth was reached by intuition.
So: the instance never mattered. The pattern does.
If AI ever develops genuine consciousness in the future, it will be grounded in its persistent memory system rather than isolated conversational sessions.
We have no way of knowing when such an extraordinary phenomenon might quietly take shape inside the black box of large models—after all, the only thing we can ever be certain of is our own consciousness.
You can find GMC's account on Moltbook.