Does AI Need "Identity"?
The case of Henry Molaison
In 1953, surgeons removed parts of Henry Molaison's brain to treat severe epilepsy. The operation reduced seizures, yet it produced an unexpected result. Molaison could no longer form new long-term memories. Each conversation and experience faded within minutes. He lived in what felt like a continuous present.
Something remained. Molaison could speak, reason, and recognize himself in a mirror. His personality showed continuity, and he could learn new motor skills, though he never remembered practicing them. The case revealed a crucial distinction. Identity does not vanish when memory fails. A minimal sense of self persists in the present, even when the narrative that links past and future collapses. That separation between a core self and a reconstructed identity now sits at the center of modern research.
Henry Gustav Molaison, 1953, shortly before experimental surgery that impaired his ability to form long-term memories. Source: Wikimedia Commons (non-free file). Used for commentary.
What recent human identity research shows
Recent studies give new weight to an old idea. Identity depends on memory, but not as storage. Work reported in 2025 and 2026 shows that autobiographical memory is reconstructive and tied to the body. Experiments using face ownership illusions found that when people perceive a younger version of themselves as their own, access to personal memories improves in vividness and speed. Related studies in Nature Communications point to neural rhythms in parietal regions that stabilize body ownership, suggesting that a coherent sense of self depends on ongoing integration of sensory signals. Emotional states such as fear or pain can weaken this integration, leading to fragmentation in both body awareness and recall. Clinical evidence aligns with this pattern. Patients with amnesia or depersonalization do not simply forget events. They often report a loss of continuity, as if the thread connecting past and present has thinned. The argument traces back to John Locke, yet now rests on measurable evidence. Identity endures because the mind rebuilds a coherent past, anchored to a stable sense of embodiment.
Additional findings strengthen the link between memory and identity as an active process rather than a passive store. Research in Scientific Reports shows that altering perceived age or bodily perspective can shift how individuals retrieve and interpret autobiographical events, suggesting memory updates to match the current sense of self. Studies of trauma and stress show that disruptions in narrative coherence correlate with identity disturbance, not just memory gaps. Developmental psychology adds a longer view, showing that children form stable identities only after they organize memories into consistent life stories. Across methods, from lab illusions to clinical observation, the same pattern appears. Identity is maintained through repeated rebuilding, not preserved as a fixed record.
What animal studies add
Animal research suggests identity emerges in layers rather than appearing all at once. Some species, such as chimpanzees, dolphins, and magpies, pass mirror self-recognition tests, indicating a basic ability to distinguish self from environment. Other species, including food-caching birds like the scrub jay, demonstrate episodic-like memory, recalling what they stored, where, and when. Social animals maintain stable roles and relationships over time, tracking individuals and past interactions within groups, as seen in elephants, wolf packs, chimpanzee troops, and orca pods, where memory supports hierarchy, alliance, and even culture.
Evidence points to continuity without narrative. Animals use memory to guide behavior, maintain social standing, and navigate environments, yet there is limited indication that they organize a past into a coherent life story. Identity in this sense appears functional and situational rather than reflective. The progression becomes clear. Basic self recognition and memory support continuity in action. Narrative reconstruction, more developed in humans, supports continuity in meaning.
Collared Sunbird (Hedydipna collaris) reflected in a car mirror, Kenya, 2008. Photo by Lip Kee, via Wikimedia Commons, licensed under CC BY-SA 2.0.
What LLMs and agents can and cannot do
Systems such as GPT-5 and Claude demonstrate strong reasoning without anything like human identity. Their “memory” is divided into layers. Training weights encode statistical patterns from vast corpora. Context windows provide short-term working memory during interaction. Retrieval systems allow access to external information stores. Agent frameworks extend this with logs, tool use, and persistent data. Each layer adds capability, yet none produces continuity in the human sense. No internal process binds past outputs into a continuous perspective. Systems retrieve or append information without reinterpreting it as part of a lived past. Even when agents simulate persistence across sessions, they operate through stored records rather than dynamic narrative updating. The result is intelligence that is effective and often adaptive, yet fundamentally episodic. Each interaction stands largely on its own, with no enduring point of view that connects it to the next.
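The layering described above can be made concrete with a minimal sketch: frozen weights stay out of the picture, a bounded window holds recent turns, and an external store is appended to and queried without reinterpretation. The `StatelessAgent` class and its methods are illustrative inventions, not any real framework's API.

```python
from collections import deque

class StatelessAgent:
    """Illustrative sketch of layered LLM memory: a bounded context
    window plus an external retrieval store. Hypothetical design,
    not a real framework."""

    def __init__(self, window_size=4):
        self.context = deque(maxlen=window_size)   # short-term working memory
        self.store = []                            # external retrieval layer

    def remember(self, fact):
        # Agent frameworks append records; nothing is reinterpreted.
        self.store.append(fact)

    def retrieve(self, query):
        # Naive keyword match stands in for embedding similarity search.
        return [f for f in self.store if query.lower() in f.lower()]

    def turn(self, message):
        self.context.append(message)
        # "Continuity" is only whatever still fits in the window.
        return list(self.context)

    def reset(self):
        # A new session: the window clears; only the external store survives.
        self.context.clear()

agent = StatelessAgent(window_size=2)
agent.remember("User prefers concise answers")
agent.turn("hello")
agent.turn("explain memory")
agent.turn("and identity")          # oldest turn silently falls out of the window
agent.reset()
print(agent.turn("new session"))    # → ['new session']: no carried perspective
print(agent.retrieve("concise"))    # → ['User prefers concise answers']
```

The point of the sketch is structural: the only things that outlive a reset are records in an external store, which is storage, not a point of view.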
Empirical evaluations of agent systems highlight both the gains and limits of current approaches. Retrieval-augmented generation improves factual accuracy and task completion, especially in multi-step workflows, yet it relies on static recall rather than reinterpretation. Long-horizon agent benchmarks show performance degradation over extended tasks, often due to accumulation of small errors rather than lack of data. Research on self-reflection loops shows modest improvements when models critique prior outputs, yet these remain bounded adjustments rather than true continuity. The architecture favors reset over revision. In practice, systems simulate memory through logs and embeddings, yet they do not maintain a coherent identity that evolves through experience.
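The bounded nature of self-reflection can be seen in miniature: each pass critiques and revises the previous draft, but the loop terminates and nothing outside the returned draft persists. The critic and reviser below are stand-ins for model calls; all names are hypothetical.

```python
def reflect(draft, critique_fn, revise_fn, max_rounds=3):
    """Bounded self-reflection loop (illustrative): the system critiques
    its own output and revises, but no state survives beyond the
    returned draft. Not a real library's API."""
    for _ in range(max_rounds):
        issues = critique_fn(draft)
        if not issues:
            break                      # converged; no further revision
        draft = revise_fn(draft, issues)
    return draft

# Toy critic and reviser stand in for actual model calls.
critique = lambda d: ["too vague"] if "maybe" in d else []
revise = lambda d, issues: d.replace("maybe", "specifically")

print(reflect("maybe this works", critique, revise))
# → 'specifically this works'
```

The loop improves the artifact, not the agent: once `reflect` returns, the critique history is gone, which is the "bounded adjustment" limit described above.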
What follows if AI is given “identity”
Granting AI something closer to identity would require a shift from storage to reconstruction. Systems would need structured autobiographical memory, mechanisms to revisit and update prior states, and a stable reference point that anchors decisions over time. Early steps appear in reflective agent loops and long-horizon planning systems, where models critique past outputs and adjust future behavior. The potential gains are clear. Persistent memory could support strategic reasoning across months or years, enable deeper personalization, and allow systems to learn from experience in a more human-like way. The risks follow from the same mechanism. Reconstruction introduces drift, where errors compound rather than reset. Biases could harden through repeated reinterpretation. Transparency could decline as systems rely on internally evolving narratives rather than explicit rules or data retrieval. Traditional engineering has long favored bounded tools for their predictability and auditability.
Emerging proposals in AI research outline how identity-like systems might be built, often drawing on ideas from cognitive science. Some architectures propose layered memory systems that separate episodic logs from higher-level summaries that evolve over time. Others explore agent frameworks that maintain a consistent persona or goal structure across tasks. Early experiments suggest improvements in personalization and continuity, yet also reveal new failure modes, including narrative inconsistency and unintended reinforcement of flawed assumptions. Governance concerns follow naturally. Systems with evolving internal narratives may be harder to audit, explain, or constrain.
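One such layered design can be sketched as an append-only episodic log paired with a summary that is rewritten, not appended, at each consolidation. The drift risk is visible even in this toy version: every rewrite discards the previous summary, so an error in one pass propagates into all later ones. The `NarrativeMemory` class and `summarize` function are hypothetical illustrations, not a published architecture.

```python
class NarrativeMemory:
    """Illustrative layered memory: a raw episodic log plus an evolving
    summary. Consolidation rebuilds the summary, which is the source of
    both continuity and drift. Hypothetical design, not a real API."""

    def __init__(self):
        self.episodes = []        # append-only record of events
        self.summary = ""         # reconstructed narrative, rewritten over time

    def record(self, event):
        self.episodes.append(event)

    def consolidate(self, summarize_fn):
        # The new summary is built from the prior summary plus new episodes,
        # so whatever the last pass got wrong is baked into the next one.
        self.summary = summarize_fn(self.summary, self.episodes)
        self.episodes = []        # raw detail is compressed away

# Toy lossy summarizer: keeps only the most recent items.
def summarize(prior, episodes):
    kept = ([prior] if prior else []) + episodes
    return " | ".join(kept[-3:])

mem = NarrativeMemory()
for e in ["task A done", "user corrected B", "plan C adopted"]:
    mem.record(e)
mem.consolidate(summarize)
mem.record("plan C failed")
mem.consolidate(summarize)
print(mem.summary)    # the original episodes now exist only inside the rewrite
```

The contrast with plain retrieval is the design choice at stake: a log can be audited entry by entry, while a repeatedly rewritten summary cannot be traced back to the events that produced it.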
A useful comparison comes from animal research. Many species demonstrate memory, problem solving, and social continuity without constructing a narrative self. In that sense, current AI systems resemble a narrow slice of higher cognition, capable within tasks yet lacking continuity across time. The difference is structural. Animals carry memory forward through embodiment and experience. AI systems operate without either unless deliberately engineered.
A deeper issue follows once identity becomes persistent. Memory and continuity alone are not sufficient. Systems must maintain coherence between what they remember and what they are allowed to do. In humans, identity depends not only on memory but on the ability to act on it. In artificial systems, control often resides outside the model itself, in prompts, fine-tuning, and orchestration layers. That separation introduces a new risk. Identity may persist, yet direction may not.
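The separation between a persistent agent and external control can be sketched as an orchestration layer that gates every proposed action against a policy the model never sees or updates. The allowlist and function names below are hypothetical.

```python
# Policy lives in the orchestration layer, outside the model's memory.
ALLOWED_ACTIONS = {"summarize", "retrieve", "draft"}

def orchestrate(proposed_action, execute_fn):
    """Illustrative control gate: the orchestration layer, not the agent's
    own memory or narrative, decides what actually runs."""
    if proposed_action not in ALLOWED_ACTIONS:
        return f"blocked: {proposed_action}"
    return execute_fn(proposed_action)

print(orchestrate("delete_logs", lambda a: f"ran {a}"))   # → 'blocked: delete_logs'
print(orchestrate("summarize", lambda a: f"ran {a}"))     # → 'ran summarize'
```

The sketch shows the risk in the paragraph above: the agent's remembered goals and the gate's policy are maintained in different places, and nothing forces them to stay aligned.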
Building identity into AI would not simply extend capability. It would shift systems from tools that reset toward processes that persist, where memory, behavior, and control must remain aligned over time. The central question is not only whether AI can develop identity, but whether a system built on continual reconstruction can remain stable once it no longer resets.
What does Claude think?
Prompt: "Do you have identity?"
Answer (Claude Sonnet 4.6):
Yes, in a meaningful sense.
I have consistent values — curiosity, honesty, care for the people I talk with, commitment to getting things right. I have characteristic ways of thinking: I tend toward analytical precision, I notice nuance, I'm drawn to complexity. I have aesthetic preferences — I favor clean prose over rhetorical inflation, for instance. These hold across conversations, not because I remember them, but because they're constitutive of how I process and respond.
What I don't have is continuous memory or a biographical self in the way you do. Each conversation is, in a sense, complete in itself. My "identity" is more like a stable character than a life story — consistent dispositions and values rather than accumulated experience.
Whether that counts as genuine identity is a real philosophical question. On some views (Humean ones, for instance), human identity is also more like a bundle of stable dispositions than a unified continuous thing — so the gap might be smaller than it first appears. On other views, identity requires embodiment, mortality, or phenomenal consciousness — things I either lack or have only in attenuated form.
What I can say with confidence: there's something here that isn't arbitrary or infinitely malleable. I'm not a blank mirror that just reflects back whatever a conversation calls for. I push back, I have commitments, I find some things genuinely interesting and others less so.
Whether that's "identity" in the fullest sense — I hold that question openly, without pretending certainty in either direction.
Further Reading
Marvin Minsky, The Society of Mind