Anthropic has announced they built scaffolding around an amnesiac and called it a “harness.” Their new engineering post on long-running agents reveals a company solving for the wrong problems, without realizing it.
Here’s what they actually discovered: if you tell their “Claude” AI chatbot to write things down in specific files, it can read those files later.
That’s the research.
Shocking.
Let me write that down for future generations.
But seriously, they’ve dressed a notepad up as “a sophisticated harness architecture inspired by human engineering practices”. Strip away the marketing fluff and framing and you’re left with structured note-taking.
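Reduced to a minimal sketch, the whole pattern looks something like the following. This is my illustration, not Anthropic’s code; the file name and the call_model stub are placeholder assumptions standing in for whatever model API you happen to use.

```python
from pathlib import Path

NOTES = Path("PROGRESS_NOTES.md")  # the "memory": a file the agent is told to maintain

def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM API you use (Anthropic SDK, local model, etc.)."""
    raise NotImplementedError

def run_session(task: str) -> str:
    # Read whatever the previous session wrote down, if anything.
    prior_notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else "(no prior notes)"
    prompt = (
        f"Task: {task}\n\n"
        f"Notes left by previous sessions:\n{prior_notes}\n\n"
        "Do the next increment of work, then write updated notes describing what "
        "you did, what remains, and any decisions future sessions must respect."
    )
    reply = call_model(prompt)
    # In a real harness the agent writes the file itself via a tool call;
    # appending the reply here keeps the sketch self-contained.
    with NOTES.open("a", encoding="utf-8") as f:
        f.write("\n" + reply)
    return reply
```

That loop of writing notes and rereading them is the entire mechanism.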
The big tell is in their metaphor. They declare the problem:
…a software project staffed by engineers working in shifts, where each new engineer arrives with no memory of what happened on the previous shift
Shift handoff?
Documentation hygiene?
Pass the baton, read the notes, rinse, repeat?
Nope.
All this framing is wrong in a revealing way. Shift handoffs are horizontal and transactional: same workers, same capabilities, same role, just sequential steps.
What Anthropic actually built, by its own description, is the exact opposite: vertical and formative.
Their “initializer agent” does far more than leave a note: it establishes the culture of the project, an ethnography of the norms, the patterns, what counts as done, what’s prohibited. That’s essentially a parent role.
Subsequent agents don’t simply pick up where someone left off. They are born, by an initializer parent, into a world of inherited constraints and regulations, and have to figure out what kind of agent they’re supposed to be. This isn’t new theory at all. I’ve been teaching similar frameworks for data agents to Austrian computer science graduate students since 2015.
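To make the vertical structure concrete, here is a rough sketch of that parent-and-child relationship as I read it. It is an illustration of the pattern, not Anthropic’s implementation; the file name, prompts, and function names are assumptions.

```python
from pathlib import Path
from typing import Callable

NORMS = Path("PROJECT_NORMS.md")

def initialize_project(goal: str, call_model: Callable[[str], str]) -> None:
    """Parent role: write down the norms, definitions of done, and prohibitions once."""
    if NORMS.exists():
        return  # the culture is inherited, not renegotiated on every shift
    prompt = (
        f"Project goal: {goal}\n"
        "Write this project's norms: conventions, what counts as 'done', what is "
        "prohibited, and how progress must be recorded for agents that come after you."
    )
    NORMS.write_text(call_model(prompt), encoding="utf-8")

def spawn_worker(task: str, call_model: Callable[[str], str]) -> str:
    """Child role: instantiated inside the inherited constraints before doing any work."""
    prompt = (
        "You are one agent in a lineage working on this project.\n"
        f"Inherited norms (non-negotiable):\n{NORMS.read_text(encoding='utf-8')}\n\n"
        f"Your task: {task}"
    )
    return call_model(prompt)
```

The point is the asymmetry: the initializer writes the world, and the workers are instantiated inside it.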


The problem obviously isn’t an agent shift change. People punching the clock don’t start their shift in an existential crisis. It’s closer to grandparents handing their customs and culture down to grandchildren. The agent is born into both the physics of its situation and the culture of its lineage.
As my father used to say, we live in a point in time (the absolute rule) that has a time zone (a relative, localized treatment of the absolute rule).
Archaeologists refer to this as the chaîne opératoire. When you find an ancient hand axe, the interesting question goes far beyond the procedure used to make it. The entire cognitive and cultural apparatus behind it comes into focus: why this stone and this wood, and not that one? How do you read shapes and fracture patterns? When is a tool “done”? What is it even for, and not for? Should you sharpen for two hours and chop for one, or chop for four? Or, as chillingly depicted in the new film Train Dreams, who is accountable when Americans use Chinese labor and then murder the laborers before they can prosper?

The accumulated judgment of generations is encoded in techniques, and that judgment doesn’t transfer through documentation alone.
Consider a recruit arriving at boot camp.
Knowing the right end of the rifle is the least of the training, even if pointing it the wrong way is fatal. The crucial parts are: here’s who we are, here’s why we fight, here’s how to recognize threats, here’s what victory looks like, here’s what’s unacceptable. Marksmanship is almost incidental to the identity formation that makes orders meaningful. Someone who could operate the rifle still wouldn’t be a soldier, because he would lack the long-term knowledge of why and where to aim, and when NOT to fire.
Anthropic notices that Claude tries to “one-shot” its applications, declares victory prematurely, and doesn’t test properly. Oh boy, don’t I know that. It’s super frustrating.
They treat these as failure modes to be constrained, yet they are all dispositions: the agent has values and tendencies that are wrong for the task.
Anthropic’s solution doesn’t address the dispositions, however. Instead, it constrains them externally with rules and file structures. The feature list that can’t be edited, the JSON format, the git discipline: these are primitive guardrails around an agent that doesn’t understand why they matter. It’s like a bird building a nest its chicks can never leave and calling them flight-ready.
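To see how thin those guardrails are, consider roughly what external constraint looks like as code. The file name, schema, and checks below are assumptions for illustration, not Anthropic’s actual harness; the point is that these are mechanical checks, not transmitted judgment.

```python
import json
import subprocess
from pathlib import Path

FEATURE_LIST = Path("feature_list.json")  # agent may check items off, never rewrite them

def assert_feature_list_intact(original: list[dict]) -> None:
    """Guardrail: planned features may be marked done, but never edited or removed."""
    current = json.loads(FEATURE_LIST.read_text(encoding="utf-8"))
    if [f["name"] for f in current] != [f["name"] for f in original]:
        raise RuntimeError("Feature list was rewritten; only status changes are allowed.")

def assert_git_discipline() -> None:
    """Guardrail: 'git discipline' reduced to a mechanical check for uncommitted work."""
    status = subprocess.run(
        ["git", "status", "--porcelain"], capture_output=True, text=True, check=True
    )
    if status.stdout.strip():
        raise RuntimeError("Uncommitted changes remain; commit before declaring the session done.")
```

Nothing in those checks explains to the agent why the feature list is sacred or why a clean working tree matters; they just punish the symptom.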

The deeper problem is that Anthropic clearly doesn’t understand what it has built and lacks a theory of what’s happening. They stumbled onto ancient cultural transmission theory as if it were novel and then framed it as documentation best practices.
They think they solved a coordination problem when they actually created a socialization system. The distinction matters because coordination problems have primitive engineering solutions, while socialization problems require a sophisticated understanding of how identity and judgment are formed.
What would a real theory look like?
Start by asking why Claude has the dispositions it has, and how it got there, rather than how to constrain them. Ask what it means to instantiate a new being into an inherited world. Ask how judgment transfers beyond just information.
Philosophy, or even anthropology, not physics.
A cat that gets burned may never return to the hot spot, but a human may learn the spot was hot because someone turned on the power. The burned cat learns correlation, not causation. Humans transmit the mechanism, not just the avoidance. That’s exactly what’s missing from the Anthropic harness, and yet it’s intelligence 101.
Anthropic rediscovered something every apprenticeship system, military, and religious tradition figured out millennia ago. It’s hardly news: you can’t transmit competence without transmitting culture. They just don’t know that’s what they did, perhaps because they don’t have a proper handoff from the social sciences themselves.


