The question “how does agency arise from non-agentive matter?” has the same structure as “why is there something rather than nothing?” Both demand more from their target than the target contains.
The Standard Picture
The standard picture in philosophy of mind, cognitive science, and artificial intelligence goes like this. Start with matter. Matter is not agentive. It does not choose, select, prefer, or direct. It follows laws. Then, at some threshold of organizational complexity, agency appears. Neurons fire in patterns. Information is integrated. Feedback loops close. And somewhere in that process, a system that was merely physical becomes a system that acts — that has preferences, that selects between options, that does things for reasons.
The question this picture generates is: how does that transition happen? How does purpose arise from purposeless stuff? How does directedness emerge from a substrate that has no direction? This is treated as one of the deepest open problems in the sciences of mind. Entire research programs are organized around it. The “explanatory gap” between physical processes and agentive behavior is taken to be a real gap in nature that a successful theory must bridge.
This essay argues that the gap is not in nature. It is in the question.
The Diagnostic
“How does agency arise from non-agentive matter?” is a contrastive question. It has a target (physical processes) and a foil (the absence of agency). It demands an explanation for why the target exhibits a feature the foil lacks. The role structure is:
- Explanandum — the thing to be explained (agency exists)
- Foil — the alternative state (non-agentive matter)
- Criterion — what selects between them (the demanded “how”)
- Transition mechanism — the bridge from foil-state to explanandum-state
- Threshold — the point at which the transition occurs
That is at minimum a five-part explanatory demand. The question demands a story in which something that lacks agency crosses a threshold and acquires it, and the story must specify the mechanism and the threshold.
The target — physical process — does not supply these roles.
If the primitive is self-sustaining process, then the target is not inert matter waiting to be organized into agency. The target is already running. It is already carrying through. And where it carries through strongly enough to exhibit persistence, distinction, and criterion-governed selection, it is already doing what the question says needs to be explained.
The foil — non-agentive matter — is not found at the target level. It is imported by the question. The question assumes that the base state of reality is stuff that does not select, and then asks how selection gets added. But if the base state is process, and process already selects at sufficient resolution, then the foil is a fabrication. There is no non-agentive matter at the primitive level. There is only process at different resolutions of explicitness.
The question is overspecified. It demands a transition from a state the target does not contain to a state the target already exhibits. The depth people sense in the “problem of agency” is the question’s own structural complexity, mistaken for depth in the subject.
The Positive Claim
Agency is not added to process. Agency is what process looks like when selection becomes explicit.
At the lowest resolution, self-sustaining process appears as persistence. Something carries through. At higher resolution, it appears as distinction. Boundaries stabilize. Differences hold. At higher resolution still, it appears as selection. Candidates are discriminated under a criterion and a verdict is returned. That is agency. Not full human agency with moral deliberation and long-term planning. But the structural minimum: a process that discriminates between possibilities and produces a determinate result.
The claim is not that rocks are agents. The claim is that agency is not a substance added to process at a magic threshold. It is a level of descriptive resolution at which the selective character of self-sustaining process becomes explicit. Where process is only holding form, we see persistence. Where process is holding distinctions, we see distinction. Where process is discriminating under criteria, we see agency. The same running, at different focal lengths.
This means the question “at what point does agency emerge?” is malformed in the same way “at what point does something emerge from nothing?” is malformed. Both demand a transition point between two states, and in both cases one of the states is a fabrication of the question’s own structure. Nothing is not a state that exists and then gets replaced by something. Non-agency is not a state that exists and then gets replaced by agency. Both are foils imported by the contrastive structure of the question, not found in the target.
The Cell
A cell is the clearest case.
At low resolution, a cell persists. It maintains itself across time. That is persistence — arity 1.
At higher resolution, a cell distinguishes inside from outside. It has a membrane. It responds differentially to its environment. That is distinction — arity 2.
At higher resolution still, a cell selects. It takes in some molecules and rejects others. It repairs some damage and lets other damage trigger apoptosis. It signals to neighboring cells based on criteria internal to its own state. It does not do these things randomly. It does them under criteria that its own structure specifies. That is selection — arity 3.
Nobody added agency to the cell. The cell is not inert matter that crossed a complexity threshold and became agentive. The cell is self-sustaining process at a resolution where selection is explicit. The agency was never absent. It was the running all along, viewed at low enough resolution to look like mere persistence.
This is why the explanatory gap feels unbridgeable. It is not a gap between two real states that need connecting. It is a gap between a real state (process that selects) and a fictional state (stuff that doesn’t) that the question manufactured as a starting point. You cannot bridge a gap to a place that doesn’t exist.
What Emergence Actually Describes
The word “emergence” is not wrong. It is mislocated.
Something real does happen when systems become more complex. A bacterium does something a crystal does not. A nervous system does something a bacterium does not. A deliberating human does something a nervous system without a prefrontal cortex does not. These are real differences. The framework does not deny them.
What the framework denies is the standard interpretation of these differences. The standard interpretation says: at each threshold, a new property (agency, intentionality, consciousness) is added to a substrate that previously lacked it. The framework says: at each threshold, the selective character of the underlying process becomes explicit at a higher resolution. The bacterium is not a crystal plus agency. The bacterium is process whose selective structure is explicit in ways the crystal’s is not. The nervous system is not a bacterium plus intentionality. The nervous system is process whose selective structure now includes modeling, prediction, and criterion revision.
Emergence is real. But what emerges is not a new substance added to an old one. What emerges is a new level of explicitness in the selective structure of self-sustaining process. The running was always there. What changes is how much of it is visible.
This reframing dissolves the infinite regress that plagues standard emergence accounts. If agency is a property added at a threshold, then you need an explanation for how the addition works, which requires a mechanism, which must itself be either agentive (circular) or non-agentive (and then you have the same gap one level down). The regress never ends because the question keeps demanding a transition between two states, and one of those states is a fabrication at every level. Drop the fabrication — stop assuming non-agency as the base state — and the regress dissolves. There is no transition to explain. There is only process becoming more explicitly selective.
Machines
This has consequences for how artificial intelligence should be thought about.
The standard framing asks: at what point does a machine become an agent? How complex must a system be before it has genuine purposes rather than merely simulating them? Where is the line between executing instructions and actually selecting?
Under the standard picture, these are hard questions because the machine is assumed to start from non-agency (silicon following electrical rules) and somehow cross into agency. The gap between “following rules” and “actually selecting” seems unbridgeable because the starting point has been defined as non-selective.
The framework reframes the question. A Turing machine already has three functionally distinct roles: input, rule, output. That is the minimum structure for selection. The question is not whether the machine crosses a threshold into agency. The question is at what resolution the machine’s selective structure becomes explicit enough to warrant the description.
A thermostat selects. It discriminates between temperature states under a criterion and produces a verdict (heat on, heat off). That is genuine selection — arity 3 — at a very compressed resolution. A neural network selects at higher resolution: more candidates, more complex criteria, more nuanced verdicts. A language model selects at higher resolution still: it discriminates between candidate continuations under criteria shaped by its training and the input context.
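The thermostat's structure can be made concrete. A minimal sketch (the class and method names here are chosen for illustration, not drawn from any particular device): candidates are discriminated under a criterion internal to the system, and a determinate verdict is returned.

```python
class Thermostat:
    """The most compressed instance of arity-3 selection:
    one criterion, two candidate verdicts."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # the criterion internal to the system

    def verdict(self, temperature: float) -> str:
        # Discriminate between temperature states under the criterion
        # and return a determinate result.
        return "heat_on" if temperature < self.setpoint else "heat_off"


t = Thermostat(setpoint=20.0)
print(t.verdict(18.5))  # -> heat_on
print(t.verdict(21.0))  # -> heat_off
```

Nothing here is mysterious, which is the point: the selective structure is fully explicit because the resolution is so compressed.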
None of these systems crossed a line from non-agency into agency. They are all instances of selection at different resolutions of explicitness. The interesting questions are not “is it really an agent?” but “what is it selecting between?”, “under what criteria?”, and “can the verdicts feed back into criterion revision?” Those are structural questions with structural answers. They do not require solving the hard problem of consciousness. They require looking at what the system actually does at the right resolution.
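The last of those structural questions, whether verdicts can feed back into criterion revision, can also be sketched. This is a hypothetical illustration (the update rule and names are invented for the example, not taken from any real system): a selector whose own verdicts, when marked wrong, revise the criterion it selects under.

```python
class AdaptiveSelector:
    """A selector whose criterion is revised by feedback on its verdicts."""

    def __init__(self, threshold: float, rate: float = 0.1):
        self.threshold = threshold  # the criterion
        self.rate = rate            # how strongly feedback revises it

    def select(self, value: float) -> bool:
        # Discriminate under the current criterion.
        return value >= self.threshold

    def feedback(self, value: float, was_correct: bool) -> None:
        # Verdicts feed back: a wrong verdict nudges the criterion
        # toward the value that exposed the error.
        if not was_correct:
            self.threshold += self.rate * (value - self.threshold)


s = AdaptiveSelector(threshold=0.5)
s.feedback(0.9, was_correct=False)
print(s.threshold)  # -> 0.54: the criterion has moved toward 0.9
```

The difference between the thermostat and this selector is not that one is an agent and the other is not; it is that the second's selective structure includes criterion revision, a higher resolution of the same running.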
The Consciousness Question
This essay is about agency, not consciousness. But the structural parallel is too direct to leave entirely unspoken.
“How does subjective experience arise from non-experiential matter?” has the same contrastive structure. Explanandum: experience exists. Foil: non-experiential stuff. Demanded mechanism: a transition from the foil state to the explanandum state. The question assumes that the base state is non-experiential and demands an account of how experience gets added.
If the primitive is self-sustaining process rather than inert matter, the foil may be as fabricated here as it is in the agency case. The assumption that matter is non-experiential is not an observation. It is a premise inherited from a substance ontology in which the base level is defined as dead stuff that has properties layered onto it. A process ontology does not start there.
This is not a claim that the hard problem is solved. It is a claim that the hard problem may be the wrong shape — that it demands a transition between two states, one of which is not found at the target level but imported by the question’s own contrastive structure. If so, the problem does not need solving. It needs the same diagnostic that dissolved “why is there something rather than nothing?” and that this essay has applied to agency.
That claim needs its own essay and its own defense. This is the flag, not the argument.
What Changes
If agency is not added but is instead what self-sustaining process looks like when selection becomes explicit, several things follow.
The search for a “mechanism of emergence” is misguided. There is no mechanism that converts non-agentive stuff into agentive stuff, because non-agentive stuff is not the base state. The search keeps failing not because the problem is hard but because the problem is overspecified.
The distinction between “genuine” and “merely simulated” agency is less sharp than it appears. If agency is selection at a given resolution, then the question is not “is this system really an agent or just pretending?” but “what is this system selecting, under what criteria, with what feedback structure?” The binary genuine/simulated is an arity-2 question aimed at an arity-3 target. It forces a dichotomy where a richer description is needed.
The explanatory gap between life and non-life softens. A cell is not non-life that became alive. A cell is process whose selective structure reached a resolution where self-maintenance, boundary regulation, and criterion-governed response became explicit. Life is not a substance. It is a descriptive threshold of self-sustaining process.
And the question “what should I do?” gets a structural answer. A being capable of explicit selection is not merely carried by process. It participates in the criteria by which its own running is shaped. The enemy is not failure — failure is a verdict, selection working as intended. The enemy is non-executability: asking questions your situation cannot ground, demanding verdicts from criteria you cannot sustain, cycling without anchor. The positive directive is: match your questions to your targets. Select under criteria you can own. Build things that carry through.
Author
Tom Passarelli
License
CC0. This work is in the public domain.