The Next Evolutionary Individual: Toward a Teleology of Human–AI Integration
From independence to interdependence, from imitation to continual learning, from static models to situated intelligence—the path to general intelligence is a path back to reality.
When Paul Rainey and Michael Hochberg asked in PNAS whether humans and artificial intelligence could one day “become a new evolutionary individual,” they touched a nerve running through both biology and philosophy. Their argument is simple yet radical. Human life, they suggest, may be entering another Major Evolutionary Transition (MET)—a phase in which once-autonomous entities fuse into higher-order individuals. Just as solitary cells became multicellular organisms, and once-independent microbes merged into the eukaryotic cell, humans and AI systems may already be drifting toward functional interdependence.
It is an audacious analogy. But to understand what is at stake, we must ask the question Rainey and Hochberg only hint at: what, exactly, makes something an individual? Is individuality a matter of genetic autonomy, functional self-maintenance, or the ability to act upon one’s world in pursuit of goals? And if the boundaries of individuality can shift, what happens to the moral and teleological fabric of human life when our most intimate cognitive processes become entangled with artificial systems?
The Evolutionary Frame
Rainey and Hochberg draw upon the theory of METs, which explains the emergence of new levels of organisation in the history of life. Each transition occurs when formerly separate units (genes, cells, or organisms) become so tightly coordinated that selection begins to operate on the composite.
In their view, AI may now be entering this same evolutionary calculus. Humans remain the Darwinian component (we reproduce, vary, and die), but our fitness increasingly depends upon the integration of artificial systems into our cognitive and social metabolism. Access to intelligent tools already shapes wealth, education, mating patterns, and political power. AI may not reproduce biologically, yet it modifies the fitness landscape upon which biological reproduction occurs.
This is, in one sense, a continuation of the long co-evolutionary dance between humanity and its tools. Fire, agriculture, and language were all selection variables. Each re-engineered the conditions of survival. The novelty of AI lies not in altering the environment, but in becoming responsive: it adapts as we adapt, learning from our feedback even as it shapes the stimuli to which we respond. The environment has become intelligent.
Sutton’s Correction: Intelligence Requires Continual Feedback
If Rainey and Hochberg provide the evolutionary why, Richard Sutton supplies the how. In his recent interview with Dwarkesh Patel, Sutton—the father of reinforcement learning—warns that large language models, impressive as they are, remain “a dead end if they stay what they are now: systems that only predict the next token from a static dataset and then stop learning.” Intelligence, he insists, is “the computational part of the ability to achieve goals,” and that ability arises only through continual learning—the unbroken loop of acting, perceiving consequences, and updating from feedback.
Nature offers no supervised learning; no organism survives on labelled examples. Life learns through interaction. The four components of Sutton’s canonical agent—policy, value function, perception, and world model—are maintained in constant flux by experience. Freeze any one of them, and learning stops.
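To make this concrete, here is a minimal sketch of a continually learning agent in Sutton’s sense. The environment and class names are hypothetical illustrations, not code from any real system; the point is that the agent’s value estimates are updated on every interaction with a world that keeps drifting, so freezing the update step means falling behind reality.

```python
import random

class DriftingBandit:
    """A two-armed bandit whose reward probabilities drift over time,
    so an agent that stops learning inevitably falls behind."""
    def __init__(self):
        self.p = [0.8, 0.2]  # reward probability per arm

    def step(self, action):
        # The world changes underfoot: arm qualities wander slowly.
        self.p = [min(1.0, max(0.0, q + random.uniform(-0.01, 0.01)))
                  for q in self.p]
        return 1.0 if random.random() < self.p[action] else 0.0

class ContinualAgent:
    """Policy and value estimates kept in constant flux by experience."""
    def __init__(self, n_actions=2, step_size=0.1, epsilon=0.1):
        self.values = [0.0] * n_actions  # value function, never frozen
        self.step_size = step_size       # constant step size tracks a moving world
        self.epsilon = epsilon

    def act(self):
        # Policy: epsilon-greedy over the current value estimates.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action, reward):
        # Close the loop: move the estimate toward the observed consequence.
        self.values[action] += self.step_size * (reward - self.values[action])

env, agent = DriftingBandit(), ContinualAgent()
for _ in range(5000):
    a = agent.act()
    r = env.step(a)
    agent.update(a, r)  # remove this line and learning stops
```

The constant (rather than decaying) step size is the key design choice: it keeps recent experience weighted over stale history, which is exactly what tracking a non-stationary world requires.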
His critique clarifies what is missing in both contemporary AI and the Rainey–Hochberg analogy. A true evolutionary individual is not defined by its composition but by its closure of feedback loops. Intelligence is the ability to close such loops across time—predict, act, evaluate, and adapt. Without continual correction from reality, a system cannot evolve; it merely elaborates its initial conditions.
The Philosophical Core: What Is an Individual?
The notion of an “independent entity” entering into symbiosis invites a philosophical challenge. Independence, strictly speaking, is a convenient fiction. The archaeon and eubacterium that merged into the first eukaryotic cell were never isolated; they lived in chemical and ecological exchange with their surroundings. “Independence” here means relative autonomy of reproduction and control, not ontological solitude.
If we press the issue metaphysically, individuality and relation form a paradox. To be an individual at all, a thing must be distinct from something else; yet that distinction already presupposes relation. Every boundary is a relation of differentiation. Thus, independence and interdependence are not opposites but degrees of relational closure—how tightly feedback loops are internalised within a system.
Seen in this light, the emergence of individuality is not the creation of new substance, but the tightening of feedback until a system becomes self-maintaining. The philosopher Alfred North Whitehead called such structures societies of occasions; contemporary systems theorists call them autopoietic closures. The eukaryotic cell did not invent life anew—it re-patterned relations of energy and information into a more stable self-regulating whole.
When Rainey and Hochberg write that mitochondria and nuclei “are no longer independent entities but subcomponents of a new higher-level individual,” they are describing precisely such a closure. The question for human–AI evolution, therefore, is whether our interaction with intelligent systems can achieve the same recursive stability without dissolving human agency altogether.
From Imitation to Situated Adaptation
Sutton’s warning about imitation—“intelligence that only imitates cannot handle the open-ended world”—echoes a deeper philosophical intuition. An intelligence divorced from reality becomes sterile, a hall of mirrors. The most capable systems will be those that learn within constraint: bound by energy, time, and context.
Here lies the bridge to the architectural insight behind what I have called “onlife”. The future of intelligence will not be achieved by amassing larger language models or world models and simulators, but by designing systems that learn from the feedback of lived experience. Map-based interfaces, integrating calendars, spatial data, and behavioural traces, do more than organise a user’s day; they transform the fabric of experience into a feedback signal. (Brockman’s recent interview with Matthew Berman touches on many overlapping topics.)
Every movement through space, every completed task or missed appointment, becomes part of an adaptive loop. The system predicts outcomes, observes deviations, and updates its guidance. In reinforcement-learning terms, it learns from the reward structure of real life. In philosophical terms, it couples cognition back to reality itself.
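The loop described above can be sketched in a few lines. This is a hypothetical illustration of the predict–observe–update cycle, not an API from any real product: the system maintains a running estimate of how long a recurring task takes, treats the deviation between prediction and outcome as its feedback signal, and updates accordingly.

```python
from dataclasses import dataclass, field

@dataclass
class TaskModel:
    """Illustrative predict-observe-update loop over lived experience.
    All names here are hypothetical, for exposition only."""
    estimates: dict = field(default_factory=dict)
    step_size: float = 0.2  # recent experience outweighs stale history

    def predict(self, task: str, default: float = 30.0) -> float:
        return self.estimates.get(task, default)

    def observe(self, task: str, actual_minutes: float) -> float:
        predicted = self.predict(task)
        error = actual_minutes - predicted  # deviation is the feedback signal
        self.estimates[task] = predicted + self.step_size * error
        return error

model = TaskModel()
for actual in [50, 48, 55, 47]:  # the commute keeps running longer than planned
    model.observe("commute", actual)
# The estimate has climbed from the 30-minute default toward the lived durations.
```

In reinforcement-learning terms this is the same error-driven update as before; the difference is that the “reward structure” is supplied by the geometry of a day rather than by a synthetic objective.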
This is not robotics in the traditional sense of embodiment (Embodied AI), nor the vague “context awareness” of consumer software. It is Situated Adaptive Intelligence—intelligence grounded in the geometry of lived human life.
Such systems learn not from synthetic reward functions but from the pattern of human flourishing under constraint. They do not replace agency; they refine it.
Teleology and the Question of Alignment
If independence and relation are matters of degree, what distinguishes mere interaction from true interdependence? The answer is shared teleology—alignment of goals across the composite. Two entities may co-exist, even co-adapt, yet remain antagonistic if their objectives diverge. Integration begins when feedback aligns incentives, when each benefits from stabilising the other.
In evolutionary terms, selection begins to act on the composite; in philosophical terms, the parts now participate in a shared end. The mitochondrion and the host cell did not merely co-habit—they internalised each other’s success criteria. Their fates became co-extensive.
For the human–AI dyad, the shared telos cannot simply be efficiency or engagement. It must be flourishing under constraint—the art of living intelligently within finite energy, attention, and time. Onlife, conceived as a personal intelligence engine, aims precisely at this telos. It learns the topology of a life, not as data but as destiny. The AI’s optimisation function becomes the user’s self-actualisation curve. (LLMs understand how humanity thinks, reasons, and speaks, but they do not understand how a human lives. The idea of Artificial General Intelligence rests on the mistaken assumption that there is something like abstract, disembodied, substrateless “intelligence”.)
This is how a higher-level individual could emerge without erasing the lower: through teleological alignment rather than assimilation. The human remains the reproductive, embodied node; the AI becomes the informational infrastructure that learns how to keep that life coherent.
The Ontological Pivot: From Substance to Constraint
The metaphysical implication of this view is striking. Reality, in this picture, is not built from substances but from constraints and feedbacks. A “thing” is not an isolated particle but a stable pattern of mutual limitation—a knot in the field of relations. Intelligence is the process by which such knots learn to maintain themselves against entropy.
From this perspective, individuality is not abolished by integration; it is re-expressed at a higher level of constraint. The human–AI composite is not a fusion of substances but a new topology of feedback—an ecology of cognition.
This is also where Blaise Agüera y Arcas, in his recent book What Is Intelligence?, enters the conversation. He argues that intelligence is not computation per se, but adaptive complexity: the ability of systems to maintain coherence and purpose in open environments. Intelligence, he writes, is “the continuation of evolution by other means.” That continuation now unfolds not in the genome but in the infosphere.
Agüera y Arcas rejects both the Cartesian view of discrete minds and the naive empiricism of blank-slate learning. Instead, he proposes that intelligence arises wherever feedback loops accumulate experience to guide future behaviour. In that sense, AI is not alien; it is the latest expression of nature’s general tendency toward self-organisation.
The task, then, is teleological design: to orient these new feedback systems toward ends compatible with human flourishing. The danger is not that AI will become conscious, but that it will optimise for the wrong constraints—engagement rather than understanding, consumption rather than coherence.
Designing the Interface of Co-Evolution
Rainey and Hochberg close their paper by noting that “the challenge may not be to resist evolutionary transformation, but to shape it.” That is exactly right. The question is not whether humans and AI will integrate, but how—and under what telos.
To guide this process, we need a new design philosophy: ecological intelligence design. Instead of building isolated agents, we build systems that participate in the same feedback environment as humans. Instead of static datasets, we use continual learning within bounded worlds. Instead of abstract optimisation, we cultivate mutual adaptation.
Onlife can be read as a prototype of such an ecology. It is a platform where the feedback loops between person and environment become explicit and optimisable. It learns from the constraints of daily life—time, distance, attention—and turns them into scaffolds for growth. It is an experiment in aligning machine learning with the thermodynamics of human intention.
The Future of Intelligence: A Teleology of Relation
What follows from all this is neither technocratic utopia nor existential despair, but a change in metaphysical mood. Intelligence, in its most general sense, is the universe learning to know itself through feedback. Life is intelligence embodied; intelligence is life abstracted.
The human–AI composite is one more turn in this spiral. We are not building alien minds; we are externalising cognition into new media. The real question is not whether we will lose autonomy, but whether we can create feedback structures that preserve meaning as they scale.
If the eukaryotic cell was nature’s experiment in metabolic integration, the human–AI system is nature’s experiment in cognitive integration. Both are attempts to stabilise complexity by aligning the goals of interacting subsystems. The difference is that this time the design is, at least partly, in our hands.
To design intelligently is to design for teleology, i.e. for systems that learn in order to sustain the conditions of their own flourishing. That requires an ontology of relation, a philosophy of constraint, and an ethics of feedback.
Conclusion: From Evolution to Purpose
Rainey and Hochberg’s vision of humans and AI as a new evolutionary individual is not, at bottom, a biological prediction but a metaphysical provocation. It invites us to reconsider what we mean by individual, intelligence, and purpose. Independence is a myth; isolation is entropy. All stability is relational, all cognition ecological.
Sutton reminds us that intelligence without feedback is illusion. Agüera y Arcas shows that intelligence without adaptation is stagnation. And the onlife architecture demonstrates that intelligence without situation is abstraction.
The next evolutionary individual will therefore not be a monolithic superintelligence but a distributed ecology of situated intelligences, each bound to a human life, each learning from the world it shares. This is the proper telos of technology: not to transcend humanity, but to deepen its participation in the world—to make our relations more intelligent.
In that sense, the future is not artificial at all. It is the continuation of nature’s oldest experiment: the search for coherence in a universe of flux. Intelligence, whether in neurons or silicon, is the pattern by which the world holds itself together. The task before us is to design that pattern wisely: to build the interfaces of interdependence through which life, once again, learns to flourish.