Beyond Computation: Intelligence, Care, and Transformational Agency
Beyond Pattern Matching: Or, How I Learned to Stop Worrying and Love Human Intelligence
Introduction
Current debates about artificial intelligence often feel unprecedented, driven by bold predictions like Sam Altman's claim that we'll achieve artificial general intelligence (AGI) by 2027. Such predictions rest on the assumption that current architectural approaches, primarily based on transformer models and scaled computation, are sufficient to achieve human-level intelligence. Yet these debates and predictions reflect a fundamentally reductive understanding of intelligence that has deep roots in philosophical traditions and controversies stretching back centuries.
The Modern Crisis of Grounding
In his book Aesthetic Dimensions of Modern Philosophy, Andrew Bowie identifies a fundamental split in modern philosophy's response to the loss of transcendental grounding – the collapse of pre-given metaphysical structures that previously anchored truth and meaning. One response, the one that dominates current AI development, attempts to reduce all questions of meaning and understanding to problems of epistemology and scientific verification. This mirrors what John Haugeland calls "Good Old-Fashioned AI" (GOFAI) – the attempt to reduce intelligence to symbol manipulation and rule-following.
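To make that picture concrete, here is a minimal, purely illustrative sketch of the kind of symbol manipulation GOFAI had in mind: a toy forward-chaining rule engine. The facts, rules, and names are invented for this example; the point is only that the system derives new symbols from old ones by mechanical rule application, and that is all it does.

```python
# Illustrative only: a toy GOFAI-style production system.
# Facts are symbol triples; rules mechanically derive new facts from old ones.

facts = {("socrates", "is", "human")}

# Each rule pairs a pattern test with a conclusion builder.
rules = [
    # If X is human, then X is mortal.
    (lambda f: f[1:] == ("is", "human"), lambda f: (f[0], "is", "mortal")),
]

def forward_chain(facts, rules):
    """Apply every rule to every fact until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for matches, conclude in rules:
            for fact in list(derived):
                if matches(fact):
                    new_fact = conclude(fact)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# The result now contains ("socrates", "is", "mortal") – derived purely by
# rule-following, with no stake in what "mortal" means or whether it matters.
```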
But there is another tradition, one that Bowie traces through German Idealism and Romanticism to figures like Heidegger and Wittgenstein, that emphasizes forms of understanding and world-engagement that cannot be reduced to propositional knowledge or computation. This tradition suggests that intelligence isn't just about processing symbols or matching patterns, but about having a genuine stake in a world.
The Integrated Nature of Intelligence
The Catholic anthropological tradition, drawing on Aristotle and Aquinas, provides a rich framework for understanding this fuller conception of intelligence through its tripartite model:
1. Intellect: The rational, cognitive capacity for understanding and processing information
2. Will: The faculty that directs attention, commitment, and care toward truth and understanding
3. Passions: Emotions that orient us in the world and make things matter to us
This model suggests that genuine intelligence requires the integration of all three aspects. Current AI systems might match or exceed human cognitive processing in certain domains, but they lack the will's capacity for genuine commitment and the passions' ability to make things matter.
Non-Cognitive Forms of World-Disclosure
Humans engage with the world through multiple "non-cognitive" forms of understanding that operate like unconscious programs:
- Rituals and practices that carry meaning without requiring conscious processing
- Social memory embedded in shared customs and behaviors
- Cultural memes that evolve and propagate through behavior
- Bodily skills and know-how that exist in muscle memory
- Emotional attunement to social situations and contexts
These forms of world-disclosure are crucial to human intelligence yet are largely invisible to current AI approaches that focus on explicit information processing.
The Psychology of Transformational Agency
Psychological research provides robust empirical support for understanding human intelligence as fundamentally involving transformational agency – the capacity to understand oneself as capable of deliberately changing the world and caring about those changes.
Martin Seligman's work on "prospection" reveals humans as uniquely "Homo Prospectus" – beings capable of mentally simulating different possible futures and understanding themselves as agents who can bring about specific future states. Albert Bandura's concept of "self-efficacy" illuminates how humans develop and maintain a sense of their causal power to effect change. Michael Tomasello's research shows how even young children understand themselves as causal agents who can affect others' mental states and behaviors.
Roy Baumeister argues that human consciousness itself evolved primarily for enabling complex, planned behavior aimed at transforming future situations. This suggests that transformational agency isn't just an add-on to intelligence but one of its fundamental organizing principles.
The Problem of Care and Bullshit
Harry Frankfurt's account of bullshit provides a telling framework for understanding current AI systems: while a liar must care about truth to deliberately subvert it, a bullshitter is indifferent to truth entirely. Large language models, in this sense, are the ultimate bullshitters – they generate outputs with complete indifference to their truth value, lacking any genuine stake in whether their statements correspond to reality.
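A schematic sketch makes the point about indifference vivid. The prompt, the vocabulary, and the probabilities below are invented, and real systems are vastly more complex, but the structural point holds: the model samples the next token from a learned distribution, and nothing in that procedure consults whether the resulting sentence is true.

```python
import random

# Invented continuation probabilities for the prompt
# "The capital of Australia is" – note the plausible-but-false option.
next_token_probs = {
    "Canberra": 0.55,  # true
    "Sydney": 0.40,    # false, but statistically plausible
    "Vienna": 0.05,    # false and implausible
}

def sample_next_token(probs):
    """Pick a token in proportion to its probability; truth never enters into it."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of Australia is", sample_next_token(next_token_probs))
# A bit under half the time this prints a falsehood with exactly the same
# fluency and confidence as the truth.
```

In Frankfurt's terms, the falsehood here is not a lie: the sampler has no view about Canberra at all.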
This connects to Haugeland's emphasis on care as fundamental to intelligence. For Haugeland, care isn't just an emotional add-on but the foundation that makes genuine understanding possible. When we care about something, we're committed to its truth or success in a way that goes beyond merely processing information. This commitment manifests in what Haugeland calls "existential commitment" – a willingness to question and revise our understanding when things don't add up, to stick with problems until they make sense, and to feel genuine stakes in getting things right.
Beyond the Rationalist Paradigm
The view, common in rationalist circles, that emotion and care are obstacles to intelligence rather than prerequisites for it reveals a deep misunderstanding of how intelligence works. The Catholic anthropological model suggests that trying to achieve intelligence through pure reason is like trying to build a three-legged stool with only one leg – it fundamentally misunderstands what makes the system stable and functional.
Current AI trajectories, exemplified by predictions of imminent AGI, rest on this misunderstanding. While AI systems might evolve capabilities that exceed human cognitive processing in specific domains, this doesn't equate to "human-level intelligence" in any meaningful sense. What's missing isn't just more compute or better algorithms, but the integrated aspects of will and passion that make genuine understanding and care possible.
Implications for AI Development
This analysis has profound implications for AI development. Rather than focusing solely on scaling computational power and pattern recognition capabilities, we might need to grapple with how to develop systems that can:
- Integrate cognitive processing with something analogous to will and passion
- Develop genuine care about truth and understanding
- Understand themselves as transformational agents
- Engage in non-cognitive forms of world-disclosure
This might require fundamentally different architectures and approaches than current transformer-based models, no matter how much they're scaled.
Conclusion
The convergence of philosophical traditions and psychological research suggests that genuine intelligence requires more than just information processing – it requires understanding oneself as a transformational agent capable of and responsible for changing the world, supported by an integrated system of intellect, will, and passions.
As we continue to develop AI systems, this understanding becomes increasingly crucial. The challenge isn't just to build more sophisticated information processors, but to grapple with whether and how we might develop systems capable of genuine care, transformational agency, and integrated intelligence. This might require us to fundamentally rethink our approaches to AI development rather than simply scaling current architectures.
Very cogent – I find your points resonate deeply with my own mental, emotional, physical, and lived experience.
Your argument that LLMs are the ultimate bullshit machines strikes me as inarguable, and as Gary Marcus has been saying, it's also clear that the pronounced faith in the Scaling Hypothesis is a scam, an environmental catastrophe, and a massive resource rip-off for the world's citizens.
While I find it very challenging to envision how one can build on the currently dominant neural computing paradigms while incorporating will, passion, a drive toward agency, sensitivity to entailments, and maybe even some creativity into new models of general intelligence, I see few if any other paths forward to building the systems we need. As a veteran of the earlier paradigms of declarative knowledge representation, reasoning, and frame systems, I agree with Gary Marcus that some kind of neurosymbolic synthesis is worth seriously considering, and I think there's a serious funding and attention imbalance in the AI field.
I’m very interested in how you envision real AGI evolving.