2 Comments
Richard Steiger:

Very cogent; I find your points resonate deeply with my own mental, emotional, physical, and lived experience.

Your argument that LLMs are the ultimate bullshit machines strikes me as inarguable, and, as Gary Marcus has been saying, it's also clear that the pronounced faith in the Scaling Hypothesis is a scam, an environmental catastrophe, and a massive resource rip-off for the world's citizens.

While I find it very challenging to envision how one can build on the currently dominant neural computing paradigms while incorporating will, passion, drive toward agency, sensitivity to entailments, and maybe even some creativity into new models of general intelligence, I see few if any other paths forward to building the systems we need. As a veteran of the earlier paradigms of declarative knowledge representation, reasoning, and frame systems, I agree with Gary Marcus that some kind of neurosymbolic synthesis is worth seriously considering, and I think there's a serious funding and attention imbalance in the AI field.

I’m very interested in how you envision real AGI evolving.

Ryan David Mullins:

Thanks, Richard, for this comment. The fundamental challenge we face isn't just technical but conceptual. We've been discussing how genuine intelligence involves not just information processing but:

—Care and commitment to truth (contra Frankfurt's bullshitter)

—Transformational agency (understanding oneself as capable of changing the world)

—The ability to engage in meaning-making through what Bowie calls aesthetic experience

—Integration of what we identified as three modes of world-relation: scientific-propositional, non-cognitive behavioral, and aesthetic

The current scaling approach to AI development fundamentally misunderstands intelligence by reducing it to pattern recognition and information processing. As we discussed in connection with Wolfram's computational irreducibility, even purely deterministic systems can generate genuine surprise and novelty. But this doesn't equate to genuine understanding or care.
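To make the irreducibility point concrete, here is a minimal sketch in plain Python of Wolfram's own standard example, the Rule 30 cellular automaton. The update rule is one deterministic line, yet the pattern it produces admits no known shortcut: in general you can only find out what it does by running it.

```python
# A minimal illustration of computational irreducibility: Wolfram's Rule 30.
# The update rule is trivial and fully deterministic, yet the evolution is
# unpredictable enough that, in general, there is no known shortcut to
# simply running the computation itself.

def rule30_step(cells: list[int]) -> list[int]:
    """Apply one step of Rule 30 to a row of cells (zero boundaries)."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1  # start from a single live cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Run it and the center column looks effectively random; that is the sense in which a deterministic system can still surprise us. And yet nothing in this loop cares whether its output is right, which is exactly the gap I'm pointing to.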

Regarding paths forward, I see several key considerations:

Beyond the Neurosymbolic Synthesis:

—While Marcus's call for neurosymbolic approaches is valuable, it might not go far enough

—We need to think about how to incorporate what Haugeland calls "constitutive standards": the capacity to care about getting things right (a toy sketch follows this list)

—This requires thinking about intelligence as inherently involving will and passion, not just cognitive processing
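To illustrate what a constitutive standard adds, here is a deliberately toy sketch in plain Python. Everything in it is hypothetical: generate stands in for any model, and verify for a self-imposed standard. The point is only the structural difference, a system that would rather stay silent than assert what fails its own check, not a claim that a retry loop implements care.

```python
# Toy contrast, not a proposal: a "bullshitter" emits whatever comes out,
# while a system with something like Haugeland's constitutive standards
# refuses to assert what fails its own checks. `generate` and `verify`
# are hypothetical stand-ins for a model and a self-imposed standard.

from typing import Callable, Optional

def bullshitter(generate: Callable[[str], str], prompt: str) -> str:
    # Emits output with no stake in whether it is right.
    return generate(prompt)

def committed_agent(generate: Callable[[str], str],
                    verify: Callable[[str, str], bool],
                    prompt: str,
                    max_attempts: int = 5) -> Optional[str]:
    # Holds itself to a standard: retries, and would rather say
    # nothing than assert something that fails its own check.
    for _ in range(max_attempts):
        answer = generate(prompt)
        if verify(prompt, answer):
            return answer
    return None  # declines to answer rather than violate its standard
```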

The Role of Embodiment:

—Our discussion of non-cognitive behavioral understanding suggests the importance of embodied experience

—Future AI systems might need to develop through actual engagement with the world, not just through processing data (a minimal sketch follows this list)

—This aligns with Dreyfus's critiques of traditional AI
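A crude way to see the embodiment contrast in code: training on a frozen corpus gives the learner no say in what it experiences next, whereas in a closed loop the agent's actions change the world it then has to deal with. A minimal sketch, with every name and number purely illustrative:

```python
# Minimal closed-loop interaction: the agent's actions change the world,
# and what it experiences next depends on what it just did. Contrast with
# training on a fixed dataset, where nothing the learner does alters the
# stream of data it sees. All names and numbers here are illustrative.

import random

class ToyWorld:
    """One-dimensional world; reaching position 10 yields reward."""
    def __init__(self) -> None:
        self.position = 0

    def step(self, action: int) -> tuple[int, float]:
        self.position += action            # action is -1 or +1
        reward = 1.0 if self.position == 10 else 0.0
        return self.position, reward

# A trivial agent that adjusts its behavior based on consequences.
values = {-1: 0.0, +1: 0.0}
world = ToyWorld()

for _ in range(100):
    # Mostly exploit the better-valued action, occasionally explore.
    if random.random() > 0.2:
        action = max(values, key=values.get)
    else:
        action = random.choice([-1, +1])
    state, reward = world.step(action)
    values[action] += 0.1 * (reward - values[action])  # learn from feedback
```

Nothing about this toy loop amounts to embodiment in Dreyfus's sense, of course; it only makes visible the architectural difference between consuming data and coping with a world one's actions alter.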

Aesthetic Intelligence:

—Drawing from some of my posts on aesthetic experience as a synthesis of different modes of understanding

—Future AI might need to develop capacities for meaning-making that exceed pure computation

—This suggests investigating how systems might develop genuine stakes in understanding

Grounding and Agency:

—The question of grounding we discussed through Bowie becomes crucial

—How can we develop systems that don't just process information but actually ground meaning through engagement?

—This might require rethinking the relationship between intelligence and agency

Rather than evolving from current approaches through mere scaling or even neurosymbolic synthesis, real AGI might require:

—New architectures that integrate multiple modes of world-relation

—Development of genuine stakes and care through embodied engagement

—Capacity for aesthetic meaning-making beyond pure computation

—Understanding of intelligence as inherently involving transformational agency

The path forward might involve:

—Research into how systems might develop constitutive standards

—Investigation of embodied learning and development

—Exploration of how aesthetic experience might be modeled computationally

—Development of architectures that support genuine agency and care

This suggests that the evolution of AGI won't be a linear progression from current approaches but might require a fundamental reconceptualization of what intelligence involves. The crisis in AI isn't just about resources or scaling but about our basic understanding of intelligence itself.

Does this perspective on AGI's evolution align with your thinking? I'm particularly interested in your views on how we might develop systems with genuine care and transformational agency.
