
Enterprise AI Research Analysis

Can AI Think Like Us? Kriegel's Hybrid Model

Author: Graziosa Luppi

This review provides a systematic critique of the debate between two paradigms in the philosophy of mind—the Naturalist-Externalist Research Program (NERP) and the Phenomenal Intentionality Research Program (PIRP)—with particular focus on Uriah Kriegel's reconciliation project. Following Kriegel, attention is given to rational agents' awareness of their own mental states, a key issue since most current artificial intelligence systems aim to model rational thinking and action. Naturalist accounts derive mental content from brain activity and environmental interaction, treating content as constitutively dependent on those causal relations. Phenomenological theories, in contrast, hold that the object of a mental state is an internal semblance presented to the subject. Within this framework, I maintain that Kriegel attempts to naturalize mental content within a Same-Order theory, and that this limits his ability to demonstrate that intentionality is grounded in consciousness in the sense required by the Phenomenal Intentionality Research Program. Compounding this issue, the idea that the mind arises from manipulating representations has been challenged by dynamical approaches in cognitive science, yet advanced representational models persist, often simulating phenomenological qualities through forms of internal data organization. Methodologically, the approach is primarily comparative and reconstructive, focusing on the structural tensions and theoretical commitments that distinguish NERP and PIRP.

Executive Impact & Key Takeaways

Understand the core philosophical debates and their implications for developing advanced AI with human-like cognitive capacities.

2 Major Paradigms Examined

NERP and PIRP represent the core opposing views in the philosophy of mind concerning intentionality and consciousness.

1 Hybrid Model Critiqued

Uriah Kriegel's self-representational theory attempts to bridge the gap but faces theoretical challenges.

4 Kriegel's Core Commitments

His model relies on phenomenal properties, self-awareness, Higher-Order Tracking, and naturalistic compatibility.

Deep Analysis & Enterprise Applications

The modules below explore the specific findings from the research, reframed for enterprise application.

The Naturalist-Externalist Research Program (NERP) seeks to naturalize intentionality within a scientific framework. It posits that mental content derives from causal and tracking relations between internal states and the external world. However, it faces significant challenges in reducing phenomenal properties and in accommodating misrepresentation.

Causal Tracking: NERP's Basis for Intentionality

The Naturalist-Externalist Research Program grounds intentionality in causal relationships and tracking between internal states and external world properties, aiming for a scientific explanation of mental content.

NERP's Challenges: Misrepresentation & Qualia

Problem: Misrepresentation (illusions and hallucinations)
NERP's stance/solution: Tracking theories struggle here; Dretske introduces a teleological notion of function, on which representation is fixed by design or evolution (p. 5).

Problem: Reduction of phenomenal qualia
NERP's stance/solution: Reductive representationalism argues that intentionality is metaphysically independent of phenomenal features; consciousness is not constitutive of content (p. 5).
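To make the tracking idea concrete in engineering terms, here is a minimal sketch (mine, not drawn from the reviewed paper): an internal state that covaries with an external property and so "indicates" it in the tracking-theoretic sense, plus a miscalibrated case showing why pure covariation struggles to classify misrepresentation without an appeal to teleological function. All names and values are illustrative assumptions.

```python
# Toy illustration of a tracking ("indicator") account of content.
# All names and numbers are illustrative assumptions, not the paper's model.

class TemperatureSensor:
    """An internal state that covaries with an external property."""

    def __init__(self, bias: float = 0.0):
        self.bias = bias          # a nonzero bias models miscalibration
        self.internal_state = None

    def sense(self, actual_temp_c: float) -> float:
        # The internal state is caused by, and normally tracks, the world.
        self.internal_state = actual_temp_c + self.bias
        return self.internal_state


# Normal case: the internal state reliably indicates the external property.
well_calibrated = TemperatureSensor()
print(well_calibrated.sense(21.0))   # 21.0 -- covariation holds

# Misrepresentation case: the state still covaries with *something*
# (temperature plus bias), so a pure tracking story cannot by itself call
# the reading wrong. Dretske's appeal to teleological function (what the
# sensor was designed or selected to indicate) is meant to fill this gap.
miscalibrated = TemperatureSensor(bias=5.0)
print(miscalibrated.sense(21.0))     # 26.0 -- counts as misrepresentation
                                     # only relative to the device's function
```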

The Phenomenal Intentionality Research Program (PIRP) argues that intentionality is a property of conscious states, where phenomenal features play a foundational and irreducible role. It emphasizes the subjective character of experience as central to mental content.

Underived Intentionality: PIRP's Core Principle

The Phenomenal Intentionality Research Program asserts that intentionality is an intrinsic property of conscious states, constitutively determined by phenomenology alone, and thus not reducible to external factors.

Phenomenal Character & Intentional Content

According to PIRP, phenomenal properties, or qualia, are central to grounding intentional contents. They are seen not as dualistic entities but as subjective aspects of experience that are inseparable from the content itself. This allows for a more plausible explanation of misrepresentation, as subjective appearances can deviate from objective reality (p. 5).

PIRP aims to naturalize the phenomenal without erasing its subjective specificity.

Uriah Kriegel's hybrid model, specifically his self-representational (Same-Order) theory, attempts to reconcile NERP and PIRP. He argues that consciousness is primary and grounds intentionality through internal self-awareness, but this approach faces criticisms regarding its coherence and potential for reductionism.

Kriegel's Same-Order Theory of Consciousness

Mental state (M) → has a conscious counterpart (M*) → M* appropriately represents M → M* and M stand in a part-whole constitutive relation → M is conscious (subjective awareness)
Consciousness First: Kriegel's Central Thesis

Kriegel argues that consciousness is prior to intentionality, grounding it in internal reflexivity where a mental state is conscious if the subject is aware of being in that state through self-representation.
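As a loose engineering analogy only (not Kriegel's formal account, and with no suggestion that such a structure yields subjective awareness), the sketch below models a state containing a proper part that refers back to and describes the whole, mirroring the part-whole, self-representing structure the Same-Order theory appeals to. All class and field names are invented for illustration.

```python
# Loose analogy to a Same-Order, self-representational structure:
# a state M contains a proper part M* whose content is about M itself.
# Purely illustrative; nothing here implies subjective awareness.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MetaPart:
    """M*: a proper part whose content is about the whole state M."""
    target: Optional["MentalState"] = None

    def describe_whole(self) -> str:
        return f"I am in a state with content: {self.target.content!r}"


@dataclass
class MentalState:
    """M: a first-order state that carries its own meta-part M*."""
    content: str
    meta: MetaPart = field(default_factory=MetaPart)

    def __post_init__(self):
        # The part-whole, constitutive link: M* is a part of M and refers to M.
        self.meta.target = self


m = MentalState(content="there is a red apple on the table")
print(m.meta.describe_whole())
# -> "I am in a state with content: 'there is a red apple on the table'"
```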

Critique of Kriegel's Hybridity

The review critiques Kriegel for attempting to naturalize phenomenal intentionality within a Same-Order theory, which struggles to reconcile physicalist assumptions with phenomenological commitments. His self-monitoring model presupposes a functionalist framework, potentially leading to a circular argument where intentionality precedes consciousness (p. 8).

Kriegel's theory is caught between seeking physicalist objectivity and maintaining phenomenological subjectivity.

The rise of artificial intelligence, particularly connectionism and deep learning, has reignited debates about whether machines can truly "think" or possess intentionality and consciousness. Concepts like Searle's Chinese Room argument and the "black box problem" highlight the challenges in equating advanced AI capabilities with genuine understanding.

Can Machines Truly Think? Searle's Argument & Modern AI

John Searle's Chinese Room argument challenges Strong AI, asserting that symbol manipulation alone does not amount to genuine understanding or intentionality. Modern AI systems, including connectionist networks and LLMs, process statistical patterns, but the "black box problem" raises the question of whether this yields true consciousness or an understanding of meaning beyond mere correlation (pp. 6, 10).

AI systems can simulate human-like behavior, but the presence of genuine subjective awareness and intentionality remains a profound philosophical challenge.
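Searle's point can be made vivid with a deliberately trivial sketch (my illustration, not Searle's own example): a program that returns fluent-looking replies purely by matching symbol patterns against a rule table, with no access to what the symbols mean. The rule-table contents are made up.

```python
# A deliberately trivial "Chinese Room": correct-looking output produced by
# pure symbol matching, with no grasp of what the symbols mean.
# The rule table stands in for Searle's rulebook; its contents are made up.

RULEBOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "今天天气怎么样": "天气很好",    # "How's the weather today?" -> "It's nice"
}


def room_operator(incoming_symbols: str) -> str:
    """Match input symbols to output symbols; no semantics involved."""
    return RULEBOOK.get(incoming_symbols, "对不起")  # fallback: "sorry"


# From outside, the replies look competent; inside, there is only pattern
# matching over uninterpreted strings -- Searle's claim being that syntax
# alone does not amount to understanding.
print(room_operator("你好吗"))
```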

Non-Embodied Minds: AI's Foundational Limitation

Foundational AI models currently lack embodied minds, limiting their ability to replicate the nuanced, sensory-derived learning and knowledge transformation seen in human cognition that is crucial for deeper understanding (pp. 10-11).

Calculate Your Potential AI Impact

Estimate the efficiency gains and cost savings for your enterprise by implementing advanced AI solutions discussed in this analysis.

Outputs: estimated annual savings and hours reclaimed annually.
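For readers who want the arithmetic behind such an estimate, a minimal sketch follows; the function name, every parameter, and every default value are illustrative assumptions, not the calculator's actual formula.

```python
# Back-of-the-envelope AI impact estimate.
# All parameters and defaults are illustrative assumptions.

def estimate_ai_impact(
    tasks_per_week: int = 200,         # automatable tasks handled per week
    minutes_saved_per_task: float = 6,
    automation_rate: float = 0.5,      # share of tasks the AI actually handles
    loaded_hourly_cost: float = 60.0,  # fully loaded cost per employee hour
    weeks_per_year: int = 48,
):
    hours_reclaimed = (
        tasks_per_week * minutes_saved_per_task / 60
        * automation_rate * weeks_per_year
    )
    annual_savings = hours_reclaimed * loaded_hourly_cost
    return hours_reclaimed, annual_savings


hours, savings = estimate_ai_impact()
print(f"Hours reclaimed annually: {hours:,.0f}")     # 480 with these defaults
print(f"Estimated annual savings: ${savings:,.0f}")  # $28,800 with these defaults
```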

Your Path to Advanced AI Implementation

A strategic roadmap to integrate insights from cutting-edge AI research into your enterprise operations.

Define AI Intentionality Requirements

Identify specific cognitive tasks and desired "understanding" levels for AI systems within your enterprise, aligning with the philosophical distinctions between derived and non-derived intentionality.

Evaluate Hybrid AI Architectures

Assess current AI models (e.g., LLMs, Transformer networks) for their potential to simulate or achieve aspects of consciousness and intentionality, considering Kriegel's hybrid model insights.

Address Ethical & 'Black Box' Concerns

Develop governance frameworks for AI systems, particularly addressing the "black box problem" and ensuring human oversight in areas requiring genuine understanding and responsibility.

Pilot Embodied or Context-Rich AI Solutions

Explore AI implementations that can leverage richer environmental interaction or sensory data to foster more robust and context-aware forms of "understanding," moving beyond purely statistical patterns.

Continuous Philosophical & Technical Review

Establish ongoing review processes to keep abreast of philosophical advancements in consciousness and intentionality, informing iterative improvements in AI system design and deployment.

Ready to Explore AI with Deeper Understanding?

Navigate the complexities of AI intentionality and consciousness. Schedule a personalized consultation to discuss how these philosophical insights can refine your enterprise AI strategy.

Ready to Get Started?

Book Your Free Consultation.

Let's Discuss Your AI Strategy!

Let's Discuss Your Needs

