Enterprise AI Analysis
Trust and Reliance on AI in Education: AI Literacy and Need for Cognition as Moderators
As generative AI systems are integrated into educational settings, students often encounter AI-generated output while working through learning tasks, either by requesting help or through integrated tools. Trust in AI can influence how students interpret and use that output, including whether they evaluate it critically or exhibit overreliance. We investigate how students' trust relates to their appropriate reliance on an AI assistant during programming problem-solving tasks, and whether this relationship differs by learner characteristics. In a study with 432 undergraduate participants, students completed Python output-prediction problems while receiving recommendations and explanations from an AI chatbot, including both accurate and intentionally misleading suggestions. We operationalize reliance behaviorally as the extent to which students' responses reflected appropriate use of the AI assistant's suggestions: accepting them when they were correct and rejecting them when they were incorrect. Pre- and post-task surveys assessed trust in the assistant, AI literacy, need for cognition, programming self-efficacy, and programming literacy. Results showed a non-linear relationship in which higher trust was associated with lower appropriate reliance, suggesting weaker discrimination between correct and incorrect recommendations. This relationship was significantly moderated by students' AI literacy and need for cognition. These findings highlight the need for future work on instructional and system supports that encourage more reflective evaluation of AI assistance during problem-solving.
Executive Impact & Key Metrics
Generative artificial intelligence (AI) systems in higher education are now being integrated into learning environments as conversational tutors, writing and feedback assistants, and components of adaptive learning systems [24,28]. Students' adoption of AI tools, whether regulated by classroom policy or not, has become widespread, with prior work finding that a majority of undergraduates report prior use of and frequent engagement with large language models for coursework-related tasks [26,28]. As these tools become part of students' learning routines, attention has begun to focus on how students cognitively engage with AI-generated outputs during learning.
In open-ended problem-solving contexts such as programming, AI systems may provide suggestions, explanations, or proposed solution steps (for example, code) that appear coherent and confident while still being incomplete or incorrect [5,16]. This creates an immediate judgment task for learners: deciding how to interpret the output and whether, when, and how to incorporate it into their work. Prior work documents students' use of generative AI to produce explanations, suggest solution strategies, debug code, and clarify errors [19,24,26]. Such support can reduce friction and help learners make progress when human help is unavailable. Yet, without guardrails, students may begin to rely on AI output in place of their own reasoning [16,22,29].
These concerns have drawn attention to how students interpret and use AI-generated responses. When learners accept model outputs at face value, they may bypass sense-making, reduce self-explanation, and copy solutions without fully reasoning through them [16,27,29]. Studies of AI-supported programming learning show that while AI assistance can increase engagement and positive affect, it can coincide with confusion, frustration, and variable depth of cognitive processing depending on the usefulness and quality of the feedback [20]. Prior work also cautions that when AI functions primarily as an information provider, learners may accept responses with limited evaluation, which can reduce deeper engagement and higher-order thinking [12,29]. This matters for instruction, since effective learning relies on sustaining productive struggle, metacognitive engagement, and active participation during AI-supported tasks [23]. At the same time, a growing body of research shows that generative AI systems can produce misleading outputs that are difficult for users to detect [5,38]. These failures are consequential in educational contexts, where students may adopt flawed solution paths or internalize incorrect explanations [29]. Because AI-generated responses are often fluent and confident, they can exert a persuasive influence, increasing the likelihood that users accept them without sufficient verification [2].
Understanding how students regulate their reliance on AI assistance is therefore critical. In this work, we examine how students' trust in an AI assistant relates to appropriate reliance during AI-assisted programming problem-solving. We test whether trust predicts reliance behavior, and whether this relationship is moderated by learner characteristics that may influence evaluative judgment. Following prior conceptualizations of human-AI reliance, we distinguish among appropriate reliance, overreliance, and underreliance [36]. Appropriate reliance occurs when students accept correct AI recommendations and reject incorrect ones, demonstrating calibrated trust and critical evaluation. Overreliance occurs when students accept flawed AI-generated guidance without sufficient verification, while underreliance occurs when they dismiss accurate and potentially helpful AI support. Accordingly, we address the following research questions: (RQ1) What is the relationship between students' trust in an AI assistant and appropriate reliance during AI-assisted problem-solving? (RQ2) Do individual learner characteristics moderate the relationship between trust and appropriate reliance during AI-assisted problem-solving?
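To make these three categories concrete, the minimal sketch below labels a single AI-assisted attempt from its two observable outcomes. The `Trial` fields and function names are illustrative assumptions, not the study's instrumentation.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One AI-assisted problem attempt (hypothetical fields, not the study's schema)."""
    ai_correct: bool        # was the AI recommendation accurate?
    student_accepted: bool  # did the student's answer follow the recommendation?

def classify_reliance(trial: Trial) -> str:
    """Label a trial under the appropriate/over/under-reliance scheme [36]."""
    if trial.ai_correct and trial.student_accepted:
        return "appropriate: accepted a correct recommendation"
    if not trial.ai_correct and not trial.student_accepted:
        return "appropriate: rejected an incorrect recommendation"
    if not trial.ai_correct and trial.student_accepted:
        return "overreliance: accepted flawed guidance without verification"
    return "underreliance: dismissed accurate support"
```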
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
Students appropriately relied on AI support in 61.77% of opportunities, indicating room for improvement in critical evaluation.
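As a rough illustration of how such a rate could be computed from trial-level observations, the standalone sketch below scores each opportunity as appropriate exactly when the student accepted a correct recommendation or rejected an incorrect one; the data encoding is an assumption, not the paper's scoring procedure.

```python
def appropriate_reliance_rate(trials: list[tuple[bool, bool]]) -> float:
    """trials holds (ai_correct, student_accepted) pairs; an opportunity is
    handled appropriately when the student's acceptance matches the AI's
    correctness (accept-correct or reject-incorrect)."""
    return sum(ai_ok == accepted for ai_ok, accepted in trials) / len(trials)

# Example: 3 of 4 opportunities handled appropriately -> 0.75
print(appropriate_reliance_rate([(True, True), (False, False), (False, True), (True, True)]))
```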
Human-AI Reliance Framework
A negative trust-reliance relationship indicates that higher trust was associated with lower appropriate reliance, suggesting less scrutiny of recommendations regardless of their correctness.
| Moderator | Impact on Trust-Reliance Slope |
|---|---|
| AI Literacy | Negative interaction (b = -1.03, p = .014). Higher AI literacy was associated with a steeper negative trust-reliance slope. |
| Need for Cognition | Negative interaction (b = -0.99, p = .040). Higher NFC was associated with a steeper negative trust-reliance slope. |
| Programming Self-Efficacy | Not statistically significant (p = .121) |
| Programming Literacy | Not statistically significant (p = .437) |
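For readers who want to see how moderation effects like those in the table are typically tested, the sketch below fits a regression with trust-by-moderator interaction terms using `statsmodels`. The file name and column names are placeholders, and the paper's exact specification (for example, any non-linear trust terms) may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-student table: an appropriate-reliance score plus survey
# measures; "study_data.csv" is a placeholder, not the study's dataset.
df = pd.read_csv("study_data.csv")

# Each trust-by-moderator product term tests whether that characteristic
# shifts the slope of the trust-reliance relationship.
model = smf.ols(
    "reliance ~ trust * ai_literacy + trust * need_for_cognition"
    " + trust * prog_self_efficacy + trust * prog_literacy",
    data=df,
).fit()
print(model.summary())  # inspect interaction rows, e.g., trust:ai_literacy
```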
Implications for AI in Education Design
These findings point to the importance of designing learning environments that make evaluation an explicit part of AI-supported work. Instructional activities and AI tools should make verification a routine part of task completion by prompting students to explain their reasoning or justify why they may agree or disagree with an AI-generated recommendation. Future work can build on this by developing systems that embed cognitive forcing functions into the interaction, such as requiring students to commit to an answer before viewing the AI recommendation, identify evidence for or against the suggestion, or complete a brief verification step before submitting a final response. Prior work has suggested that cognitive forcing functions can help reduce overreliance in AI-assisted decision-making. More broadly, these findings suggest that students' appropriate reliance on AI tools can be fostered through instruction and tool design.
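As one concrete illustration, the sketch below implements a commit-before-reveal flow of the kind described above; the callables and logged fields are hypothetical design choices, not the study's system.

```python
def commit_before_reveal(problem: str, ask_student, ask_ai) -> dict:
    """Cognitive forcing function: require a committed answer before the AI
    recommendation is revealed, then record whether the student revises.
    ask_student and ask_ai are hypothetical callables supplied by the host app."""
    committed = ask_student(problem)  # forced initial commitment
    suggested = ask_ai(problem)       # AI output revealed only afterwards
    final = ask_student(
        f"{problem}\nThe AI suggests: {suggested!r}. Keep or revise your answer?"
    )
    return {
        "committed": committed,
        "suggested": suggested,
        "final": final,
        "revised": final != committed,  # a behavioral signal of reliance
    }
```

Requiring the initial commitment makes disagreement with the AI a visible verification step rather than silent acceptance, and the resulting log gives instructors a behavioral record of reliance patterns.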
Calculate Your Potential AI ROI
Estimate the impact of integrating AI-driven insights and tools into your enterprise operations.
Your Implementation Roadmap
A phased approach to integrate these insights into your enterprise.
Phase 1: Initial Assessment & Strategy Definition
Conduct a thorough assessment of existing educational AI tools and student interaction patterns. Develop a tailored strategy for integrating AI literacy training into curricula, focusing on critical evaluation and appropriate reliance. Timeline: 2-4 Weeks.
Phase 2: AI Tool Enhancement & Instructional Design
Implement cognitive forcing functions and verification prompts within AI educational tools. Design specific instructional activities that require students to justify their acceptance or rejection of AI recommendations. Timeline: 6-8 Weeks.
Phase 3: Pilot Implementation & Iterative Refinement
Pilot enhanced AI tools and instructional strategies in controlled settings. Collect student feedback and behavioral data on reliance patterns. Iterate on designs based on insights gained for continuous improvement. Timeline: 8-12 Weeks.
Ready to Transform Your Enterprise with AI?
Let's discuss how these insights can be tailored to your specific organizational needs and objectives. Schedule a free consultation with our AI strategy experts.