Enterprise AI Analysis: Model of Acceptance of Artificial Intelligence Devices in Higher Education

Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance of AI devices (MIDA) in the university context. The model considers contextual variables such as anthropomorphism (AN), perceived value (PV), and perceived risk (PR); cognitive variables such as performance expectancy (PEX) and perceived effort expectancy (PEE); and emotional variables such as anxiety (ANX), stress (ST), and trust (TR). For its validation, data were collected from 517 university students and analysed using covariance-based structural equation modelling (CB-SEM). The results indicate that perceived value, anthropomorphism, and perceived risk influence the willingness to accept AI devices indirectly, through performance expectancy and effort expectancy. Performance expectancy significantly reduces anxiety and stress and increases trust, while effort expectancy increases both anxiety and stress. Trust is the main predictor of willingness to accept AI devices, while stress has a significant negative effect on this willingness. These findings contribute to the literature on the acceptance of AI devices by highlighting the mediating role of emotions, and they offer practical implications for designing AI devices that are more readily accepted in educational contexts.

Executive Impact & Key Metrics

The Model of Acceptance of Artificial Intelligence Devices (MIDA) reveals a multifaceted view of AI adoption in higher education, highlighting critical areas for strategic intervention to enhance student engagement and educational outcomes.

R² = 0.688: explained variance in Willingness to Accept AI devices (WA)
R² = 0.746: explained variance in Performance Expectancy (PEX)
R² = 0.662: explained variance in Trust (TR)
N = 517 university students surveyed

Deep Analysis & Enterprise Applications

The following modules present specific findings from the research, reframed for enterprise application.

Understanding the interplay between cognitive, contextual, and emotional factors is crucial for successful AI adoption. Trust emerges as a primary driver, while stress can significantly hinder acceptance. These findings highlight the importance of designing AI systems that are not only effective but also emotionally resonant and user-friendly within the educational ecosystem.

Trust → Willingness to Accept: β = 0.714, direct positive effect (p < 0.001)
Stress → Willingness to Accept: β = -0.228, direct negative effect (p < 0.001)

Mediating Role of Performance Expectancy

Performance Expectancy (PEX) acts as a crucial mediator, linking contextual factors like Anthropomorphism and Perceived Value to emotional responses (Anxiety, Stress, Trust).

Example: Higher PEX leads to significantly lower anxiety (β=-0.303, p<0.001) and stress (β=-0.307, p<0.001), while significantly increasing trust (β=0.812, p<0.001). This indicates that when students believe AI will genuinely enhance their academic performance, they experience more positive emotions and greater trust.
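
To make the mediation logic concrete, the indirect effect along a route can be approximated as the product of the standardized path coefficients. The sketch below reuses only the coefficients reported above; it is a back-of-the-envelope illustration, not the study's formal (bootstrapped) mediation test.

    # Illustrative product-of-paths calculation using the reported standardized coefficients.
    pex_to_tr = 0.812    # PEX -> Trust
    tr_to_wa = 0.714     # Trust -> Willingness to Accept (WA)
    pex_to_st = -0.307   # PEX -> Stress
    st_to_wa = -0.228    # Stress -> WA

    indirect_via_trust = pex_to_tr * tr_to_wa      # ~0.58
    indirect_via_stress = pex_to_st * st_to_wa     # ~0.07
    print(indirect_via_trust + indirect_via_stress)  # ~0.65 combined indirect effect of PEX on WA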

Perceived Effort and Negative Emotions

Perceived Effort Expectancy (PEE) significantly increases anxiety (β=0.778, p<0.001) and stress (β=0.766, p<0.001), without a significant effect on trust. This suggests that if AI devices are perceived as difficult to use, it intensifies negative emotional responses, which in turn can reduce willingness to accept.
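
The same product-of-paths sketch applies to effort expectancy. Because the anxiety-to-acceptance path is not reported here, only the stress route is shown; the figure is illustrative rather than the study's formal estimate.

    # Illustrative indirect effect of perceived effort on acceptance via stress.
    pee_to_st = 0.766    # PEE -> Stress
    st_to_wa = -0.228    # Stress -> Willingness to Accept (WA)
    print(pee_to_st * st_to_wa)  # ~ -0.17: AI perceived as hard to use indirectly lowers acceptance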

The MIDA model provides actionable insights for universities and AI developers aiming to foster greater AI acceptance among students. Strategies should focus on building trust, mitigating stress, and clearly demonstrating performance benefits.

Key Strategic Areas for AI Adoption
Strategic Area: Building Trust
  Recommendations:
  • Ensure transparency in AI functioning and decision-making.
  • Clearly communicate academic benefits.
  • Implement ethical guidelines for AI use.
  Rationale from MIDA:
  • Trust is the strongest predictor of acceptance.
  • Lack of transparency reduces confidence.

Strategic Area: Mitigating Stress & Anxiety
  Recommendations:
  • Design intuitive, easy-to-use interfaces.
  • Provide progressive training and ongoing technical support.
  • Avoid cognitive overload in AI-assisted tasks.
  Rationale from MIDA:
  • Stress negatively impacts acceptance.
  • High perceived effort increases anxiety and stress.

Strategic Area: Enhancing Performance Expectancy
  Recommendations:
  • Integrate AI with clear learning objectives.
  • Demonstrate AI's functional and academic value.
  • Highlight how AI improves academic performance.
  Rationale from MIDA:
  • PEX reduces anxiety and stress while increasing trust.
  • Anthropomorphism and perceived value positively influence PEX.

The MIDA model was validated using covariance-based structural equation modeling (CB-SEM) with data from 517 university students, including checks for reliability, convergent validity, and discriminant validity.

Enterprise Process Flow

Contextual Appraisal (PV, AN, PR) → Expectancy Appraisal (PEX, PEE) → Emotional Responses (ANX, ST, TR) → Behavioral Outcomes (WA, OU)
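
As an illustration of how this four-stage structure translates into an estimable model, the sketch below writes the structural paths in lavaan-style syntax for the Python package semopy (the study itself used IBM SPSS AMOS). The path set is inferred from the stage flow and the reported results, so it is an assumption rather than the published specification, and the measurement (indicator) equations are omitted.

    # Structural part of a MIDA-style model in semopy (illustrative; paths inferred from the flow above).
    import pandas as pd
    from semopy import Model

    MIDA_STRUCTURE = """
    PEX ~ PV + AN + PR
    PEE ~ PV + AN + PR
    ANX ~ PEX + PEE
    ST ~ PEX + PEE
    TR ~ PEX + PEE
    WA ~ TR + ST + ANX
    """

    # df would hold the survey responses (construct scores or indicator items) for the 517 students.
    # df = pd.read_csv("mida_survey.csv")   # hypothetical file name
    # model = Model(MIDA_STRUCTURE)
    # model.fit(df)
    # print(model.inspect())                # path estimates, standard errors, p-values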

Data Collection & Sample

Data were collected from 517 university students in Lima, Peru, using convenience sampling between August and November 2025. A total of 541 responses were received, of which 95.5% were deemed valid. The sample comprised 57.3% females and 42.7% males.

Instruments: Validated questionnaires using a five-point Likert scale. Constructs adapted from established scales: Anthropomorphism, Performance Expectancy, Effort Expectancy, Willingness to Accept/Object (Gursoy et al., 2015); Perceived Risk (Kolar et al., 2024); Perceived Value (Sattu et al., 2024); Anxiety (Iyer & Bright, 2024); Stress (Zhou et al., 2024; Wang et al., 2024); Trust (Zhou et al., 2024; Chowdhury et al., 2022).

Analytical Approach

Confirmatory Factor Analysis (CFA): Used to establish construct validity and reliability, with composite reliability (CR) values between 0.693 and 0.872 and most average variance extracted (AVE) values above 0.5. Discriminant validity was confirmed via HTMT ratios (mostly below 0.85).
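
For reference, CR and AVE are simple functions of the standardized factor loadings. The sketch below uses placeholder loadings (not the study's values) purely to show the calculation.

    # Composite reliability (CR) and average variance extracted (AVE) from standardized loadings.
    def composite_reliability(loadings):
        sum_l = sum(loadings)
        sum_err = sum(1 - l**2 for l in loadings)   # error variances of standardized indicators
        return sum_l**2 / (sum_l**2 + sum_err)

    def average_variance_extracted(loadings):
        return sum(l**2 for l in loadings) / len(loadings)

    trust_loadings = [0.71, 0.78, 0.74]  # hypothetical indicators for the Trust construct
    print(composite_reliability(trust_loadings))       # ~0.79, within the 0.693-0.872 range reported above
    print(average_variance_extracted(trust_loadings))  # ~0.55; AVE > 0.5 indicates adequate convergent validity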

Structural Equation Modeling (SEM): Performed in IBM SPSS AMOS version 26. Given multivariate non-normality (Mardia kurtosis = 290.97), bootstrapping with 5,000 samples was used to obtain robust estimates.
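
Conceptually, the bootstrap step resamples respondents with replacement, refits the model many times, and reads confidence intervals off the resulting distribution. The sketch below is schematic: fit_path_coefficient is a hypothetical stand-in for refitting the SEM and extracting one path of interest (e.g., Trust → WA).

    # Percentile bootstrap CI for a single path coefficient; df is a pandas DataFrame of respondents.
    import numpy as np

    def bootstrap_ci(df, fit_path_coefficient, n_boot=5000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        estimates = []
        for _ in range(n_boot):
            resample = df.sample(n=len(df), replace=True,
                                 random_state=int(rng.integers(1 << 31)))
            estimates.append(fit_path_coefficient(resample))
        lower, upper = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lower, upper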

Model Fit: Overall fit indices were CMIN/DF = 2.794, RMSEA = 0.059, CFI = 0.881, TLI = 0.868, IFI = 0.882, indicating an acceptable fit for a complex model estimated on a large sample.
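
For readers unfamiliar with these indices, the standard definitions (in one common parameterization, where N is the sample size, chi-square and df with subscript M refer to the hypothesized model, and subscript 0 to the independence/null model) are:

    \[
    \mathrm{CMIN/DF} = \frac{\chi^2_M}{df_M}, \qquad
    \mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\ 0)}{df_M\,(N-1)}}, \qquad
    \mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\ 0)}{\max(\chi^2_0 - df_0,\ 0)}
    \]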

Implementation Roadmap: Your Path to AI Adoption

A phased approach, informed by the MIDA model, supports the successful integration of AI devices, fostering acceptance and maximizing benefits.

Phase 1: Contextual Assessment & Pilot Programs

Conduct a thorough assessment of specific academic needs and cultural contexts. Pilot AI devices in controlled environments, focusing on anthropomorphic design and demonstrating clear perceived value to build initial positive expectations. Prioritize use cases that minimize perceived risk and cognitive load for students.

Phase 2: Expectancy Management & Training

Roll out broader training programs that clearly articulate the performance expectancy of AI devices (how they enhance learning) and manage effort expectancy (making them easy to use). Develop intuitive user interfaces and provide accessible support to reduce the perception of difficulty.

Phase 3: Emotional Resonance & Trust Building

Focus on fostering trust through transparent AI operations and consistent positive user experiences. Implement mechanisms for feedback and refinement. Actively monitor and address student stress related to AI use, providing resources or modifying AI integration to alleviate negative emotional responses.

Phase 4: Continuous Evaluation & Ethical Integration

Establish ongoing evaluation cycles to measure acceptance, identify emerging challenges, and refine AI strategies. Integrate ethical considerations into all AI initiatives, ensuring responsible use and addressing privacy or academic integrity concerns to sustain long-term acceptance.

Ready to Transform Your Enterprise with AI?

Book a personalized strategy session with our AI experts to apply these insights and design a tailored AI adoption plan for your organization.
