Enterprise AI Analysis: Designing for Public Enlightenment: Enhancing Generative AI Literacy on Socio-technical Aspects in Informal Learning Spaces


Designing for Public Enlightenment: Enhancing Generative AI Literacy

This research addresses the 'AI literacy gap' by developing and evaluating interactive learning interventions in informal settings (museums, libraries, parks) for the general public (12+). It aims to identify misconceptions, develop a pedagogical framework, and create tangible/embodied exhibits to foster informed engagement with generative AI, promoting responsible use and democratic participation. Three research questions guide the process: understanding public perceptions (RQ1), defining learning objectives (RQ2), and designing/evaluating interventions (RQ3). The project will produce empirical findings on AI conceptions, a pedagogical framework, and evaluated interactive exhibits, empowering users beyond superficial engagement.

Executive Impact & Key Findings

This research is crucial for organizations looking to responsibly integrate Generative AI. Key insights include:

  • Increased AI literacy
  • Public venues reached
  • Engaged participants

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Research Questions
Key Findings & Implications
Methodology
RQ1 Understanding Public Perceptions of Generative AI

RQ1: Public Perceptions & Misconceptions

This foundational phase identifies the socio-technical perils and benefits the public perceives, as well as lingering conceptions and misconceptions about generative AI. Through questionnaires and literature reviews, it will provide an empirically grounded account of public perspectives. Common myths often stem from science fiction and over-hyped advertising, such as beliefs that AI is sentient, infallible, or will 'take over the world'. The study uses a mixed-methods approach to unpack specific misconceptions, weighing benefits such as enhanced productivity and creativity against risks such as job displacement, algorithmic bias, environmental costs, and misinformation. The questionnaire will collect both quantitative data (prevalence of each belief) and qualitative data (nuanced narratives) from the public.
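As a sketch of how the quantitative side of such a questionnaire might be analyzed, the snippet below tallies how many respondents endorse each misconception and attaches a 95% Wilson score interval to each prevalence estimate. The misconception labels, counts, and sample size are hypothetical placeholders, not findings from the study.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (prevalence estimate)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical tallies: how many of n respondents endorsed each misconception.
responses = {
    "AI is sentient": 34,
    "AI output is always factual": 61,
    "AI will 'take over the world'": 22,
}
n = 120  # hypothetical sample size

for claim, k in responses.items():
    lo, hi = wilson_interval(k, n)
    print(f"{claim}: {k/n:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

The Wilson interval is chosen here because it behaves better than the naive normal approximation at the small-to-moderate sample sizes typical of exhibit studies; the qualitative narratives would be coded separately via thematic analysis.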

RQ2 Constructing Learning Objectives for AI Literacy

RQ2: Defining Learning Objectives

This phase bridges diagnostic findings from RQ1 with system contributions for RQ3. It translates empirically identified misconceptions and knowledge gaps into a structured, prioritized, and actionable pedagogical framework. An early-stage review has already produced a framework for generative AI literacy [26]. The output is a 'Learning Guidelines and Defined Learning Objectives' document. For example, if RQ1 finds 'a majority of participants are unaware of the significant power consumption of generative AI usage', this translates to a learning need regarding generative AI's environmental impact. This framework will ensure learning objectives are specific, measurable, and relevant, focusing on socio-technical rather than purely technical aspects.

RQ3 Implementing & Evaluating Learning Interventions

RQ3: Designing Interactive Interventions

This final phase focuses on system design, implementation, and user evaluation. Brainstorming sessions generate concepts for designs conveying RQ2's learning objectives, adhering to informal learning principles (self-directed exploration, multiple entry points) [13, 15]. Tangible and embodied interactions enhance engagement and learning outcomes [18]. Examples include an exhibit where 'asking ChatGPT a question' visibly drains water, symbolizing resource cost. Three installations ('Fool Your Friend', 'Chatbot of Truth', 'LuminAI' [25]) have already been deployed and studied with positive results showing increased understanding. Future evaluations will use questionnaires, audio recordings, and learning talk analysis [3, 17, 22].

Bridging the Generative AI Literacy Gap

The research highlights a critical 'literacy gap': widespread public use of generative AI coexists with little understanding of its socio-technical perils and benefits. This gap is fueled by misleading cultural narratives and over-hyped advertising, leading to serious consequences such as harm to professional practice [20], compromised academic integrity [2], surging disinformation [27], and even fatal outcomes [5, 11, 24]. Informal learning spaces such as museums and libraries are well suited to public education; the project targets individuals aged 12 and above, whose cognitive development aligns with Piaget's formal operational stage [14] for cultivating AI literacy.

Research Phases Workflow

RQ1: Understand Public Perceptions → RQ2: Construct Learning Objectives → RQ3: Design & Evaluate Interventions → Empower Informed Public Engagement
Aspect                  | Traditional AI Literacy    | Embodied Interaction Approach
------------------------|----------------------------|----------------------------------------
Engagement              | Passive learning (reading) | Active, multi-sensory engagement
Understanding Depth     | Conceptual, abstract       | Experiential, concrete connections
Accessibility           | Often text-heavy, academic | Lower interaction thresholds, inclusive
Retention               | Varies, often lower        | Improved long-term memory
Misconception Challenge | Informational correction   | Direct experience challenging beliefs

Case Study: Griffin Museum of Science and Industry

Challenge: To foster AI literacy through an engaging, creative, and accessible medium for a broad public audience.

Solution: The 'LuminAI' exhibit [25] provides a three-panel interactive experience where users engage in expressive dance with an AI agent. This allows for direct, embodied interaction with AI capabilities and limitations.

Results: Studies showed statistically significant gains in participants' understanding of the specified learning objectives after interaction, particularly regarding the creative and interactive potential of AI.

Mixed-Methods Approach

The research employs a rigorous mixed-methods approach combining qualitative user studies (interviews and open-ended questionnaires) with quantitative data analysis and design-based research. This ensures a comprehensive understanding of public perceptions and the effective development and evaluation of learning interventions. Key methods include:

  • Questionnaires: Used to gather both quantitative prevalence data and qualitative nuanced narratives on public perceptions (RQ1).
  • Thematic Analysis: Applied to open-ended questions from questionnaires to identify core misconceptions and themes.
  • Design-Based Research: Iterative process for developing and refining interactive exhibits (RQ3), ensuring pedagogical effectiveness.
  • User Evaluation: Pre-/post-questions, semi-structured interviews, and learning talk analysis for assessing learning outcomes of interventions.
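The pre-/post-question comparison above can be sketched as a simple paired analysis. The example below computes per-participant knowledge gains and an exact one-sided sign test; the scores are invented for illustration, and the actual study may use different instruments and statistics.

```python
from math import comb

# Hypothetical pre/post quiz scores (0-10) for the same ten participants,
# collected before and after interacting with an exhibit.
pre  = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4]
post = [7, 6, 5, 6, 7, 8, 4, 7, 6, 6]

gains = [b - a for a, b in zip(pre, post)]
pos = sum(g > 0 for g in gains)   # participants who improved
neg = sum(g < 0 for g in gains)   # participants who declined
n = pos + neg                     # ties are dropped in a sign test

# One-sided exact sign test: P(X >= pos) under the null hypothesis that
# improvement and decline are equally likely (binomial with p = 0.5).
p_value = sum(comb(n, k) for k in range(pos, n + 1)) / 2**n

print(f"mean gain = {sum(gains) / len(gains):.1f} points, p = {p_value:.4f}")
```

A sign test is used here only because it needs no distributional assumptions and runs on the standard library; with larger samples a Wilcoxon signed-rank or paired t-test would extract more power from the same data.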

Method                     | Purpose                                          | Data Type                | Stage
---------------------------|--------------------------------------------------|--------------------------|----------
Questionnaires             | Assess knowledge gain, attitudes                 | Quantitative/Qualitative | RQ1, RQ3
Semi-structured Interviews | Gather qualitative feedback, insights            | Qualitative              | RQ3
Learning Talk Analysis     | Understand reasoning processes, confusion points | Qualitative              | RQ3
Video Recordings           | Observe engagement, interaction duration         | Quantitative/Qualitative | RQ3

Estimate Your Organization's AI Literacy ROI

Calculate the potential annual savings and reclaimed human hours by implementing a comprehensive AI literacy program in your enterprise. Improved AI literacy leads to more efficient, responsible, and innovative use of AI tools.
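A minimal sketch of the arithmetic such a calculator might use. Every input below is an assumed placeholder to be replaced with your organization's own figures; none of the numbers come from the research.

```python
# Illustrative ROI sketch for an AI literacy program. All figures are
# hypothetical assumptions, not measured values.
employees         = 500    # staff who use generative AI tools
hours_per_week    = 1.5    # hours each currently loses to misuse/rework
efficiency_uplift = 0.30   # fraction of that time reclaimed after training
hourly_cost       = 55.0   # fully loaded cost per employee-hour (USD)
working_weeks     = 46     # working weeks per year

hours_reclaimed = employees * hours_per_week * efficiency_uplift * working_weeks
annual_savings  = hours_reclaimed * hourly_cost

print(f"Annual hours reclaimed: {hours_reclaimed:,.0f}")
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```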


AI Literacy Program Rollout Timeline

A structured approach to integrating enterprise AI literacy, focusing on foundational understanding and practical application.

Phase 1: Needs Assessment

Conduct internal surveys and workshops to identify current AI literacy gaps and specific misconceptions within your organization. Define key learning objectives tailored to your operational context.

Phase 2: Curriculum Development

Design interactive learning modules and interventions based on identified needs. Incorporate socio-technical aspects, ethical considerations, and practical applications of generative AI. Pilot test with a small group.

Phase 3: Pilot Implementation

Deploy initial AI literacy interventions in a controlled environment. Gather feedback from participants and iterate on content and delivery methods. Measure initial knowledge gain and engagement.

Phase 4: Full-Scale Rollout & Continuous Improvement

Implement the full AI literacy program across relevant departments. Establish metrics for ongoing evaluation and a feedback loop for continuous improvement and adaptation to evolving AI technologies.

Empower Your Workforce with Generative AI Literacy

Unlock the full potential of AI while mitigating risks. Our tailored programs enhance critical understanding, foster responsible use, and drive innovation within your organization. Connect with our experts to design a strategic AI literacy initiative.

Ready to Get Started?

Book Your Free Consultation.
