Enterprise AI Analysis
Secondary Students as Co-Researchers on Generative AI in Learning: Empowering Youth to Shape National Education Policy
Authors: Colton Botta, Judy Robertson, Christina McMellon, Jamie Lawson, Jedidah Ajala, Sofia Lugo Gonzalez, Amina Hamidi, Nia Hicks, Kara McDonald, Ross John Paterson, Peter Scott, Jessie Tang
Affiliations: University of Edinburgh, Scottish Qualifications Authority, George Watson's College, Castlebrae High School, Dunblane High School
As Generative Artificial Intelligence (GenAI) becomes an increasingly significant part of young people's lives, educators worry about its impact on learning and attainment. However, understanding this impact requires more than simply studying young people's behaviours or soliciting their opinions. It is essential to involve them actively as co-researchers, allowing their unique perspectives to shape the conversation around GenAI in schools. This project moves beyond seeing young people as research subjects, positioning them instead as co-designers, co-researchers, and potential influencers of national policy on GenAI in education. We recruited eight young people (aged 16-18) from three Scottish high schools to serve as Young People Co-Researchers (YPCR). Together, we explored their perspectives on GenAI at school, including their current usage, views on appropriate tasks for AI, and opinions on teachers' use of AI. The YPCR organised and conducted semi-structured focus groups with 50 peers and collaboratively analysed the findings with adult researchers. Our results show that young people are cautiously optimistic about GenAI's potential for learning and do not support outright bans in schools. They clearly distinguish between AI use for learning and in assessments, often expressing confusion over current policies and wishing for clearer guidance. The YPCR stated a strong desire to deepen their understanding of AI's advantages and risks and for schools to teach responsible, effective use. Their insights are valuable for national policy development and for AI literacy initiatives.
Executive Impact & Key Findings
This research provides critical insights into how secondary students engage with Generative AI, offering foundational data for educational policy and AI integration strategies.
Deep Analysis & Enterprise Applications
Learners repeatedly described GenAI as a useful tool for simplification and clarification of content, often pasting class notes or textbook sections to condense, rephrase, or summarise information. For example, one pupil noted, 'I use it to make notes or if I don't know a word, it helps me understand the word, shorten paragraphs...' (FG1). Another mentioned submitting a poem to AI 'just to get it to explain it to me' (FG2).
Students frequently use GenAI as a brainstorming partner to overcome writer's block, generate initial research ideas for essays, or structure their writing. One participant found it useful for getting 'an exact structure, your introduction then how your big paragraphs should be structured' (FG3). They also used it for broader layouts of work like portfolios.
GenAI is often employed for studying, acting as a tutor or coach. Learners create practice questions, mock tests, or request explanations of answers. A student highlighted its efficiency for test revision: 'When there's a test coming up and it's quite tight, I just try to do active recall with ChatGPT. I first give it information so it's just asking me questions and I'm able to answer' (FG1). Some even used apps to get worked solutions for entire math problems.
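This "give it information so it's just asking me questions" workflow translates almost directly into code. Below is a minimal sketch in Python using the OpenAI chat completions API; the model name, prompt wording, and five-round loop are illustrative assumptions, not anything the participants described.

```python
# Minimal active-recall sketch: the learner supplies their own notes,
# then the model quizzes them one question at a time.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

with open("class_notes.txt") as f:
    notes = f.read()  # the learner's own study material

messages = [
    {"role": "system", "content": (
        "You are a revision coach. Using only the notes provided, ask one "
        "short question at a time, then briefly assess the student's answer."
    )},
    {"role": "user", "content": f"My notes:\n{notes}\n\nStart quizzing me."},
]

for _ in range(5):  # five rounds of question -> answer
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": input("Your answer: ")})
```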
While ChatGPT was the tool of choice, other tools like Gemini, Grammarly, Snap AI, Copilot, Gauth, Knowunity, Deepseek, and Quizlet were also mentioned. Quality perceptions varied, with some praising accuracy ('I use ChatGPT and especially Copilot and I think it's really accurate because it gives me exactly what I asked for' - FG1) while others found tools like Snap AI 'not very good' (FG8). Privacy concerns led one advanced user to run Deepseek locally, as sketched below.
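Running a model locally, as that participant did, keeps prompts off third-party servers entirely. A minimal sketch, assuming the model is served through Ollama with a distilled DeepSeek tag already pulled (the participant did not describe their actual setup):

```python
# Local inference sketch: no prompt data leaves the machine.
# Assumes the Ollama server is running and `ollama pull deepseek-r1:7b`
# has been executed beforehand; the model tag is illustrative.
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "Explain this poem to me: ..."}],
)
print(response["message"]["content"])
```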
The majority of students exercised caution, not blindly trusting GenAI due to potential inaccuracies. They reported instances where AI 'confidently spits out an answer but it's wrong' or 'makes up facts that aren't true' (FG3). Learners mitigated this risk by fact-checking with teachers, their own knowledge, or other trusted sources, especially for high-stakes work.
Learners consistently drew a clear line between using AI to enhance learning and using it for assessed work. They viewed GenAI as a valuable tool for scaffolding thinking, structuring arguments, clarifying content, and low-stakes practice, but emphasised it must be kept separate from anything that is assessed. As one learner put it, 'Never in tests, not in the exams... but helping with homework and stuff' (FG3). Brainstorming and analysis assistance was considered legitimate intellectual support, provided the learner did the substantive writing themselves. Copy-pasting AI output was widely condemned as plagiarism.
The underlying principle articulated was authorship: 'If you didn't create it, it shouldn't count as your work' (FG6). Acceptable uses were framed as formative (drafting outlines, practicing recall, checking understanding), summarised as 'As long as it's not for grades, it's OK' (FG3). Any use that substituted for assessed performance was considered grounds for disqualification.
A second boundary for GenAI use was drawn based on subject area. In STEM contexts like maths, GenAI was seen as a useful tool for procedural guidance and instant feedback, helping with 'maths equations' (FG3). Conversely, in humanities (e.g., English), where authorship and expression are key, learners argued AI 'defeats the purpose' (FG4). The line was even sharper for expressive arts, where personal voice and aesthetic are paramount.
| Acceptable Use (Formative) | Unacceptable Use (Assessments) |
|---|---|
| Brainstorming and generating initial ideas | Copy-pasting AI output into submitted work |
| Drafting outlines and structuring arguments | Any use in tests or exams |
| Clarifying, simplifying, and summarising content | Substituting AI output for assessed performance |
| Practice questions, mock tests, and checking understanding | Presenting work one did not create as one's own |
Learners generally supported teachers using GenAI for impersonal, low-stakes administrative tasks and resource creation. This included drafting lesson slides, generating extra practice material, and creating quizzes ('Making revision games, like Blookets or Kahoots, is fine.' - FG2). However, a common expectation was that teachers must fact-check any GenAI content before using it in class ('fine as long as they check it' - FG8).
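The 'fine as long as they check it' expectation suggests a draft-then-review workflow rather than generating material straight into the classroom. A hedged sketch of what that could look like; the JSON schema, topic, and prompt wording are illustrative assumptions:

```python
# Draft-then-review quiz generation: the model drafts, a teacher fact-checks.
# Assumes the `openai` package and an API key; schema and topic are illustrative.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft 5 multiple-choice revision questions on photosynthesis. Return JSON: "
    '{"questions": [{"q": str, "options": [str], "answer": str}]}'
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # request machine-readable output
)
draft = json.loads(reply.choices[0].message.content)

# Human review gate: nothing reaches pupils until a teacher has checked it.
for item in draft["questions"]:
    print(item["q"], item["options"], "->", item["answer"])
print("Check every answer against course materials before publishing.")
```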
Many students sympathised with teachers over their workload, accepting AI use to ease administrative burdens. The consensus was that machine-drafted templates were tolerable only if teachers 'made it theirs.' As one learner summarised, 'For huge admin things, maybe, but not for things that matter to us' (FG4).
Regarding feedback and reporting, opinions were mixed but leaned negative. Writing pupil reports and formative comments were seen as core teacher responsibilities requiring first-hand knowledge of the learner, making GenAI inadequate. 'It defeats the purpose of the report a bit because it's supposed to be individualised feedback, but the actual tool knows nothing about you' (FG5). Some viewed staff reliance on GenAI as undermining the teacher's role or 'lazy' (FG4). Others were okay with GenAI for basic report generation if personal commentary remained human or if it helped strip out bias. Crucially, many participants felt it was 'hypocritical' (FG8) for teachers to ban student AI use while using it themselves, raising questions about the school system's consistency.
A significant minority of students avoid GenAI altogether, citing concerns about reliability and accuracy ('it's wasted time using it, getting wrong answers isn't worth it' - FG1) or fearing academic consequences of undetected errors ('If I was to write down something wrong in Advanced Higher I fail, so I don't use that at all' - FG1). Others worried about developing an over-dependence that could erode future competence ('When you use these kind of stuff, it's like you're depending on them... that feeling of not being able to do any work is quite scary' - FG1). Some stated their risk-aversion bluntly: 'I haven't used it. I don't risk these kind of things' (FG1).
Rules on AI varied significantly across classes and teachers, leading to uncertainty and anxiety. This inconsistency ('if I use AI for business and [teacher 1] the head of department saw me, I would get [reprimanded]. So whereas [teacher 2] and [teacher 3], they wouldn't probably actually mind that much. So I don't think it is consistent around the school' - FG3) caused worry about accidental misconduct and prompted calls for clearer guidance.
Students attributed inconsistent GenAI policies to uneven staff expertise, with many teachers defaulting to prohibition if they lacked understanding ('Some teachers don't even really know what ChatGPT is, so they just say "don't use it"' - FG8). They called for compulsory professional development for teachers to model responsible practice and to help students avoid plagiarism and misinformation. Learners wanted explicit instruction on how to interrogate AI output, check sources, and recognise bias, extending beyond mere prompting to critical evaluation and scepticism. They also raised concerns about AI's environmental cost.
Learners voiced anxieties about long-term dependence on GenAI and its potential to dilute personal style ('If you use AI for like creative purposes like music or art, you don't really develop your own style with that' - FG4) and weaken cognitive skills ('if you use AI... you can't do your working, so you're obviously not learning' - FG6). They feared relying too heavily on AI could lead to mistakes in professional contexts (e.g., medicine) and make it difficult to write by hand again after using AI for essays.
The Imperative for AI Literacy
The research highlights a significant gap in AI literacy among both students and teachers, leading to inconsistent policies and anxiety. Many students report that 'Some teachers don't even really know what ChatGPT is, so they just say "don't use it"' (FG8). This lack of informed guidance puts students at risk of accidental misconduct and misinformation. There is a strong call for mandatory professional development for teachers and explicit instruction for students on responsible AI use, including critical evaluation, source checking, and understanding ethical implications like bias and environmental cost. Addressing this literacy gap is crucial for fostering safe, equitable, and empowering engagement with GenAI in education.
Impact Statement: Empowering both educators and students with comprehensive AI literacy is critical to navigating the evolving educational landscape, ensuring responsible use, and developing informed national policies.
Co-Research Methodology Flow
Recruit eight Young People Co-Researchers (aged 16-18) from three Scottish high schools → co-design the study with adult researchers → YPCR organise and run semi-structured focus groups with 50 peers → collaboratively analyse the findings → feed insights into national policy development and AI literacy initiatives.
Your Enterprise AI Transformation Roadmap
A structured approach to integrating Generative AI into your educational institution, based on the insights from student co-researchers.
Phase 1: Needs Assessment & Policy Framework
Conduct internal workshops with students, teachers, and administrators to identify specific AI integration needs and concerns. Develop clear, consistent institutional policies for GenAI use, informed by student input, covering learning, assessment, and ethical guidelines. Prioritize open communication channels.
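One way to keep such a policy clear and consistent across departments is to capture it in a machine-readable form that every class references. A minimal Python sketch; the categories and rulings are illustrative, derived from the formative/assessed boundary the focus groups drew:

```python
# Illustrative policy matrix, not an official schema: categories mirror the
# formative vs. assessed boundary described in the focus groups.
GENAI_POLICY = {
    "formative": {            # learning support
        "brainstorming": "allowed",
        "summarising_notes": "allowed",
        "practice_questions": "allowed",
    },
    "assessed": {             # anything contributing to grades
        "tests_and_exams": "prohibited",
        "submitted_coursework": "prohibited",
    },
    "teacher_use": {          # staff use
        "lesson_materials": "allowed_with_fact_check",
        "pupil_reports": "discouraged",
    },
}

def check(category: str, activity: str) -> str:
    """Look up an activity's status; unknown cases default to asking staff."""
    return GENAI_POLICY.get(category, {}).get(activity, "unclear_ask_staff")

print(check("formative", "brainstorming"))   # -> allowed
print(check("assessed", "tests_and_exams"))  # -> prohibited
```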
Phase 2: Pilot Programs & Teacher Training
Implement pilot programs in select departments, focusing on formative AI uses (e.g., brainstorming, practice questions). Launch mandatory professional development for teachers, covering GenAI capabilities, limitations, responsible use, and how to teach AI literacy to students. Encourage experimentation in low-stakes environments.
Phase 3: Student AI Literacy Curriculum
Integrate AI literacy into the curriculum, teaching students not just how to use GenAI, but also how to critically evaluate outputs, verify sources, understand bias, and recognize the environmental impact. Foster agency by involving students in co-designing AI learning modules and discussing ethical considerations.
Phase 4: Scaling & Continuous Feedback
Expand successful pilot programs across the institution. Establish ongoing feedback loops with students and teachers to continually refine AI policies, adapt training, and explore new beneficial applications. Regularly review and update policies to keep pace with evolving AI technology and educational needs.
Ready to Innovate Your Educational Approach with AI?
The insights from young people are invaluable. Partner with us to transform these findings into actionable strategies and national policies that empower your students and teachers.