PREPRINT
How Students Use Generative AI for Computational Modeling in Physics
Generative artificial intelligence (genAI) is becoming increasingly prevalent and capable in physics, particularly for programming-related tasks. We interviewed 19 students who had recently completed an open-ended computational assignment that encouraged the use of genAI, asking them how they used it. We found that genAI significantly shapes several aspects of students' computational modeling, including how they plan, implement, and debug their models. GenAI can also help students find resources and introduce them to new computational tools.
Executive Impact: GenAI in Physics Education
GenAI offers significant opportunities for efficiency in computational modeling tasks, but also introduces new pedagogical challenges related to student learning and critical thinking.
Production Practices with GenAI
Planning: Students often struggled to apply electromagnetism concepts numerically. GenAI assisted by providing a starting point or helping adjust model complexity. However, over-reliance on GenAI for planning sometimes led to incorrect assumptions and wasted time, as students risked not understanding the results.
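As a hedged illustration of the kind of numerical starting point students asked genAI for (the scenario, parameter names, and values here are hypothetical, not taken from the interviews), consider a charged particle in a uniform electric field, stepped forward with the Euler-Cromer method and checked against the constant-acceleration result:

```python
# Hypothetical starting point: an electron accelerating in a uniform
# electric field, integrated with the Euler-Cromer method.
q, m = 1.6e-19, 9.1e-31       # charge (C) and mass (kg) of an electron
E = 1.0e3                     # uniform field strength (N/C)
dt, steps = 1e-12, 1000       # time step (s) and number of steps

v, x = 0.0, 0.0
for _ in range(steps):
    v += (q * E / m) * dt     # update velocity from the electric force
    x += v * dt               # then position from the new velocity

t = steps * dt
x_analytic = 0.5 * (q * E / m) * t**2  # constant-acceleration check
print(x, x_analytic)
```

Comparing the simulated position against the closed-form answer is the kind of check that keeps the student, rather than the chatbot, in control of the model.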
Implementing: GenAI was widely used for implementing specific, well-understood code segments, such as data manipulation, looping structures, or adding complexity to simulations. Some students offloaded larger tasks like code translation, using GenAI-generated code as a starting point despite recognizing initial mistakes.
Optimization: GenAI helped students optimize code for speed, introducing them to advanced techniques like vectorization, which they might not have encountered otherwise. This was seen as a learning opportunity, even if GenAI performed most of the work.
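A minimal sketch of what such a vectorization suggestion might look like (the physics setup and all names are hypothetical illustrations, not code from the study): replacing a per-charge Python loop with a single NumPy expression when summing the electric potential from many point charges.

```python
import numpy as np

k = 8.99e9                                       # Coulomb constant (N m^2 / C^2)
rng = np.random.default_rng(0)
charges = rng.uniform(-1e-9, 1e-9, 10_000)       # point charges (C)
positions = rng.uniform(0.1, 1.0, (10_000, 3))   # positions (m)

# Loop version: one Python iteration per charge.
V_loop = 0.0
for qi, ri in zip(charges, positions):
    V_loop += k * qi / np.linalg.norm(ri)

# Vectorized version: the whole sum as one NumPy expression.
r = np.linalg.norm(positions, axis=1)            # distance of every charge
V_vec = k * np.sum(charges / r)                  # potential at the origin

print(V_loop, V_vec)
```

Both versions compute the same sum; the vectorized form pushes the loop into compiled NumPy code, which is the speedup students reported genAI introducing them to.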
Theory and Analysis: Students generally remained skeptical of GenAI for physics theory and equations, perceiving it as less accurate than for programming. They valued working through analytical physics problems independently but found GenAI useful for explaining theories or manipulating equations that could be easily double-checked.
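As a sketch of the kind of manipulation students felt safe delegating (the thin-lens example and its numbers are a hypothetical illustration, not from the interviews): a rearranged equation is easy to double-check by substituting values back into the original relation.

```python
# Suppose genAI rearranged the thin-lens equation 1/f = 1/d_o + 1/d_i
# into d_i = f * d_o / (d_o - f). Plugging numbers back in verifies it.
f, d_o = 0.10, 0.30            # focal length and object distance (m)
d_i = f * d_o / (d_o - f)      # the rearranged form

# Double-check: the original relation should hold to rounding error.
print(1 / f, 1 / d_o + 1 / d_i)
```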
Critique Practices with GenAI
Debugging: This was the most common use case for GenAI, with roughly half of students directly pasting code into the chatbot for fixes. While GenAI often provided quick solutions, it sometimes introduced new bugs or failed to address underlying physics errors, leading to frustration. Productive users focused on small, concrete issues.
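A hedged sketch of the "small, concrete issue" style of debugging (the bug and values are a hypothetical illustration): rather than pasting a whole simulation, a student isolates one suspicious line, here a floating-point division that silently drops a time step.

```python
t_end, dt = 0.3, 0.1

# Buggy: 0.3 / 0.1 evaluates to 2.999...96 in binary floating point,
# and int() truncates it to 2 instead of the intended 3 steps.
steps_buggy = int(t_end / dt)

# Fixed: rounding to the nearest integer recovers the intended count.
steps_fixed = round(t_end / dt)

print(steps_buggy, steps_fixed)
```

A question this narrow gives the chatbot (or a human helper) enough context to answer correctly, which matches the behavior of the productive users described above.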
Validation, Testing and Visualization: Students primarily validated their code by checking results against known values or intuition. One student formalized testing with GenAI's help. GenAI was frequently used to generate plots, saving students time on documentation lookup for specific styling arguments.
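The "check against known values" practice can be sketched as follows (a minimal hypothetical example, not a student's actual code): integrate a simple harmonic oscillator numerically and compare the measured period with the analytic result T = 2π√(m/k).

```python
import math

m, k = 1.0, 4.0                # hypothetical mass (kg) and spring constant (N/m)
dt = 1e-4
x, v, t = 1.0, 0.0, 0.0        # start at maximum displacement

while x > 0:                   # first zero crossing marks a quarter period
    v += -(k / m) * x * dt     # Euler-Cromer: update velocity,
    x += v * dt                # then position with the new velocity
    t += dt

T_sim = 4 * t                  # four quarter periods
T_exact = 2 * math.pi * math.sqrt(m / k)
print(T_sim, T_exact)
```

Agreement to within a fraction of a percent gives the student evidence the integrator is behaving, independent of whether the code was hand-written or AI-generated.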
Inspecting AI-generated Code: All students inspected GenAI-generated code, but thoroughness varied. A common approach was to check if the code "looked right" based on their existing knowledge. Some used GenAI to explain parts of the code they didn't understand, while a few admitted to using complex AI-generated code they couldn't fully grasp.
Leveraging GenAI as a Resource
Students sometimes preferred GenAI over instructors/TAs for coding help due to its constant availability and perceived efficiency, especially for simpler questions. They found GenAI superior to web searches for tailored explanations of complex concepts or model selection, particularly when problems extended beyond course material.
However, students consistently verified GenAI's factual information with trusted sources like instructors or textbooks, as they had low confidence in its accuracy for theoretical physics. Some preferred traditional textbooks for learning new concepts to avoid over-complication from GenAI.
GenAI's Influence on Products
The computational essay, including its text, code, and figures, was the main product. Most students did not use GenAI for writing the main essay text, but some used it for grammar checks or LaTeX equation formatting. Only one student generated an entire essay with AI, which resulted in errors and lacked cohesion.
GenAI was more commonly used for producing or refining code, often by directly copying or adapting AI-generated suggestions. It was also used to add comments to existing code and was particularly suitable for generating graphs and even illustrative images for essays. Students also produced AI chat logs as a resource, managing chat contexts for optimal GenAI performance.
Shifting Student Objectives with GenAI
Epistemic Objectives: Students prioritized understanding the material and were mindful of GenAI's potential to hinder learning. They moderated its use, focusing on quality assurance and using it to explain concepts rather than directly solve problems. Some preferred self-solving to avoid reliance on AI.
Pragmatic Objectives: Time pressure was a significant factor for about half of the students, leading them to use GenAI for efficiency, particularly in coding. Some students who offloaded work due to time constraints felt they lost learning opportunities. One student even increased project scope to compensate for perceived learning loss.
GenAI's Impact on Debugging
63% of students leveraged GenAI for debugging tasks, significantly reducing problem-solving time.
Process flow: the GenAI-integrated modeling cycle.
| Productive GenAI Use | Unproductive GenAI Use |
|---|---|
| Focusing on small, concrete issues when debugging | Pasting entire programs and accepting fixes that introduce new bugs |
| Inspecting and verifying AI-generated code before using it | Relying on complex AI-generated code without understanding it |
| Using GenAI to explain concepts or manipulations that are easy to double-check | Offloading planning and theory, risking incorrect assumptions and wasted time |
Case Study: The Pitfalls of Over-Reliance
One student, Ivar, used GenAI to rapidly complicate a basic problem, reaching a point where he "couldn't even explain the code that I had now created." He admitted, "I fought with the equation way too late," highlighting how offloading too much to GenAI without foundational understanding led to a broken process and unmanageable code.
This illustrates a critical challenge: while GenAI can accelerate progress, it can also bypass essential learning steps if users don't maintain control and understanding of the underlying physics and programming principles.