Enterprise AI Analysis: Automatically Inferring Teachers' Geometric Content Knowledge: A Skills-Based Approach

Educational AI

Automatically Inferring Teachers' Geometric Content Knowledge: A Skills-Based Approach

Assessing teachers' geometric content knowledge is essential for geometry instructional quality and student learning, but difficult to scale. The Van Hiele model characterizes geometric reasoning through five hierarchical levels. Traditional Van Hiele assessment relies on manual expert analysis of open-ended responses. This process is time-consuming, costly, and prevents large-scale evaluation.

This study develops an automated approach for diagnosing teachers' Van Hiele reasoning levels using large language models grounded in educational theory. Our central hypothesis is that integrating explicit skills information significantly improves Van Hiele classification. In collaboration with mathematics education researchers, we built a structured skills dictionary decomposing the Van Hiele levels into 33 fine-grained reasoning skills. Through a custom web platform, 31 pre-service teachers solved geometry problems, yielding 226 responses. Expert researchers then annotated each response with its Van Hiele level and demonstrated skills from the dictionary. Using this annotated dataset, we implemented two classification approaches: (1) retrieval-augmented generation (RAG) and (2) multi-task learning (MTL). Each approach compared a skills-aware variant incorporating the skills dictionary against a baseline without skills information.

Results showed that for both methods, skills-aware variants significantly outperformed baselines across multiple evaluation metrics. This work provides the first automated approach for Van Hiele level classification from open-ended responses. It offers a scalable, theory-grounded method for assessing teachers' geometric reasoning that can enable large-scale evaluation and support adaptive, personalized teacher learning systems.

Executive Impact

Leverage cutting-edge AI to enhance teacher training and assessment, driving significant improvements in educational outcomes and operational efficiency.


Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Van Hiele Model of Geometric Reasoning
Skills-Aware Classification Approaches
Data Collection and Annotation

Van Hiele Model of Geometric Reasoning

The Van Hiele model describes five hierarchical levels of geometric reasoning, from visualization to formal rigor. Our analysis shows how LLMs can effectively classify teachers' Van Hiele levels from their open-ended responses, overcoming traditional manual assessment limitations.
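The five levels form a strict hierarchy: a learner at a given level is assumed to have mastered all lower ones. A minimal sketch of that hierarchy, using the commonly cited level names (Visualization, Analysis, Abstraction/Informal Deduction, Deduction, Rigor) under a 1-to-5 numbering:

```python
from enum import IntEnum

class VanHieleLevel(IntEnum):
    """The five hierarchical Van Hiele levels (common 1-5 numbering;
    some literature numbers them 0-4)."""
    VISUALIZATION = 1  # reasoning from a shape's overall appearance
    ANALYSIS = 2       # reasoning from a shape's properties
    ABSTRACTION = 3    # informal deduction; relating properties and definitions
    DEDUCTION = 4      # formal proof within an axiomatic system
    RIGOR = 5          # comparing and analyzing axiomatic systems

def is_prerequisite(lower: VanHieleLevel, higher: VanHieleLevel) -> bool:
    """Hierarchy: each level presupposes mastery of all lower levels."""
    return lower < higher

print(is_prerequisite(VanHieleLevel.ANALYSIS, VanHieleLevel.DEDUCTION))  # → True
```

Because the levels are ordinal, a classifier's errors can be graded by distance (e.g. MAE), not just counted as right or wrong.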

Skills-Aware Classification Approaches

We developed two novel approaches: Retrieval-Augmented Generation (RAG) and Multi-Task Learning (MTL). Both integrate explicit skills information from our structured dictionary, significantly outperforming baselines without such data.
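To make the skills-aware RAG variant concrete, here is an illustrative sketch of how a prompt might be assembled from the skills dictionary plus retrieved annotated examples. All function names, field names, and the dictionary excerpt are hypothetical; the paper's actual prompt and retrieval setup are not reproduced here.

```python
# Hypothetical excerpt of the 33-skill dictionary (names invented for illustration).
SKILLS_DICTIONARY = {
    "identify_shape_by_appearance": "Level 1: names a shape from its overall look",
    "list_defining_properties": "Level 2: enumerates properties of a figure",
    "relate_definitions": "Level 3: connects definitions via informal deduction",
}

def build_prompt(question: str, response: str, retrieved_examples: list) -> str:
    """Assemble an LLM prompt from the skills dictionary plus retrieved
    annotated examples (the skills-aware RAG variant)."""
    skills_block = "\n".join(
        f"- {name}: {desc}" for name, desc in SKILLS_DICTIONARY.items()
    )
    examples_block = "\n\n".join(
        f"Q: {ex['question']}\nA: {ex['response']}\n"
        f"Skills: {', '.join(ex['skills'])}\nVan Hiele level: {ex['level']}"
        for ex in retrieved_examples
    )
    return (
        "Classify the Van Hiele level (1-5) of the teacher's response.\n\n"
        f"Skills dictionary:\n{skills_block}\n\n"
        f"Annotated examples:\n{examples_block}\n\n"
        f"Q: {question}\nA: {response}\nVan Hiele level:"
    )
```

The baseline variant would omit the `Skills dictionary` and per-example `Skills` lines, which is the only difference the skills-aware comparison isolates.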

Data Collection and Annotation

A dataset of 226 question-response pairs from 31 pre-service teachers was collected via a custom web platform. Expert researchers annotated each response with Van Hiele levels and demonstrated skills, achieving high inter-rater reliability (Cohen's κ = 0.84).

0.84 Cohen's Kappa (Inter-Rater Reliability)

Our annotation protocol achieved strong inter-rater reliability, indicating robust and objective ground-truth labeling of Van Hiele levels by expert mathematics educators.
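Cohen's kappa corrects raw agreement for the agreement expected by chance. A self-contained sketch of the computation (values below are a toy example, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: observed agreement between two raters,
    corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

print(cohens_kappa([1, 1, 2, 2], [1, 1, 2, 3]))  # → 0.6
```

By common rules of thumb, κ = 0.84 falls in the "almost perfect" agreement band (above 0.8), which supports treating the expert labels as ground truth.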

Enterprise Process Flow

Design Resources (Question Bank, Skills Dictionary)
Studies With Pre-Service Teachers
Data Collection & Annotation (Question, Answer, Skills, Van Hiele Level)
Van Hiele Classification Models (RAG, MTL via supervised fine-tuning)
Results by method (skills-aware vs. baseline):

RAG
  • F1-Macro: 0.695 vs. 0.624 (statistically significant improvement, p = 0.0437)
  • MAE: 0.376 vs. 0.470 (statistically significant reduction, p = 0.0065)
  • Key advantage: leverages annotated examples and the skills dictionary as LLM context.

MTL
  • F1-Macro: 0.725 vs. 0.646 (statistically significant improvement, p = 0.043)
  • MAE: 0.403 vs. 0.523 (statistically significant reduction, p = 0.0185)
  • Key advantage: integrates an auxiliary skills-prediction task and attention for richer representations.
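The two reported metrics complement each other: macro-F1 weights all five levels equally regardless of class imbalance, while MAE exploits the ordinal structure of the levels (predicting level 2 for a level-3 response is a smaller error than predicting level 5). A minimal stdlib-only sketch of both, on toy data:

```python
def f1_macro(y_true: list, y_pred: list) -> float:
    """Macro-averaged F1: per-class F1, averaged with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for lbl in labels:
        tp = sum(t == lbl and p == lbl for t, p in zip(y_true, y_pred))
        fp = sum(t != lbl and p == lbl for t, p in zip(y_true, y_pred))
        fn = sum(t == lbl and p != lbl for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

def mae(y_true: list, y_pred: list) -> float:
    """Mean absolute error; meaningful here because levels are ordinal."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: one off-by-one error on four responses.
y_true, y_pred = [1, 2, 2, 3], [1, 2, 3, 3]
print(round(f1_macro(y_true, y_pred), 3), mae(y_true, y_pred))  # → 0.778 0.25
```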

Scalable Assessment for Teacher Development

A key challenge in geometry education research is the scalability of teacher assessment. Traditional methods rely on manual expert analysis, which is time-consuming, costly, and prevents large-scale evaluation. Our automated approach, leveraging LLMs and skills-aware modeling, directly addresses this. By inferring Van Hiele levels from open-ended responses, it enables researchers to study geometric reasoning development at scale across large teacher cohorts. This not only supports research but also facilitates adaptive professional development systems, allowing for personalized learning paths based on teachers' current Van Hiele levels and skill profiles.

Advanced ROI Calculator

Estimate the potential cost savings and efficiency gains for your organization by automating teacher assessment and professional development processes using AI.

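The savings estimate reduces to a simple calculation: expert review hours eliminated, multiplied by the expert's rate. A sketch of that arithmetic; every parameter value below is an illustrative assumption, not a figure from the study.

```python
def assessment_roi(responses_per_year: int,
                   minutes_per_manual_review: float,
                   expert_hourly_rate: float,
                   automation_fraction: float) -> tuple:
    """Return (annual_savings_usd, hours_reclaimed) from automating
    a given fraction of manual assessment work."""
    manual_hours = responses_per_year * minutes_per_manual_review / 60
    hours_reclaimed = manual_hours * automation_fraction
    return hours_reclaimed * expert_hourly_rate, hours_reclaimed

savings, hours = assessment_roi(
    responses_per_year=5000,        # assumption
    minutes_per_manual_review=15,   # assumption
    expert_hourly_rate=60.0,        # assumption
    automation_fraction=0.8,        # assumption: share of reviews automated
)
print(f"${savings:,.0f} saved, {hours:,.0f} expert hours reclaimed")
# → $60,000 saved, 1,000 expert hours reclaimed
```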

Implementation Timeline

Our structured approach ensures a smooth and effective integration of AI into your teacher assessment and professional development workflows.

Phase 1: Skills Dictionary Customization

Collaborate with your subject matter experts to refine and customize the geometry skills dictionary for your specific curriculum and assessment goals. This ensures the AI model aligns perfectly with your pedagogical framework.

Phase 2: Initial Data Annotation & Model Training

Annotate a subset of your existing teacher responses with Van Hiele levels and associated skills. This dataset will be used to fine-tune our LLM, establishing a robust baseline for automated classification tailored to your context.

Phase 3: Integration & Pilot Deployment

Integrate the AI assessment engine into your existing teacher professional development platform. Conduct a pilot program with a small cohort of teachers, gathering feedback and making iterative improvements.

Phase 4: Scalable Rollout & Continuous Improvement

Expand the AI assessment to larger teacher cohorts. Implement a feedback loop for continuous model improvement, ensuring high accuracy and relevance as your pedagogical needs evolve.

Ready to revolutionize teacher assessment and professional development?

Ready to Get Started? Book your free consultation and let's discuss your AI strategy.


AI Consultation Booking