“Can A Language Model Represent Math Strategies?”: Learning Math Strategies from Big Data using BERT

AI models have shown a remarkable ability to perform representation learning from large-scale data. In particular, the emergence of Large Language Models (LLMs) attests to the capability of AI models to learn complex hidden structures in a bottom-up manner without requiring extensive human expertise. In this paper, we leverage these models to learn math learning strategies at scale. Specifically, we use student interaction data from the MATHia Intelligent Tutoring System to learn strategies based on the sequences of actions students perform. To do this, we develop an AI model based on BERT (Bidirectional Encoder Representations from Transformers) that has two main components. First, we pre-train BERT using Masked Language Modeling to learn embeddings for strategies; these embeddings represent strategies in vector form while preserving their semantics. Next, we fine-tune the model to predict whether students are likely to apply a correct strategy when solving a novel problem. Using a large dataset collected from 655 schools, we demonstrate that a model pre-trained on strategies from a sample of schools can be fine-tuned with a small number of examples to make accurate predictions on student data collected from other schools.
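To make the two-stage setup concrete, the sketch below illustrates one way such a pipeline could be wired up with the Hugging Face transformers library: masked-language-model pre-training over sequences of discrete tutor actions, followed by fine-tuning the same encoder for a binary "correct strategy" prediction. The action vocabulary, example sequences, and hyperparameters are illustrative assumptions only and do not reproduce the authors' implementation or the MATHia data format.

```python
# Hypothetical sketch of MLM pre-training + fine-tuning on action sequences.
# Vocabulary, sequences, and model sizes are assumptions, not the paper's.
import torch
from transformers import BertConfig, BertForMaskedLM, BertForSequenceClassification

# Treat each strategy as a sequence of discrete tutor actions (assumed names).
vocab = {"[PAD]": 0, "[MASK]": 1, "read-problem": 2, "select-tool": 3,
         "enter-equation": 4, "simplify": 5, "check-answer": 6}

def encode(actions):
    """Map a list of action names to a batch of token ids."""
    return torch.tensor([[vocab[a] for a in actions]])

config = BertConfig(
    vocab_size=len(vocab),
    hidden_size=128,            # small illustrative sizes
    num_hidden_layers=2,
    num_attention_heads=2,
    pad_token_id=vocab["[PAD]"],
)

# Stage 1: Masked Language Modeling pre-training. Hide one action and train
# the encoder to recover it, so it learns embeddings over strategies.
mlm = BertForMaskedLM(config)
ids = encode(["read-problem", "select-tool", "enter-equation",
              "simplify", "check-answer"])
labels = ids.clone()
masked = ids.clone()
masked[0, 2] = vocab["[MASK]"]             # hide "enter-equation"
labels[masked != vocab["[MASK]"]] = -100   # score only the masked position
mlm(input_ids=masked, labels=labels).loss.backward()  # one pre-training step

# Stage 2: fine-tune the pre-trained encoder to predict whether a student
# applies a correct strategy on a new problem (binary label).
clf = BertForSequenceClassification(config)
# Reuse the pre-trained encoder weights; the classifier's pooler is new,
# so the load is non-strict.
clf.bert.load_state_dict(mlm.bert.state_dict(), strict=False)
out = clf(input_ids=encode(["select-tool", "check-answer"]),
          labels=torch.tensor([0]))
out.loss.backward()                        # one illustrative fine-tuning step
```

In a full pipeline, stage 1 would run over many unlabeled action sequences (typically masking a random fraction of tokens per batch), and stage 2 would use a small labeled sample from the target schools, matching the few-shot transfer setting described in the abstract.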

