| Owner: | Jiayi Zhang |
| Owner Email: | joycez@upenn.edu |
| Paper Title: | Examining the Robustness of Large Language Models Across Language Complexity |
| Session Title: | AI in Writing and Language Learning: Teachers and Learners |
| Paper Type: | Place-Based Paper |
| Presentation Date: | 4/26/2025 |
| Presentation Location: | Denver, CO |
| Descriptors: | Artificial Intelligence, Equity, Student Behavior/Attitude |
| Methodology: | Quantitative |
| Author(s): | Jiayi Zhang, University of Pennsylvania |
| Unit: | SIG-Technology, Instruction, Cognition & Learning |
| Abstract: | Large language models (LLMs) are increasingly used in education to analyze and assess students’ learning through textual artifacts. However, the robustness of these models with respect to language complexity remains largely unexamined, leaving open questions such as whether these models perform better on simpler or more complex language. Recent studies show that language complexity can indeed affect LLM performance, rendering models less accurate on ungrammatical or uncommon language. Given students’ varied language backgrounds and writing skills, it is critical to assess the robustness of these models to ensure consistent performance. This study examines LLM performance in detecting self-regulated learning in math problem-solving, comparing model performance on texts with high and low language complexity based on three linguistic measures. |
| DOI: | https://doi.org/10.3102/2185358 |