PLoS ONE (Jan 2025)
A comparative analysis of syntactic complexity in argumentative essays from a rhetorical perspective: ChatGPT vs. English native speakers.
Abstract
This study investigates the syntactic complexity of argumentative essays generated by ChatGPT in comparison to those written by native speakers. By examining cross-rhetorical-stage variation in syntactic complexity, we explore how ChatGPT's writing aligns with or diverges from human argumentative writing. The results reveal that ChatGPT and native speakers exhibit similar patterns in mean length of sentence in the thesis stage, and in mean length of T-unit and complex nominals per T-unit in the conclusion stage. However, ChatGPT showed a preference for coordination structures across all stages, relying more on parallel constructions, whereas native speakers used subordination structures and verb phrases more frequently across all stages. Additionally, ChatGPT's syntactic complexity was characterized by lower variability across multiple measures, indicating a more uniform and formulaic output. These findings underscore the differences between ChatGPT and native speakers in syntactic complexity and rhetorical functions in argumentative essays, thereby contributing to our understanding of ChatGPT's argumentative writing performance and providing valuable insights for integrating ChatGPT into writing instruction.