Synopsis
This paper investigates how 40 prospective German teachers assess an argumentative text produced by ChatGPT 3.5, compared with a human-written text, using the SRDP assessment grid for the Austrian Matura. To this end, the public and scientific discourse on current developments in AI research is first outlined, and the results of previous studies on ChatGPT's writing abilities are presented. On this basis, the two texts are analysed qualitatively. The quantitative results of the pre-service teachers' text assessments are then presented and discussed. The study shows that none of the pre-service teachers noticed that one of the texts was generated by ChatGPT. Moreover, it indicates that ChatGPT is able to produce a coherent, elaborate, and linguistically largely correct argumentative text, which was assessed on average with the overall grade Good. At the same time, when paraphrasing the source text, the AI copies entire passages almost word-for-word and shows a syntactically and lexically highly repetitive use of language.

