Yuanyue Hao / University of Oxford; New Oriental Education & Technology Group
Unlike traditional writing tasks, integrated tasks call for the use of two or more language modalities for task completion and are thus likely to tap into multiple language abilities. Previous validation studies have been primarily concerned with strategy use, source text use, and the lexico-grammatical features of the written response, whereas few have examined features at the discourse level. Using a multidimensional approach to register variation, this paper investigated the discoursal features of written texts produced in a listening-to-write task in a university-based English proficiency test.
The integrated writing task in focus requires test takers to listen to a recorded academic lecture on a common topic and then write an essay consisting of both a summary of the lecture's main ideas (the "summary" part) and a critical comment on the opinions expressed in the lecture (the "comment" part). Two hundred writing samples were randomly selected from a live test administration in 2015. The samples were transcribed and analyzed with the Multidimensional Analysis Tagger (MAT), a computer program developed for genre analysis of texts. Based on the tagging of linguistic features, MAT generated scores on six textual dimensions, and each text was then assigned to one of eight text types according to its dimension scores.
Results from a chi-square test indicated that the "summary" and "comment" writings elicited significantly different text types. Paired-samples t-tests revealed significant differences on four of the six dimension scores, suggesting that summary and comment writings differed in informational density, persuasive expression, abstraction, and online informational elaboration. Regression analysis indicated that discoursal features predicted candidates' writing performance fairly well beyond traditional lexical and syntactic indices. The findings suggest that different degrees of discourse synthesis were elicited in the two types of writing and across proficiency levels. This study has implications for the validation of integrated writing assessment and for rating scale development.
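As an illustration only, the two comparisons reported above (a chi-square test on text-type distributions and paired-samples t-tests on dimension scores) could be run as follows. All data below are synthetic placeholders, not the study's actual scores; the counts, means, and variable names are assumptions for demonstration.

```python
# Illustrative sketch of the reported analyses; all values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # number of writing samples, as in the study

# Hypothetical Dimension 1 ("informational density") scores for the
# summary and comment parts written by the same 200 candidates.
summary_d1 = rng.normal(loc=5.0, scale=2.0, size=n)
comment_d1 = rng.normal(loc=2.0, scale=2.0, size=n)

# Paired-samples t-test: both parts come from the same candidate.
t_stat, p_val = stats.ttest_rel(summary_d1, comment_d1)

# Chi-square test of independence on a hypothetical contingency table:
# rows = task part (summary, comment), columns = three of the text types.
contingency = np.array([[60, 90, 50],
                        [30, 70, 100]])
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

print(f"paired t = {t_stat:.2f}, p = {p_val:.4g}")
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {chi_p:.4g}")
```

A regression step would then add the six dimension scores as predictors of essay score alongside lexical and syntactic indices, comparing model fit with and without the discoursal predictors.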