As an important form of integrative writing assessment, summary writing has been employed in many large-scale English tests at home and abroad, for instance, the TOEFL iBT, TEM4/8 (since 2016), and the NMET Shanghai version (since 2017). Few endeavors have been made to construct rating scales for summary writing based on test-takers' actual performance, let alone to validate such rating scales. Therefore, the present research, through a data-based approach, aims to validate a newly constructed rating scale for summary writing (Wu, 2018, forthcoming) from multiple perspectives. Seven college English teachers and 63 sophomores majoring in English were invited to participate in the research, the former as raters and the latter as test-takers who completed a summary writing task (length of source text: 501 words; length of summary: no more than 100 words). After rater training, all seven raters rated the 63 summary scripts independently. Many-facet Rasch measurement (MFRM) analysis of the rating results revealed that the rating scale was satisfactory in its discriminative power and could well guarantee intra-rater consistency. However, raters found it hard to make objective judgements and were prone to random use of scores, because each level on the dimensions of the rating scale covered a range of scores. There also existed some biased rater-student and rater-dimension interactions. Think-aloud protocols (TAPs) were employed to probe into the rating procedures of all seven raters, as well as the difficulties and confusion they experienced during the rating work. Semi-structured interviews were then employed to elicit raters' comments on and perceptions of the rating scale. On the whole, raters held that the rating scale was clear in diction, convenient to use, and suitable for actual application. Yet they also pointed out some of its deficiencies, which were, to a large extent, consistent with the findings of the MFRM analysis and the TAPs.
Recommendations were then put forward, based on a comprehensive and integrative consideration of the validation results, for further modification of the rating scale for summary writing. It is expected that the present research could serve as a reference for validating rating scales for summary writing at other proficiency levels, or even rating scales for other English writing assessment tasks.