Abstract
A growing body of research has examined the role of self-assessment (SA) of language abilities; however, SA as a metacognitive tool has not been investigated extensively with adolescent Chinese learners of English. The present study explores self-ratings of reading and writing abilities by adolescent Chinese learners and the relationship between these self-ratings and subsequent reading comprehension and writing production. A total of 106 learners (ages 12 to 14) completed a Reading Comprehension Test (captured by three tasks: Free Recall, Sentence Completion, and Multiple-Choice Questions), a Writing Task (a picture-based writing prompt), and criterion-referenced SA items. Correlational analysis revealed that SA of reading ability was significantly correlated with subsequent reading comprehension, and SA of writing ability was significantly correlated with subsequent writing production. The study concluded that adolescent Chinese learners of English can accurately self-assess their strengths and weaknesses in reading and writing. Self-assessment could therefore serve as a useful classroom tool for identifying strengths and weaknesses in English language learning among this population.
Keywords: Self-Assessment (SA); English reading and writing; Chinese adolescents
“I Know English”: Self-Assessment (SA) of Foreign Language (FL) Reading and Writing Abilities among Adolescent Chinese Learners of English
1. Introduction
More and more countries have started to teach young and adolescent learners foreign languages at school (Rea-Dickins, 2000; Zangl, 2000). In China, English has been a compulsory subject at all levels of education (Jin, Wu, Alderson & Song, 2017). However, becoming literate in a foreign language is not easy for young and adolescent learners. Reading and writing in a foreign language involve heavy cognitive and social processes (Koda, 2005; Grabe, 2009; Silva & Matsuda, 2002; Alderson, 2006), and the challenge is even greater when the target language has an orthography different from the learners’ native language (Huss, 1995). Considering these challenges, as well as characteristics of young learners such as sensitivity to “failure” (Hasselgren, 2000), there is a consensus that assessment should incorporate a variety of engaging tasks (Hasselgren, 2000; Rea-Dickins & Gardner, 2000) and use multiple procedures to capture different aspects of language learning (Rea-Dickins, 2000). Assessment should also serve as a tool to monitor the language learning process and promote positive attitudes towards language learning (Weigle, 2002).
Among the variety of assessments available, alternative evaluation accompanied by a component of self-assessment has been highlighted for its value as a metacognitive tool. Self-assessment (SA), defined as “procedures by which learners themselves evaluate their language skills and knowledge” (Bailey, 1998, p. 227), is an “internal” assessment from learners’ own perspectives (Oscarson, 1989): learners rate what they “can do” in the target language and identify their own strengths and weaknesses. Research has found that SA engages learners in making decisions about their language ability and helps them set learning goals and objectives (Chapelle & Brindley, 2010; Chen, 2008). In practice, SA is a critical component of alternative assessment that has been adopted, for instance, by the Common European Framework of Reference (CEFR), the European Language Portfolio (ELP), and the Bergen “Can-Do” project (see details in Hasselgren, 2000) to capture and understand language performance.
To date, however, since the implementation of English learning in secondary schools in China in the late 20th century (Wang & Lam, 2009; Hu, 2002), Chinese students have been evaluated by numerous large-scale standardized exams, which easily leave unsuccessful learners unenthusiastic and with negative English learning experiences (Carless & Wong, 2000, as cited in McKay, 2006). Students are not always provided with opportunities to independently self-assess their own strengths and weaknesses in English learning. One reason may be a concern about the power relationship between teachers and students: teachers may view SA as a violation of their authority (Towler & Broadfoot, 1992). Another reason lies in the limited empirical research on SA with adolescent Chinese learners of English; without adequate research justification, teachers cast doubt on the rationale for implementing SA in classrooms. To shed light on the use of SA in the context of English education at secondary schools in China, the present study was guided by two overarching research questions. First, how do adolescent Chinese learners of English self-assess, or self-perceive, their English reading and writing abilities? Second, what is the relationship between SA ratings of language abilities and subsequent English reading comprehension and writing production? In other words, can adolescent Chinese learners of English accurately self-assess their English reading and writing abilities? The results will help uncover the self-perceived strengths and weaknesses in English learning of adolescent Chinese learners, and may provide justification for implementing SA as a metacognitive tool among adolescents in secondary schools in China.
2. Literature Review
2.1. FL reading and writing among young and adolescent language learners
Reading and writing in a foreign language are not easy for young learners, as both involve heavy cognitive processes and multifaceted social processes. Koda (2007) defines reading as “a product of a complex information-processing system” comprising three major components: decoding, text-information building, and reader-model construction. Decoding is readers’ extraction of information from the written text based on their linguistic knowledge. Text-information building is how readers organize the information they extract from the written text. Reader-model construction is readers’ synthesis and interpretation of the written text based on their background knowledge (Koda, 2007). With different background knowledge, different readers will have different understandings or interpretations of the same written text (Brantmeier, 2002; Koda, 2007; Grabe, 2009; Bernhardt, 2010). Alderson (2006) likewise specified that reading is a complex process affected both by text-level variables such as topic familiarity, genre, and text organization, and by variables beyond the text such as linguistic skills, learning motivation, affect, and learner characteristics. In short, reading is an interactive process in which readers themselves, the reading texts, and the language itself all play critical roles (Bernhardt, 2010). Such a process, integrating cognitive and social dimensions, challenges young learners, especially when they are still developing their first language literacy (McKay, 2006).
Similar to reading, the writing process interleaves cognitive, social, and cultural dimensions. Writing may be a social act when it is goal-directed and serves to communicate with a particular audience (Grabe & Kaplan, 1996). It is also a cultural phenomenon, as researchers have found that cultural norms influence variations in writing patterns (Grabe & Kaplan, 1996) and the coherence of text (Leki, 1992). The cognitive load of writing is heavy, as writers need to engage in the time-consuming processes of pre-writing, writing, revising, and editing (Weigle, 2002). In addition, the influence of L1 rhetorical knowledge complicates the process and increases the intricacies of L2/FL writing: L2 writers’ knowledge of appropriate genres is constructed differently from L1 genre knowledge in various respects, such as communicative purposes, register use, and intertextuality (Silva & Matsuda, 2002). Young writers of a foreign language must therefore make great efforts to learn this complex process while at the same time “[dealing] with the cognitive demands of early literacy, either in their first or the target language, or in both” (Weigle, 2002, p. 245).
2.2. Assessing young and adolescent language learners
Considering the challenges and complexities of FL reading and writing that young and adolescent learners face, assessment for this group should not focus solely on learning outcomes; instead, it needs to be a tool to develop their language ability, monitor their learning process, and promote positive emotion and motivation to learn the target language (Weigle, 2002). Multiple assessment formats need to be utilized to capture the full range of young learners’ language performance in diverse contexts (Rea-Dickins & Gardner, 2000) and to serve as a tool for building a language profile that supports a better understanding of language performance (Rea-Dickins, 2000). Assessment tasks need to be varied and engaging so as to highlight what learners can do in the target language, which is especially important given young learners’ characteristics such as short attention spans and sensitivity to “failure” (Hasselgren, 2000). There is a consensus in the research that assessment for young learners needs to recognize their cognitive development and take into consideration their motivation, vulnerability, and interests.
In the context of English teaching and learning in China, Chinese students are evaluated by numerous large-scale standardized exams developed by local (provincial or municipal) education authorities for different purposes (Jin, Wu, Alderson & Song, 2017). However, many studies have pointed to the inappropriateness of using high-stakes standardized exams with young and adolescent students (e.g., Chik & Besser, 2011; McKay, 2006; Haggerty & Fox, 2015). Compared with adults, young and adolescent students are more vulnerable to assessment (Chik & Besser, 2011) and more likely to lose learning motivation (Haggerty & Fox, 2015). Learners who perform unsuccessfully easily become unenthusiastic and develop negative English learning experiences (Carless & Wong, 2000, as cited in McKay, 2006). In addition, from a pedagogical perspective, standardized exams for young learners are not enlightening for classroom teaching (McKay, 2006) and fail to bring the expected washback effects (Qi, 2004, 2005, 2007). In contrast, alternative assessment has great value in helping teachers gain an understanding of young learners’ progress in language learning, so that teachers can be responsive to individual learner differences, learner needs, and teaching pedagogies (Rea-Dickins & Gardner, 2000; Rea-Dickins, 2000).
2.3. Self-assessment (SA) of FL/L2 abilities
Self-assessment (SA) is defined as the “procedures by which learners themselves evaluate their language skills and knowledge” (Bailey, 1998, p. 227). Research has found that SA raises learners’ self-awareness of learning (Oscarson, 1989; Babaii, Taghaddomi & Pashmforoosh, 2016), guides learners to reflect on the learning process (McKay, 2006), promotes self-regulated learning and the autonomy to identify their own strengths and weaknesses (Butler & Lee, 2010; Oscarson, 1989; Dann, 2002), and engages learners in assessing themselves in an interactive, low-anxiety way (Bachman & Palmer, 1996). SA has also been found to be positively associated with learners’ self-confidence and performance (De Saint-Leger, 2009; Little, 2009; Butler & Lee, 2010). In addition, SA narrows the gap between learner perception and actual performance (Andrade & Valtcheva, 2009) and minimizes mismatches between learner assessment and teacher assessment (Babaii et al., 2016). SA also serves a number of additional purposes, such as expanding the range of assessment (Oscarson, 1989), supporting a learner-centered curriculum (Little, 2005), and fostering the perception of assessment as a “shared responsibility” between teachers and learners (Oscarson, 1989; Little, 2005).
Self-assessment (SA) of FL/L2 abilities has been explored across the four language skills: reading, writing, listening, and speaking. As prior research indicates, whether SA is an accurate predictor of language ability varies by the type of construct underlying the SA items (Brantmeier, Vanderplank & Strube, 2012; Ross, 1998), L2 proficiency level (Sahragard & Mallahi, 2014; Heilenman, 1990), the specific language skill (Sahragard & Mallahi, 2014; Wan-a-rom, 2010; Brantmeier, 2005; Matsuno, 2009), and task type (Brantmeier et al., 2012; Butler & Lee, 2006). Despite these moderating variables, Ross’s (1998) meta-analysis indicated that the correlation coefficients between SA scores and objective performance ranged from .52 to .65 across the four skills. Ross (1998) also found that, in general, the correlation between SA and the receptive skills (reading and listening) was stronger than the correlation between SA and the productive skills (writing and speaking). Higher-proficiency learners assess their L2 abilities more accurately than lower-proficiency learners (Alderson, 2006), which is echoed by Brantmeier et al.’s (2012) finding that there is a proficiency threshold beyond which learners can self-assess their language abilities more accurately. Table 1 presents selected studies that have examined SA of L2/FL reading and writing abilities.
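To make the kind of correlational evidence reviewed above concrete, the short sketch below illustrates how an association between SA ratings and objective test scores might be computed. It is purely illustrative: the variable names and data values are invented for this example and are not drawn from Ross (1998), the present study, or any study in Table 1.

# Illustrative sketch only: correlating hypothetical self-assessment (SA) ratings
# with hypothetical test scores, one pair of values per learner.
from scipy.stats import pearsonr, spearmanr

# Hypothetical SA ratings on a 1-5 "can-do" scale and reading test scores (0-100)
sa_reading = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4]
reading_scores = [62, 75, 48, 90, 66, 80, 51, 58, 88, 72]

# Pearson r captures the linear association; Spearman rho uses rank order,
# which may be preferable when the SA scale is treated as ordinal.
r, p_r = pearsonr(sa_reading, reading_scores)
rho, p_rho = spearmanr(sa_reading, reading_scores)

print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")

In a real analysis, a coefficient in the range Ross (1998) reports (roughly .52 to .65) would indicate a moderate-to-strong positive relationship between learners’ self-ratings and their measured performance.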
Table 1 Selected Studies on SA of L2/FL Reading and Writing Abilities (Modified from Brantmeier et al., 2012, p. 146)
Author(s)/Year | Participants | SA Skills | Findings |
Sahragard and Mallahi (2014) | N=30 |