Granthaalayah

The Impact of Eighth Graders’ Self-Reported Metacognitive Reading Strategies on 2019 NAEP Informational Reading Scores

 

Katie Menko 1, Mingyuan Zhang 1

 

1 College of Education and Human Services, Central Michigan University, USA

 


ABSTRACT

This study presented a secondary analysis of the National Assessment of Educational Progress (NAEP) dataset. It examined the impact of eighth graders’ self-reported metacognitive reading strategies on their 2019 NAEP informational reading scores.  A quantitative descriptive research design was utilized to analyze secondary data extracted from the 2019 NAEP dataset.  The findings are: (1) The average subscale score of students who feel that they definitely can recognize when they don’t understand something they are reading is significantly higher than those who reported a lower ability level. (2) The average subscale score of students who feel that they definitely can figure out the meaning of a word they don’t know by using other words in the text is significantly higher than those who reported a lower ability level. (3) The average subscale score of students who feel that they definitely can identify the main idea of a text is significantly higher than those who reported a lower ability level.  These findings indicate that metacognitive reading strategy instruction could be beneficial to students’ informational reading comprehension, and therefore, the need for further educator professional development on metacognitive reading instruction is warranted.

 

Received 21 December 2022

Accepted 23 January 2023

Published 06 February 2023

Corresponding Author

Mingyuan Zhang, zhang1m@cmich.edu

DOI 10.29121/granthaalayah.v11.i1.2023.4983   

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Copyright: © 2023 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.

With the license CC-BY, authors retain the copyright, allowing anyone to download, reuse, re-print, modify, distribute, and/or copy their contribution. The work must be properly attributed to its author.

 

Keywords: NAEP, National Data, Reading Comprehension, Informational Text, Metacognitive Reading Strategy

 

 

 


1. INTRODUCTION

Students are often immersed in nonfiction or informational text, both in formal educational experiences and informal life experiences. While traditional reading instruction focused mainly on literature, that focus has shifted to informational text over the past decade Duke (2004), McCown and Thomason (2014). This change can be attributed to multiple factors, including research and assessment data that evidenced students' difficulties with informational text across the nation and the practical realization that expository text skills are needed across various content areas McCown and Thomason (2014), Schugar and Dreher (2017). Additionally, developments in information technology have exposed students to more digital forms of informative text that they engage with on a regular basis McCown and Thomason (2014), Mullis et al. (2017). Finally, the introduction of the Common Core State Standards (CCSS) in 2010 also increased the need for K-12 classroom instruction with nonfiction texts across all content areas. As the importance of informative text in reading instruction has increased, evidence-based strategies to help students succeed with these forms of text are needed. Studies have demonstrated that the use of metacognitive strategies specifically can have a positive effect on students’ reading comprehension across a variety of contexts Baye et al. (2019), Camahalan (2006), Ghaith and El-Sanyoura (2019), Muhid et al. (2020). Students’ reported abilities with these metacognitive strategies may have an impact on their informational reading success.

Numerous standardized assessments measure students’ achievement with expository texts. At the national level, the U.S. National Assessment of Educational Progress (NAEP) examines student progress in this area. Unfortunately, the 2019 NAEP results demonstrated that eighth-graders’ average scale scores for informational reading were lower than in previous assessment years National Center for Educational Statistics (NCES). (2022a). Students need a variety of skills and strategies to succeed with expository texts Afflerbach and Cho (2009), Baye et al. (2019), Muhid et al. (2020). Metacognitive strategies incorporate multiple planning, organizing, monitoring, and self-reflection tools students can utilize to support their reading as well as their development of procedural knowledge and learning agency. NAEP also offers data concerning students’ self-reported use of these strategies for their general reading behaviours National Center for Educational Statistics (NCES). (2022a). The purpose of this study is to specifically explore the impact of students’ self-reported metacognitive reading strategies on the 2019 NAEP eighth grade informational reading subscale scores.

Informational reading comprehension has been demonstrated to be an essential skill for students across content areas and into various post-secondary learning contexts Duke (2004), McCown and Thomason (2014). However, various data suggests that K-12 students continue to lack the strategies and skills to comprehend informative texts Duke (2004), Mullis et al. (2017), United States Department of Education. (2019). Informational texts have different text structures and features than those of literary texts and, therefore, may require different strategies Shanahan and Shanahan (2015), Shanahan and Shanahan (2018). Students’ use of metacognitive strategies has been demonstrated to be beneficial for reading across multiple contexts at the secondary level. For instance, Ghaith and El-Sanyoura (2019) found that high school students had improved comprehension when utilizing problem-solving metacognitive strategies. Correspondingly, literature has also evidenced the benefits of metacognitive strategies for reading comprehension with secondary students who are English Language Learners and those who are differently abled Camahalan (2006), Muhid et al. (2020). These findings suggest the promising potential for metacognitive reading strategies to also have a positive effect on the comprehension of informative text specifically.

While research has revealed the affordances of metacognitive strategies for reading comprehension, little research has examined their effect on secondary students’ achievement with informational text. Additionally, while a few studies, such as one conducted by Schugar and Dreher (2017), have analyzed NAEP data from the informative reading scales of the national assessment, there is a dearth of literature available in this area as well, particularly for the scores of eighth graders who took the assessment in recent years. Research is needed to examine not only the informative reading scores of eighth graders from the most recent NAEP results (2019) but also to explore the potential impact that students’ self-reported use of metacognitive reading strategies might have on those scores.

Exploring the potential impact of metacognitive reading strategies on informational reading scores could provide valuable knowledge for administrators, instructional designers, and educators in literacy leadership positions as they develop informative reading instruction. A greater focus on teaching multiple planning, organizing, attentional, and reflective strategies could be integrated into curriculums for informational reading. Additionally, the findings will offer important information for secondary educators across all content areas who have taken on the role of informative literacy instructor over the past decade. The results could be significant particularly for science and social studies teachers at the secondary level who are integrating disciplinary literacy with their textual class resources.

Specifically, the present study will explore the following research questions:

1)     How does students’ self-reported ability to recognize when they do not understand something they are reading impact their informational reading achievement?

2)     How does students’ self-reported ability to figure out the meaning of a word they don’t know by using other words in the text impact their informational reading achievement?

3)     How does students’ self-reported ability to figure out the main idea of a text impact their informational reading achievement?

 

2. Methodology

Our theoretical framework for this research adopts a scientific inquiry-based approach. The framework was described in great detail in The Impact of Conversations on Fourth Grade Reading Performance - What NAEP Data Explorer Tells? Bond and Zhang (2017). Briefly, the research methods combined the inquiry process with scientific knowledge, reasoning, and critical thinking. We started with an extensive exploration of the dataset, which led to the design of the research questions. The research questions then guided us to mine the data in greater depth. The methods involved in this study include the use of data from the 2019 National Assessment of Educational Progress (NAEP). The following details the sampling strategies followed for the NAEP and information concerning the scale and variables selected for this study. Finally, the process utilized for the secondary analyses conducted by this study is also discussed.

 

2.1. Participants and Sampling

The data for this study was gathered as part of the 2019 NAEP reading assessment for eighth graders. NAEP was federally mandated in 1969 and is the largest ongoing assessment of children’s academic progress in the United States National Center for Educational Statistics (NCES). (2019), Schugar and Dreher (2017). It measures students’ achievement in the areas of reading, mathematics, civics, science, writing, US history, geography, and the arts Reilly et al. (2019), Schugar and Dreher (2017). The NAEP reading assessment is administered periodically and typically follows a schedule of every two to three years with a larger sample size covered for students in grades 4 and 8 Klecker (2014), Reilly et al. (2019). All U.S. states participated in the 2019 NAEP reading assessment. Its results can be used as a common metric for students’ academic progress over time National Center for Educational Statistics (NCES). (2022b).

 

2.2. NAEP Sampling and Data Collection

Data for the 2019 NAEP eighth-grade reading assessment was collected using a multistage sampling design that included both stratification and clustering strategies Klecker (2014), Schugar and Dreher (2017). Within this multistage model, schools were selected based on explicit and implicit strata qualifications, and then within the selected schools, students were clustered based on their enrolled school’s strata Schugar and Dreher (2017). The sampling frame also includes students with disabilities and English Language Learners with the intention of including at least 85% of students who are identified as students with disabilities or English Language Learners Reilly et al. (2019).

In 2019, the NAEP reading assessment was administered to approximately 143,100 eighth-grade students and consisted of a reading achievement test and student, teacher, and administrator questionnaires. The reading achievement test was divided into two subscales: (1) reading for literary experience and (2) reading for information Schugar and Dreher (2017). This study used data specifically from the reading for information subscale and student questionnaires. The informational reading subtest asked eighth graders to read grade-level appropriate text passages and then answer corresponding multiple-choice questions or provide brief or extended constructed responses NCES. (2022c), Schugar and Dreher (2017). This data helps to demonstrate students’ reading skills and progress.

 

2.3. Public School Selection in State Assessment Years

The data used for this study was taken specifically from the sample of national public schools that participated in the 2019 NAEP reading assessment. The selection of public school students to be administered this national assessment involves an intricate multistage sampling design Klecker (2014). The design follows these basic steps:

·        select public schools within the designated geographical areas.

·        select students in the relevant grades within the designated schools.

·        allocate selected students to assessment subjects National Center for Educational Statistics (NCES). (2019)
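The selection steps above can be sketched in miniature. This is a toy illustration with hypothetical districts, schools, and students, using simple random sampling only; the actual NAEP design additionally applies explicit and implicit stratification within the multistage model.

```python
import random

def multistage_sample(districts, n_schools, n_students):
    """Toy sketch of a multistage sampling design (hypothetical data):
    (1) select schools within each geographic area, then
    (2) select students within each selected school.
    Real NAEP selection also uses explicit/implicit strata; this sketch
    uses simple random sampling at both stages for illustration."""
    random.seed(0)  # reproducible illustration
    selected = {}
    for district, schools in districts.items():
        chosen_schools = random.sample(sorted(schools), min(n_schools, len(schools)))
        for school in chosen_schools:
            roster = schools[school]
            selected[(district, school)] = random.sample(roster, min(n_students, len(roster)))
    return selected

# Hypothetical frame: 2 districts, 3 schools each, 5 eighth graders per school
frame = {
    "district_a": {f"school_{i}": [f"a{i}s{j}" for j in range(5)] for i in range(3)},
    "district_b": {f"school_{i}": [f"b{i}s{j}" for j in range(5)] for i in range(3)},
}
sample = multistage_sample(frame, n_schools=2, n_students=3)
```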

 

3. Data Analysis

Data from the NAEP assessment and questionnaires is made publicly available for secondary analysis via the NAEP Data Explorer, which is hosted by the National Center for Education Statistics (NCES) https://www.nationsreportcard.gov/ndecore/xplore/NDE National Center for Educational Statistics (NCES). (2022d). This web-based system also provides a criteria report creator that composes descriptive tables and performs statistical tests for the user. The 2019 NAEP eighth-grade informational reading subscale scores and standard deviations were selected for the secondary analyses performed in this study. As noted, students also completed a questionnaire that focuses on in-school and out-of-school reading behaviours and factors. The data from this questionnaire provides variables that can be utilized for descriptive analysis through the NAEP Data Explorer. The selected variables for this study included “student factors” with a subcategory of “affective disposition.” Specifically, questionnaire items that were relevant to students’ metacognitive and critical reading skills were chosen. The three coded questions selected through Data Explorer National Center for Educational Statistics (NCES). (2022d) were:

·        Do you think you would be able to do each of the following when reading? Recognize when you don't understand something you are reading (student-reported) ID: R849605 (Options: I definitely can't, I probably can't, Maybe, I probably can, I definitely can)

·        Do you think you would be able to do each of the following when reading? Figure out the meaning of a word you don't know by using other words in the text (student-reported) ID: R849601 (Options: I definitely can't, I probably can't, Maybe, I probably can, I definitely can)

·        Do you think you would be able to do each of the following when reading? Figure out the main idea of a text (student-reported) ID: R849603 (Options: I definitely can't, I probably can't, Maybe, I probably can, I definitely can)

Descriptive tables were calculated and presented with the use of Data Explorer National Center for Educational Statistics (NCES). (2022d). Additionally, some of the descriptive tables were reformatted for presentation purposes; no changes were made to the data. Effect size statistics, commonly referred to as Cohen’s d Cohen (1988), are also presented for the differences between the subscale scores across the selected variables. Cohen’s d effect sizes Cohen (1988) were calculated by using an online effect size calculator found at http://www.uccs.edu/~lbecker/ Becker (2000). Cohen’s d is usually presented with t-test and ANOVA results and is frequently used for meta-analysis Mcleod (2019).
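As a minimal sketch of the effect size computation, Cohen's d can be derived from the reported group means and standard deviations alone, using a pooled-SD form that summary-statistics calculators accept; group sizes are treated as equal here because the Data Explorer does not report N.

```python
from math import sqrt

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d from summary statistics, using the pooled standard
    deviation for two groups of (assumed) equal size:
    sqrt((sd1^2 + sd2^2) / 2)."""
    pooled_sd = sqrt((sd1**2 + sd2**2) / 2)
    return (mean1 - mean2) / pooled_sd

# Example with subscale values reported in the results: "Maybe"
# (M=247, SD=37) vs. "I definitely can't" (M=227, SD=38)
d = cohens_d(247, 37, 227, 38)  # ≈ 0.53
```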

 

4. Results

This section presents the results of the impact of eighth-grade students’ answers to the NAEP questionnaire items concerning metacognitive and critical reading skills on their 2019 informational reading subscale scores. This will be demonstrated through tables displaying the average scores and standard deviations for students in each variable-based skill area. Additionally, the results of independent t-tests with an alpha level of 0.05 will be reported and analyzed in conjunction with previous research on this subject.

 

4.1. Data Analysis

The average nationwide scale score for all eighth-grade students on the 2019 NAEP Gain Information Reading Assessment subscale was 265 (scale range 0-500) with a standard deviation of 38. Throughout the results section, the differences in informational reading subscale scores by student questionnaire item and responses are shared. The sample size (N) is not present in the tables and data analysis below because the NAEP Data Explorer doesn’t include the total number of students (N) in the secondary data analyses it facilitates Klecker (2014). The results are presented in accordance with each research question below.
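Because the Data Explorer reports only group means and standard deviations, a t-statistic cannot be recomputed exactly without N. The sketch below shows the Welch form of the independent t-test from summary statistics, with a purely hypothetical sample size per response group for illustration only.

```python
from math import sqrt

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t-statistic from summary statistics:
    t = (m1 - m2) / sqrt(sd1^2/n1 + sd2^2/n2).
    The group sizes passed in below are hypothetical placeholders,
    since the NAEP Data Explorer does not report N."""
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Hypothetical n = 1,000 per response group; means/SDs as reported:
# "I definitely can" (277, 34) vs. "I definitely can't" (227, 38)
t = welch_t(277, 34, 1000, 227, 38, 1000)
```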

RQ #1 - "How does students’ self-reported ability to recognize when they do not understand something they are reading impact their informational reading achievement?"

 

Table 1 demonstrates eighth graders’ average informational reading subscale scores from the 2019 NAEP based on their self-reported identification with recognizing when they do not understand what they are reading.

Table 1 Average Scale Scores and Standard Deviations by “Recognize When You Do Not Understand What You Are Reading” Variable

Year | Jurisdiction | Recognize when you do not understand what you are reading | Average scale score | Standard deviation
2019 | National Public | I definitely can’t | 227 | 38
2019 | National Public | I probably can’t | 236 | 39
2019 | National Public | Maybe | 247 | 37
2019 | National Public | I probably can | 265 | 35
2019 | National Public | I definitely can | 277 | 34

 

Note. Some apparent differences between estimates may not be statistically significant. From U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), 2019 NAEP Reading Assessment.

The average subscale scores for the informational reading assessment for eighth-grade students in 2019 are demonstrated above based on how they identified with the questionnaire statement of “recognize when you do not understand something you are reading” National Center for Educational Statistics (NCES). (2019). The average score of students who reported “I definitely can’t” for recognizing when they do not understand something they’re reading was 227 (SD=38). For students who reported “I probably can’t,” their average subscale score was 236 (SD=39). The average subscale score of students who reported “maybe” was 247 (SD=37). Students who reported “I probably can” had an average subscale score of 265 (SD=35) on the informational reading assessment. Lastly, the average subscale score for students who reported “I definitely can” was 277 (SD=34).

Table 2 demonstrates the mean differences and independent t-test results for the frequency of reported student responses for “recognizing when you do not understand what you are reading.”

Table 2 Difference Between Average Scale Scores for Variable “Recognize When You Do Not Understand What You Are Reading”

 | I definitely can't | I probably can't | Maybe | I probably can
I probably can't | > Diff = 9, p = 0.0000 |  |  | 
Maybe | > Diff = 20, p = 0.0000 | > Diff = 11, p = 0.0000 |  | 
I probably can | > Diff = 38, p = 0.0000 | > Diff = 29, p = 0.0000 | > Diff = 18, p = 0.0000 | 
I definitely can | > Diff = 50, p = 0.0000 | > Diff = 41, p = 0.0000 | > Diff = 30, p = 0.0000 | > Diff = 12, p = 0.0000

All pairwise comparisons: family size = 10. Legend: > = significantly higher; < = significantly lower; x = no significant difference.

Note. Within jurisdiction, comparisons on any given year are dependent with an alpha level of 0.05. From U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), 2019 NAEP Reading Assessment.

Table 2 was created using the NAEP Data Explorer, demonstrating the differences in means and the results of independent t-tests concerning students’ answers to the variable “recognize when you do not understand what you are reading.” Data analyses conducted with NAEP Data Explorer had an alpha set at 0.05. The average informational reading subscale score of students who reported “I probably can’t” to recognizing when they do not understand what they are reading (M=236, SD=39) was significantly (p <0.001) higher than the average scale score of students who reported “I definitely can’t” (M=227, SD=38). The average subscale score of students who reported “maybe” to recognizing when they do not understand what they are reading (M=247, SD=37) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=227, SD=38). Additionally, the average subscale score of eighth graders who reported “I probably can” to recognizing when they do not understand what they are reading (M=265, SD=35) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=227, SD=38). Finally, the average subscale score of students who reported “I definitely can” to recognizing when they do not understand what they are reading (M=277, SD=34) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=227, SD=38).

The average informational reading subscale score of students who reported “maybe” to recognizing when they do not understand what they are reading (M=247, SD=37) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=236, SD=39). The average subscale score of eighth graders who reported “I probably can” to recognizing when they do not understand what they are reading (M=265, SD=35) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=236, SD=39). Also, the average subscale score of students who reported “I definitely can” to recognizing when they do not understand what they are reading (M=277, SD=34) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=236, SD=39). Next, the average subscale score of eighth graders who reported “I probably can” to recognizing when they do not understand what they are reading (M=265, SD=35) was significantly (p <0.001) higher than those who reported “maybe” (M=247, SD=37). Additionally, the average subscale score of students who reported “I definitely can” to recognizing when they do not understand what they are reading (M=277, SD=34) was significantly (p <0.001) higher than those who reported “maybe” (M=247, SD=37). Finally, the average subscale score of eighth graders who reported “I definitely can” to recognizing when they do not understand what they are reading (M=277, SD=34) was significantly (p <0.001) higher than those who reported “I probably can” (M=265, SD=35).

Table 3 demonstrates the Cohen’s d effect sizes of the significant differences in mean subscale scores for the variable “recognize when you do not understand something you are reading”.

Table 3 Effect Sizes of Differences in Subscale Scores When Students Can Recognize When They Do Not Understand Something They Are Reading

Response | Mean (SD) | Response | Mean (SD) | Cohen’s d
I probably can’t | 236 (39) | I definitely can’t | 227 (38) | 0.12
Maybe | 247 (37) | I definitely can’t | 227 (38) | 0.53
Maybe | 247 (37) | I probably can’t | 236 (39) | 0.29
I probably can | 265 (35) | I definitely can’t | 227 (38) | 1.04
I probably can | 265 (35) | I probably can’t | 236 (39) | 0.78
I probably can | 265 (35) | Maybe | 247 (37) | 0.50
I definitely can | 277 (34) | I definitely can’t | 227 (38) | 1.39
I definitely can | 277 (34) | I probably can’t | 236 (39) | 1.12
I definitely can | 277 (34) | Maybe | 247 (37) | 0.84
I definitely can | 277 (34) | I probably can | 265 (35) | 0.35

 

Effect size measures are commonly utilized to examine the magnitude of mean differences Mcleod (2019). An effect size of 0.2 to less than 0.5 is considered small, an effect size of 0.5 to less than 0.8 is considered medium, and an effect size of 0.8 or greater is considered large Mcleod (2019). The small Cohen’s d effect sizes for “recognize when I do not understand something I’m reading” were found between the responses “maybe” and “I probably can’t” (d=0.29) and between the responses “I definitely can” and “I probably can” (d=0.35). Medium effect sizes for eighth graders recognizing when they do not understand what they are reading were found between the responses “maybe” and “I definitely can’t” (d=0.53), between “I probably can” and “I probably can’t” (d=0.78), and between the responses “I probably can” and “maybe” (d=0.50). Lastly, the large effect sizes for this variable were found between the responses “I probably can” and “I definitely can’t” (d=1.04), between “I definitely can” and “I definitely can’t” (d=1.39), and between “I definitely can” and “I probably can’t” (d=1.12).
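The interpretation bands described above can be expressed as a small helper. The thresholds (0.2, 0.5, 0.8) follow the conventional cutoffs cited in the text; the "negligible" label for values under 0.2 is an assumption added here for completeness.

```python
def interpret_cohens_d(d):
    """Classify a Cohen's d value using the conventional thresholds:
    small (0.2 to <0.5), medium (0.5 to <0.8), large (>= 0.8).
    Values under 0.2 are labeled 'negligible' here (an added convention,
    not stated in the text)."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

# Applied to reported values: d=0.29 -> small, d=0.53 -> medium, d=1.39 -> large
```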

RQ #2- "How does students’ self-reported ability to figure out the meaning of a word they don’t know by using other words in the text impact their informational reading achievement?”

Under the category of “affective disposition,” eighth-grade students also responded to a questionnaire item that focused on the critical reading skill “identifying the meaning of a word using other words in the text,” often referred to as using textual context clues. Table 4 demonstrates eighth graders’ average informational reading subscale scores from the 2019 NAEP based on their self-reported ability level with identifying the meaning of a word using the other words in the text.

Table 4 Average Scale Scores and Standard Deviations by “Identify Meaning of a Word Using Other Words in Text” Variable

Year | Jurisdiction | Identify meaning of a word using other words in text | Average scale score | Standard deviation
2019 | National Public | I definitely can’t | 219 | 37
2019 | National Public | I probably can’t | 233 | 38
2019 | National Public | Maybe | 244 | 36
2019 | National Public | I probably can | 271 | 33
2019 | National Public | I definitely can | 277 | 35

 

Note. Some apparent differences between estimates may not be statistically significant. From U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), 2019 NAEP Reading Assessment.

The average subscale scores for the informational reading assessment for eighth-grade students in 2019 are demonstrated above based on their self-reported level of ability to “identify meaning of a word using other words in text” National Center for Educational Statistics (NCES). (2019). The average score of students who reported “I definitely can’t” to being able to use context clues to define a word when they are reading was 219 (SD=37). For students who reported “I probably can’t,” their average subscale score was 233 (SD=38). The average subscale score of students who reported “maybe” was 244 (SD=36). Students who reported “I probably can” had an average subscale score of 271 (SD=33) on the informational reading assessment. Lastly, the average subscale score for students who reported “I definitely can” was 277 (SD=35).

Table 5 presents the independent t-test results and the differences in means for the frequency of reported student responses for “identify meaning of a word using other words in text.”

Table 5 Difference Between Average Scale Scores for Variable “Identify Meaning of a Word Using Other Words in Text”

 | I definitely can't (219) | I probably can't (234) | Maybe (244) | I probably can (271)
I probably can't (234) | > Diff = 14, p = 0.0000 |  |  | 
Maybe (244) | > Diff = 25, p = 0.0000 | > Diff = 11, p = 0.0000 |  | 
I probably can (271) | > Diff = 52, p = 0.0000 | > Diff = 37, p = 0.0000 | > Diff = 27, p = 0.0000 | 
I definitely can (277) | > Diff = 58, p = 0.0000 | > Diff = 44, p = 0.0000 | > Diff = 33, p = 0.0000 | > Diff = 7, p = 0.0000

All pairwise comparisons: family size = 10. Legend: > = significantly higher; < = significantly lower; x = no significant difference.

Note. Within jurisdiction, comparisons on any given year are dependent with an alpha level of 0.05. From U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), 2019 NAEP Reading Assessment.

Table 5 was also created using the NAEP Data Explorer, demonstrating the differences in means and the results of independent t-tests concerning students’ answers to the variable “identify meaning of a word using other words in text.” Data analyses conducted with NAEP Data Explorer had an alpha set at 0.05. The average informational reading subscale score of students who reported “I probably can’t” to identifying the meaning of a word using other words in the text (M=234, SD=38) was significantly (p <0.001) higher than the average scale score of students who reported “I definitely can’t” (M=219, SD=37). The average subscale score of students who reported “maybe” to identifying the meaning of a word using other words in the text (M=244, SD=36) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=219, SD=37). Additionally, the average subscale score of eighth graders who reported “I probably can” to identifying the meaning of a word using other words in the text (M=271, SD=33) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=219, SD=37). Finally, the average subscale score of students who reported “I definitely can” to identifying the meaning of a word using other words in the text (M=277, SD=35) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=219, SD=37).

The average informational reading subscale score of students who reported “maybe” to identifying the meaning of a word using other words in the text (M=244, SD=36) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=234, SD=38). The average subscale score of eighth graders who reported “I probably can” to identifying the meaning of a word using other words in text (M=271, SD=33) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=234, SD=38). Also, the average subscale score of students who reported “I definitely can” to identifying the meaning of a word using other words in text (M=277, SD=35) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=234, SD=38). Next, the average subscale score of eighth graders who reported “I probably can” to identifying the meaning of a word using other words in text (M=271, SD=33) was significantly (p <0.001) higher than those who reported “maybe” (M=244, SD=36). Additionally, the average subscale score of students who reported “I definitely can” to identifying the meaning of a word using other words in text (M=277, SD=35) was significantly (p <0.001) higher than those who reported “maybe” (M=244, SD=36). Finally, the average subscale score of eighth graders who reported “I definitely can” to identifying the meaning of a word using other words in text (M=277, SD=35) was significantly (p <0.001) higher than those who reported “I probably can” (M=271, SD=33).

Table 6 demonstrates the Cohen’s d effect sizes of the significant differences in mean subscale scores for the variable “identify meaning of a word using other words in text”.


Table 6 Effect Sizes of Differences in Subscale Scores When Students Can Identify the Meaning of a Word Using Other Words in the Text

| Response | Mean (SD) | Comparison response | Mean (SD) | Cohen’s d |
|---|---|---|---|---|
| I probably can’t | 234 (38) | I definitely can’t | 219 (37) | 0.40 |
| Maybe | 244 (36) | I definitely can’t | 219 (37) | 0.68 |
| Maybe | 244 (36) | I probably can’t | 234 (38) | 0.27 |
| I probably can | 271 (33) | I definitely can’t | 219 (37) | 1.48 |
| I probably can | 271 (33) | I probably can’t | 234 (38) | 1.04 |
| I probably can | 271 (33) | Maybe | 244 (36) | 0.78 |
| I definitely can | 277 (35) | I definitely can’t | 219 (37) | 1.61 |
| I definitely can | 277 (35) | I probably can’t | 234 (38) | 1.18 |
| I definitely can | 277 (35) | Maybe | 244 (36) | 0.93 |
| I definitely can | 277 (35) | I probably can | 271 (33) | 0.18 |

 

The Cohen’s d effect sizes for mean differences are presented above in Table 6. For “identify the meaning of a word using other words in the text,” a small effect size was found between the responses “maybe” and “I probably can’t” (d=0.27). Medium effect sizes were found between the responses “maybe” and “I definitely can’t” (d=0.68) and between “I probably can” and “maybe” (d=0.78). Lastly, large effect sizes were found between “I probably can” and “I definitely can’t” (d=1.48), between “I probably can” and “I probably can’t” (d=1.04), between “I definitely can” and “I definitely can’t” (d=1.61), between “I definitely can” and “I probably can’t” (d=1.18), and between “I definitely can” and “maybe” (d=0.93).
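Because the NAEP tables report only group means and standard deviations, the effect sizes above can be checked with a pooled-SD form of Cohen’s d. The sketch below is illustrative only: it assumes equal-weight pooling of the two group standard deviations, since group sizes are not published in the table.

```python
from math import sqrt

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d from summary statistics, using an equal-weight pooled SD.

    With only group means and SDs available (as in the NAEP tables), a
    common approximation is d = (m1 - m2) / sqrt((sd1^2 + sd2^2) / 2).
    """
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Reproduce two entries from Table 6:
# "I probably can't" (M=234, SD=38) vs. "I definitely can't" (M=219, SD=37)
print(f"{cohens_d(234, 38, 219, 37):.2f}")  # 0.40
# "I definitely can" (M=277, SD=35) vs. "I definitely can't" (M=219, SD=37)
print(f"{cohens_d(277, 35, 219, 37):.2f}")  # 1.61
```

The same formula reproduces the remaining rows of Tables 6 and 9 to two decimal places.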

RQ #3- "How does students’ self-reported ability to figure out the main idea of a text impact their informational reading achievement?"

On the student questionnaire, eighth-grade students also reported how strongly they identified with the skill of “identifying the main idea of text”. Table 7 presents eighth graders’ average informational reading subscale scores from the 2019 NAEP by their self-reported ability level with recognizing the main idea in a text.


Table 7 Average Scale Scores and Standard Deviations by “Identify Main Idea of Text” Variable

| Year | Jurisdiction | Identify main idea of text | Average scale score | Standard deviation |
|---|---|---|---|---|
| 2019 | National Public | I definitely can’t | 223 | 39 |
| 2019 | National Public | I probably can’t | 233 | 38 |
| 2019 | National Public | Maybe | 246 | 38 |
| 2019 | National Public | I probably can | 269 | 35 |
| 2019 | National Public | I definitely can | 275 | 35 |

 

Note. Some apparent differences between estimates may not be statistically significant. From U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), 2019 Reading Assessment.

The average subscale scores for the informational reading assessment for eighth-grade students in 2019 are displayed above based on their level of identification with the questionnaire statement “identifying main idea of text” (NCES, 2019). The average score of students who reported “I definitely can’t” to having the ability to recognize the main idea of a text was 223 (SD=39). For students who reported “I probably can’t”, the average subscale score was 233 (SD=38). The average subscale score of students who reported “maybe” was 246 (SD=38). Students who reported “I probably can” had an average subscale score of 269 (SD=35) on the informational reading assessment. Lastly, the average subscale score for students who reported “I definitely can” was 275 (SD=35).

Table 8 presents the mean differences and independent t-test results for students’ reported responses to “identify main idea of text”.


Table 8 Difference Between Average Scale Scores for Variable “Identify Main Idea of Text”

 

| | I definitely can’t (223) | I probably can’t (233) | Maybe (246) | I probably can (269) | I definitely can (275) |
|---|---|---|---|---|---|
| I definitely can’t (223) | | | | | |
| I probably can’t (233) | > Diff = 11, p = 0.0000 | | | | |
| Maybe (246) | > Diff = 23, p = 0.0000 | > Diff = 12, p = 0.0000 | | | |
| I probably can (269) | > Diff = 47, p = 0.0000 | > Diff = 36, p = 0.0000 | > Diff = 24, p = 0.0000 | | |
| I definitely can (275) | > Diff = 52, p = 0.0000 | > Diff = 41, p = 0.0000 | > Diff = 29, p = 0.0000 | > Diff = 5, p = 0.0000 | |

All comparisons use a family size of 10.

LEGEND: > Significantly higher. < Significantly lower. x No significant difference.

Note. Within-jurisdiction comparisons on any given year are dependent, with an alpha level of 0.05. From U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (NCES), 2019 Reading Assessment.

Table 8 was also developed using the NAEP Data Explorer and demonstrates the differences in means and the results of independent t-tests concerning students’ answers to the variable “identify main idea of text.” Data analyses conducted with NAEP Data Explorer had an alpha set at 0.05. The average informational reading subscale score of students who reported “I probably can’t” to identifying the main idea of a text (M=233, SD=38) was significantly (p <0.001) higher than the average scale score of students who reported “I definitely can’t” (M=223, SD=39). The average subscale score of students who reported “maybe” to identifying the main idea of a text (M=246, SD=38) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=223, SD=39). Additionally, the average subscale score of eighth graders who reported “I probably can” to identifying the main idea of a text (M=269, SD=35) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=223, SD=39). Finally, the average subscale score of students who reported “I definitely can” to identifying the main idea of a text (M=275, SD=35) was significantly (p <0.001) higher than those who reported “I definitely can’t” (M=223, SD=39).

The average informational reading subscale score of students who reported “maybe” to identifying the main idea of a text (M=246, SD=38) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=233, SD=38). The average subscale score of eighth graders who reported “I probably can” (M=269, SD=35) was significantly (p <0.001) higher than those who reported “I probably can’t” (M=233, SD=38), and the average subscale score of students who reported “I definitely can” (M=275, SD=35) was also significantly (p <0.001) higher than those who reported “I probably can’t” (M=233, SD=38). Next, the average subscale score of eighth graders who reported “I probably can” (M=269, SD=35) was significantly (p <0.001) higher than those who reported “maybe” (M=246, SD=38), as was the average subscale score of students who reported “I definitely can” (M=275, SD=35). Finally, the average subscale score of eighth graders who reported “I definitely can” (M=275, SD=35) was significantly (p <0.001) higher than those who reported “I probably can” (M=269, SD=35).

Table 9 demonstrates the Cohen’s d effect sizes of the significant differences in mean subscale scores for the variable “identify the main idea of a text.”


Table 9 Effect Sizes of Differences in Subscale Scores When Students Can Identify the Main Idea of a Text

| Response | Mean (SD) | Comparison response | Mean (SD) | Cohen’s d |
|---|---|---|---|---|
| I probably can’t | 233 (38) | I definitely can’t | 223 (39) | 0.26 |
| Maybe | 246 (38) | I definitely can’t | 223 (39) | 0.60 |
| Maybe | 246 (38) | I probably can’t | 233 (38) | 0.34 |
| I probably can | 269 (35) | I definitely can’t | 223 (39) | 1.24 |
| I probably can | 269 (35) | I probably can’t | 233 (38) | 0.99 |
| I probably can | 269 (35) | Maybe | 246 (38) | 0.63 |
| I definitely can | 275 (35) | I definitely can’t | 223 (39) | 1.40 |
| I definitely can | 275 (35) | I probably can’t | 233 (38) | 1.15 |
| I definitely can | 275 (35) | Maybe | 244 (36) | 0.87 |

 

The Cohen’s d effect sizes for mean differences are presented above in Table 9. For “identify the main idea of a text,” small effect sizes were found between the responses “I probably can’t” and “I definitely can’t” (d=0.26) and between “maybe” and “I probably can’t” (d=0.34). Medium effect sizes were found between the responses “maybe” and “I definitely can’t” (d=0.60) and between “I probably can” and “maybe” (d=0.63). Lastly, large effect sizes were found between “I probably can” and “I definitely can’t” (d=1.24), between “I probably can” and “I probably can’t” (d=0.99), between “I definitely can” and “I definitely can’t” (d=1.40), between “I definitely can” and “I probably can’t” (d=1.15), and between “I definitely can” and “maybe” (d=0.87).

 

5. Discussion

This study explored the impact of eighth-grade students’ self-reported metacognitive reading strategies on their 2019 NAEP Informational Reading subscale scores. Descriptive statistics were employed to analyze the data for each research question. The use of descriptive statistics can assist in describing mean scores and identifying trends in data (Creswell and Creswell, 2018). The selected research questions involved the analysis of three student-reported abilities: to recognize when they do not understand what they are reading, to identify the meaning of a word using other words in the text, and to identify the main idea of a text. The results from each research question demonstrated that these abilities have a significant impact on students’ success with informational text. As students’ self-reported ability levels with each of the three strategies increased, so did their mean subscale comprehension scores.

 

5.1. Monitoring Understanding During Reading

Results from this study demonstrated that eighth graders’ self-reported ability to recognize when they do not understand something they are reading had a significant impact on their informational reading subscale score on the 2019 NAEP. Students who reported any ability level other than “I definitely can’t” scored higher on the informational reading subscale than those who answered at the lowest ability level. With each increase in self-reported ability level, students’ respective scores also increased. The greatest significance was found between those who responded that they probably could or definitely could recognize when they do not understand what they are reading and those who said they definitely could not. The Cohen’s d effect size was also large between the eighth graders who reported that they definitely can recognize when they do not understand reading material and those who selected “I probably can’t.” These results reveal a significant connection between students’ confidence in being able to recognize when they do not understand a text they are reading and their informational reading achievement. This is consistent with Forrest-Pressley and Waller’s (2013) assertion that the metacognitive aspect of successful reading comprehension must include the ability to know when one does or does not understand a text. This foundational recognition can then be used by students to monitor their reading comprehension and select supportive strategies as necessary (Forrest-Pressley and Waller, 2013).

The positive correlation this study discovered between an increased perceived metacognitive ability level during reading and one’s reading achievement also parallels the findings of Muhid et al. (2020) and Huang (2011). Muhid et al. (2020) discovered that secondary students who used metacognitive strategies, including self-monitoring their understanding during reading, had significantly improved reading comprehension scores on a posttest measure, whereas students who did not utilize metacognitive strategies showed no significant improvement from pretest to posttest. Similarly, the findings from the present study showed that students who feel confident they can monitor their comprehension and know when they do not understand something scored higher than students who felt less confident in this ability. This suggests the importance of providing students with direct instruction and situated practice in monitoring their comprehension throughout the reading process, particularly with informational text.

 

5.2. Using Textual Context Clues

This research also revealed that students’ self-reported ability to identify the meaning of a word by using other words in the text can have a significant impact on their informational reading scores. Students who responded “I definitely can” to being able to use context clues to define an unknown word scored significantly higher than eighth graders who provided any of the less confident responses. Additionally, the findings for this research question demonstrated multiple large Cohen’s d effect sizes. For instance, the largest effect was discovered between the scores of students who responded that they definitely could use other words in the text to discover the meaning of an unknown word and those who responded that they definitely could not. The second largest effect size was between students who answered that they probably could use contextual clues to define a word and those who answered that they definitely could not. Students’ ability to use this specific metacognitive reading skill directly impacted their informational reading achievement on the 2019 NAEP. This finding parallels the results of Ghaith and El-Sanyoura’s (2019) study concerning the impact of using different types of metacognitive strategies on secondary students’ reading comprehension scores. Their findings demonstrated that the use of problem-solving metacognitive strategies during reading, including using clues in the text to define the meaning of an unknown word, positively impacted students’ informational reading comprehension achievement (Ghaith and El-Sanyoura, 2019). This effect was significant for both literal and higher-order questions taken from expository text passages (Ghaith and El-Sanyoura, 2019).

Additionally, the present study’s results aligned with those of Ghaith and El-Sanyoura (2019) in that problem-solving metacognitive strategies had the most significant effects on students’ scores. Ghaith and El-Sanyoura (2019) indicated that the use of problem-solving metacognitive strategies, in which they included the ability to use context clues to guess the meaning of a word, had a greater positive impact on students’ reading comprehension scores than other types of metacognitive strategies. Similarly, this study’s analysis of students’ responses of “I definitely can” and “I definitely can’t” for being able to use textual context clues to define an unknown word yielded the largest effect size across the analyses for all four research questions (d=1.61). While the strategies of recognizing when one does not understand what one is reading, identifying the main idea of a text, and explaining the meaning of what one has read also had a statistically significant impact on eighth graders’ informational reading comprehension scores, their effect sizes were not as large. This indicates that this metacognitive strategy, or problem-solving strategies in general, may have an even greater impact on students’ informational reading comprehension than other types of metacognitive reading strategies.

5.3. Identifying Main Idea in Informational Text

This study found that students who responded at any confidence level other than “I definitely can’t” for being able to identify the main idea of a text had higher scores on the 2019 NAEP Informational Reading subscale. Scores continued to increase as students’ reported ability level increased, with a response of “I definitely can” connected to the best informational reading score outcomes. The response “I definitely can” had a large effect size over the responses “I definitely can’t,” “I probably can’t,” and “maybe.” Additionally, the response “I probably can” demonstrated a large effect size over “I definitely can’t.” Whether considered a metacognitive monitoring skill or a comprehension skill built on metacognition, previous research posits that finding the main idea of a text is crucial for informational reading comprehension success. For instance, this study’s results are supported by the findings of Boulware-Gooden et al. (2007), Klingner (2004), and McCown and Thomason (2014), which showed that elementary-age students who received strategy instruction on recognizing the main idea outperformed their peers on various measures of informational reading comprehension, including the Gates-MacGinitie Reading Test. Additionally, previous research has demonstrated that failure to apply the strategy of main idea identification during reading is correlated with poor reading comprehension achievement (Brown, 1980). This connects with the lower NAEP informational reading scores found for eighth graders who reported that they definitely could not or probably could not identify the main idea of a text.

According to Şen (2009), students who learned to successfully find the main idea of informational text had increased reading comprehension scores on posttest measures. This aligns with this study’s finding that students who felt they definitely or probably could determine the main idea of expository text scored significantly better than those who felt they maybe could, probably could not, or definitely could not. However, it is important to note that Şen’s (2009) study examined the achievement of students who were specifically taught how to identify the main idea with the use of metacognitive monitoring strategies, while the present study looked for significance in students’ self-reported ability levels with this skill. Overall, multiple findings suggest that explicit instruction on determining the main idea as a monitoring process during reading could be beneficial for students’ overall comprehension of informational text.

While this study demonstrated that students’ level of metacomprehension impacts their reading comprehension achievement, the literature has also suggested that students’ metacomprehension is dependent on their reading comprehension success (Kintsch, 1998; Soto et al., 2019). According to Dunlosky et al. (2002), students base judgments of their understanding of a text on various cues, including the number of disruptions they encounter during the reading process. Success with informational text requires both inferential and literal comprehension, but if students encounter too many disruptions and difficulties at the level of literal comprehension, their overall reading comprehension can suffer, which can lead them to a lower judgment of their understanding, or metacomprehension (Soto et al., 2019). Overall, these findings show a significant correlation between students’ perceptions of their ability to explain the meaning of a text and their actual reading comprehension achievement. This implies that helping students improve their reading skills and strategies at both literal and inferential levels could decrease the number of disruptions they encounter and consequently increase their metacomprehension level.

 

6. Conclusion

6.1. Key Conclusions

This study explored the 2019 NAEP informational reading achievement scores for eighth-grade students and how different variables related to metacognitive strategy use impacted those scores. Those variables included students’ self-reported ability to recognize when they do not understand what they are reading, identify the meaning of a word by using other words in the text, and identify the main idea of a text. The results of this study indicated that each of these factors did have a significant impact on students’ informational reading scores.

 

6.1.1.  The use of self-monitoring strategies is an essential skill for students 

Students who reported they definitely could recognize when they do not understand something they are reading had a higher mean informational reading score than students who felt less confident in this ability. As students’ reported ability levels increased in confidence, so did their associated mean scores. The ability to know when one does not understand something in a text during the reading process is usually categorized as a metacognitive self-monitoring skill and has also been demonstrated to improve students’ reading comprehension in other studies (Huang, 2011; Muhid et al., 2020). This result indicates that the use of self-monitoring strategies is an essential skill for students, allowing them to select appropriate supportive strategies when they notice an issue with their comprehension.

 

6.1.2.  The ability to define an unknown word with the use of context clues is one of the greatest metacognitive predictors of students’ reading comprehension success

Students who reported they definitely could identify the meaning of a word by using other words in the text had a higher mean informational reading score than students who felt that it was less likely that they could use this skill. As students’ reported ability to use textual context clues increased, so did their associated mean scores. Additionally, the analysis of this variable revealed the largest effect size (d=1.61) in this study between students who responded that they definitely could and those who responded that they definitely could not. Similarly, Ghaith and El-Sanyoura (2019) also discovered that the use of problem-solving metacognitive strategies, including the ability to use textual context clues to define an unknown word, had a more significant impact on students’ informational text comprehension than other types of metacognitive reading strategies. This leads to the conclusion that the ability to define an unknown word by using other words in the text is one of the greatest metacognitive predictors of students’ success with informational reading comprehension.

 

6.1.3.  Students who can identify the main ideas of informational text experience more success with informational reading comprehension.

Students who reported they definitely could identify the main idea of a text had a higher average informational reading score than students who felt less confident in this ability. As students’ reported ability levels increased in confidence, so did their associated mean scores. Eighth-grade students who reported that they definitely could not identify the main idea had a significantly lower mean score than all other reported levels, with large effect sizes relative to those who said they definitely could or probably could. This connects with previous findings showing that students who receive strategy instruction on how to identify the main idea of expository text, and then utilize the strategy, score better on measures of reading comprehension achievement than those who do not employ this strategy (Brown, 1980; Klingner, 2004; McCown and Thomason, 2014). This shows that students who are able to identify the main idea of a text are more successful with informational reading comprehension than students who cannot. It also indicates that explicit strategy instruction on main idea identification could help students improve their reading comprehension of expository text.

 

 

 

 

6.2. Implications

The results of this study offer various implications for the field of reading comprehension instruction, specifically for informational reading. Overall, the findings suggest that students’ ability to use various metacognitive strategies is essential for their success with informational text. Explicit instruction on how to utilize self-monitoring, problem-solving, text feature identification, and self-assessment strategies at the secondary level could be significantly beneficial for students. This has the potential to increase students’ perceived abilities with multiple metacognitive strategies and improve their overall informational reading scores. Additionally, specific attention should be devoted to helping students develop the ability to use textual context clues to define unknown words, as this may have the greatest impact on their reading comprehension growth. These strategies are particularly important with informational text, as it typically contains elements that students are less familiar with, such as main ideas and domain-specific vocabulary words. The results of this study imply that direct instruction and practice with these strategies would be beneficial across all content areas at the secondary level as students encounter domain-specific expository text. Finally, these findings suggest the importance of implementing metacognitive strategy instruction in teacher preparation programs in order to better equip educators in all content areas to guide students with these skills.

 

6.3. Limitations

This study has a few potential limitations. As a secondary analysis that utilized secondary data, this research inherits potential validity issues from the original data collection. Specifically, the variables selected for this study were pre-determined by the NAEP student survey, and some of the correlational analyses might appear non-natural (NCES, 2022e); thus, they should not be interpreted as cause-and-effect relationships. Also, the analysis methods were limited to the descriptive statistical models available in the NAEP Data Explorer (NCES, 2022e). Another possible limitation is the study’s reliance on a single year of data. This study explored only the 2019 NAEP data, and in order to produce results with greater validity, the same analyses should be conducted with NAEP results from additional years. Finally, many other factors may impact students’ informational reading scores, including cognitive, environmental, and instructional factors that were not considered as elements of this study.

 

6.4. Recommendations for Future Research

Future research could expand upon the findings of this study in order to provide results with greater validity and more potential for generalization. For instance, a longitudinal secondary analysis that explores the same metacognitive variables and eighth-grade students’ informational reading scores on the NAEP from other available years could demonstrate if the findings from 2019 are consistent across other years. Also, exploring the impact of students’ self-reported use of metacognitive strategies on informational reading scores at other grade levels on the NAEP could increase the generalization of the results. Finally, further research could include a primary analysis that utilizes mixed methods to explore educators’ use of metacognitive strategy instruction within reading instruction across various content areas. This could help to further guide researchers and educators on the specific skills that are most beneficial for students as they encounter expository text across different domains.

 

CONFLICT OF INTERESTS

None. 

 

ACKNOWLEDGMENTS

None.

 

REFERENCES

Afflerbach, P., and Cho, B. (2009). Determining and Describing Reading Strategies. In H. S. Waters and W. Schneider (Eds.), Metacognition, Strategy Use, and Instruction. Guilford Press. 201–225.  

Baye, A., Inns, A., Lake, C., and Slavin, R. E. (2019). A Synthesis of Quantitative Research on Reading Programs for Secondary Students. Reading Research Quarterly, 54(2), 133–166. https://doi.org/10.1002/rrq.229. 

Becker, L. A. (2000). Effect Size (ES).

Bond, J., and Zhang, M. (2017). The Impact of Conversations on Fourth Grade Reading Performance – What NAEP Data Explorer Tells. European Journal of Educational Research, 6(4). https://doi.org/10.12973/eu-jer.6.4.407.

Boulware-Gooden, R., Carreker, S., Thornhill, A., and Joshi, R. M. (2007). Instruction of Metacognitive Strategies Enhances Reading Comprehension and Vocabulary Achievement of Third-Grade Students. Reading Teacher, 61(1), 70–77. https://doi.org/10.1598/RT.61.1.7.  

Brown, A. L. (1980). Metacognitive Development and Reading. In R. J. Spiro, B. B. Bruce and W. F. Brewer (Eds.), Theoretical Issues in Reading Comprehension. Lawrence Erlbaum. 453–481.  

Camahalan, F. M. G. (2006). Effects of a Metacognitive Reading Program on the Reading Achievement and Metacognitive Strategies of Students with Cases of Dyslexia. Reading Improvement, 43(2), 77–94.  

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed). Erlbaum.  

Creswell, J. W., and Creswell, J. D. (2018). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (5th ed). SAGE.   

Duke, N. K. (2004). The Case for Informational Text. Educational Leadership, 61(6), 40–45.  

Dunlosky, J., Rawson, K. A., and Hacker, D. J. (2002). Metacomprehension of Science Text: Investigating the Levels-of-Disruption Hypothesis. In J. Otero, J. A. León and A. C. Graesser (Eds.), The Psychology of Science Text Comprehension. Lawrence Erlbaum Associates Publishers. 255–279.

Forrest-Pressley, D. L., and Waller, T. G. (2013). Cognition, Metacognition, and Reading. Springer.  

Ghaith, G., and El-Sanyoura, H. (2019). Reading Comprehension: The Mediating Role of Metacognitive Strategies. Reading in a Foreign Language, 31(1), 19–43.  

Huang, S. Y. (2011). Reading “Further and Beyond the Text”: Student Perspectives of Critical Literacy in EFL Reading and Writing. Journal of Adolescent and Adult Literacy, 55(2), 145–154. https://doi.org/10.1002/JAAL.00017. 

Kintsch, W. (1998). Comprehension: A Paradigm for Cognition. Cambridge University Press.  

Klecker, B. M. (2014). NAEP Fourth-, Eighth-, and Twelfth Grade Reading Scores by Gender: 2005, 2007, 2009, 2011, 2013. Online Submission.  

Klingner, J. K. (2004). Assessing Reading Comprehension. Assessment for Effective Intervention, 29(4), 59–70. https://doi.org/10.1177/073724770402900408. 

McCown, M., and Thomason, G. (2014). Informational Text Comprehension: Its Challenges and How Collaborative Strategic Reading Can Help. Reading Improvement, 51(2), 237.

Mcleod, S. (2019). Effect size. Simply Psychology.

Muhid, A., Amalia, E. R., Hilaliyah, H., Budiana, N., and Wajdi, M. B. N. (2020). The Effect of Metacognitive Strategies Implementation on Students’ Reading Comprehension Achievement. International Journal of Instruction, 13(2), 847–862. https://doi.org/10.29333/iji.2020.13257a. 

Mullis, I. V., Martin, M. O., Foy, P., and Hooper, M. (2017). ePIRLS 2016: International Results in Online Informational Reading. International Association for the Evaluation of Educational Achievement.

National Center for Educational Statistics (NCES). (2022c). Assessments: Reading.

National Center for Educational Statistics (NCES). (2019). An Overview of NAEP.

National Center for Educational Statistics (NCES). (2022a). Explore Results for the 2019 NAEP Reading Assessment. The Nation’s Report Card.

National Center for Educational Statistics (NCES). (2022b). NAEP Report Card: 2019 NAEP Reading Assessment. The Nation’s Report Card.

National Center for Educational Statistics (NCES). (2022d). Data Tools: NAEP Data Explorer. The Nation’s Report Card.

National Center for Educational Statistics (NCES). (2022e). Reporting Results. The Nation’s Report Card.

Reilly, D., Neumann, D. L., and Andrews, G. (2019). Gender Differences in Reading and Writing Achievement: Evidence from the National Assessment of Educational Progress (NAEP). American Psychologist, 74(4), 445–458. https://doi.org/10.1037/amp0000356. 

Schugar, H. R., and Dreher, M. J. (2017). US Fourth Graders’ Informational Text Comprehension : Indicators from NAEP. International Electronic Journal of Elementary Education, 9(3), 523–552.

Shanahan, C., and Shanahan, T. (2018). Disciplinary literacy. In D. Lapp and D. Fisher (Eds.), Handbook of Research on Teaching the English Language Arts. Routledge, 281–309. https://doi.org/10.4324/9781315650555-12.

Shanahan, T., and Shanahan, C. R. (2015). Disciplinary Literacy Comes to Middle School. Voices from the Middle, 22(3), 10.

Soto, C., Gutiérrez de Blume, A. P., Jacovina, M., McNamara, D., Benson, N., and Riffo, B. (2019). Reading Comprehension and Metacognition: The Importance of Inferential Skills. Cogent Education, 6(1), 1–20. https://doi.org/10.1080/2331186X.2019.1565067.  

United States Department of Education. (2019).

Şen, H. Ş. (2009). The Relationship Between the use of Metacognitive Strategies and Reading Comprehension. Procedia – Social and Behavioral Sciences, 1(1), 2301–2305. https://doi.org/10.1016/j.sbspro.2009.01.404.

     

 

 

 

 

Creative Commons Licence This work is licensed under a: Creative Commons Attribution 4.0 International License

© Granthaalayah 2014-2023. All Rights Reserved.