Enabling Questions (EQ) For Active Student-Centered Inquiry & Verbal Learning*
EQs are interrupting questions that stir creative outcomes
*Abridged with author permission: Manzo/Manzo/Thomas (2009) Content Area Literacy: A Framework for Reading-Based Instruction (5th edition) Wiley Publishers
The Enabling Questions (EQ) procedure is a student-centered gambit designed to give students experience with, and self-instruction in, the value and empowerment that come from learning to use various inquiry strategies in conventional verbal learning (Manzo/Manzo, 1990). EQ is initiated and demonstrated in a highly structured way, with a prepared set of questions that students can use to tune in and reduce distraction during Lecture-Discussion, still the most frequently used form of instruction. Using Enabling Questions puts the listener into an active, engaged thinking mode and invites the teacher or speaker to talk a little less, and a little more pointedly, in response to student-relevant questions and concerns. Students should be urged to translate the model questions ahead into their own words and to practice using these each day and in each class. EQ is a powerful social tool that illustrates how to be assertive without being rude or aggressive. It can be a very inviting substitute for maladaptive behavior in disruptive classrooms. Disruptive students may at first use these questions rather disruptively. The teacher would be wise to focus on the questions raised and, in so doing, illustrate to students the value and reasonable control that they can exercise over a lesson. The substitution of student-centered questions for teacher-centered questions seems to be its own reward. The sum of student comments suggests that this kind of give-and-take makes them feel like they are part of something constructive.
Suggested Enabling Question Types
Set 1: Questions that Help the Listener Organize and Clarify Information
• What is/are the main question(s) you are answering by your lecture (or lesson) today?
• Which key terms and concepts are most important for us to remember from what you have said (or will say) today?
• What is most often misunderstood or confusing about the information or position you are presenting today?
Set 2: Questions that Help the Listener Get a Mental Breather
• Could you please restate that last point in some other words?
• Would you please spell ____ and ____ for us?
• Would you please say which points you especially want us to note at this time?
Set 3: Questions that Invite Give-and-Take with the Speaker
• How does what you have said compare with positions others have taken, and who might these others be?
• Is there convincing evidence to support your position that you can share with us?
• What do you think is the weakest part of the position you have taken?
• How do you think this position (or new information) affects previously held beliefs?
• What do you suppose would happen if you extended this point another step or two?
• Would you mind pausing for a moment to see if there are other views on this in the class/audience? This would help us better understand and follow your points.
Any one of these sets of questions likely would put the listener back into an active and engaged thinking mode and reduce the sometimes excessive dominance of the speaker. It is important that the listener who wishes to use these types of questions does so with an eye toward enriching comprehension, learning, and mature interaction, and not as a counteroffensive. One way to help students learn the value of Enabling Questions and become regular users of them is to write these questions on index cards and distribute a few to each class member. Then urge students to try to use the questions on their card(s) intelligently over a two- to three-day period. Schedule a day to discuss what happened, what students learned, and what might need to be modified to make the Enabling Questions even more enabling. This "metacognitive" – or thinking-about-thinking – step also helps to convert rather rigid inquiry skill training into self-directed, flexible Inquiry Strategy Learning.
Tuesday, June 22, 2010
The Informal Reading-Thinking Inventory (IR-TI)
The Informal Reading-Thinking Inventory:
Assessment Formats for Discovering Typical & Otherwise Unrecognized Reading & Writing Needs – and Strengths
Ula Manzo, PhD
Professor and Chair, Reading Department
California State University, Fullerton
Anthony V. Manzo, PhD
Professor Emeritus, Director, Center for Studies in Higher Order Literacy,
Governor, Interdisciplinary Doctoral Studies
University of Missouri-Kansas City
“The teacher who learns to use the techniques described in this chapter will be well on her way to differentiating instruction.” Emmett Betts, Chapter 21, “Discovering Specific Reading Needs,” Foundations of Reading Instruction, 1957
Doing a diagnostic workup with the Informal Reading-Thinking Inventory is a little like taking a float trip down a familiar river but going farther than you had ever gone before. There is almost always a bit of the mysterious to it, and a source of eye-opening discoveries.
A number of years ago, the authors, along with several doctoral students and other collaborators, began to look at the degree to which students' Instructional levels, as identified by an Informal Reading Inventory (IRI), correspond to those students' higher-order thinking abilities – i.e., the inclination and ability to also respond constructively, or critically and creatively, to text. What we found was a conundrum, or apparent paradox, that does not seem to surprise most seasoned teachers but, oddly, is only spottily addressed, or dismissed as just so much static, in the literature of the field. The finding, in a word, is that in a typical heterogeneous group of students, there are likely to be about 12% whose ability to respond to higher-order questions is significantly below their Instructional level, while correspondingly there often would be found about another 12% whose ability to respond critically and creatively seemed to significantly exceed their Instructional level (Manzo & Casale, 1981; Casale, 1982; Manzo & Manzo, 1995; Manzo, Manzo & McKenna, 1995). We referred to the first group, who paradoxically did not seem to think as well as they read, as "Profile A," and the second, who seemed somehow to be able to think better than they could read, as "Profile B." What we called Profile A had been noticed as troublesome, especially in higher education, by Chase (1926), who referred to the condition as that of Ungeared Minds. Our initial objective was to tweak the IRI into an Informal Reading-Thinking Inventory (IR-TI) that ideally would have sufficient sensitivity to identify students on both sides of this seeming conundrum. So, at the very outset we were intentionally looking for unlikely weaknesses in some students and almost unimaginable strengths in others. (As an aside, we must admit that the quest was very engaging; we were where few had dared to go before.
We found several studies where very accomplished researchers had simply dismissed the unexplained as a statistical quirk.) But to continue: at first we tried to find these paradoxical cases by reaching beyond the assessment of word recognition and basic comprehension, simply by adding some questions that offered an optional way of assessing higher-order thinking. But there was much more to be done to give these few additional odd-angled questions what is known as "construct validity" – legitimacy as a measure of this theorized factor or factors. This was especially important since the actual question types could sometimes be found here and there in conventional IRIs. That next step involved several rather sophisticated factor-analytic studies and the application of a rather commonsense idea: that there was great potential value in being able to better identify and discover more specific reading needs, and especially reading strengths, than is discoverable with other, more legacy-bound instruments.
Following is a brief overview of how the IRI, in its various forms and formats, has and has not evolved to reflect current views of reading development. Then we describe our own attempts to date to provide IRI options with the potential to broaden its outreach, especially into the realm of higher-order thinking – the seminal challenge of twenty-first-century education. Finally, we revisit some "heritage" characteristics of the IRI that may be in danger of being lost when they should be preserved in the ebb and flow of theoretical constructs in the assessment of specific reading needs.
The IRI as Emergent Science
The Informal Reading Inventory (IRI) was, and remains, an unrecognized bit of significant historical progress in cognitive and pedagogical science from a relatively pre-scientific era. Emmett Betts (1957), one of the founders of modern reading assessment and instruction, synthesized a great deal of research and practice on individual assessment of reading progress and, implicitly, of cognitive development. Flippo et al. (2009) note that Betts "is frequently credited with the development of IRI techniques, though some reading researchers trace their use even further back." Indeed, Betts introduces his chapter on Discovering Specific Reading Needs with the acknowledgement that:
Space does not permit a summary of all the investigations pertinent to the use of informal reading inventories. A detailed explanation of all the whys and wherefores of a reading inventory would probably fill a sizable volume (p. 445).
Perhaps not surprisingly, then, the "Informal Reading Inventory," or IRI, as he called the synthesized protocols detailed in his landmark textbook, ranks among the most sophisticated approaches ever created for the evaluation of the decoding and comprehension aspects of human cognitive development. It is rarely appreciated, but by setting research-based percent criteria for accuracy in word recognition and comprehension when reading graded passages, this protocol provided what is rarely available in the study of human psychology: measurable, criterion-referenced dimensions of a complex cognitive process. The IRI quickly became a cornerstone of the field of Reading. In the half-century since its publication, research-informed understandings of the reading process have been applied to fine-tune certain aspects of the protocol, but in most fundamental respects it remains remarkably – one might say disturbingly – unchanged from the description in Betts' chapter on reading diagnosis.
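The criterion-referenced logic described above can be pictured in a few lines of code. The specific percentage cutoffs below are the Betts criteria as they are commonly cited in the literature, used purely for illustration; the chapter quoted here does not state the numbers, and published IRIs vary somewhat in the criteria they adopt.

```python
# A minimal sketch of criterion-referenced IRI level estimation for one
# graded passage. Cutoffs are commonly cited Betts criteria, supplied
# here as illustrative assumptions, not as the definitive values.

def classify_level(word_recognition_pct: float, comprehension_pct: float) -> str:
    """Return an estimated functioning level for one graded passage."""
    if word_recognition_pct >= 99 and comprehension_pct >= 90:
        return "Independent"
    if word_recognition_pct >= 95 and comprehension_pct >= 75:
        return "Instructional"
    if word_recognition_pct <= 90 or comprehension_pct <= 50:
        return "Frustration"
    return "Borderline"  # falls between Instructional and Frustration criteria

print(classify_level(99.5, 95))  # Independent
print(classify_level(96, 80))    # Instructional
print(classify_level(88, 45))    # Frustration
```

The point of the sketch is simply that the protocol yields a measurable, repeatable classification from two observable percentages, which is what made the IRI unusual among assessments of complex cognitive processes.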
The quantitative criteria for estimating Independent, Instructional, Frustration and Capacity levels are little changed from those recommended in the earliest IRI protocols, and the means of evaluating accuracy in word recognition is a straightforward matter that also is little changed. The analysis and interpretation of oral reading errors to identify specific decoding needs has changed very little, though it has become popular to supplement this with analysis of errors from a psycholinguistic stance to discover the student’s development along a theoretical continuum from relying primarily on orthographic cues, then to semantic cues, to eventual reliance on semantic and syntactic cues. Even more attention, though little progress, has been focused upon various means of evaluating comprehension accuracy – an issue that has been and remains unsettled. In Betts’ (1957) discussion, he notes that,
Two techniques are used commonly to appraise accuracy of comprehension: First, a series of questions each of which may be answered in a word, phrase, or sentence. Second, a single question which requires the pupil to reproduce what he has read [retelling] (p. 459).
Betts adds that,
Mere recall of facts provides an index to accuracy of comprehension. . . . To appraise quality and depth of comprehension, however, it is desirable to interrogate with inferential-type questions or to give the pupil an opportunity to express his between-the-lines reading (p. 461).
Early commercial IRIs, such as Silvaroli's Classroom Reading Inventory (1969), settled upon providing a series of questions for each passage for assessment of comprehension accuracy. These questions typically included a combination of literal, main-idea, detail, vocabulary, and inferential questions of a grounded, text-specific type (in other words, these "inferential" questions were still quite literal). This approach to quantifying passage comprehension based on answers to a series of questions of these types persisted until fairly recently, despite criticism that it measured the product, not the thought processes or cognitive-development aspects, of comprehension. Authors of several commercial IRIs responded by adding the option to evaluate comprehension by means of retellings, which are difficult to quantify and arguably even more literal than the other question types. Applegate, Quinn, and Applegate (2002) challenged IRIs for their failure to evaluate students' higher-order comprehension. These authors looked at eight IRIs published between 1993 and 2001, and found that,
more than 91% of the nearly 900 items that we classified required text-based thinking; that is, either pure recall or low-level inferences. Nearly two thirds of the items we classified fell into the purely literal category, requiring only that the reader remember information stated directly in the text (p. 178).
Thus, 24% (91% minus 67%) of the 900 items were text-bound inference questions. The driving purpose of these authors' study was to find out whether and to what extent IRIs used questions that required higher-order thinking, and in fact they found that questions of this type comprised fewer than 1% of the 900 items analyzed. The authors discussed the significant implications of the omission of higher-order thinking from IRI assessment – the failure of these instruments "to distinguish between those children who can remember text and those who can think about it." They cautioned that assessment drives curriculum and the way reading is taught, and "if the IRIs we use to assess children are insensitive to the differences between recalling and thinking about text, our ability to provide evidence of any given child's instructional needs, let alone to have an impact upon instruction, is severely limited" (pp. 178-179). This limitation had been noted before by H. G. Wells (1892), who put this sticky problem in these more figurative terms: "The examiner pipes and the teacher must dance—and the examiner sticks to the old tune." As previously noted, some commercial IRIs have included higher-order thinking questions in comprehension assessment. Nilsson's 2008 review of eight IRIs published since 2002 details the variety and complexity of current approaches to evaluating comprehension of both narrative and expository text passages, finding that "IRI authors provide measures of various dimensions, or levels, of reading comprehension – most commonly literal and inferential comprehension" (p. 532). Four of these eight IRIs did include questions requiring reader response of some type and in some format (Applegate et al., 2008; Cooter et al., 2007; Johns, 2005; Silvaroli & Wheelock, 2004).
Clearly, reading assessment should address comprehension processes as well as products, including the inclination and ability to apply higher-order thinking processes in response to reading. This type of question, however, tends to be more open-ended, and therefore serves up many variations on what might be an acceptable answer. That said, the individual administration of an IRI greatly reduces the problem of evaluating responses to questions that can have many "right" answers, since the examiner quickly becomes skilled, as most teachers already are, at distinguishing sound and relevant reasoning from ungrounded, or in Chase's terms, ungeared, answers. Even this of course is imperfect, but it is by all accounts sufficiently reliable to be the very same convention employed on several subtests of the Wechsler IQ tests, which also are individually administered. Remaining issues might be how many higher-order responses should be sought in proportion to literal and basic inferential responses, and whether results of an IRI based largely upon higher-order comprehension would be comparable to results of more conventional IRIs. Our research has led us to the conclusion that the convention of basing IRI Levels on literal and basic inferential responses is a valid assessment of this dimension of the educational goals of schools and schooling. However, this needs to be, and rather easily can be, supplemented with assessment of higher-order thinking. In the Informal Reading-Thinking Inventory, described next, we recommend three sets of questions on each passage: Reading the Lines, Reading Between the Lines, and Reading Beyond the Lines. The resulting Levels are identified based on the first two sets of questions, with responses to the Beyond the Lines questions evaluated separately and qualitatively, rather than quantitatively. It is the separation of these three types of questions into two sets, as was indicated by the factor-analytic studies, that makes this all workable.
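One way to picture this separation of question sets is a small scoring sketch: the Lines and Between the Lines questions feed the quantitative comprehension percentage used for Level estimation, while Beyond the Lines responses are held out for qualitative review. The function and field names here are hypothetical illustrations, not drawn from the published IR-TI record forms.

```python
# A sketch of how the IR-TI's two-set separation might be operationalized.
# Names and record structure are illustrative assumptions.

def score_passage(lines_correct, lines_total,
                  between_correct, between_total,
                  beyond_notes):
    """Score one passage: quantitative Level input vs. qualitative notes."""
    quantitative = 100 * (lines_correct + between_correct) / (lines_total + between_total)
    return {
        "comprehension_pct": round(quantitative, 1),  # feeds Level estimation
        "beyond_the_lines": beyond_notes,             # evaluated qualitatively
    }

result = score_passage(4, 5, 2, 3, ["relevant, original analogy offered"])
print(result["comprehension_pct"])  # 75.0
```

The design point is that the Beyond the Lines responses never enter the percentage at all, so a student's Level and a student's higher-order responding remain two separate observations, as the factor-analytic findings suggested they should be.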
In virtually all other assessment instruments where such questions might be used, they are consolidated into general comprehension or inferential comprehension (which again is very literal, as is evident in the fact that such questions correlate so highly with other conventional question types that there is no real point in treating them as a separate factor). Again, it was here that we relied upon several sophisticated factor-analytic studies to provide the empirical evidence that supports the separation that is the nexus of the IR-TI, and therefore of its "construct validity." Oddly, this point is made best on logical more so than statistical grounds. Simply ask any educator what they wish to accomplish through quality education, and the response will be some variation on students who can think critically and creatively. It is these qualities that convert mere schooling to education. It is these qualities that convert information into knowledge. And we all hope that it will be these qualities that will bring about a worldview – often referred to as the highest state of literacy – marked by greater empathy, tolerance, and a more enlightened vision of our tomorrow than of our today. It is an almost inane question that examiners raise when they obsessively ask how well these higher mental functions correlate with being able to read at a literal level. It is the inverse question that really counts. Our goal is not merely to keep individuals from being illiterate but rather to educate people who are literate – well read, reflective, critical, and constructive. The IR-TI moves these valued goals to at least co-equal position with merely learning how to read, especially now that we have increasing evidence that a fairly significant number of our most proficient readers may well be struggling, ungeared thinkers.
But let’s talk more now about how these valued objectives can be translated into classroom assessment, and implicitly into instructional actions.
Seven IR-TI Options for Discovering Specific, though often overlooked, Reading Needs – and Strengths
Option 1: Identification of Profiles A and B
Assessment of reading development is assessment of human nature. How one “reads” is virtually a projective test of how one perceives the world and one’s place in it. The minds of the children and older students for whom we have responsibility are as complex and different from one another as any DNA sampling. As each mystery of mind is unwrapped it becomes increasingly clear that assessment remains as much an art as a science. In 1995, we described Profiles A and B as follows:
Todd, a fifth grader, has never had trouble reading. He typically scores well on standardized tests and is able to answer most questions posed by Ms. Reese, his teacher, during discussions of reading selections. Ms. Reese is often surprised, however, at Todd’s reticence whenever these discussions become more open-ended and thoughtful, and opinions are encouraged. At those times, Todd generally has little to say.
Lakesha, Todd’s classmate, has struggled with reading, and has a history of placement in remedial programs. She often stumbles over words and clearly labors over assigned selections. However, after class discussions have provided her with the gist of what her abler classmates have read, she seems to blossom. Her contributions to the give-and-take of what is said are intelligent, pointed, and insightful. Ms. Reese is also puzzled by Lakesha, for she is, after all, a “remedial reader” (Manzo, Manzo & McKenna, 1995, p. 103).
Apparently Betts had struggled with this phenomenon as well, as illustrated in this quote from his 1957 textbook:
Two second-grade pupils, Sally and Billy, may be used to illustrate briefly the complexity of the instructional problem in a given grade. Both had chronological ages of seven years. Their reading achievement was estimated to be about ‘first-reader’ level, which indicated to the teacher that systematic reading instruction should be initiated at that level. . . . Sally’s basal [Independent] reading level was estimated to be about primer level, while Billy’s was assessed about preprimer level . . . . This was complicated by the fact that Billy was found to have the capacity to deal orally with the language and the facts presented in third-grade basal textbooks while Sally’s reading capacity was not above ‘first-reader’ level or beginning ‘second-reader’ level (pp. 439-440).
The Cardinal Theory Underlying the IR-TI
The cardinal theoretical construct underlying the IR-TI is that higher-order critical-creative response to reading is a factor independent of literal or text-bound inferential responding. While it might be assumed that students who can answer literal and text-specific inferential questions would also be able to answer higher-order questions, this does not appear to be the case. Our research suggests that one does not develop from basic to higher-order comprehension; rather, each develops separately. Thus, students' ability to respond to higher-order questions should be evaluated separately from their ability to respond to literal and inferential questions. Therefore, the IR-TI follows the traditional protocol for identifying Instructional level, using literal and basic inferential questions. However, it also provides higher-order questions for each passage. The recommendation is to identify Instructional level in the traditional way, and then return to one or more passages, having the student respond to the higher-order questions after re-reading. For this reason, we described the IR-TI as "constructed 'around' a traditional IRI," and suggested that "it might be helpful to think of the IR-TI as 'containing' an IRI" (Manzo, Manzo & McKenna, 1995). In this way, Todd and Sally's need to develop habits of reading beyond the lines, and Lakesha and Billy's ability to do so, both are discovered, validated, and can be more intentionally attended to.
Innovations in Assessment can Spur Innovations in Treatment
There is little in typical reading theory or assessment practice to either explain or identify these seeming anomalies, and therefore little instructional attention to meeting these students’ needs. However, simply identifying these profiles can lead to unique approaches to intervention (Manzo, Manzo, Barnhill & Thomas, 2000; Manzo, Manzo & Albee, 2004). For example, in a comparison study Martha Haggard (a distinguished professor who at the time was a doctoral student) discovered that struggling readers made statistically significantly greater gains in basic reading comprehension than did a control group when they began each session with a “Creative Thinking Activity” followed then by conventional skill-based remedial instruction. The control group received the conventional skill-based remedial instruction only, and theoretically should have done better since they spent more time on the task we would call “remedial reading instruction.”
Additional IR-TI Options
The six additional options described below permit the teacher to “discover” otherwise unattended habits of mind that drive and focus instructional level reading, and, as importantly, to incorporate these into diagnostic-teaching, or teaching that simultaneously reveals and addresses student strategy and skill needs.
Option 2: Evaluate the student’s “habit” of schema activation
The habit of intentionally calling to mind one’s personal experiences and knowledge related to an anticipated reading topic is an essential component of effective reading (Pearson, Hansen, & Gordon, 1979; Recht & Leslie, 1988), and most particularly reading at the Instructional level. For each passage in the IR-TI, three “schema activation” questions are provided to be asked prior to having the student read, and space is provided on the teacher record form for recording the student’s responses. For example, in a 3rd grade passage about things that grow in the desert, the questions are:
Why is it hard for things to live in the desert?
Do you know of any things that live in the desert?
Can you tell me about a cactus?
In a 7th grade passage about whaling, the questions are:
Do you know any economic uses of whales?
How did men hunt whales?
Can you describe a whale?
Observing a student’s willingness and ability to consider and respond to such questions prior to reading is a simple way to evaluate this habit of mind.
Option 3: Evaluate the student’s “habit” of personal response to reading
The habit of generating personal responses to reading is another essential component of effective study reading, and particularly study-type reading at the Instructional level. For each passage in the IR-TI, the option to assess the student's personal response is provided in the form of a prompt asking the degree to which the student "liked" reading the passage. For fictional passages, the prompt is, "How much did you enjoy reading this story?" For nonfiction passages, the prompt is, "How much would you enjoy reading the rest of this selection?" In either case, the student is prompted to respond on a scale of 1 to 5. This simple inquiry can reveal whether the student typically responds noncommittally to reading about most topics, or tends to express moderate or strong likes and dislikes – the latter suggesting the habit of personal response to reading. It can also be informative to compare students' comprehension accuracy when reading passages they report to have "liked" as compared to those to which they responded noncommittally or negatively. Steps taken to encourage personal responses to and connections with text can have a positive impact for years going forward, as reading becomes increasingly subject-area and fact-based.
Option 4: Evaluate the student’s “habit” of elaborative response to comprehension questions (“D” for detail)
A simple indicator of students' inclination to read between and beyond the lines is the level of detail that they offer in response to questions about what they have read. The IR-TI option to collect this information is a simple reminder on the teacher record form to record a notation of "D" alongside each answer for which the student provides details beyond a strictly correct answer to the question; it is, in a manner of speaking, an indicator of "engagement" – a strong predictor of future progress. A useful measure of this characteristic can be obtained by calculating the percent of questions answered with added detail out of the total number of questions asked.
Option 5: Evaluate the student’s “habit” of engagement during and after reading
Effective readers willingly engage in the task of reconstructing meaning during and following reading, understanding that during Instructional-level study reading, comprehension doesn't just "happen," as it does at the Independent reading level. They are able to respond to comprehension questions with answers that, even if inaccurate, at least are relevant to the questions. Struggling readers tend to have lower levels of engagement than effective readers (Manzo, 1969), often responding to comprehension questions with "throw away, go away" answers that are not even relevant to, or congruent with, the question. For example, given the question "What do all living things need to survive?" an incongruent response would be, "It never rains." A measure of "congruity" can be obtained by calculating the percent of questions answered congruently out of the total number of questions asked. An increase in a student's congruent responding has proven to be a good predictor of subsequent comprehension growth; it seems to be saying, "I'm with you, and trying."
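The two simple percentages described in Options 4 and 5 – the share of answers marked "D" for added detail and the share of answers judged congruent with the question – can be computed together from one record of scored responses. The record format and function name below are assumptions for illustration only.

```python
# A minimal sketch of the detail ("D") and congruity percentages from
# Options 4 and 5. Each response is recorded with two teacher judgments;
# the dict-based record format is an illustrative assumption.

def response_percentages(responses):
    """responses: list of dicts with boolean 'detail' and 'congruent' flags."""
    n = len(responses)
    detail_pct = 100 * sum(r["detail"] for r in responses) / n
    congruity_pct = 100 * sum(r["congruent"] for r in responses) / n
    return detail_pct, congruity_pct

answers = [
    {"detail": True,  "congruent": True},
    {"detail": False, "congruent": True},
    {"detail": False, "congruent": True},
    {"detail": True,  "congruent": False},
]
d, c = response_percentages(answers)
print(d, c)  # 50.0 75.0
```

Tracked across administrations, rising congruity in particular is the signal the text describes: an early, countable indicator that comprehension growth is likely to follow.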
Option 6: Evaluate the student’s “habit” of metacognitive monitoring
Effective readers keep mental track of when they are understanding what they are reading, as well as when comprehension is faltering (Baker & Brown, 1980; DiVesta, Hayward & Orlando, 1979; Flavell, 1979). The IR-TI option to include information about the student's metacognitive monitoring habits consists of a prompt at the end of the literal and basic inferential questions; simply, "How well do you think you have answered these factual and thought questions?" (to be answered on a scale of 1 to 5). If the "beyond the lines" questions are used for a passage, a similar prompt is provided: "How well do you think you have answered these last questions?" A measure of effective metacognitive monitoring can be obtained by subjective evaluation of how frequently the student's self-evaluation of comprehension aligned with actual comprehension. This simple measure is given functional validity by the extent to which it tends to increase in response to instruction. It also is a good predictor of progress in comprehension. It seems to be the student's habitual way of saying, "I can make sense out of this."
Option 7: Evaluate the student’s “habit” of composing thoughtful and well-organized written response to reading
In Option 1, described above, “beyond the lines” thinking can be evaluated by using a set of questions of these types that is provided for each passage. The last question in each of these “beyond the lines” question sets is constructed to also lend itself to be used for an optional written response to reading. For example, following the 3rd grade passage about the desert, the final question/writing prompt is, “Pretend that a cactus plant, an oak tree, and a jungle vine found themselves in the same place. Where might they be, and what might they say to one another?”
For more detailed evaluation of writing, the IR-TI offers two forms of an Informal Writing Inventory (IWI) under the same cover: one for primer to fourth grade students (and older students with more limited skills), and one for fifth grade through high school levels. Each form is structured to evaluate critical-evaluative and creative thinking as well as the mechanics of writing.
The IR-TI Simply Extends Heritage Characteristics of the IRI
The Informal Reading Inventory is the quintessential performance-based assessment. Originally designed as a flexible template for classroom use, it has been translated into more readily usable commercial versions that can be used for quick estimations of students' functioning levels and for in-depth assessment to suggest specific individual strengths and needs. These commercial versions have been fairly exhaustively analyzed over the years, and criticized on dozens of major and minor points. Most importantly, while the validity of the technique has rarely been questioned, the reliability of commercial versions has been challenged at regular intervals. Recently, Janet Spector (2005) reviewed previous studies on this topic, and analyzed the reliability documentation and data in the manuals of nine IRIs published between 2000 and 2004. She reported that fewer than half of the manuals provided reliability information, and none of the nine IRIs analyzed provided reliability data that met the criteria of the study. Her interpretations of her findings were harsh. Under the heading "Potential for harm," Spector concluded that "IRIs that provide no evidence of reliability should not be used to estimate a student's reading level, regardless of how casually the results will be applied." Even though IRIs are not standardized tests, she states, "any test – no matter how informal – has the potential for harm if the information it provides is imprecise or misleading" (pp. 599-600). Furthermore, while conceding that IRIs are "intuitively appealing instruments for assessing student performance in reading," Spector warned that school psychologists, educators in leadership roles, and teachers should be informed of the "limited utility" of IRIs, and should select "measures with adequate reliability for particular purposes" (p. 601).
We would argue that it is this kind of thinking that poses the greater danger to the vitality of the field, and the consequent services that reading educators are equipped to provide to children. McKenna (1983) cites Estes and Vaughn as urging teachers to “accept the philosophy of the IRI as being a strategy, not a test, for studying the behavior in depth” (p. 670). Blanchard and Johns (1986) have also argued that the IRI be considered an assessment strategy that teachers can use flexibly and differentially to access diagnostic information about students’ reading abilities. The IRI is a robust and time-tested tool for discovering specific reading needs. It lends itself to adaptation as research reveals increasing understandings about the reading process. Several “heritage” uses and traditions have accrued to the IRI that align well with current views on the purposes of reading assessment.
Use of the IRI to Invite Students’ Self Assessment
The IRI permits quantification and characterization of various dimensions of reading development, acquired in a one-to-one setting with careful attention to establishing optimal rapport between teacher and student. As such, it offers an ideal opportunity for the teacher to review and explain the findings to the student, and enlist the student’s involvement in explanation of the results, and setting goals for instruction.
Betts took care to point out that students should be aware of and invested in their own literacy development, and the uses of the IRI to involve students in self-assessment:
In the work directed by the writer and his students, it is assumed that the learner should be literate regarding his level of reading achievement, his specific needs, and his goals of learning. It has been found that this makes for intelligent co-operation between teacher and learner. As one boy exclaimed, ‘This makes sense. This is the first time I have known what I am trying to do’ (1957, p. 464).
Betts concludes, “An informal reading inventory is an excellent means of developing learner awareness of his reading needs” (1957, p. 478).
We propose that the added options in the IR-TI provide important topics to be considered in this type of informed and guided self-assessment. Students may be led to see that their prior knowledge of a topic does or does not tend to affect their comprehension; that their self-stated interest in a given passage does or does not tend to affect their comprehension; that their estimations of how well they answered comprehension questions do or do not tend to be accurate; that their answers to questions are or are not congruent with the questions; that their answers to questions tend to be brief and sometimes incomplete, or elaborative. Finally, and most importantly, students can be led to consider how easily and how well they tend to respond to beyond-the-lines questions, and how this might be affecting their regular classroom learning.
Regarding the latter point, the newly revised Standards for the Assessment of Reading and Writing (2010) by the Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English is posited upon the need to extend assessment beyond knowledge acquisition to assessment of inquiry and problem-solving. The Task Force proposes that the reading and writing standards in many districts “frequently omit important aspects of literacy such as self-initiated learning, questioning author’s bias, perspective taking, multiple literacies, social interactions around literacy, metacognitive strategies, and literacy dispositions,” adding that “students who are urged in classroom instruction to form opinions and back them up need to be assessed accordingly, rather than with tests that do not allow for creative or divergent thinking.” In this context, the first Standard states that:
First and foremost, assessment must encourage students to become engaged in literacy learning: to reflect on their own reading and writing in productive ways, and to set respective literacy goals. In this way, students become involved in and responsible for their own learning and are better able to assist the teacher in focusing instruction (p. 11).
Use of IRI Protocols and Results to Inform Diagnostic-Teaching
An experienced cook may translate a quarter teaspoon of salt into two pinches and a cup of flour into two handfuls. Similarly, teachers trained in the administration and interpretation of the Informal Reading Inventory can translate and apply its principles on an everyday, unstructured basis. This concept, too, is little changed from the earliest conceptions of the purposes and uses of an IRI:
In the classroom, the teacher can observe daily behavior in reading situations and, therefore, may need only a few minutes for an individual inventory. It is likely that sufficient information regarding the reading problems and needs of most children can be obtained from careful observations in class and small-group situations. In a clinic, a full half hour may be required for the inventory (Betts, 1957, p. 457).
As an example of an ultra-simplified application of the basic IRI criteria, these have been translated into a simple tool for students to use in selecting books for independent reading, commonly known as the “1-5-10” test: in approximately 100 words, if a student has trouble with 1 word (reading 99% with ease), the book will be easy to read; with 5 words (reading 95% with ease), the book will be fairly difficult; with 10 words (reading 90% with ease), the book may be too difficult to negotiate without a good deal of effort.
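The arithmetic of the "1-5-10" test can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the difficulty labels are our own, not part of any published protocol.

```python
def one_five_ten(words_missed, sample_size=100):
    """Rough book-selection estimate from the '1-5-10' rule of thumb.

    Converts the count of troublesome words in a roughly 100-word
    sample into a percentage read with ease and a difficulty label.
    Illustrative only; labels paraphrase the rule described in the text.
    """
    ease = 100 * (sample_size - words_missed) / sample_size
    if words_missed <= 1:        # about 99% with ease: easy reading
        label = "easy"
    elif words_missed <= 5:      # about 95%: fairly difficult
        label = "fairly difficult"
    else:                        # about 90% or below: likely too hard
        label = "too difficult without considerable effort"
    return ease, label

# A reader stumbling on 5 of 100 words
print(one_five_ten(5))  # (95.0, 'fairly difficult')
```

A student can, of course, apply the same rule mentally while browsing; the point of the sketch is only to make the thresholds explicit.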
Another example would be a content area classroom application in which a selection from the textbook is displayed for students to read silently; after reading, the selection is removed and students write responses to a series of literal and basic inferential questions (75% correct would be estimated to be Instructional level). With today’s scanning and PowerPoint technologies, these short checkpoints could be made weekly. With “clicker” technologies, students could see their scores immediately, and these could be stored for later analysis by the teacher. We would suggest that the final question be a beyond-the-lines question that students either complete as homework, or use as the prompt for a cooperative structure activity of some type.
The second Standard of Standards for the Assessment of Reading and Writing (2010) states that:
Most educational assessment takes place in the classroom, as teachers and students interact with one another. . . . This responsibility demands considerable expertise. First, unless teachers can recognize the significance of aspects of a student’s performance – a particular kind of error or behavior, for example – they will be unable to adjust instruction accordingly. They must know what signs to attend to in children’s literate behavior. This requires a deep knowledge of the skills and processes of reading and writing and a sound understanding of their own literacy practices (p. 14).
Teachers familiar with IRI protocols and criteria are well prepared to interpret students’ oral reading and comprehension behaviors and difficulties in terms of the significance of a given number of errors and the relative importance or unimportance of various types of errors. Very importantly, and intentionally, teachers familiar with including higher-order questions in IRI protocols will be more likely to include these questions in daily instructional interactions, and observe the ease or difficulty with which individual students are able to respond. Thus, an important reason for including higher-order questions in an IRI is to include this dimension of reading in teachers’ repertoire of categories for informal assessment. As noted in the Introduction to the IR-TI,
One of the chief values of the IR-TI is to help you, the teacher, to personalize the question types, formats, and formulas for estimating student progress while you are engaged in teaching and discussions with your students. Ideally, as you do so, your students will begin to ask similar questions of you, of one another, and also of themselves while they read (Manzo, Manzo & McKenna, 1995, p. 4).
In a study of second, third, and fifth graders, Barton, Freeman, Lewis, and Thompson (2001) taught students to use strategies for personal response to text. Not only did students acquire and independently use these strategies much more easily than the researchers had anticipated (p. 27), but after the study had been officially concluded, they noted that “the biggest surprise the researchers experienced was the [students’] unplanned use of metacognitive strategies throughout the day during curricular areas other than reading” (p. 38). This seems to say that when a desired skill is taught and reinforced as a strategy, it can have very strong transfer effects, and even become a new habit of mind.
A good way to incorporate IR-TI options into diagnostic-prescriptive teaching is to begin the school year by administering an Informal Textbook Inventory that adds prior knowledge questions, metacognitive monitoring questions, and beyond-the-lines comprehension questions. Use these initial data, particularly for the beyond-the-lines comprehension, to group students heterogeneously for postreading cooperative structure activities based on beyond-the-lines questions.
The Contentious Practice of Comprehension Assessment Based on Oral Reading At Sight
One aspect of the original IRI that reading educators have struggled with in recent years is the practice of basing reading assessment upon oral reading at sight. Given that the IRI is in other respects a performance based assessment tool, it has been difficult for some to reconcile this practice. Even Betts found it difficult to accept this practice, but conceded that it had a reasonable use for at least some passages in an IRI administration:
In general, the procedure for the administration of an informal reading inventory for the systematic observation of performance in controlled reading situations is based on the principles governing a directed reading activity. . . . An exception to the principles basic to a directed reading activity is that of using oral reading at sight (i.e., without previous silent-reading preparation) as one means of appraising reading performance. This does have, however, the advantage of uncovering responses to printed symbols that might be undetected in a well-directed reading activity (p. 457).
Numerous authors have recommended comparison of oral and silent reading comprehension, and cautioned that word recognition errors in oral reading at sight be analyzed only on passages below Frustration level. In addition to permitting observation and analysis of word recognition errors, basing comprehension assessment on oral reading at sight has the fortuitous effect of much more efficiently identifying Instructional level than when comprehension is based on either silent reading or oral re-reading. When reading material at one’s Independent, easy reading level, one is able to read straight through, from beginning to end, with almost complete comprehension. Thus, the highest level at which one can read with a minimum of 99% accuracy in word recognition and 90% in comprehension is one’s Independent level. Once Independent level is identified, the IRI protocol has the student continue to read higher-level passages as if these were at his or her Independent level. This makes it possible to identify the point at which a non-strategic, easy reading approach breaks down.
Asking the student to read orally at sight, at levels above Independent Level, removes the option to apply any active study-reading strategies such as re-reading, pausing to reflect, or skipping ahead; it even impedes comprehension monitoring, visualization, and generation of personal connections. Thus, the IRI criteria for Instructional level in oral reading at sight are set relatively low: a minimum of 95% accuracy in word recognition, and a minimum of 75% accuracy in comprehension. Seldom, elsewhere, would 75% comprehension be considered “good.” However, if the protocol were adjusted to permit students to read silently before reading orally, little could be observed about the silent reading strategies they might be using, and new criteria would need to be created for what would constitute Instructional Level under this different condition. Essentially, the IRI conditions for identifying Instructional Level might be redefined: rather than “the highest reading level at which systematic instruction can be initiated,” it would be more accurate to say that it is the highest reading level at which the student no longer can read passively, without applying study-reading strategies (or receiving instruction that models and/or prompts appropriate study-reading strategies).
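The criteria discussed above can be summarized in a small sketch. It is a simplification, assuming percent scores for a single passage read orally at sight; a real administration involves multiple passages, examiner judgment, and listening-capacity levels not modeled here.

```python
def estimate_level(word_accuracy, comprehension):
    """Estimate a reading level for one graded passage from the
    classic IRI criteria described in the text: Independent requires
    at least 99% word accuracy and 90% comprehension; Instructional
    requires at least 95% and 75%. Illustrative sketch only.
    """
    if word_accuracy >= 99 and comprehension >= 90:
        return "Independent"
    if word_accuracy >= 95 and comprehension >= 75:
        return "Instructional"
    return "Frustration"

print(estimate_level(99, 92))  # Independent
print(estimate_level(96, 80))  # Instructional
print(estimate_level(90, 60))  # Frustration
```

Note how the relatively low 75% comprehension criterion for Instructional level falls out of the oral-reading-at-sight condition discussed above.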
In the previous section, we described a technique for using oral reading at sight for a quick whole class comprehension assessment in content area classrooms. We would further suggest that teachers explain to students the difference between Independent Level “easy” reading and Instructional Level “study” reading, as explanation for why a score of 75% on the forced oral reading at sight task is an acceptable score.
Analysis of Oral Reading Errors
In developing the IR-TI, we departed from the systems popular at the time for analyzing oral reading errors to identify specific decoding needs. Rather than analyzing phonic elements in decoding errors, it is more parsimonious, that is, more efficient and effective, to simply follow up with a straightforward phonics test. The practice of evaluating errors in terms of the cue system predominantly used (orthographic, syntactic, semantic) seems to be a narrow window with limited instructional implications. In the IR-TI we provided a list of suggestions to use when looking for error patterns (Manzo, Manzo & McKenna, 1995, pp. 65-67). These are summarized and revised below, with specific instructional recommendations.
Figure 1
Reading Oral Reading Errors
Error patterns (predominance of a particular type of error), with possible diagnostic implications and instructional recommendations (in order of importance)
Teacher pronunciations: lacking basic sight words and strategies for decoding words that are not yet sight words at the passage level
Build basic sight word vocabulary
Build phonics strategies for acquiring new sight words
Build strategies for response to text at Listening level
Non-semantic substitutions/skipped words: un-inclined to reconstruct passage meaning; overlooks unfamiliar words and unfamiliar written language constructions at the passage level
Build strategies for higher-order response to reading at Independent level
Build strategies for schema activation and metacognitive comprehension fix-up at Instructional level
Identify unfamiliar meaning vocabulary words when reading, and build strategies for acquiring meanings of these words at Instructional level
Hesitations/repetitions/self-corrections: committed to reconstructing passage meaning, but lacking automaticity decoding words that are not yet sight words and/or unfamiliar with written language patterns at the passage level
Build strategies for decoding non-sight words at Independent level
Build strategies for meaning vocabulary acquisition at Instructional level
Build strategies for reconstructing meaning from language patterns at Listening level
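The mapping in Figure 1 is essentially a small decision table. A minimal sketch follows; the keys and recommendation strings paraphrase the figure and are our own wording, not part of any published IR-TI scoring system.

```python
# Illustrative lookup of the Figure 1 error patterns. Each pattern
# maps to its instructional recommendations, in order of importance.
ERROR_PATTERNS = {
    "teacher pronunciations": [
        "Build basic sight word vocabulary",
        "Build phonics strategies for acquiring new sight words",
        "Build strategies for response to text at Listening level",
    ],
    "non-semantic substitutions/skipped words": [
        "Build strategies for higher-order response at Independent level",
        "Build schema-activation and comprehension fix-up strategies "
        "at Instructional level",
        "Identify and acquire unfamiliar meaning vocabulary at "
        "Instructional level",
    ],
    "hesitations/repetitions/self-corrections": [
        "Build strategies for decoding non-sight words at Independent level",
        "Build meaning vocabulary acquisition strategies at "
        "Instructional level",
        "Build strategies for reconstructing meaning from language "
        "patterns at Listening level",
    ],
}

# Given a predominant error pattern, list the recommendations in order.
for rec in ERROR_PATTERNS["teacher pronunciations"]:
    print(rec)
```

Representing the figure this way simply underscores that the analysis turns on the predominant pattern, not on tallying individual phonic elements.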
Differential Uses of IRIs for Educators of Different Experience Levels
The IR-TI manual urges users to use it differentially according to purpose:
Because teachers’ purposes for giving the IR-TI will vary, no fixed method of administration exists. This is a consequence of the informal nature of all IRIs and should be viewed as a strength. The important thing is to clarify your own purpose and then to use the instrument accordingly. (Manzo, Manzo & McKenna, 1995, p. 27)
Toward the end of Betts’ chapter on Discovering Specific Reading Needs, he provided three complete and quite different forms for use in recording results of a full Informal Reading Inventory, when given by inexperienced examiners, by experienced examiners, and by participants in his reading clinic. This approach, differential record forms for different levels of experience (or perhaps for different purposes), makes a great deal of sense. Most commercial IRIs recommend that the various options offered should be used differentially, according to the purpose of the assessment; however, they tend to offer all of the options that might accompany a given reading selection on the same pages. A well-intentioned teacher or teacher-trainer may feel remiss in omitting any of these options, and thus seriously “over-test” in many cases. Perhaps a future solution would be an online IRI, in which the teacher is given a series of (explained) options initially, and the test administration and record form pages are then generated to include only those options.
Provisional Conclusions
The narrative of the IRI continues to evolve. However, some things can be provisionally concluded. IRIs are useful, time-tested tools that should be considered as a series of options to be selected from flexibly according to purpose. These options should reflect the most current understandings of the nature of reading processes and reading development, and the nature and goals of learning as well as of mere schooling. IRIs should cast a broad net to discover not only obvious needs and strengths but also less obvious ones, including those that characterize the highest states of literacy, such as cognitive development and worldview. It is a fairly simple matter to embed questions into the IRI interaction with students to tap inclinations and abilities to activate schema prior to reading, to have students evaluate their own comprehension, and to connect with and respond elaboratively to an author’s intended meanings and, especially in literature and persuasive pieces, to unintended but reasonably conjectural ones. In other words, the one-on-one setting of an IRI should be capitalized upon to evaluate a student’s ability to read beyond the lines, in order to determine whether this may be an overlooked need in an otherwise “proficient” reader, or an unacknowledged strength in an otherwise average to slightly below average reader. In truth, the IR-TI is best understood as a heuristic: a mechanism for aiding teachers in the discovery of the wonder of our different minds as well as their unique journeys to conventional academic reading objectives. The IR-TI is much more a system and profile analysis for estimating our individual paths to full literacy than just another measure of academic skills, which, after all, correlate so highly with one another that there is little justification for endless testing.
The fourth Standard of Standards for the Assessment of Reading and Writing (2010) cogently states the principle that has guided the present development of the IR-TI, and plans for future iterations:
Assessment that reflects an impoverished view of literacy will result in a diminished curriculum and distorted instruction and will not enable productive problem-solving or instructional improvement (p. 17).
Works Cited
Applegate, M.D., Quinn, K.B, & Applegate, A.J. (2002). Levels of thinking required by comprehension questions in informal reading inventories. Reading Teacher, 56(2), 174-180.
Applegate, M.D., Quinn, K.B., & Applegate, A.J. (2008). The critical reading inventory: Assessing students’ reading and thinking (2nd ed.). Upper Saddle River, NJ: Pearson Education.
Baker, L., & Brown, A.L. (1980). Metacognition and the reading process. In D. Pearson (Ed.), A handbook of reading research. New York: Plenum.
Barton, V., Freeman, B., Lewis, D., & Thompson, T. (2001). Metacognition: Effects on reading comprehension and reflective response. Unpublished masters thesis, Chicago: IL, Saint Xavier University.
Betts, E.A. (1957). Foundations of reading instruction (Rev. ed.). New York: American Book Company.
Blanchard, J., & Johns, J. (1986). Informal reading inventories--a broader view. Reading Psychology, 7(3), iii.
Chase, R. H. (1926) The ungeared mind. Philadelphia: F. A. Davis Company, publishers.
Cooter, R.B., Jr., Flynt, E.S., & Cooter, K.S. (2007). Comprehensive reading inventory: Measuring reading development in regular and special education classrooms. Upper Saddle River, NJ: Pearson Education.
DiVesta, F.J., Hayward, K.G., & Orlando, V.P. (1979). Developmental trends in monitoring text for comprehension. Child Development, 50, 97-105.
Casale, U. (1982). Small group approach to the further validation and refinement of a battery for assessing ‘progress toward reading maturity.’ Doctoral dissertation, University of Missouri-Kansas City. Dissertation Abstracts International, 43, 770A.
Flavell, J.H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.
Flippo, R., Holland, D., McCarthy, M., & Swinning, E. (2009). Asking the right questions: How to select an informal reading inventory. Reading Teacher, 63(1), 79-83.
Johns, J.L. (2005). Basic reading inventory (9th ed.). Dubuque, IA: Kendall/Hunt.
Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English (2010). Standards for the Assessment of Reading and Writing, Revised Edition. Newark, DE.
Manzo, A.V. (1969). Improving reading comprehension through reciprocal questioning (Doctoral dissertation, Syracuse University, Syracuse, NY). Dissertation Abstracts International, 30, 5344A.
Manzo, A.V., & Casale, U. (1981). A multivariate analysis of principle and trace elements in ‘mature reading comprehension.’ In G.H. McNinch (Ed.), Comprehension: Process and product. First Yearbook of the American Reading Forum. Athens, GA: American Reading Forum, 76-81.
Manzo, A.V., & Manzo, U.C. (1995). Creating an Informal Reading-Thinking Inventory. In K. Camperell, B. L. Hayes, & R. Telfer (Eds.), Literacy: Past, present and future. Fifteenth Yearbook of the American Reading Forum. Logan, UT: Utah State University.
Manzo, A.V., Manzo, U.C., & Albee, J.A. (2004). Reading assessment for diagnostic-prescriptive teaching (2nd ed.). NY: Wadsworth.
Manzo, A.V., Manzo, U., Barnhill, A., & Thomas, M. (2000). Proficient reader subtypes: Implications for literacy theory, assessment, and practice. Reading Psychology, 21(3), 217-232.
Manzo, A.V., Manzo, U.C., & McKenna, M.C. (1995). Informal reading-thinking inventory: An informal reading inventory (IRI) with options for assessing additional elements of higher-order literacy. Fort Worth, TX: Harcourt Brace College Publishers.
McKenna, M.C. (1983). Informal reading inventories: a review of the issues. Reading Teacher, 36(7), 670-679.
Nilsson, N. (2008). A critical analysis of eight informal reading inventories. Reading Teacher, 61(7), 526-536.
Pearson, P.D., Hansen, J. & Gordon, C. (1979). The effect of background knowledge on young children’s comprehension of explicit and implicit information. Journal of Reading Behavior, 11, 201-209.
Recht, D.R., & Leslie, L. (1988). The effect of prior knowledge on good and poor readers’ memory for text. Journal of Educational Psychology. 80, 16-20.
Silvaroli, N. J. (1969). Classroom reading inventory. Dubuque, IA: William C. Brown, Publishers.
Silvaroli, N.J. & Wheelock, W.H. (2004). Classroom reading inventory (10th ed.). NY: McGraw-Hill.
Spector, J. (2005). How reliable are informal reading inventories? Psychology in the Schools, 42(6), 593-603.
Assessment Formats for Discovering Typical & Otherwise Unrecognized Reading & Writing Needs – and Strengths
Ula Manzo, PhD
Professor and Chair, Reading Department
California State University, Fullerton
Anthony V. Manzo, PhD
Professor Emeritus, Director, Center for Studies in Higher Order Literacy,
Governor, Interdisciplinary Doctoral Studies
University of Missouri-Kansas City
“The teacher who learns to use the techniques described in this chapter will be well on her way to differentiating instruction.” Emmett Betts, Chapter 21, “Discovering Specific Reading Needs,” Foundations of Reading Instruction, 1957
Doing a diagnostic workup with the Informal Reading-Thinking Inventory is a little like taking a float trip down a familiar river but going farther than you had ever gone before. It is almost always a bit of the mysterious and a source of eye-opening discoveries.
A number of years ago, the authors, along with several doctoral students and other collaborators, began to look at the degree to which students’ Instructional levels, as identified by an Informal Reading Inventory (IRI), correspond to those students’ higher-order thinking abilities, i.e., the inclination and ability to also respond constructively, or critically and creatively, to text. What we found was a conundrum or apparent paradox that does not seem to surprise most seasoned teachers, but oddly is only spottily addressed, or dismissed as just so much static, in the literature of the field. The finding, in a word, is that in a typical heterogeneous group of students, there are likely to be about 12% whose ability to respond to higher-order questions is significantly below their Instructional level, while correspondingly there often would be found about another 12% whose ability to respond critically and creatively seemed to significantly exceed their Instructional level (Manzo & Casale, 1981; Casale, 1982; Manzo & Manzo, 1995; Manzo, Manzo & McKenna, 1995). We referred to the first group, who paradoxically did not seem to think as well as they read, as “Profile A,” and the second group, who seemed somehow able to think better than they could read, as “Profile B.” That which we called Profile A had been noticed as troublesome, especially in higher education, by Chase (1926), who referred to the condition as that of Ungeared Minds. Our initial objective was to tweak the IRI into an Informal Reading-Thinking Inventory (IR-TI) that ideally would have sufficient sensitivity to identify students on both sides of this seeming conundrum. So, at the very outset we were intentionally looking for unlikely weaknesses in some students and almost unimaginable strengths in others. (As an aside, we must admit that the quest was very engaging; we were going where few had dared to go before.
We found several studies where very accomplished researchers had simply dismissed the unexplained as a statistical quirk.) But to continue, at first we tried to find these paradoxical cases by reaching beyond the assessment of word recognition and basic comprehension simply by adding some questions that offered an optional way of assessing higher-order thinking. But there was much more to be done to give these few additional odd-angled questions what is known as “construct validity” – or legitimacy as a measure of this theorized factor or factors. This was especially important since the actual question types could sometimes be found here and there in conventional IRIs. That next step involved several rather sophisticated factor analytic studies and the application of a rather common sense idea that there was great potential value in being able to better identify and discover more specific reading needs and especially reading strengths than is discoverable with other more legacy-bound instruments.
Following is a brief overview of how the IRI in its various forms and formats has and has not evolved to reflect current views of reading development. Then we describe our own attempts to date to provide IRI options with the potential to broaden its outreach, especially into the realm of higher order thinking – the seminal challenge of twenty-first century education. Finally, we re-visit some “heritage” characteristics of the IRI that may be in danger of being lost when they should be preserved in the ebb and flow of theoretical constructs in assessment of specific reading needs.
The IRI as Emergent Science
The Informal Reading Inventory (IRI) was and remains an unrecognized bit of significant historical progress in cognitive and pedagogical science in a relatively pre-scientific era. Emmett Betts (1957), one of the founders of modern reading assessment and instruction, synthesized a great deal of research and practice on individual assessment of reading progress and, implicitly, of cognitive development. Flippo et al. (2009) note that Betts “is frequently credited with the development of IRI techniques, though some reading researchers trace their use even further back.” Indeed, Betts introduces his chapter on Discovering Specific Reading Needs with the acknowledgement that:
Space does not permit a summary of all the investigations pertinent to the use of informal reading inventories. A detailed explanation of all the whys and wherefores of a reading inventory would probably fill a sizable volume (p. 445).
Perhaps not surprisingly, then, the “Informal Reading Inventory,” or IRI, as he called the synthesized protocols detailed in his landmark textbook, ranks among the most sophisticated approaches ever created for the evaluation of the decoding and comprehension aspects of human cognitive development. It is rarely appreciated, but by setting research-based percentage criteria for accuracy in word recognition and comprehension when reading graded passages, this protocol provided what is rarely available in the study of human psychology: measurable, criterion-referenced dimensions of a complex cognitive process. The IRI quickly became a cornerstone of the field of Reading. In the half-century since its publication, research-informed understandings of the reading process have been applied to fine-tune certain aspects of the protocol, but in most fundamental respects it remains remarkably, one might say disturbingly, unchanged from the description in Betts’ chapter on reading diagnosis.
The quantitative criteria for estimating Independent, Instructional, Frustration and Capacity levels are little changed from those recommended in the earliest IRI protocols, and the means of evaluating accuracy in word recognition is a straightforward matter that also is little changed. The analysis and interpretation of oral reading errors to identify specific decoding needs has changed very little, though it has become popular to supplement this with analysis of errors from a psycholinguistic stance to discover the student’s development along a theoretical continuum from relying primarily on orthographic cues, then to semantic cues, to eventual reliance on semantic and syntactic cues. Even more attention, though little progress, has been focused upon various means of evaluating comprehension accuracy – an issue that has been and remains unsettled. In Betts’ (1957) discussion, he notes that,
Two techniques are used commonly to appraise accuracy of comprehension: First, a series of questions each of which may be answered in a word, phrase, or sentence. Second, a single question which requires the pupil to reproduce what he has read [retelling] (p. 459).
Betts adds that,
Mere recall of facts provides an index to accuracy of comprehension. . . . To appraise quality and depth of comprehension, however, it is desirable to interrogate with inferential-type questions or to give the pupil an opportunity to express his between-the-lines reading (p. 461).
Early commercial IRIs, such as Silvaroli’s Classroom Reading Inventory (1969), settled upon providing a series of questions for each passage for assessment of comprehension accuracy. These questions typically included a combination of literal, main idea and detail, vocabulary, and inferential questions of a grounded, text-specific type (in other words, these “inferential” questions were still quite literal). This approach to quantifying passage comprehension based on answers to a series of questions of these types persisted until fairly recently, despite criticism that it measured the product, not the thought processes or cognitive development aspects of comprehension. Authors of several commercial IRIs responded by adding the option to evaluate comprehension by means of retellings, which are difficult to quantify, and arguably even more literal than any of the question types. Applegate, Quinn, and Applegate (2002) challenged IRIs for their failure to evaluate students’ higher-order comprehension. These authors looked at eight IRIs published between 1993 and 2001, and found that,
more than 91% of the nearly 900 items that we classified required text-based thinking; that is, either pure recall or low-level inferences. Nearly two thirds of the items we classified fell into the purely literal category, requiring only that the reader remember information stated directly in the text (p. 178).
Thus, 24% (91% minus 67%) of the 900 items were text-bound inference questions. The driving purpose of these authors’ study was to find out whether and to what extent IRIs used questions that required higher-order thinking, and in fact they found that questions of this type comprised fewer than 1% of the 900 items analyzed. The authors discussed the significant implications of the omission of the assessment of higher-order thinking in IRIs – the failure of these instruments “to distinguish between those children who can remember text and those who can think about it.” They cautioned that assessment drives curriculum and the way reading is taught, and “if the IRIs we use to assess children are insensitive to the differences between recalling and thinking about text, our ability to provide evidence of any given child’s instructional needs, let alone to have an impact upon instruction, is severely limited” (pp. 178-179). This limitation had been noted before by H. G. Wells (1892), who put this sticky problem in these more figurative terms: “The examiner pipes and the teacher must dance—and the examiner sticks to the old tune.” As previously noted, some commercial IRIs have included higher-order thinking questions in comprehension assessment. Nilsson’s 2008 review of eight IRIs published since 2002 details the variety and complexity of current approaches to evaluating comprehension of both narrative and expository text passages, finding that “IRI authors provide measures of various dimensions, or levels, of reading comprehension – most commonly literal and inferential comprehension” (p. 532). Four of these eight IRIs did include questions requiring reader response of some type and in some format (Applegate et al., 2008; Cooter et al., 2007; Johns, 2005; Silvaroli & Wheelock, 2004).
Clearly, reading assessment should address comprehension processes as well as products, including the inclination and ability to apply higher-order thinking processes in response to reading. This type of question, however, tends to be more open-ended, serving up many variations on what might count as an acceptable answer. That said, the individual administration of an IRI greatly reduces the problem of evaluating responses to questions that can have many “right” answers, since the examiner quickly becomes skilled, as most teachers already are, at distinguishing sound and relevant reasoning from ungrounded, or in Chase’s terms, “ungeared,” answers. Even this of course is imperfect, but it is by all accounts sufficiently reliable to be the very same convention employed on several subtests of the Wechsler IQ tests, which also are individually administered. Remaining issues might be how many higher-order responses should be sought in proportion to literal and basic inferential responses, and whether results of an IRI based largely upon higher-order comprehension would be comparable to results of more conventional IRIs. Our research has led us to the conclusion that the convention of basing IRI Levels on literal and basic inferential responses is a valid assessment of this dimension of the educational goals of schools and schooling. However, this needs to be, and rather easily can be, supplemented with assessment of higher-order thinking. In the Informal Reading-Thinking Inventory, described next, we recommend three sets of questions on each passage: Reading the Lines, Reading Between the Lines, and Reading Beyond the Lines. The resulting Levels are identified based on the first two sets of questions, with responses to the Beyond the Lines questions evaluated separately and qualitatively, rather than quantitatively. It is the separation of these three types of questions into two sets, as indicated by the factor analytic studies, that makes this all workable.
In virtually all other assessment instruments where such questions might be used, they are consolidated into general comprehension or inferential comprehension (which again is very literal, as is evident in the fact that such questions correlate so highly with other conventional question types that there is no real point in treating them as a separate factor). Again, it was here that we relied upon several sophisticated factor analytic studies to provide the empirical evidence that supports the separation that is the nexus of the IR-TI, and therefore of its “construct validity.” Oddly, this point is made best on logical more than statistical grounds. Simply ask any educator what they wish to accomplish through quality education, and the response will be some variation on students who can think critically and creatively. It is these qualities that convert mere schooling to education. It is these qualities that convert information into knowledge. And we all hope that it will be these qualities that will bring about a worldview – often referred to as the highest state of literacy – marked by greater empathy, tolerance, and a more enlightened vision of our tomorrow than of our today. It is an important but almost inane question that examiners raise when they obsessively ask how well these higher mental functions correlate with being able to read at a literal level. It is the inverse question that really counts. Our goal is not merely to keep individuals from being illiterate but rather to educate people who are literate - well read, reflective, critical, and constructive. The IR-TI moves these valued goals to at least co-equal position with merely learning how to read, especially now that we have increasing evidence that a fairly significant number of our most proficient readers may well be struggling, ungeared thinkers.
But let’s talk more now about how these valued objectives can be translated into classroom assessment, and implicitly into instructional actions.
Seven IR-TI Options for Discovering Specific, though often overlooked, Reading Needs – and Strengths
Option 1: Identification of Profiles A and B
Assessment of reading development is assessment of human nature. How one “reads” is virtually a projective test of how one perceives the world and one’s place in it. The minds of the children and older students for whom we have responsibility are as complex and different from one another as any DNA sampling. As each mystery of mind is unwrapped it becomes increasingly clear that assessment remains as much an art as a science. In 1995, we described Profiles A and B as follows:
Todd, a fifth grader, has never had trouble reading. He typically scores well on standardized tests and is able to answer most questions posed by Ms. Reese, his teacher, during discussions of reading selections. Ms. Reese is often surprised, however, at Todd’s reticence whenever these discussions become more open-ended and thoughtful, and opinions are encouraged. At those times, Todd generally has little to say.
Lakesha, Todd’s classmate, has struggled with reading, and has a history of placement in remedial programs. She often stumbles over words and clearly labors over assigned selections. However, after class discussions have provided her with the gist of what her abler classmates have read, she seems to blossom. Her contributions to the give-and-take of what is said are intelligent, pointed, and insightful. Ms. Reese is also puzzled by Lakesha, for she is, after all, a “remedial reader” (Manzo, Manzo & McKenna, 1995, p. 103).
Apparently Betts had struggled with this phenomenon as well, as illustrated in this quote from his 1957 textbook:
Two second-grade pupils, Sally and Billy, may be used to illustrate briefly the complexity of the instructional problem in a given grade. Both had chronological ages of seven years. Their reading achievement was estimated to be about ‘first-reader’ level, which indicated to the teacher that systematic reading instruction should be initiated at that level. . . . Sally’s basal [Independent] reading level was estimated to be about primer level, while Billy’s was assessed about preprimer level . . . . This was complicated by the fact that Billy was found to have the capacity to deal orally with the language and the facts presented in third-grade basal textbooks while Sally’s reading capacity was not above ‘first-reader’ level or beginning ‘second-reader’ level (pp. 439-440).
The Cardinal Theory Underlying the IR-TI
The cardinal theoretical construct underlying the IR-TI is that higher-order critical-creative response to reading is a factor independent of literal or text-bound inferential responding. While it might be assumed that students who can answer literal and text-specific inferential questions would be able to answer higher-order questions, this does not appear to be the case. Our research suggests that one does not develop from basic comprehension to higher-order comprehension; rather, development proceeds separately in each. Thus, students’ ability to respond to higher-order questions should be evaluated separately from their ability to respond to literal and inferential questions. Therefore, the IR-TI follows the traditional protocol for identifying Instructional level, using literal and basic inferential questions. However, it also provides higher-order questions for each passage. The recommendation is to identify Instructional level in the traditional way, and then return to one or more passages, having the student re-read and respond to the higher-order questions. For this reason, we described the IR-TI as “constructed ‘around’ a traditional IRI,” and suggested that “it might be helpful to think of the IR-TI as ‘containing’ an IRI” (Manzo, Manzo & McKenna, 1995). In this way, Todd and Sally’s need to develop habits of reading beyond the lines, and Lakesha and Billy’s ability to do so, both are discovered, validated, and can be more intentionally attended to.
Innovations in Assessment can Spur Innovations in Treatment
There is little in typical reading theory or assessment practice to either explain or identify these seeming anomalies, and therefore little instructional attention to meeting these students’ needs. However, simply identifying these profiles can lead to unique approaches to intervention (Manzo, Manzo, Barnhill & Thomas, 2000; Manzo, Manzo & Albee, 2004). For example, in a comparison study Martha Haggard (a distinguished professor who at the time was a doctoral student) discovered that struggling readers made statistically significantly greater gains in basic reading comprehension than did a control group when they began each session with a “Creative Thinking Activity” followed then by conventional skill-based remedial instruction. The control group received the conventional skill-based remedial instruction only, and theoretically should have done better since they spent more time on the task we would call “remedial reading instruction.”
Additional IR-TI Options
The six additional options described below permit the teacher to “discover” otherwise unattended habits of mind that drive and focus instructional level reading, and, as importantly, to incorporate these into diagnostic-teaching, or teaching that simultaneously reveals and addresses student strategy and skill needs.
Option 2: Evaluate the student’s “habit” of schema activation
The habit of intentionally calling to mind one’s personal experiences and knowledge related to an anticipated reading topic is an essential component of effective reading (Pearson, Hansen, & Gordon, 1979; Recht & Leslie, 1988), and most particularly reading at the Instructional level. For each passage in the IR-TI, three “schema activation” questions are provided to be asked prior to having the student read, and space is provided on the teacher record form for recording the student’s responses. For example, in a 3rd grade passage about things that grow in the desert, the questions are:
Why is it hard for things to live in the desert?
Do you know of any things that live in the desert?
Can you tell me about a cactus?
In a 7th grade passage about whaling, the questions are:
Do you know any economic uses of whales?
How did men hunt whales?
Can you describe a whale?
Observing a student’s willingness and ability to consider and respond to such questions prior to reading is a simple way to evaluate this habit of mind.
Option 3: Evaluate the student’s “habit” of personal response to reading
The habit of generating personal responses to reading is another essential component of effective study reading, and particularly study-type reading at the Instructional level. For each passage in the IR-TI, the option to assess the student’s personal response is provided in the form of a prompt asking the degree to which the student has “liked” reading the passage. For fictional passages, the prompt is, “How much did you enjoy reading this story?” For nonfiction passages, the prompt is, “How much would you enjoy reading the rest of this selection?” In either case, the student is prompted to respond on a scale of 1 to 5. This simple inquiry can reveal whether the student typically responds noncommittally to reading about most topics, or tends to express moderate or strong likes and dislikes – the latter suggesting the habit of personal response to reading. It can also be informative to compare students’ comprehension accuracy when reading passages they report to have “liked” as compared to those to which they responded noncommittally or negatively. Steps taken to encourage personal responses to and connections with text can have a positive impact for years going forward as reading becomes increasingly subject-area and fact-based.
Option 4: Evaluate the student’s “habit” of elaborative response to comprehension questions (“D” for detail)
A simple indicator of students’ inclination to read between and beyond the lines is the level of detail that they offer in response to questions about what they have read. The IR-TI option to collect this information is a simple reminder, on the teacher record form, to record a notation of “D” alongside each answer for which the student provides details additional to a strictly correct answer to the question; it is, in a manner of speaking, an indicator of “engagement” – a strong predictor of future progress. A useful measure of this characteristic can be obtained by calculating the percent of questions answered with added detail out of the total number of questions asked.
Option 5: Evaluate the student’s “habit” of engagement during and after reading
Effective readers willingly engage in the task of reconstructing meaning during and following reading, understanding that during Instructional level study reading, comprehension doesn’t just “happen,” as it does at the Independent reading level. They are able to respond to comprehension questions with answers that, even if inaccurate, at least are relevant to the questions. Struggling readers tend to have lower levels of engagement than effective readers (Manzo, 1969), often responding to comprehension questions with “throw away, go away” answers that are not even relevant to, or congruent with, the question. For example, given the question “What do all living things need to survive?” an incongruent response would be, “It never rains.” A measure of “congruity” can be obtained by calculating the percent of questions answered congruently out of the total number of questions asked. An increase in a student’s congruent responding has proven to be a good predictor of subsequent comprehension growth; it seems to be saying, “I’m with you, and trying.”
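The percent calculations suggested in Options 4 and 5 can be sketched in a few lines of code. This is a minimal illustration only; the function name and sample data are invented here and are not part of the IR-TI protocol:

```python
def percent_flagged(flags):
    """Percent of answers carrying a given notation -- e.g., "D" for
    added detail (Option 4) or congruence with the question (Option 5) --
    out of the total number of questions asked."""
    if not flags:
        return 0.0
    return 100.0 * sum(1 for f in flags if f) / len(flags)

# Hypothetical record: 6 of 8 answers were congruent with the question.
congruent = [True, True, False, True, True, True, False, True]
print(percent_flagged(congruent))  # -> 75.0
```

The same tally works for either option; only what the examiner chooses to flag changes.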
Option 6: Evaluate the student’s “habit” of metacognitive monitoring
Effective readers keep mental track of when they are understanding what they are reading, as well as when comprehension is faltering (Baker & Brown, 1980; DiVesta, Hayward & Orlando, 1979; Flavell, 1979). The IR-TI option to include information about the student’s metacognitive monitoring habits consists of a prompt at the end of the literal and basic inferential questions; simply, “How well do you think you have answered these factual and thought questions?” (to be answered on a scale of 1 to 5). If the “beyond the lines” questions are used for a passage, a similar question prompt is provided: “How well do you think you have answered these last questions?” A measure of effective metacognitive monitoring can be obtained by subjective evaluation of how frequently the student’s self-evaluation of comprehension aligned with actual comprehension. This simple measure is given functional validity by the extent to which it tends to increase in response to instruction. It also is a good predictor of progress in comprehension. It seems to be the student’s habitual way of saying, “I can make sense out of this.”
Option 7: Evaluate the student’s “habit” of composing thoughtful and well-organized written response to reading
In Option 1, described above, “beyond the lines” thinking can be evaluated by using a set of questions of these types that is provided for each passage. The last question in each of these “beyond the lines” question sets is constructed to lend itself to use as an optional written response to reading. For example, following the 3rd grade passage about the desert, the final question/writing prompt is, “Pretend that a cactus plant, an oak tree, and a jungle vine found themselves in the same place. Where might they be, and what might they say to one another?”
For more detailed evaluation of writing, the IR-TI offers two forms of an Informal Writing Inventory (IWI) under the same cover: one for primer to fourth grade students (and older students with more limited skills), and one for fifth grade through high school levels. Each form is structured to evaluate critical-evaluative and creative thinking as well as the mechanics of writing.
The IR-TI Simply Extends Heritage Characteristics of the IRI
The Informal Reading Inventory is the quintessential performance based assessment. Originally designed as a flexible template for classroom use, it has been translated into more readily usable commercial versions that can be used for quick estimations of students’ functioning levels and for in-depth assessment to suggest specific individual strengths and needs. These commercial versions have been fairly exhaustively analyzed over the years, and criticized on dozens of major and minor points. Most importantly, while the validity of the technique has rarely been questioned, the reliability of commercial versions has been challenged at regular intervals. Recently, Janet Spector (2005) reviewed previous studies on this topic, and analyzed the reliability documentation and data in the manuals of nine IRIs published between 2000 and 2004. She reported that fewer than half of the manuals provided reliability information, and none of the nine IRIs analyzed provided reliability data that met the criteria of the study. Her interpretations of her findings were harsh. Under the heading, “Potential for harm,” Spector concluded that “IRIs that provide no evidence of reliability should not be used to estimate a student’s reading level, regardless of how casually the results will be applied.” Even though IRIs are not standardized tests, she states, “any test – no matter how informal – has the potential for harm if the information it provides is imprecise or misleading” (pp. 599-600). Furthermore, while conceding that IRIs are “intuitively appealing instruments for assessing student performance in reading,” Spector warned that school psychologists, educators in leadership roles, and teachers should be informed of the “limited utility” of IRIs, and select “measures with adequate reliability for particular purposes” (p. 601).
We would argue that it is this kind of thinking that poses the greater danger to the vitality of the field, and the consequent services that reading educators are equipped to provide to children. McKenna (1983) cites Estes and Vaughn as urging teachers to “accept the philosophy of the IRI as being a strategy, not a test, for studying the behavior in depth” (p. 670). Blanchard and Johns (1986) have also argued that the IRI be considered an assessment strategy that teachers can use flexibly and differentially to access diagnostic information about students’ reading abilities. The IRI is a robust and time-tested tool for discovering specific reading needs. It lends itself to adaptation as research reveals increasing understandings about the reading process. Several “heritage” uses and traditions have accrued to the IRI that align well with current views on the purposes of reading assessment.
Use of the IRI to Invite Students’ Self Assessment
The IRI permits quantification and characterization of various dimensions of reading development, acquired in a one-to-one setting with careful attention to establishing optimal rapport between teacher and student. As such, it offers an ideal opportunity for the teacher to review and explain the findings to the student, and to enlist the student’s involvement in interpreting the results and setting goals for instruction.
Betts took care to point out that students should be aware of and invested in their own literacy development, and the uses of the IRI to involve students in self-assessment:
In the work directed by the writer and his students, it is assumed that the learner should be literate regarding his level of reading achievement, his specific needs, and his goals of learning. It has been found that this makes for intelligent co-operation between teacher and learner. As one boy exclaimed, ‘This makes sense. This is the first time I have known what I am trying to do’ (1957, p. 464).
Betts concludes, “An informal reading inventory is an excellent means of developing learner awareness of his reading needs” (1957, p. 478).
We propose that the added options in the IR-TI provide important topics to be considered in this type of informed and guided self-assessment. Students may be led to see that their prior knowledge of a topic does or does not tend to affect their comprehension; that their self-stated interest in a given passage does or does not tend to affect their comprehension; that their estimations of how well they answered comprehension questions do or do not tend to be accurate; that their answers to questions are or are not congruent with the questions; that their answers to questions tend to be brief and sometimes incomplete, or elaborative. Finally, and most importantly, students can be led to consider how easily and how well they tend to respond to beyond-the-lines questions, and how this might be affecting their regular classroom learning.
Regarding the latter point, the newly revised Standards for the Assessment of Reading and Writing (2010) by the Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English is posited upon the need to extend assessment beyond knowledge acquisition, to assessment of inquiry and problem-solving. They propose that the reading and writing standards in many districts “frequently omit important aspects of literacy such as self-initiated learning, questioning author’s bias, perspective taking, multiple literacies, social interactions around literacy, metacognitive strategies, and literacy dispositions,” adding that “students who are urged in classroom instruction to form opinions and back them up need to be assessed accordingly, rather than with tests that do not allow for creative or divergent thinking.” In this context, the first Standard states that:
First and foremost, assessment must encourage students to become engaged in literacy learning: to reflect on their own reading and writing in productive ways, and to set respective literacy goals. In this way, students become involved in and responsible for their own learning and are better able to assist the teacher in focusing instruction (p. 11).
Use of IRI Protocols and Results to Inform Diagnostic-Teaching
An experienced cook may translate a quarter teaspoon of salt into two pinches and a cup of flour into two handfuls. Similarly, teachers trained in the administration and interpretation of Informal Reading Inventories can translate and apply their principles on an everyday, unstructured basis. This concept, too, is little changed from the earliest conceptions of the purposes and uses of an IRI:
In the classroom, the teacher can observe daily behavior in reading situations and, therefore, may need only a few minutes for an individual inventory. It is likely that sufficient information regarding the reading problems and needs of most children can be obtained from careful observations in class and small-group situations. In a clinic, a full half hour may be required for the inventory (Betts, 1957, p. 457).
As an example of an ultra-simplified application of the basic IRI criteria, these have been translated into a simple tool for students to use in selecting books for independent reading, commonly known as the “1-5-10” test: in approximately 100 words, if a student has trouble with 1 word (reading 99% with ease), the book will be easy to read; with 5 words (reading 95% with ease), the book will be fairly difficult; with 10 words (reading 90% with ease), the book may be too difficult to negotiate without a good deal of effort.
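The “1-5-10” rule of thumb above can be expressed as a small calculation. This is a sketch under the stated criteria only; the function name and category labels are invented for illustration:

```python
def one_five_ten(words_missed, words_sampled=100):
    """Rough book-selection guide based on the '1-5-10' test:
    in a ~100-word sample, 1 missed word (~99% ease) suggests easy
    reading, 5 (~95%) a fairly difficult book, and 10 or more (~90%
    or below) a book that may be too difficult without much effort."""
    ease = 100.0 * (words_sampled - words_missed) / words_sampled
    if ease >= 99.0:
        return "easy"
    if ease >= 95.0:
        return "fairly difficult"
    return "probably too difficult"

print(one_five_ten(1))   # -> easy
print(one_five_ten(5))   # -> fairly difficult
print(one_five_ten(10))  # -> probably too difficult
```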
Another example would be a content area classroom application in which a selection from the textbook is displayed for students to read silently; after reading, the selection is removed and students write responses to a series of literal and basic inferential questions (75% correct would be estimated to be Instructional level). With today’s scanning and PowerPoint technologies, these short checkpoints could be made weekly. With “clicker” technologies, students could see their scores immediately, and these could be stored for later analysis by the teacher. We would suggest that the final question be a beyond-the-lines question that students either complete as homework, or use as the prompt for a cooperative structure activity of some type.
The second Standard of Standards for the Assessment of Reading and Writing (2010) states that:
Most educational assessment takes place in the classroom, as teachers and students interact with one another. . . . This responsibility demands considerable expertise. First, unless teachers can recognize the significance of aspects of a student’s performance – a particular kind of error or behavior, for example – they will be unable to adjust instruction accordingly. They must know what signs to attend to in children’s literate behavior. This requires a deep knowledge of the skills and processes of reading and writing and a sound understanding of their own literacy practices (p. 14).
Teachers familiar with IRI protocols and criteria are well prepared to interpret students’ oral reading and comprehension behaviors and difficulties in terms of the significance of a given number of errors and the relative importance or unimportance of various types of errors. Very importantly, and intentionally, teachers familiar with including higher-order questions in IRI protocols will be more likely to include these questions in daily instructional interactions, and observe the ease or difficulty with which individual students are able to respond. Thus, an important reason for including higher-order questions in an IRI is to include this dimension of reading in teachers’ repertoire of categories for informal assessment. As noted in the Introduction to the IR-TI,
One of the chief values of the IR-TI is to help you, the teacher, to personalize the question types, formats, and formulas for estimating student progress while you are engaged in teaching and discussions with your students. Ideally, as you do so, your students will begin to ask similar questions of you, of one another, and also of themselves while they read (Manzo, Manzo & McKenna, 1995, p. 4).
In a study of second, third, and fifth graders, Barton, Freeman, Lewis and Thompson (1981) taught students to use strategies for personal response to text. Not only did students acquire and independently use these strategies much more easily than the researchers had anticipated (p. 27), but after the study had been officially concluded, they noted that “the biggest surprise the researchers experienced was the [students’] unplanned use of metacognitive strategies throughout the day during curricular areas other than reading” (p. 38). This seems to say that when a desired skill is taught and reinforced as a strategy, it can have very strong transfer effects, and even become a new habit of mind.
A good way to incorporate IR-TI options into diagnostic-prescriptive teaching is to begin the school year by administering an Informal Textbook Inventory that adds prior knowledge questions, metacognitive monitoring questions, and beyond-the-lines comprehension questions. Use these initial data, particularly for the beyond-the-lines comprehension, to group students heterogeneously for postreading cooperative structure activities based on beyond-the-lines questions.
The Contentious Practice of Comprehension Assessment Based on Oral Reading At Sight
One aspect of the original IRI that reading educators have struggled with in recent years is the practice of basing reading assessment upon oral reading at sight. Given that the IRI is in other respects a performance based assessment tool, it has been difficult for some to reconcile this practice. Even Betts found it difficult to accept this practice, but conceded that it had a reasonable use for at least some passages in an IRI administration:
In general, the procedure for the administration of an informal reading inventory for the systematic observation of performance in controlled reading situations is based on the principles governing a directed reading activity. . . . An exception to the principles basic to a directed reading activity is that of using oral reading at sight (i.e., without previous silent-reading preparation) as one means of appraising reading performance. This does have, however, the advantage of uncovering responses to printed symbols that might be undetected in a well-directed reading activity (p. 457).
Numerous authors have recommended comparison of oral and silent reading comprehension, and cautioned that word recognition errors in oral reading at sight be analyzed only on passages below Frustration level. In addition to permitting observation and analysis of word recognition errors, basing comprehension assessment on oral reading at sight has the fortuitous effect of much more efficiently identifying Instructional level than when comprehension is based on either silent reading or oral re-reading. When reading material at one’s Independent, easy reading level, one is able to read straight through, from beginning to end, with almost complete comprehension. Thus, the highest level at which one can read with a minimum of 99% word recognition accuracy and 90% comprehension is one’s Independent level. Once Independent level is identified, the IRI protocol has the student continue to read higher-level passages as if these were at his or her Independent level. This makes it possible to identify the point at which a non-strategic, easy reading approach breaks down.
Asking the student to read orally at sight, at levels above Independent Level, removes the option to apply any active study-reading strategies such as re-reading, pausing to reflect, or skipping ahead; it even impedes comprehension monitoring, visualization, and generation of personal connections. Thus, the IRI criteria for Instructional level in oral reading at sight are set relatively low: a minimum of 95% accuracy in word recognition, and a minimum of 75% accuracy in comprehension. Seldom, elsewhere, would 75% comprehension be considered “good.” However, if the protocol were adjusted to permit students to read silently before reading orally, little could be observed about the silent reading strategies they might be using, and new criteria would need to be created for what would constitute Instructional Level under this different condition. Essentially, the IRI conditions for identifying Instructional Level might be redefined: rather than “the highest reading level at which systematic instruction can be initiated,” it would be more accurate to say that it is the highest reading level at which the student no longer can read passively, without applying study-reading strategies (or receiving instruction that models and/or prompts appropriate study-reading strategies).
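The quantitative criteria just described (99% word recognition and 90% comprehension for Independent level; 95% and 75% for Instructional level in oral reading at sight) can be sketched as a simple classification. The function name is illustrative, and a real administration would of course weigh qualitative signs alongside these cut-offs:

```python
def estimate_level(word_accuracy, comprehension):
    """Classic quantitative IRI criteria for a single passage read
    orally at sight, taking percentages as inputs:
    Independent   = at least 99% word recognition and 90% comprehension;
    Instructional = at least 95% word recognition and 75% comprehension;
    below that, the passage approaches Frustration level."""
    if word_accuracy >= 99 and comprehension >= 90:
        return "Independent"
    if word_accuracy >= 95 and comprehension >= 75:
        return "Instructional"
    return "Frustration"

print(estimate_level(99.5, 95))  # -> Independent
print(estimate_level(96, 80))    # -> Instructional
print(estimate_level(92, 60))    # -> Frustration
```

In practice the examiner applies these criteria passage by passage, taking the highest level at which each threshold is still met.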
In the previous section, we described a technique for using oral reading at sight as a quick whole-class comprehension assessment in content area classrooms. We would further suggest that teachers explain to students the difference between Independent Level “easy” reading and Instructional Level “study” reading, as an explanation of why a score of 75% on the forced oral reading at sight task is acceptable.
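The level criteria described above (99% word recognition and 90% comprehension for Independent level; 95% and 75% for Instructional level) can be summarized in a short sketch. Python is used purely for illustration; the function name and the treatment of everything below Instructional level as Frustration level are our own simplifications, since actual IRI interpretation also weighs qualitative observations:

```python
def reading_level(word_accuracy: float, comprehension: float) -> str:
    """Classify performance on a single IRI passage using the
    quantitative criteria described in the text. Illustrative only."""
    if word_accuracy >= 0.99 and comprehension >= 0.90:
        return "Independent"
    if word_accuracy >= 0.95 and comprehension >= 0.75:
        return "Instructional"
    return "Frustration"
```

For example, a passage read with 96% word accuracy and 80% comprehension would be classified as Instructional level under these criteria.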
Analysis of Oral Reading Errors
In developing the IR-TI, we departed from the systems popular at the time for analyzing oral reading errors in terms of specific decoding elements. Rather than analyzing phonic elements in decoding errors, it is more parsimonious, that is, more efficient and effective, to simply follow up with a straightforward phonics test. The practice of evaluating errors in terms of the cue system predominantly used (orthographic, syntactic, semantic) seems to be a narrow window with limited instructional implications. In the IR-TI we provided a list of suggestions to use when looking for error patterns (Manzo, Manzo & McKenna, 1995, pp. 65-67). These are summarized and revised below, with specific instructional recommendations.
Figure 1. Reading Oral Reading: Error patterns (predominance of a particular type of error), with possible diagnostic implications and instructional recommendations (in order of importance)
Teacher pronunciations: lacking basic sight words and strategies for decoding words that are not yet sight words at the passage level
• Build basic sight word vocabulary
• Build phonics strategies for acquiring new sight words
• Build strategies for response to text at Listening level
Non-semantic substitutions/skipped words: un-inclined to reconstruct passage meaning; overlooks unfamiliar words and unfamiliar written language constructions at the passage level
• Build strategies for higher-order response to reading at Independent level
• Build strategies for schema activation and metacognitive comprehension fix-up at Instructional level
• Identify unfamiliar meaning vocabulary words when reading, and build strategies for acquiring meanings of these words at Instructional level
Hesitations/repetitions/self-corrections: committed to reconstructing passage meaning, but lacking automaticity in decoding words that are not yet sight words and/or unfamiliar with written language patterns at the passage level
• Build strategies for decoding non-sight words at Independent level
• Build strategies for meaning vocabulary acquisition at Instructional level
• Build strategies for reconstructing meaning from language patterns at Listening level
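The pattern-to-recommendation pairings in Figure 1 lend themselves to a simple lookup. The sketch below (in Python, with key names of our own choosing) pairs each predominant error pattern with its first-priority instructional recommendation; a real diagnostic profile would of course weigh all of the recommendations, not just the first:

```python
# First-priority recommendation for each predominant error pattern,
# taken from Figure 1. Key names are ours, for illustration.
RECOMMENDATIONS = {
    "teacher pronunciations":
        "Build basic sight word vocabulary",
    "non-semantic substitutions/skipped words":
        "Build strategies for higher-order response to reading at Independent level",
    "hesitations/repetitions/self-corrections":
        "Build strategies for decoding non-sight words at Independent level",
}

def first_priority(pattern: str) -> str:
    """Return the highest-priority recommendation for a predominant
    error pattern (raises KeyError for an unrecognized pattern)."""
    return RECOMMENDATIONS[pattern.lower()]
```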
Differential Uses of IRIs for Educators of Different Experience Levels
The IR-TI manual urges users to use it differentially according to purpose:
Because teachers’ purposes for giving the IR-TI will vary, no fixed method of administration exists. This is a consequence of the informal nature of all IRIs and should be viewed as a strength. The important thing is to clarify your own purpose and then to use the instrument accordingly. (Manzo, Manzo & McKenna, 1995, p. 27)
Toward the end of Betts’ chapter on Discovering Specific Reading Needs, he provided three complete and quite different forms for recording the results of a full Informal Reading Inventory: one for inexperienced examiners, one for experienced examiners, and one for participants in his reading clinic. This approach, differential record forms for different levels of experience (or perhaps for different purposes), makes a great deal of sense. Most commercial IRIs recommend that the various options offered be used differentially, according to the purpose of the assessment; however, they tend to offer all of the options that might accompany a given reading selection on the same pages. A well-intentioned teacher or teacher-trainer may feel remiss in omitting any of these options, and thus seriously “over-test” in many cases. Perhaps a future solution would be an online IRI, in which the teacher is given a series of (explained) options initially, and the test administration and record-form pages are then generated to include only those options.
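The “online IRI” proposed above is only a possibility, but its core mechanism is easy to sketch: the examiner selects explained options up front, and only the matching record-form sections are generated. All of the option and section names below are invented for illustration:

```python
# Hypothetical record-form sections, keyed by examiner-selected
# options. Names are illustrative, not from any actual IRI product.
SECTIONS = {
    "word_recognition": "Word Recognition Error Record",
    "oral_comprehension": "Oral Reading at Sight: Comprehension Questions",
    "silent_comprehension": "Silent Reading: Comprehension Questions",
    "beyond_the_lines": "Higher-Order (Beyond the Lines) Response Items",
}

def build_record_form(selected_options):
    """Generate only the record-form sections the examiner selected,
    so the printed protocol never invites 'over-testing'."""
    return [SECTIONS[opt] for opt in selected_options if opt in SECTIONS]
```

For instance, a teacher screening for higher-order response needs might select only "word_recognition" and "beyond_the_lines", and would receive a two-section form.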
Provisional Conclusions
The narrative of the IRI continues to evolve. However, some things can be provisionally concluded. IRIs are useful, time-tested tools that should be treated as a series of options to be selected flexibly according to purpose. These options should reflect the most current understandings of the nature of reading processes and reading development, and of the nature and goals of learning as well as of mere schooling. IRIs should cast a broad net to discover not only obvious needs and strengths but less obvious ones, such as those that eventually characterize the highest states of literacy, including cognitive development and worldview. It is a fairly simple matter to embed questions into the IRI interaction with students to tap inclinations and abilities to activate schema prior to reading, to have them evaluate their own comprehension, and to connect with and respond elaboratively to an author’s intended meanings and, especially in literature and persuasive pieces, to unintended but reasonably conjectural ones. In other words, the one-on-one setting of an IRI should be capitalized upon to evaluate a student’s ability to read beyond the lines, in order to determine whether this may be an overlooked need of an otherwise “proficient” reader, or an unacknowledged strength in an otherwise average to slightly below average reader. In truth, the IR-TI is best understood as a heuristic: a mechanism for aiding teachers in the discovery of the wonder of our different minds, as well as their unique journeys to conventional academic reading objectives. The IR-TI is much more a system and profile analysis for estimating our individual paths to full literacy than just another measure of academic skills, which after all correlate so highly with one another that there is little justification for endless testing.
The fourth Standard of Standards for the Assessment of Reading and Writing (2010) cogently states what has guided the present development of the IR-TI, and plans for future iterations:
Assessment that reflects an impoverished view of literacy will result in a diminished curriculum and distorted instruction and will not enable productive problem-solving or instructional improvement (p. 17).
Works Cited
Applegate, M.D., Quinn, K.B, & Applegate, A.J. (2002). Levels of thinking required by comprehension questions in informal reading inventories. Reading Teacher, 56(2), 174-180.
Applegate, M.D., Quinn, K.B., & Applegate, A.J. (2008). The critical reading inventory: Assessing students’ reading and thinking (2nd ed.). Upper Saddle River, NJ: Pearson Education.
Baker, L. & Brown, A.L. (1980). Metacognition and the reading process. In D. Pearson (Ed.), A handbook of reading research. NY: Plenum.
Barton, V., Freeman, B., Lewis, D., & Thompson, T. (2001). Metacognition: Effects on reading comprehension and reflective response. Unpublished masters thesis, Chicago: IL, Saint Xavier University.
Betts, E.A. (1946). Foundations of reading instruction. New York: American Book.
Blanchard, J., & Johns, J. (1986). Informal reading inventories--a broader view. Reading Psychology, 7(3), iii.
Chase, R. H. (1926) The ungeared mind. Philadelphia: F. A. Davis Company, publishers.
Cooter, R.B., Jr., Flynt, E.S., & Cooter, K.S. (2007). Comprehensive reading inventory: Measuring reading development in regular and special education classrooms. Upper Saddle River, NJ: Pearson Education.
DiVesta, F.J., Hayward, K.G., & Orlando, V.P. (1979). Developmental trends in monitoring text for comprehension. Child Development, 50, 97-105.
Casale, U. (1982). Small group approach to the further validation and refinement of a battery for assessing ‘progress toward reading maturity.’ Doctoral dissertation, University of Missouri-Kansas City, Dissertation Abstract International. 43, 770A.
Flavell, J.H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.
Flippo, R., Holland, D., McCarthy, M., & Swinning, E. (2009). Asking the right questions: How to select an informal reading inventory. Reading Teacher, 63(1), 79-83.
Johns, J.L. (2005). Basic reading inventory (9th ed.). Dubuque, IA: Kendall/Hunt.
Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English (2010). Standards for the Assessment of Reading and Writing, Revised Edition. Newark, DE.
Manzo, A.V. (1969). Improving reading comprehension through reciprocal questioning (Doctoral dissertation, Syracuse University, Syracuse, NY). Dissertation Abstracts International, 30, 5344A.
Manzo, A.V. & Casale, U. (1981). A multivariate analysis of principle and trace elements in 'mature reading comprehension'. In G.H. McNinch (Ed.), Comprehension: Process and product. First Yearbook of the American Reading Forum. Athens, GA: American Reading Forum, 76-81.
Manzo, A.V., & Manzo, U.C. (1995). Creating an Informal Reading-Thinking Inventory. In K. Camperell, B. L. Hayes, & R. Telfer (Eds.), Literacy: Past, present and future. Fifteenth Yearbook of the American Reading Forum. Logan, UT: Utah State University.
Manzo, A.V., & Manzo, U.C., & Albee, J. A. (2004). Reading assessment for diagnostic-prescriptive teaching, 2nd ed. NY: Wadsworth.
Manzo, A.V., Manzo, U., Barnhill, A., Thomas, M. (2000). Proficient reader subtypes: Implications for literacy theory, assessment, and practice. Reading Psychology. 21(3), 217-232.
Manzo, A.V., Manzo, U.C., & McKenna, M.C. (1995). Informal reading-thinking inventory: An informal reading inventory (IRI) with options for assessing additional elements of higher-order literacy. Fort Worth, TX: Harcourt Brace College Publishers.
McKenna, M.C. (1983). Informal reading inventories: a review of the issues. Reading Teacher, 36(7), 670-679.
Nilsson, N. (2008). A critical analysis of eight informal reading inventories. Reading Teacher, 61(7), 526-536.
Pearson, P.D., Hansen, J. & Gordon, C. (1979). The effect of background knowledge on young children’s comprehension of explicit and implicit information. Journal of Reading Behavior, 11, 201-209.
Recht, D.R., & Leslie, L. (1988). The effect of prior knowledge on good and poor readers’ memory for text. Journal of Educational Psychology. 80, 16-20.
Silvaroli, N. J. (1969). Classroom reading inventory. Dubuque, IA: William C. Brown, Publishers.
Silvaroli, N.J. & Wheelock, W.H. (2004). Classroom reading inventory (10th ed.). NY: McGraw-Hill.
Spector, J. (2005). How reliable are informal reading inventories? Psychology in the Schools, 42(6), 593-603.
Tuesday, June 15, 2010
Help with Reading Mathematics: Content Area Literacy Teaching
For help with Assessing and Teaching Mathematics see these pages:
http://books.google.com/books?id=Mhsygz7-wOcC&lpg=PA384&ots=bDlvQXeKD4&dq=Manzo%201990%20Content%20area%20reading%3A%20mathematics&pg=PA384#v=onepage&q&f=false
Covered are:
Math as a Second Language;
Language-Based methods of Teaching Math;
Math and Semantic Feature Analysis;
Dahmus Parceling Method (1970);
An Informal Mathematics Textbook Inventory;
A variation on Manzo's ReQuest Comprehension Procedure called R/Q;
Peer Teaching Method (To Teach is to Learn Twice); and some
Mind Benders.
For a newer version see: Manzo, Manzo & Thomas (2009). Content Area Literacy: A Framework for Reading-Based Instruction (5th edition). Wiley.
Sunday, June 6, 2010
Dyslexia as Specific Psychological Disorder – Conversion Reaction Syndrome
Some forms of Dyslexia, and possibly related Learning Disabilities, may be akin to a psychological defense mechanism known as Conversion Reaction Syndrome (CRS). CRS is said to be a subconscious process by which deep emotional conflicts or fears, which otherwise would give rise to considerable anxiety, are disowned or put aside by converting them into an external expression of some type. This results in a feeling of detachment, which may appear as relaxed indifference – sometimes referred to as la belle indifference. This condition has been found in some dyslexics and in some persons with specific neurological damage. Denckla (1972), for example, identified a subtype of dyslexics that she called a “dyscontrol” group because they were “sweet, sloppy, and silly.” Satz & Morris (1981), and Lyon & Watson (1981), also have identified subgroups of dyslexics with related “motivational and emotional” problems. Curiously, a similar form of indifference has been found in patients with right-hemisphere damage: they seem indifferent, even to the point of denial, toward other severe symptoms of physical illness (Segalowitz, 1983, p. 215). There are no data to say whether these patients also had any form of reading disorder.
Similarly, the CRS condition seems to arise when a deep conflict is converted to a form that symbolically represents the repressed ideas or repressing forces, whatever these might be. Examples of some typical child-centered fears and conflicts would include: fear of the parents’ learning of the child’s “intellectual inadequacy” relative to excessive parental expectations; fears related to revelations about premature sexual interests, activity, abuse, or gender confusion; and fear that a family might break up without some crisis to hold it together. Consider now the symbolic meaning of reading. Reading generally symbolizes growing up and being responsible. The knowledge, insights, and universal truths it brings are supposed to help one face complex issues. But sometimes a child is faced with an issue that appears larger than life, one so insurmountable that it seems best to deny it. In order for denial – a fundamental defense mechanism of the ego – to be complete, and for life to go on, the problem must be converted or restructured into something less intrusive in the child’s life and more acceptable to public attention.
This syndrome tends to take either of two forms. The first, called Somatic Conversion, typically results in the apparent loss of control over fundamental voluntary muscles (Laughlin, 1967). One example is the conflict experienced by the soldier who wishes to be brave and yet fears dying. Repression of the fear leads to a heightened anxiety level. Sensing that he or she might be near hysteria or likely to faint, the soldier subconsciously converts the repressed desire to run away into a psychologically saving illness or incapacitation, such as loss of control of the muscles in the legs which carry one to battle.
A similar condition can occur physiologically, to involuntary muscles and functions. In these cases, so-called organ (or vegetative) difficulties occur. These tend to incapacitate or delimit sensory awareness, resulting in apparent losses or distortions of vision, hearing, speech, and the like. These incapacities sound remarkably like the word reversals, semantic paralexias (word distortions), auditory discrimination problems, speech impediments, and visual problems that have been found to be associated with some reading and learning disabilities. The possible connection between these two sets of conditions is made clearer when the next two ideas are considered.
Substitution and Net Gain through Reading-Learning Dysfunction
Both somatic and physiologic conversion conditions become an alternate expression of the deeper repressed conflict or nagging problem. This substitution can serve several useful purposes for the person who is disabled. The student who is diagnosed as dyslexic, particularly the preteen whose life is largely influenced by parental rather than peer pressures, can win considerable attention from his parents while reducing his or her preoccupation with the true emotional conflict (whatever it might be), and do so at the relatively small inconvenience of simply not being able to read. This is known as an “endogain.” That is, a net gain arising from what seems, on the surface, to be a negative or liability.
In the case of dyslexia, the parents also are inconvenienced and made to feel guilty. In this way, the child’s problem is passed on to the parents, who not only bear the student’s pain but must wonder what in them may have created the disorder -- even to the point of feeling guilt about whether they have transmitted damaging genes. Further, the child not only (net) gains the attention of his parents but the outside assistance and empathy of teachers, doctors, and other specialists in resolving the symbolic problem. More importantly, hope of resolving the real problem is kept alive by those pressed into service to work on its symbolic representation. In brief, a learning disability such as dyslexia can provide several possible “endogains” for a troubled child: it can sharply reduce anxiety and pressure to resolve a difficult personal problem; it can win the assistance and empathy of many adults; and it offers the hope of resolving the real, or repressed, problem.
Diagnostic Indicators of CRS
There are six diagnostic indicators of psychologically induced dyslexia or learning disability. Three or more would provide telling evidence of this condition: 1) considerable emotional gain from an apparently negative condition, or liability; 2) evidence of generative learning in most areas other than reading, or whatever the specific disability might happen to be; 3) a logically inconsistent or unreliable pattern of errors on an Informal Reading Inventory, miscue analysis, or reading test battery (e.g., strong comprehension/weak vocabulary, or the inverse); 4) reversal of sub-test scores on standardized tests from one testing to the next (e.g., high Verbal/low Performance one time, low Performance/high Verbal another); 5) a look of relaxed, resigned indifference to the disability (the “la belle indifference” condition); and 6) learning that can be greatly accelerated with an essentially placebo treatment.
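The “three or more indicators” rule of thumb above amounts to a simple checklist threshold, sketched below. The short indicator labels are our own paraphrases of the six items; this is an illustration of the heuristic, not a clinical instrument:

```python
# Paraphrased labels for the six diagnostic indicators in the text.
INDICATORS = {
    "emotional gain from the liability",
    "generative learning outside the disability area",
    "inconsistent error pattern on IRI or test battery",
    "sub-test score reversals across testings",
    "la belle indifference",
    "rapid learning under placebo treatment",
}

def crs_suggested(present):
    """Apply the text's heuristic: three or more of the six
    indicators present is treated as telling evidence of CRS."""
    return len(set(present) & INDICATORS) >= 3
```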
Clinical Evidence of Psychoneurotic Dyslexia & Learning Disabilities
Working from the premise that a reading dysfunction could be a symbolic representation of a deeper conflict, Manzo (1977) developed a simple test of this proposition. With four graduate students, he set out to try to teach two dyslexic students to read using a system that was identical to conventional reading but that, the children were told, had recently been invented for children with special problems like theirs. They also were told that no one could really be sure that they would ever be able to read regular print, even if they learned the alternate system.
If the children could be taught to read by this surrogate, but even more difficult, system, it was reasoned, then it would not be logical to attribute their disability to a neurological impairment; some psychological explanation would be more fitting. The tutors employed an alternate alphabet: Paul McKee’s “funny squiggles” (1948), which he used to show parents how difficult it is to learn to read. Both youngsters had been in clinic programs for several continuous semesters and tested at primer levels. They were by all indications “severe dyslexics.”
Findings: Exceeding every expectation, the two children learned the new code more rapidly than their tutors, who had to work as a team to keep abreast of their rate of learning. In about 15 hours they were reading at about 3rd to 4th reader level in McKee’s alternate orthography. This rapid learning effect gave strong reason to believe that the children could learn to read, and rather easily, once their minds permitted them to do so.
References
Denckla, M. B. (1972). Clinical syndromes in learning disabilities: The case for "splitting" versus "lumping." Journal of Learning Disabilities, 5, 401-406.
Laughlin, H. P. (1967). The neuroses. Washington, D.C.: Butterworths Press.
Lyon, R., & Watson, B. (1981). Empirically derived subgroups of learning disabled readers: Diagnostic characteristics. Journal of Learning Disabilities, 14, 256-261.
Manzo, A.V. (1977). Dyslexia as specific psychoneurosis. Journal of Reading Behavior, 19, 305-308.
Manzo, A.V. (1987). Psychologically induced dyslexia and learning disabilities. The Reading Teacher, 40, 408-413.
Manzo, A.V. & Manzo, U. (1993). Literacy disorders: Holistic diagnosis and remediation. Harcourt, Brace, Jovanovich.
Manzo, A.V., Manzo, U., & Albee, J.J. (2004). Reading/learning assessment for diagnostic-prescriptive teaching (2nd ed.). Belmont, CA: Thomson/Wadsworth.
Peach, R. (2006). Acquired dyslexia as conversion disorder: Identification and management. Paper presented at the 36th Clinical Aphasiology Conference, Ghent, Belgium, May 29-June 2, 2006.
Segalowitz, S. J. (1983). Two sides of the brain. Englewood Cliffs, NJ: Prentice-Hall.
Thursday, June 3, 2010
iREAP: Improving Reading, Writing, Thinking and Aesthetics in the Wired Classroom
iREAP:
Improving Reading, Writing, Study, Thinking and Aesthetics in the Wired Classroom
Anthony Manzo, Ula Manzo, & Julie Jackson Albee
Journal of Adolescent and Adult Literacy (2002), 46(1), 42-47
[Revised: Aesthetic Annotation added 6/3/09]
iREAP is a proposition for improving reading, writing, study, thinking and aesthetics. It has been waiting in the wings to be discovered for over a generation. The REAP system (Read, Encode, Annotate, Ponder) for responding to text has been in use in elementary through college classrooms for two decades. The “i” in iREAP represents its currency and connection to Internet community building, to several validation studies, and to the developmental extensions noted ahead. The core REAP system is based on a scaffolded form of writing that invites creativity, much as does haiku or any other such disciplined form of art. In addition to structuring response to text – the discipline of REAP Broad Spectrum Thinking – the system invites readers to respond to others’ stored responses. It is in some ways similar to developments such as “threaded discussions,” as found on Amazon.com and BlackBoard.com. Such asynchronous discussions and synchronous chat may have incidental learning outcomes; however, they are not school. They are unstructured interactions whose discourse tends to follow a personal-social agenda. iREAP contains provisions for directing chat and asynchronous submissions toward several goals of school-based learning. For example, it provides a context for “virtual inclusion” and “virtual integration,” step-wise solutions to social and legal mandates such as providing every student with a “least restrictive” and non-segregated environment. It also attains some efficiencies for over-burdened teachers, in the form of new levels of assistance with guiding reading, writing and thinking never before available. (There even are options in the offing for new software that automatically requests different phrasing when inappropriate terms are used.)
What is more, iREAP offers the possibility of bringing further organization to the web, a virtual place that also can be characterized as virtual chaos: pieces of library books, homework assignments, family albums, literary masterpieces, tawdry material, and fiery political pamphlets piled in a random heap.
REAP: Background and Backbone
The basic idea for this reader-writer exchange system was proposed some time ago (Manzo, 1975) as a means of improving and supporting a national content area reading and writing project essentially for urban schools. Shortly afterwards it was elaborated into a teaching-learning approach called REAP – Read, Encode, Annotate, and Ponder (Eanet & Manzo, 1976; Eanet, 1978, 1983). From the beginning, it was anticipated that REAP might be an appropriate formatting system, or disciplined semantic platform and backbone, for the tsunami of words that would be channeled from one “computer terminal” to another on the then-developing intranets being formed by colleges, which promised to provide new technological options for K-12 education. As such, it appeared that REAP should be a part of an evolving grammar of, and school curricula for, the electronic age.
REAP primarily is a cognitive enrichment approach that teaches students to think more precisely and deeply about what they read, by following the four-step strategy symbolized by its title:
READ to get the writer's basic message;
ENCODE the message into your own words while reading;
ANNOTATE your analysis of the message by writing responses from several perspectives; and
PONDER what you have read and written – first by reviewing it yourself, then by sharing and discussing it with others, and finally by reading the responses of others.
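For readers who find a concrete model helpful, the four steps above can be sketched as a small record structure. This is an illustrative sketch only; the class and field names are hypothetical, not drawn from the REAP literature.

```python
# Hypothetical sketch of one student's REAP cycle as a data record.
# All names here are illustrative, not part of the REAP literature.
from dataclasses import dataclass, field

@dataclass
class REAPResponse:
    selection: str                                       # READ: the text responded to
    encoding: str = ""                                   # ENCODE: the message in one's own words
    annotations: dict = field(default_factory=dict)      # ANNOTATE: perspective -> written response
    peer_responses: list = field(default_factory=list)   # PONDER: others' responses, read last

    def annotate(self, kind: str, text: str) -> None:
        """Record one annotation written from a given perspective (e.g. 'summary')."""
        self.annotations[kind] = text

    def ponder(self, peer_annotation: str) -> None:
        """Collect a peer's stored response for later review and discussion."""
        self.peer_responses.append(peer_annotation)

# Example: a student works through the fable in Figure 1.
r = REAPResponse(selection="Travelers and the Plane-Tree")
r.encoding = "Travelers rest under a tree, call it useless; the tree objects."
r.annotate("summary", "Travelers shelter under a tree, then criticize it.")
r.annotate("intention", "The author wants us to value what we depend on.")
print(len(r.annotations))  # 2
```

Note that pondering comes only after the student's own annotations are written, which is the order the four steps prescribe.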
At the heart of the approach is a set of annotation types that range roughly in hierarchical order from a simple summary of the author’s basic message to various perspectives for higher-order critical and creative analysis. The first few REAP annotation types require “reconstructive” thinking – understanding and perceiving the essence of the author’s meaning. The remaining ones require “constructive” thinking – going beyond the author’s intended meaning to form the personal schema connections, applications, and variations that permit the learner to transfer information and ideas from one context to another. This hierarchy aids assessment and gives guidance to students in reaching “up” to higher levels or “down” to more basic ones that may not yet have been mastered. Descriptions and examples of some of the basic annotation types are provided in Figure 1. Other types can be customized and created. For example, several teachers have had rewarding results using a “Humorous” annotation (also in Figure 1).
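The hierarchy itself, from reconstructive types at the base to constructive types above, can be sketched as an ordered type. The numeric ordering and the `reach` helper below are assumptions made for illustration, not part of REAP; customizable types such as “Humorous” are omitted.

```python
# Hypothetical encoding of the REAP annotation-type hierarchy.
# The ordering (reconstructive before constructive) follows the article;
# the numeric values and helpers are illustrative assumptions.
from enum import IntEnum

class AnnotationType(IntEnum):
    # Reconstructive: understanding and restating the author's meaning
    SUMMARY = 1
    TELEGRAM = 2
    HEURISTIC = 3
    QUESTION = 4
    # Constructive: going beyond the author's intended meaning
    PERSONAL_VIEW = 5
    CRITICAL = 6
    CONTRARY = 7
    INTENTION = 8
    MOTIVATION = 9
    DISCOVERY = 10
    CREATIVE = 11

def is_constructive(t: AnnotationType) -> bool:
    """Constructive types sit above the reconstructive ones in the hierarchy."""
    return t >= AnnotationType.PERSONAL_VIEW

def reach(t: AnnotationType, up: bool) -> AnnotationType:
    """Move one step 'up' toward creative analysis or 'down' toward basics,
    staying within the bounds of the hierarchy."""
    step = 1 if up else -1
    return AnnotationType(min(max(int(t) + step, 1), len(AnnotationType)))
```

Ordering the types this way makes the article's image of reaching “up” or “down” directly computable: a student stuck at Summary can be nudged one level up, and one reaching too far can be guided back down.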
Guided Reading, Writing and Thinking
For classroom use, the annotation types are introduced either singly or a few at a time, with the nature and pace of instruction geared to the grade level, but without aiming at “mastery” before moving to another annotation type. Children tend to learn to write best by struggling to express their own thoughts about rich literature selections, guided by mindfully written models that scaffold reading and entice emulation. As soon as the class has the basic idea of a few annotation types, they begin to write annotations of things they have read, and to read the annotations, or perspectives, that others bring to response writing. They are reminded to write several annotations on a reading selection, as a means to crosscheck their initial understandings and reach for higher-order insights and questions. Exemplary annotations are stored for other individuals and classes to read before reading (i.e., frontloading), during reading (as discussion points), or after reading (as a review), and also serve as models of well-composed written responses.
Figure 1
Sample Reading Selection with Examples of REAP Annotation Types
“Travelers and the Plane-Tree”
Two Travelers were walking along a bare and dusty road in the heat of a mid-summer’s day. Coming upon a large shade tree, they happily stopped to shelter themselves from the burning sun in the shade of its spreading branches. While they rested, looking up into the tree, one of them said to his companion, “What a useless tree this is! It makes no flowers and bears no fruit. Of what use is it to anyone?” The tree itself replied indignantly, “You ungrateful people! You take shelter under me from the scorching sun, and then, in the very act of enjoying the cool shade of my leaves, you abuse me and call me good for nothing!”
Reconstructive Annotations
Summary: states the basic message in brief form
Travelers take shelter from the sun under a large tree. They criticize the tree for not making flowers or fruit. The tree speaks, and tells them that they are ungrateful people for taking shelter under her leaves and then criticizing her.
Telegram: briefly states the author's basic theme with all unnecessary words removed -- a crisp, telegram-like message
Travelers stop for rest and shade under big tree. Travelers say tree is useless. Tree tells them off.
Heuristic: restates an attention-getting portion of the selection that makes the reader want to respond
In this story, a tree talks back to people. The tree says, "You ungrateful people! You come and take shelter under me...and then ...abuse me and call me good for nothing!”
Question: turns the main point into an organizing question that the selection answers
What if the things we use could talk back?
Constructive Annotations
Personal view: answers the question, "how do your views and feelings compare with what the author says?"
We use resources like coal without thinking. Then we criticize it for damaging our lungs and dirtying our air.
I guess kids sometimes use their parents the way the travelers used the tree, and then criticize them without thinking about their feelings.
Humorous: can vary from bringing a slight smile, usually by flirting with a naughty suggestion, to using jest to bring enlightenment.
I can just see that poor tree thinking “I hope they’re about to stop here to seek shelter and not relief.”
Critical: begins by stating the author's main point, then states whether the reader agrees, disagrees, or agrees in part with the author, and then briefly explains why
Not every word spoken in criticism is meant that way. The travelers were just grumpy from the trip. The tree is too sensitive.
Contrary: states a logical alternative position, even though it may not be the one the student supports
The travelers could be right; a better tree could produce something and also give shade.
Intention: states and briefly explains what the reader thinks was the author's intention, plan, and purpose for writing the selection
The author wants us to be more sensitive to the people and things we depend on -- especially those we see and use often.
Motivation: states what may have caused the author to have written the selection -- the author's personal agenda
It sounds like the author may have felt used, after having a bad experience with friends or family.
Discovery: states one or more practical questions that need to be answered before the selection can be judged for accuracy or worth
I wonder how many of us know when we are being "users." We could take an anonymous poll to see how many class members secretly feel that they have been used and how many see themselves as users.
Creative: suggests different and perhaps better solutions or views and/or connections and applications to prior learning and experiences
• This fable made me think that teachers are sometimes used unfairly. They give us so much, and then we put them down if they make little mistakes. They’re only human.
• We should put this fable on the bulletin board where it will remind us not to be ungrateful “users.”
• [How would you re-title this fable if you were writing it?] I’d call it “Travelers in the Dark,” to show that we go through life without knowing how many small “gifts” come to us along our way.
Aesthetic Annotation: This, the highest and riskiest level of writing, is the writer’s attempt to rouse hearts as well as minds to life; it can be done by saying something that has not been said before, or by saying something hackneyed in some fresh and poignant way. It almost always will require a unique perspective, a bit of artistry, often some word craft, and some emotion; our ostensibly highly cerebral brain functions are always coupled to some feelings that sometimes begin deep in the primitive brain, but almost always become part of the conscious brain.
[Of course trees do not feel. Nonetheless, the expression "tree huggers" - that often is meant as a put-down of people who care too much about every living thing - speaks loudly to the likelihood that there may, in fact, be greater unity and community in all living things than our habits of mind routinely overlook; from a purely atomistic point of view we still don't know why and how all things physical remain separate from one another.]
After students have had some practice writing various types of annotations, these can be used and reinforced in a variety of ways; a few are listed below:
1. When giving a reading assignment, specify three annotation types for students to write and turn in.
2. As students become more skilled at annotation writing, they can be given the option of selecting from three annotation types the one that they would like to write in response to a reading assignment.
3. Assign each cooperative group member to write a different annotation type in response to a reading assignment. When students have finished reading and writing, they move to their assigned groups to share the annotations they have written and to offer constructive suggestions to one another on ways to clarify the response. Extra credit points can be offered to the group with the best annotation of each type as judged by the teacher or the class as a whole.
4. Introduce a new reading assignment by having students read annotations written by students in previous years’ classes or from a different section at the same grade level.
5. Provide incentive to read and write reflectively by posting exemplary annotations, signed by the author, on a bulletin board or a webpage, including some from different age-grade levels; in other words, raise some higher targets.
6. Use REAP annotation types as a guide for phrasing post-reading discussion questions, and encourage students to do the same.
7. From time to time, use the REAP annotations to guide students’ responses to non-text learning experiences: a video, a laboratory procedure, a piece of music or art, etc.
REAP Spectrum Thinking
The goal of all these applications is to help students internalize REAP “spectrum-thinking,” or thinking from different perspectives, to the point where it becomes a habit of mind: a familiar, comfortable, almost automatic mental strategy. Like other thinking strategies, REAP Spectrum-Thinking is helpful in negotiating everyday life, but it is particularly useful for independent, though no longer isolated, study. One strength of this strategy results from hearing the thinking of others who are reacting to the same content or stimulus.
REAP Spectrum-Thinking is a flexible strategy, but it has two essential elements. First, it may begin with any of the response types, but at least one, and often several, other responses always follow the initial one. It is through this habit of multiple-stance responding that the learner is reminded to reflect at a higher level of social-emotional maturity, one beyond his or her initial (often egocentric) response, and possibly to perceive further meaning and connections. Some learners are naturally inclined to respond to reading from a critical stance. REAP Spectrum-Thinking reminds them to re-visit the information to check the basic facts before going too far down a path that may be based on misunderstandings. Other learners tend to respond from a basic reconstructive stance, but are disinclined to move to any of the constructive levels: REAP Spectrum-Thinking reminds them to do so. Most students (and adults) have fairly strong preferences as to the types of reading and responding that they like, feel ambivalent toward, and dislike. These preferences both reflect and influence the nature and degree of response to reading, viewing and listening. The second essential element, or value, of REAP Spectrum-Thinking is parsimonious, or “fewer words,” writing and responding. By definition, annotations are brief, requiring much more thinking than writing. Gifted children are as inclined as others to read and gather in, but not to think about, or be potentially transformed as well informed by, what they read and learn. Since their gift of “learning easily” tends to slate them for more influential positions, it is in their interests and ours that they be reflective and multi-perspective thinkers. The requirement to write in response to reading, along with exposure to the responses of many others, instills greater sensitivity in them, even while they help others to think more effectively by learning to do so themselves.
In a small way, the world is a better place as we all learn to share and think more clearly. Importantly, too, many things that are not otherwise spoken may then be, inviting possible growth beyond generalities and negativism. At a very basic level, such response-to-text reading-writing is conducive to more active learning, and a foundational means of schema building (Rosenblatt, 1978; Cooper, 1985; Rosenblatt, 1985; Purves, 1993; Shelton, 1994; Blake, 1995; Kasonovich, 1996).
Full Spectrum-Thinking for Full Spectrum-Inclusion
As a teaching approach, REAP is an ideal way to provide for students in inclusion classrooms, across a broad spectrum of student abilities, needs and cognitive styles. It also permits divergent-creative thinkers, who otherwise may be academic underachievers, to demonstrate their abilities while reminding them that communication requires them to be more attentive to form, sequence and details. It urges concrete thinkers to think further and make more personal connections with the facts that they so easily seem to acquire. It deals directly with the core language/thinking systems that are the target in many IEPs for LD youngsters, and the “mediation” process essential to most human learning. It lends itself well to cooperative group work that is essential in aiding social and emotional development (see #3 above for one example). It evokes the type of pointed “instructional conversation” that “rouses minds to life” (Tharp & Gallimore, 1990).
The term iREAP represents endless possibilities for teaching and practicing REAP Spectrum-Thinking that, with Internet access, can now be translated into reality. Here are just a few examples of how iREAP might unfold as we continue to pilot and develop it.
• A teacher can easily store, access, & print student responses to a given reading selection, thereby building a library of student annotations for future students’ use as pre-reading schema activation, post-reading responding, and as models of poignant writing and thinking.
• The best student annotations can be posted on a classroom or school webpage, where they can be made accessible to and from students from diverse cultural perspectives even while we have not yet figured out how to bring about full integration of the schools.
• Parents, school administrators, community members (and on outward to include anyone in the world speaking English) can be invited to submit annotations and responses to annotations for inclusion on the classroom or school webpage.
• Webpages of several schools can be linked (our current technical and logistical task).
• Volunteer, paid, peer, and/or cross-grade tutors can be trained to use REAP as the structural vehicle for conducting online instructional interactions. Tutors can prompt tutees to revise, edit, and re-submit their annotations of classroom assignments or supplementary materials. Hence the creation of a cadre of Volunteers for Higher-Order Literacy able to assist teachers in reading and responding to pupils’ early drafts, and therefore increasing the likelihood that teachers would more confidently give additional reading-writing assignments. [Writing now is a requirement on all standardized tests; nonetheless, there still is no help for middle/high school teachers who have an average of 120 students in reading and guiding writing.]
• A classroom (or school) could partner with a local bookstore to create an iREAP Book Club, where a “book of the month” is listed for annotation contributions. Students & community members (local and online) whose annotations are selected for online publication would receive credits toward purchases in the bookstore. Community members could contribute books purchased with credits to the classroom or school library. The bookstore could set up a couple of computer carrels where an iREAP activity could be viewed.
• The increased sale of books that seems to follow from their discussion could become a valuable source of supplementary income for schools from commercial book vendors.
• And, when educators fully grasp the value to education of on-line ads and marketing that our kids see within moments of going on-line anyway, it might even become acceptable to have this iREAP system situated so that schools receive some of the revenue now flowing like a protected river through their campuses. There is a name other than “commercialism” for this option: it is called “Social Entrepreneurism,” and it could become the means to end the financial drought that has plagued education throughout the prior century. (See Manzo, 2001 for further details.)
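The first of the bullets above, a stored library of student annotations retrievable for frontloading future classes, can be given a minimal sketch. All names and the structure below are hypothetical; this is an illustration of the idea, not an actual iREAP implementation.

```python
# Hypothetical, minimal annotation library of the kind described above:
# responses filed by reading selection and annotation type, with exemplary
# ones flagged for posting or frontloading. Names are illustrative only.
from collections import defaultdict

class AnnotationLibrary:
    def __init__(self):
        # selection title -> annotation type -> list of (author, text, exemplary)
        self._store = defaultdict(lambda: defaultdict(list))

    def submit(self, selection, kind, author, text, exemplary=False):
        """File one student response under its selection and annotation type."""
        self._store[selection][kind].append((author, text, exemplary))

    def for_frontloading(self, selection):
        """Exemplary annotations shown to future classes before they read."""
        return [
            (author, text)
            for kind in self._store[selection]
            for author, text, exemplary in self._store[selection][kind]
            if exemplary
        ]

lib = AnnotationLibrary()
lib.submit("Travelers and the Plane-Tree", "summary", "Pat",
           "Travelers shelter under a tree, then call it useless.", exemplary=True)
lib.submit("Travelers and the Plane-Tree", "critical", "Lee",
           "The tree is too sensitive; the travelers were just grumpy.")
print(len(lib.for_frontloading("Travelers and the Plane-Tree")))  # 1
```

Keying the store by selection and annotation type is what makes the later uses cheap: pre-reading models, post-reading comparison, and cross-class or cross-school exchange are all just different retrievals from the same structure.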
Clinical and Empirical Evidence for REAP
REAP-related research has shown quite clearly that this proposed grammar, or architecture for electronic interaction, improves basic and higher-order literacy objectives, especially complexity of thinking, as well as major components of cognitive skill and social-emotional maturity, or non-egocentric thinking and reflecting. For example, Garber (1995) found that when middle school students participated in what she called transformational reader-response strategies to narrative text, essentially REAP, both cognitive complexity and social development were increased significantly when compared with students who used transactional and transmissional reader-response strategies. Standardized measures were used to evaluate complex thinking in this study. Results of a research study conducted by Albee (2000) indicated that when university students in children's literature courses experienced REAP Annotation Exchange Writing they showed a significant increase in cognitive complexity as measured by three different instruments, as compared with students who did not experience REAP Annotation Exchange.
As iREAP develops, teachers and schools can expect several things to happen:
• significant increase and improvement in higher-order thinking
• significant increase and improvement in reading comprehension
• significant increase and improvement in writing ability
• significant increase and improvement in content knowledge, cumulative learning and standardized test scores, and
• significant increase and improvement in our ways of building toward that most cherished of human needs, a sense of shared experience and membership in a caring community focused on some higher purpose.
The iREAP system, should we be able to fully launch and support it, would permit anyone to REAP the benefits of the reflections and thinking of the many on the ever-bulging growth of information, knowledge, and school curricula.
There Also May Be A Considerable Peace Dividend
Cultural integration may sometimes raise tensions, but cultural isolation almost always escalates into hostilities. A global iREAP system reaching all students and teachers could create meaningful cross-cultural, cross-boundary dialogues on great books and great thoughts. iREAP would move beyond ritualized self-interests and the sometimes crippling “dialogues” of governments, corporations, and even dogmatic religions. It would happen on a person-to-person basis. It would be an on-going ecumenical council building empathy and a sense of common cause in the form of small patches of understanding that, like vegetation on a hillside, can not only prevent further erosion but, when its seeds of sense and sensibility are caught in the winds, can spread their greening effects to the most remote of places. Importantly, such a system also could nearly compel those with irascible notions to express and reveal themselves, since the iREAP system would be an influential part of the new, web-based free markets in ideas. And, yes, there is danger in screening for terrorists and misfits, but this system also provides for empathy and teachable-moment reminders from our most civilized and grounded citizens to those who otherwise might be alone with their troubled, ungrounded thoughts.
An action plan to build and implement iREAP is developing on LiteracyLeaders.com. (Other relevant sites are: http://cctr.umkc.edu/user/dmartin/hol.html and http://members.aol.com/ReadShop/REAP1.html.) Meantime, feel comfortable teaching the grammar of disciplined reading, writing and sharing at the classroom and school level. Brief writing following reading has been a fundamental and traditional part of education, especially in Europe, for hundreds of years. REAP simply is a more orderly way of teaching, scaffolding and monitoring progress in doing so.
References
Albee, J.A. (2000). The effect of Read-Encode-Annotate-Ponder annotation exchange (REAP AnX) on the complex thinking of undergraduate students in children’s literature courses. Unpublished doctoral dissertation, University of Missouri-Kansas City, Kansas City, MO.
Eanet, M. G. (1978). An investigation of the REAP reading/study procedure: Its rationale and efficacy. In P. D. Pearson, & J. Hansen (Eds.), Reading: Disciplined inquiry in process and practice. The 27th yearbook of the National Reading Conference (pp. 229-232). Clemson, SC: National Reading Conference.
Eanet, M. G. (1983). Reading/writing: Finding and using the connection. The Missouri Reader, 8, 8-9.
Eanet, M. G., & Manzo, A.V. (1976). REAP—A strategy for improving reading/writing/study skills. Journal of Reading, 19, 647–652.
Garber, K. S. (1995). The effects of transmissional, transactional, and transformational reader-response strategies on middle school students’ thinking complexity and social development. Unpublished doctoral dissertation, University of Missouri-Kansas City, Kansas City, MO.
Kasonovich, M.L. (1996). The study of first graders’ ability to respond to and analyze picture books. (ERIC Document Reproduction Service No. ED 396249). Princeton, NJ: ERIC Clearinghouse on Assessment and Evaluation.
Manzo, A.V. (1973). CONPASS English: A demonstration project. Journal of Reading, 16, 539–545.
Manzo, A.V., & Manzo, U.C. (1995). Teaching children to be literate: A reflective approach. Fort Worth, TX: Harcourt Brace College Publishers.
Purves, A. (1993). Toward a re-evaluation of reader response and school literature. Language Arts, 70, 348-361.
Rosenblatt, L. (1978). The reader, the text, the poem: the transactional theory of the literary work. Carbondale, IL: Southern Illinois University Press.
Rosenblatt, L. (1985). The transformational theory of the literary work: Implications for research. In Researching response to literature and the teaching of literature. Carbondale, IL: Southern Illinois University Press.
Shelton, K.Y. (1994). Reader response theory in the high school English classroom. Paper presented at the Annual Meeting of the National Council of Teachers of English (Orlando, FL: November 16-21, 1994). (ERIC Document Reproduction Service No. 379655). Princeton, NJ: Educational Testing Service.
Tharp, R.G., & Gallimore, R. (1990). Rousing minds to life: Teaching, learning, and schooling in social context. New York: Cambridge University Press.
Improving Reading, Writing, Study, Thinking and Aesthetics in the Wired Classroom
Anthony Manzo, Ula Manzo, & Julie Jacksons Albee
Journal of Adolescent and Adult Literacy (2002; 46/01 pp 42-7)
[(Revised: Aesthetic Annotation added: 6/3/09]
iREAP is a proposition for improving reading, writing, study, thinking and aesthetics. It has been waiting in the wings to be discovered for over a generation. The REAP system (Read, Encode, Annotate, Ponder) for responding to text has been in use in elementary through college classrooms for two decades. The “i" in iREAP represents its currency and connection to Internet community building, to several validation studies and to developmental extensions noted ahead. The core REAP system is based on a scaffold form of writing that invites creativity, much as does haiku, or any other such disciplined form of art. In addition to structuring response to text, or the discipline of REAP Broad Spectrum Thinking, the system invites readers to respond to others’ stored responses. It is in some ways similar to developments such as “threaded discussions” – as are found on Amazon.com and BlackBoard.com. Such asynchronous discussions and synchronous chat may have incidental learning outcomes, however, they are not school. They are unstructured interactions, whose discourse tends to follow a personal-social agenda. iREAP contains provisions for converting chat and asynchronous submissions into several goals of school-based learning. For example, it provides a context for “virtual inclusion” and “virtual integration, step-wise solutions to social and legal mandates such as providing every student with a “least restrictive” and non-segregated environment. It also attains some efficiencies for over-burdened teachers in the form of some new levels of assistance with guiding reading, writing and thinking as never before available. (There even are options in the offing for new software that automatically requests different phrasing when inappropriate terms are used.) 
What is more, iREAP offers the possibility of bringing further organization to the web, a virtual place that also can be characterized as virtual chaos: pieces of library books, homework assignments, family albums, literary masterpieces, tawdry material, and fiery political pamphlets piled in a random heap.
REAP: Background and Backbone
The basic idea for this reader-writer exchange system was proposed some time ago (Manzo, 1975) as a means of improving and supporting a national content area reading and writing project essentially for urban schools. Shortly afterwards it was collected into a teaching-learning approach called REAP - Read, Encode, Annotate, and Ponder (Eanet & Manzo, 1976; Eanet, 1978, 1983). From the beginning, it was anticipated that REAP might be an appropriate formatting system, or disciplined semantic platform and backbone for the tsunami of words that would be channeled from one “computer terminal” to another, on the then-developing intranets that were being formed by colleges, and promising to provide new technological options for the K-12 education. As such, it appeared that REAP should be a part of an evolving grammar of, and school curricula for, the electronic age.
REAP primarily is a cognitive enrichment approach that teaches students to think more precisely and deeply about what they read, by following the four-step strategy symbolized by its title:
READ to get the writer's basic message;
ENCODE the message into your own words while reading;
ANNOTATE your analysis of the message by writing responses from several perspectives, and;
PONDER what you have read and written – first by reviewing it yourself, then by sharing and discussing it with others, and finally by reading the responses of others.
At the heart of the approach is a set of annotation types that range roughly in hierarchical order from a simple summary of the author’s basic message to various perspectives for higher-order critical and creative analysis. The first few REAP annotation types require “reconstructive” thinking – understanding and perceiving the essence of the author’s meaning. The remaining ones require “constructive” thinking – going beyond the author’s intended meaning to form the personal schema connections, applications, and variations that permit the learner to transfer information and ideas from one context to another. This hierarchy aids assessment and gives guidance to students in reaching “up” to higher levels or “down” to more basic ones that may not yet have been mastered. Descriptions and examples of some of the basic annotation types are provided in Figure 1. Other types can be customized and created. For example, several teachers have had rewarding results using a “Humorous” annotation (also in Figure 1).
Guided Reading, Writing and Thinking
For classroom use, the annotation types are introduced either singly or a few at a time, with the nature and pace of instruction geared to the grade level, but without aiming at “mastery” before moving to another annotation type. Children tend to learn to write best by struggling to express their own thoughts about rich literature selections, guided by mindfully written models that scaffold reading and entice emulation. As soon as the class has the basic idea of a few annotation types, they begin to write annotations of things they have read, and to read annotations – perspectives - that others bring to response writing. They are reminded to write several annotations on a reading selection, as a means to crosscheck their initial understandings and reach for higher-order insights and questions. Exemplary annotations are stored for other individuals and classes to read before reading (i.e., frontloading), during reading (as discussion points), or after reading (as a review) and also serve as models of well composed written responses.
Figure 1
Sample Reading Selection with Examples of REAP Annotation Types
“Travelers and the Plane-Tree”
Two Travelers were walking along a bare and dusty road in the heat of a mid-summer’s day. Coming upon a large shade tree, they happily stopped to shelter themselves from the burning sun in the shade of its spreading branches. While they rested, looking up into the tree, one of them said to his companion, “What a useless tree this is! It makes no flowers and bears no fruit. Of what use is it to anyone?” The tree itself replied indignantly, “You ungrateful people! You take shelter under me from the scorching sun, and then, in the very act of enjoying the cool shade of my leaves, you abuse me and call me good for nothing!”
Reconstructive Annotations
Summary: states the basic message in brief form
Travelers take shelter from the sun under a large tree. They criticize the tree for not making flowers or fruit. The tree speaks, and tells them that they are ungrateful people for taking shelter under her leaves and then criticizing her.
Telegram: briefly states the author's basic theme with all unnecessary words removed -- a crisp, telegram-like message
Travelers stop for rest and shade under big tree. Travelers say tree is useless. Tree tells them off.
Heuristic: restates an attention-getting portion of the selection that makes the reader want to respond
In this story, a tree (remove “that”) talks back to people. The tree says, "You ungrateful people! You come and take shelter under me...and then ...abuse me and call me good for nothing!”
Question: turns the main point into an organization question that the selection answers
What if the things we use could talk back?
Constructive Annotations
Personal view: answers the question, "how do your views and feelings compare with what the author says?"
We use resources like coal without thinking. Then we criticize it for damaging our lungs and dirtying our air.
I guess kids sometimes use their parents the way the travelers used the tree, and then criticize them without thinking about their feelings.
Humorous: can vary from bringing a slight smile, usually by flirting with a naughty suggestion, to using jest to bring enlightenment.
I can just see that poor tree thinking “I hope they’re about to stop here to seek shelter and not relief.”
Critical: begins by stating the author's main point, then states whether the reader agrees, disagrees, or agrees in part with the author, and then briefly explains why
Not every word spoken in criticism is meant that way. The travelers were just grumpy from the trip. The tree is too sensitive.
Contrary: states a logical alternative position, even though it may not be the one the student supports
The travelers could be right, a better tree could produce something and also give shade.
Intention: states and briefly explains what the reader thinks was the author's intention, plan, and purpose for writing the selection
The author wants us to be more sensitive to the people and things we depend on -- especially those we see and use often.
Motivation: states what may have caused the author to have written the selection -- the author's personal agenda
It sounds like the author may have felt used, after having a bad experience with friends or family.
Discovery: states one or more practical questions that need to be answered before the selection can be judged for accuracy or worth
I wonder how many of us know when we are being "users." We could take an anonymous poll to see how many class members secretly feel that they have been used and how many see themselves as users.
Creative: suggests different and perhaps better solutions or views and/or connections and applications to prior learning and experiences
• This fable made me think that teachers are sometimes used unfairly. They give us so much, and then we put them down if they make a little mistake. They’re only human.
• We should put this fable on the bulletin board where it will remind us not to be ungrateful “users.”
• [How would you re-title this fable if you were writing it?] I’d call it “Travelers in the Dark,” to show that we go through life without knowing how many small “gifts” come to us along our way.
Aesthetic Annotation: This, the highest & riskiest level of writing, is the writer’s attempt to rouse hearts as well as minds to life; it can do so by saying something that has not been said before, or by saying something hackneyed in some fresh and poignant way. It almost always will require a unique perspective, a bit of artistry, often some word craft, and some emotion; our ostensibly highly cerebral brain functions are always coupled to feelings that sometimes begin deep in the primitive brain, but almost always become part of the conscious brain.
[Of course trees do not feel. Nonetheless, the expression "tree huggers" - that often is meant as a put-down of people who care too much about every living thing - speaks loudly to the likelihood that there may, in fact, be greater unity and community in all living things than our habits of mind routinely acknowledge; from a purely atomistic point of view, we still don't know why and how all things physical remain separate from one another.]
After students have had some practice writing various types of annotations, these can be used and reinforced in a variety of ways; a few are listed below:
1. When giving a reading assignment, specify three annotation types for students to write and turn in.
2. As students become more skilled at annotation writing, they can be given the option of selecting from three annotation types the one that they would like to write in response to a reading assignment.
3. Assign each cooperative group member to write a different annotation type in response to a reading assignment. When students have finished reading and writing, they move to their assigned groups to share the annotations they have written and to offer constructive suggestions to one another on ways to clarify the response. Extra credit points can be offered to the group with the best annotation of each type as judged by the teacher or the class as a whole.
4. Introduce a new reading assignment by having students read annotations written by students in previous years’ classes or from a different section at the same grade level.
5. Provide incentive to read and write reflectively by posting exemplary annotations, signed by the author, on a bulletin board or a webpage, including some from different age-grade levels; in other words, raise some higher targets.
6. Use REAP annotation types as a guide for phrasing post-reading discussion questions, and encourage students to do the same.
7. From time to time, use the REAP annotations to guide students’ responses to non-text learning experiences: a video, a laboratory procedure, a piece of music or art, etc.
REAP Spectrum Thinking
The goal of all these applications is to help students internalize REAP “spectrum-thinking,” or thinking from different perspectives, to the point where it becomes a habit of mind: a familiar, comfortable, almost automatic mental strategy. Like other thinking strategies, REAP Spectrum-Thinking is helpful in negotiating everyday life, but it is particularly useful for independent, but no longer isolated, study. One strength of this strategy comes from hearing the thinking of others who are reacting to the same content or stimulus.
REAP Spectrum-Thinking is a flexible strategy, but it has two essential elements. First, it may begin with any of the response types, but at least one other, if not several, always follows the initial response. It is through this habit of multiple-stance responding that the learner is reminded to reflect at a higher level of social-emotional maturity, one beyond his or her initial, often egocentric, response, and possibly to perceive further meaning and connections. Some learners are naturally inclined to respond to reading from a critical stance. REAP Spectrum-Thinking reminds them to re-visit the information to check the basic facts before going too far down a path that may be based on misunderstandings. Other learners tend to respond from a basic reconstructive stance, but are disinclined to move to any of the constructive levels: REAP Spectrum-Thinking reminds them to do so. Most students (and adults) have fairly strong preferences as to the types of reading and responding that they like, feel ambivalent toward, and dislike. These preferences both reflect and influence the nature and degree of response to reading, viewing, and listening. The second essential element, or value, of REAP Spectrum-Thinking is parsimonious, or “fewer-words,” writing and responding. By definition, annotations are brief, requiring much more thinking than writing. Gifted children are as inclined as others to read and gather in, but not necessarily to think about, or be transformed by, what they read and learn. Since their gift of “learning easily” tends to slate them for more influential positions, it is in their interests and ours that they be reflective, multi-perspective thinkers. The requirement to write in response to reading, along with exposure to the responses of many others, instills greater sensitivity in them, even as they help others to think more effectively by learning to do so themselves.
In a small way, the world is a better place as we all learn to share and think more clearly. Importantly, too, many things that otherwise go unspoken may then be said, inviting possible growth beyond generalities and negativism. At a very basic level, such response-to-text reading-writing is conducive to more active learning, and is a foundational means of schema building (Rosenblatt, 1978; Cooper, 1985; Rosenblatt, 1985; Purves, 1993; Shelton, 1994; Blake, 1995; Kasonovich, 1996).
Full Spectrum-Thinking for Full Spectrum-Inclusion
As a teaching approach, REAP is an ideal way to provide for students in inclusion classrooms, across a broad spectrum of student abilities, needs, and cognitive styles. It also permits divergent-creative thinkers, who otherwise may be academic underachievers, to demonstrate their abilities while reminding them that communication requires them to be more attentive to form, sequence, and details. It urges concrete thinkers to think further and make more personal connections with the facts that they so easily seem to acquire. It deals directly with the core language/thinking systems that are the target of many IEPs for LD youngsters, and with the “mediation” process essential to most human learning. It lends itself well to the cooperative group work that is essential in aiding social and emotional development (see #3 above for one example). It evokes the type of pointed “instructional conversation” that “rouses minds to life” (Tharp & Gallimore, 1990).
The term iREAP represents endless possibilities for teaching and practicing REAP Spectrum-Thinking that, with Internet access, can now be translated into reality. Here are just a few examples of how iREAP might unfold as we continue to pilot-develop it.
• A teacher can easily store, access, & print student responses to a given reading selection, thereby building a library of student annotations for future students’ use as pre-reading schema activation, post-reading responding, and as models of poignant writing and thinking.
• The best student annotations can be posted on a classroom or school webpage, where they can be made accessible to, and from, students of diverse cultural perspectives, even while we have not yet figured out how to bring about full integration of the schools.
• Parents, school administrators, community members (and on outward to include anyone in the world speaking English) can be invited to submit annotations and responses to annotations for inclusion on the classroom or school webpage.
• Webpages of several schools can be linked (our current technical and logistical task).
• Volunteer, paid, peer, and/or cross-grade tutors can be trained to use REAP as the structural vehicle for conducting online instructional interactions. Tutors can prompt tutees to revise, edit, and re-submit their annotations of classroom assignments or supplementary materials. The result would be a cadre of Volunteers for Higher-Order Literacy able to assist teachers in reading and responding to pupils’ early drafts, thereby increasing the likelihood that teachers would more confidently give additional reading-writing assignments. [Writing now is a requirement on all standardized tests; nonetheless, there still is no help for middle/high school teachers who have an average of 120 students in reading and guiding writing.]
• A classroom (or school) could partner with a local bookstore to create an iREAP Book Club, where a “book of the month” is listed for annotation contributions. Students & community members (local and online) whose annotations are selected for online publication would receive credits toward purchases in the bookstore. Community members could contribute books purchased with credits to the classroom or school library. The bookstore could set up a couple of computer carrels where an iREAP activity could be viewed.
• The increased sale of books that seems to follow from their discussion could become a valuable source of supplementary income for schools from commercial book vendors.
• And, when educators fully grasp the value to education of the on-line ads and marketing that our kids see within moments of going on-line anyway, it might even become acceptable to have this iREAP system situated so that schools receive some of the revenue now flowing like a protected river through their campuses. There is a name other than “commercialism” for this option: it is called “Social Entrepreneurism,” and it could become the means to end the financial drought that has plagued education throughout the prior century. (See Manzo, 2001 for further details.)
Clinical and Empirical Evidence for REAP
REAP-related research has shown quite clearly that this proposed grammar, or architecture for electronic interaction, improves basic and higher-order literacy objectives, especially complexity of thinking, as well as major components of cognitive skill and social-emotional maturity, or non-egocentric thinking and reflecting. For example, Garber (1995) found that when middle school students participated in what she called transformational reader-response strategies to narrative text, essentially REAP, both cognitive complexity and social development were increased significantly when compared with students who used transactional and transmissional reader-response strategies. Standardized measures were used to evaluate complex thinking in this study. Results of a research study conducted by Albee (2000) indicated that when university students in children's literature courses experienced REAP Annotation Exchange Writing they showed a significant increase in cognitive complexity as measured by three different instruments, as compared with students who did not experience REAP Annotation Exchange.
As iREAP develops, teachers and schools can expect several things to happen:
• significant increase and improvement in higher-order thinking
• significant increase and improvement in reading comprehension
• significant increase and improvement in writing ability
• significant increase and improvement in content knowledge, cumulative learning and standardized test scores, and
• significant increase and improvement in our ways of building toward that most cherished of human needs, a sense of shared experience and membership in a caring community focused on some higher purpose.
The iREAP system, should we be able to fully launch and support it, would permit anyone to REAP the benefits of the reflections and thinking of the many on the ever-growing body of information, knowledge, and school curricula.
There Also May Be A Considerable Peace Dividend
Cultural integration may sometimes raise tensions, but cultural isolation almost always escalates into hostilities. A global iREAP system reaching all students and teachers could create meaningful cross-cultural, cross-boundary dialogues on great books and great thoughts. iREAP would move beyond the ritualized self-interests and sometimes crippling “dialogues” of governments, corporations, and even dogmatic religions. It would happen on a person-to-person basis. It would be an ongoing ecumenical council, building empathy and a sense of common cause in the form of small patches of understanding that, like vegetation on a hillside, can not only prevent further erosion but, when its seeds of sense and sensibility are caught in the winds, can spread their greening effects to the most remote of places. Importantly, such a system also could nearly compel those with irascible notions to express and reveal themselves, since the iREAP system would be an influential part of the new, web-based free markets in ideas. And, yes, there is danger in screening for terrorists and misfits, but this system also provides for empathy and teachable-moment reminders from our most civilized and grounded citizens to those who otherwise might be alone with their troubled, ungrounded thoughts.
An action plan to build and implement iREAP is developing on LiteracyLeaders.com. (Other relevant sites are: http://cctr.umkc.edu/user/dmartin/hol.html and http://members.aol.com/ReadShop/REAP1.html.) Meantime, feel comfortable teaching the grammar of disciplined reading, writing, and sharing at the classroom and school level. Brief writing following reading has been a fundamental and traditional part of education, especially in Europe, for hundreds of years. REAP simply is a more orderly way of teaching, scaffolding, and monitoring progress in doing so.
References
Albee, J.A. (2000). The effect of Read-Encode-Annotate-Ponder annotation exchange (REAP AnX) on the complex thinking of undergraduate students in children’s literature courses. Unpublished doctoral dissertation, University of Missouri-Kansas City, Kansas City, MO.
Eanet, M. G. (1978). An investigation of the REAP reading/study procedure: Its rationale and efficacy. In P. D. Pearson, & J. Hansen (Eds.), Reading: Disciplined inquiry in process and practice. The 27th yearbook of the National Reading Conference (pp. 229-232). Clemson, SC: National Reading Conference.
Eanet, M. G. (1983). Reading/writing: Finding and using the connection. The Missouri Reader, 8, 8-9.
Eanet, M. G., & Manzo, A.V. (1976). REAP—A strategy for improving reading/writing/study skills. Journal of Reading, 19, 647–652.
Garber, K. S. (1995). The effects of transmissional, transactional, and transformational reader-response strategies on middle school students’ thinking complexity and social development. Unpublished doctoral dissertation, University of Missouri-Kansas City, Kansas City, MO.
Kasonovich, M.L. (1996). The study of first graders’ ability to respond to and analyze picture books. (ERIC Document Reproduction Service No. ED 396249). Princeton, NJ: ERIC Clearinghouse on Assessment and Evaluation.
Manzo, A.V. (1973). CONPASS English: A demonstration project. Journal of Reading, 16, 539–545.
Manzo, A.V., & Manzo, U.C. (1995). Teaching children to be literate: A reflective approach. Fort Worth, TX: Harcourt Brace College Publishers.
Purves, A. (1993). Toward a re-evaluation of reader response and school literature. Language Arts, 70, 348-361.
Rosenblatt, L. (1978). The reader, the text, the poem: the transactional theory of the literary work. Carbondale, IL: Southern Illinois University Press.
Rosenblatt, L. (1985). The transactional theory of the literary work: Implications for research. In C. R. Cooper (Ed.), Researching response to literature and the teaching of literature. Norwood, NJ: Ablex.
Shelton, K.Y. (1994). Reader response theory in the high school English classroom. Paper presented at the Annual Meeting of the National Council of Teachers of English (Orlando, FL: November 16-21, 1994). (ERIC Document Reproduction Service No. 379655). Princeton, NJ: Educational Testing Service.
Tharp, R.G., & Gallimore, R. (1990). Rousing minds to life: Teaching, learning, and schooling in social context. New York: Cambridge University Press.
A Fun High Frequency, Whole Word Flash Card Practice Routine - "Say it like a Barbie!"*
A Facilitative Role Play Version of An Intensive Sight Word Paradigm
[* Vici Cope, a music teacher from Tustin, CA, first suggested this pretend refrain from a campfire song, “Say it Like a Barbie.”]
Begin: The teacher holds up a flash card or writes a word on the chalkboard:
Teacher: See this word? The word is and. Everyone look at this word, and say it together.
Students: And
Teacher: That’s correct. Now say it five times while looking at it.
S’s: And, and, and, and, and
T: Good. Now say it louder.
S’s: And!
T: Come on, you can say it louder than that!
S’s: AND!
T: Okay, I have three other cards here ("again", "answer", "arrange"). When I show a card that is not “and,” say “No!” in a loud voice. But when you see “and,” say it in a whisper.
S’s: No!
S’s: No!
S’s: (whisper) "and"
T: Great. Look at it carefully, and when I remove it, close your eyes and try to picture the word under your eyelids. Do you see it? Good. Now say it in a whisper again.
S’s: and
T: Good. Now spell it.
S’s: A...N...D
T: Now pretend to write it in the air in front of you with your finger while saying each letter.
S’s: A...N...D
T: Good. Now describe the word, the way you would describe a new kid to a friend who hasn't seen him yet.
S1: It’s small.
S2: It has a witch’s hat in the beginning.
S3: It has a belly at the end.
T: What’s its name again?
S’s: AND!
T: Ok, now let’s PRETEND. Let’s say it like a Barbie...Good, now like a He-Man, now long and scary like a Ghost. (This last pretend is most useful from the point of view of phonemic segmentation, because it draws out the word and hence the exposure to its letter sounds. The two prior pretends, however, seem to catch the imaginations of children, who will continue in this playful manner in their private speech, the self-speech that is barely audible.)
The teacher ideally should encourage such post-lesson learning by adding something like: Let’s search for “and’s” throughout the day and even after you go home tonight. We’ll ask you later if you found any in school, and again tomorrow morning if you found any at home.
And in the morning, to reward such self-instruction (the real purpose of all teaching), the teacher should have on the board, “Did you find any and’s last night?” You can expect to hear the answer to this question delivered like a Barbie, or some other invented character. Over the next few lessons, ask whether students have seen an and. Up to three words a day usually can be taught in this general way. It is best to be sure that the target words do not look too much alike in this early learning phase. Words that are shown in context with the object word and that do look like the object word should not be stressed. These often will be learned incidentally, as the student sets about distinguishing these look-alikes from the firm footing of words learned to 100% accuracy in flash recognition.
Facilitative Pretending Is a Legacy More than a Discovery
A little thought will reveal that Facilitative Role Play is validated by much of human experience, and has just been waiting to be named and made more accessible for a variety of instructional uses. If you will recall, Shakespeare, who seems to have captured much of human frailty as well as wisdom, had something to say about this in As You Like It:
All the world's a stage, And all the men and women merely Players
They have their Exits and their Entrances, And one man in his time plays many parts…
It will be interesting to see how many new parts are written for Facilitative Role Play, or Facilitative Pretending, should this former bit player win a marquee name.
[Excerpt from: Reading/Learning Assessment for Diagnostic-Prescriptive Teaching, 2nd edition (A. Manzo, U. Manzo, and Julie Albee). Belmont, CA: Thomson/Wadsworth Publishers (2004).]