The Five Big Ideas in Reading
Around the world, countless children struggle to learn to read. Some receive help while others do not; those who do not are likely to fall behind in school and struggle with reading throughout their lives. To combat this, researchers have identified effective methods for teaching children to read and, more importantly, have produced evidence-based research on reading instruction and the core ideas behind reading (University of Oregon Center, n.d.).
There are five core components of early reading: phonemic awareness, alphabetic principle, fluency with text, vocabulary, and comprehension. The article “Big Ideas in Beginning Reading” explains each of these components well. Phonemic awareness is the ability to hear and manipulate sounds in spoken words, and it provides a basic foundation for learning to read and spell. Alphabetic principle comprises two parts: alphabetic understanding and phonological recoding. Alphabetic understanding is the understanding that words are composed of letters that represent sounds. Phonological recoding is the use of systematic relationships between letters and phonemes (letter-sound correspondences) to retrieve the pronunciation of an unknown printed string or to spell words. Fluency with text is the speed and accuracy with which an individual can read words, with no noticeable cognitive effort; to gain meaning from a text, individuals must be able to read fluently. Vocabulary refers to an individual’s access to the meanings of words that teachers, caregivers, or other adults use to guide them in examining known concepts in novel ways. Comprehension is the essence of reading: a complex cognitive process in which the reader interacts with the text in order to extract meaning from what was read (University of Oregon Center, n.d.).
Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is a curriculum-based measurement that uses repeated, frequent administration of standard probes that act as screeners for students in need of intervention support in reading. DIBELS uses six tasks to measure the core components of early reading in children throughout the United States. The number of tasks a child takes depends on the child’s grade level, and each task focuses on a different reading skill. The six tasks are: First Sound Fluency (FSF), Phoneme Segmentation Fluency (PSF), Letter Naming Fluency (LNF), Nonsense Word Fluency (NWF), DIBELS Oral Reading Fluency (DORF), and Daze. Scores on these tasks indicate whether a child is likely to be on target for learning to read or may need assistance with important reading skills (Dynamic Measurement Group, 2011).
FSF measures awareness and understanding of the sound structure of language; spoken words are made up of sequences of individual speech sounds. A student’s fluency, or accuracy and speed, is assessed by asking him or her to identify the initial sound(s) within spoken words: in FSF, the assessor says words, and the student says the first sound of each word. LNF measures a student’s automatic retrieval of letter names and is an indicator of alphabetic principle and later reading success. For LNF, the student is presented with a sheet of letters and asked to name each one. PSF measures a student’s phonemic awareness by assessing fluency in segmenting spoken words into individual sound segments. In PSF, the assessor says words, and the student says the individual sounds in each word. NWF measures a student’s alphabetic principle, basic phonics, and knowledge of basic letter-sound correspondences. For NWF, the student is presented with a list of VC and CVC nonsense words (for example, sig, rav, or ov) and asked to read the words aloud. DORF measures a student’s advanced phonics, word attack skills, accuracy and fluency in reading connected text, and reading comprehension. In DORF, the student is presented with a reading passage and asked to read it aloud, then to retell what he or she just read. Daze measures a student’s ability to construct meaning from text using word recognition skills, prior knowledge, syntax and morphology, and cause-and-effect reasoning. For Daze, the student is presented with a reading passage in which some words are replaced by multiple-choice boxes containing the original word and two distractors; the student reads the passage silently and chooses the word that best fits the meaning of each sentence (Spenceley, 2018).
When it comes to progress monitoring, there are two major types: General Outcome Measures (GOMs) and mastery assessments. A GOM makes decisions about something complex by measuring something simple in the same way over a period of time. Shinn, an Aimsweb consultant, gives a good example of GOM, stating, “…Teachers can measure something simple or ‘little’ like oral reading for a short period of time (e.g., 1 minute) to make statements about something complex or ‘big’ like general reading ability” (Shinn, 2013). In contrast, mastery measurement measures different things in different ways at different points in time in order to make statements about something simple. For example, when teachers use mastery assessments while teaching addition facts or combination words, they are testing students only on those skills and nothing else. Statements can only be made about whether or not students have learned the addition facts or combination words that were taught (Shinn, 2013).
DIBELS benchmark goals are empirically derived, criterion-referenced target scores that represent adequate reading development. A benchmark goal indicates a level of skill at which the student is likely to achieve the next DIBELS benchmark goal or reading outcome; when a student achieves a benchmark goal, it is likely that the student will achieve later reading outcomes through the classroom curriculum. Cut-off scores indicate a level of skill below which the student is unlikely to achieve later reading goals without additional, targeted instructional support. Students scoring below the cut-off point are identified as likely to need intensive support, or interventions drawn from the curriculum or supplemental materials. Intensive support can include, but is not limited to, smaller-group instruction, more instructional time or practice, and/or more explicit modeling and instruction (Dynamic Measurement Group, 2011).
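The benchmark/cut-off logic described above amounts to a simple three-way decision rule. The sketch below illustrates that rule only; the function name and the threshold values in the example are hypothetical placeholders, not actual DIBELS norms, which are published per task and grade by the Dynamic Measurement Group:

```python
def support_level(score, benchmark, cutoff):
    """Classify a task score into a support recommendation.

    benchmark and cutoff are illustrative placeholders; real DIBELS
    values vary by task, grade, and time of year.
    """
    if score >= benchmark:
        return "core"       # at or above benchmark: likely to reach the next goal
    elif score < cutoff:
        return "intensive"  # below cut-off: unlikely to reach later goals
                            # without additional, targeted support
    else:
        return "strategic"  # between cut-off and benchmark: monitor closely

# Hypothetical thresholds: benchmark = 40, cut-off = 25
print(support_level(44, 40, 25))  # core
print(support_level(30, 40, 25))  # strategic
print(support_level(20, 40, 25))  # intensive
```

A score exactly at the cut-off falls in the middle band here; in practice, where borderline scores land is a policy decision made with the published scoring guides.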
There are many benefits to using DIBELS in comparison to standardized norm-referenced measures. DIBELS focuses on a wide range of skills and on what children are being taught, rather than measuring against a standard. Not every student is given the same test, but what is measured, such as reading, is the same for everyone, tailored to each student. DIBELS is used for universal screening, benchmark assessment, and progress monitoring, and it helps identify students at risk for reading difficulties by using grade-level materials. The ultimate goal is to support decisions about each individual student and to plan an intervention if needed. Because DIBELS results are used to examine individual student performance over time, interpretations are based on where a student’s skills stand relative to his or her past performance, rather than on comparison to same-aged peers. As described previously, benchmarks help identify a student’s reading progress and indicate a level of skill at which the student is likely to achieve the next DIBELS benchmark goal or reading outcome. With progress monitoring, examining student growth is essential to fueling Response to Intervention (RTI) models: progress monitoring helps teachers decide whether the instructional support a student is receiving addresses the student’s needs, and if not, changes should be made to better support the student. With standardized measures, by contrast, every test is given the same way to all students without modification; the items may not come from the curriculum, and difficulty increases with a child’s grade (Dynamic Measurement Group, 2011).
While DIBELS has its benefits, it also has limitations. First, DIBELS is supposed to be administered three times per school year, at the beginning, middle, and end of the year, for each student. While it takes only a short time to administer, ranging from three to eight minutes, some teachers may not want to spend this time on each student. Second, because DIBELS must be administered individually, it may take up instructional time in class. For example, if a teacher has a class of 25 students who need to be tested individually, the teacher may not have time to assess each one. It is suggested that a teacher can complete DIBELS testing for all students by assessing five students per day, but again this takes time away from instruction. Lastly, because DIBELS is not aligned to standards, some may consider that a limitation. The possibility of aligning DIBELS with standardized measures is discussed in the next paragraph (Dynamic Measurement Group, 2011).
DIBELS can certainly be used in conjunction with standardized norm-referenced measures of reading. I think the most appropriate approach would be to have all students take the DIBELS level appropriate for their grade; their performance would then indicate whether or not they are ready to take a standardized test. Currently, all students take standardized tests when they reach the proper grade, regardless of academic performance. Under a DIBELS-informed approach, a student whose reading skills are judged not ready would not be forced to take a test on which he or she cannot perform as strongly as same-aged peers. Teachers could then benchmark students’ reading progress and determine how to put them on track to perform well on the standardized reading test before moving on to the next grade level. I realize this may be more work for teachers, but I think it would help students stay on track with their same-aged peers, regardless of possible difficulties in reading.
Dynamic Measurement Group. (2011). DIBELS® Next assessment manual. Oregon: University
of Oregon Center on Teaching and Learning.
Shinn, M. (2013). Measuring General Outcomes: A Critical Component in Scientific and
Practical Progress Monitoring Practices. Retrieved April 18, 2018, from https://www.aimsweb.com/wp-content/uploads/Mark-Shinn-GOM_Master-Monitoring-White-Paper.pdf.
Spenceley, L. (2018). USE_DIBELS_Next [PowerPoint slides]. Retrieved from SUNY Oswego.
University of Oregon Center. (n.d.). Big Ideas in Beginning Reading. Retrieved April 18, 2018,