Targeted, Timely, and Effective: A Deep Dive into MTSS and Progress Monitoring
- DocHolbrook
- May 18
- 7 min read

Recently, a third-grade teacher asked me a thoughtful question about Tier 2 instruction for one of her students. Specifically, she wanted to know what a typical reading goal should look like and how many probes are needed to demonstrate student growth or mastery. Many teachers and administrators wrestle with this question, especially when implementing a Multi-Tiered System of Supports (MTSS) in reading.
In my experience across multiple districts, a common challenge is that most MTSS plans lack clarity and specificity regarding reading. Reading is complex and multidimensional, making it difficult to measure. Traditional tools like running records are fading from use, leaving many teachers unsure of how to determine whether a student has met grade-level benchmarks.
To better support teachers and students, we must understand both how MTSS works and why tailored reading interventions are necessary based on distinct reading disability profiles.
What is MTSS?
MTSS (Multi-Tiered System of Supports) is a framework that integrates evidence-based practices to deliver academic and behavioral support through multiple levels (or tiers) of intervention.
Core Characteristics of MTSS:
Universal screening for all students
Data-driven decision making
Tiered interventions based on need
Ongoing progress monitoring
High-quality core instruction
Flexible movement between tiers
MTSS is designed to identify and address academic challenges early. But this requires more than just identifying students who are "behind"—we need to understand why they struggle with reading.
The Three Main Reading Disability Profiles
Reading difficulties are not all the same. Students with reading challenges typically fall into one of the following profiles:
1. Specific Word Reading Disability (SWRD) – Often referred to as Dyslexia
Core Problem: Phonological processing deficit
Characteristics: Difficulty decoding, poor spelling, slow and inaccurate word reading
Assessments: Phonemic awareness tests, nonsense word decoding, ORF (Oral Reading Fluency)
Instructional Focus:
Systematic phonics
Explicit instruction in phoneme-grapheme mapping
Multisensory methods (e.g., Orton-Gillingham)
2. Specific Reading Comprehension Disability (SRCD)
Core Problem: Language comprehension deficit
Characteristics: Fluent word reading but poor understanding of text
Possible Causes: Weak vocabulary, limited background knowledge, or social-linguistic impairments
Assessments: Listening comprehension, vocabulary tests, maze assessments
Instructional Focus:
Vocabulary development
Explicit teaching of comprehension strategies (e.g., summarizing, inferencing)
Building background knowledge
3. Mixed Reading Disability (MRD)
Core Problem: Combination of decoding and comprehension deficits
Characteristics: Difficulty with both phonological processing and language comprehension; low fluency
Assessments: Comprehensive reading evaluations including ORF, phonics screeners, vocabulary, and listening comprehension
Instructional Focus:
Combine approaches for SWRD and SRCD
Leverage student interests to boost engagement
Scaffolded, high-frequency, small-group instruction
In many schools, teachers rely heavily on diagnostic, formative, and summative assessments to track student progress. While these tools can provide useful information, there are a few limitations when it comes to tracking progress. First, they can be time-consuming to administer and analyze. Second, they might show growth in a specific skill, but that growth doesn't always reflect whether the student is actually on grade level. Instead, I recommend using a combination of these assessments alongside a curriculum-based measure (CBM) to monitor whether students are making adequate progress toward grade-level expectations.
CBM is a standardized method for assessing and monitoring students’ academic progress frequently and efficiently. It’s backed by decades of research and is a core tool within MTSS.
Key Features of CBM:
Brief and reliable
Administered weekly or at benchmark intervals
Multiple alternate forms (different probes of equivalent difficulty)
Linked to year-long curriculum expectations
Aligned with standards and high-stakes testing
CBM allows educators to:
Track growth over time
Evaluate intervention effectiveness
Adjust instruction based on real-time data
Communicate clearly with families and staff
Matching CBM to Reading Disability Type
Common CBM Reading Tasks by Grade:
Grade | Task | Purpose |
K | Letter Sound Fluency (LSF) | Phonological awareness |
1 | Word Identification Fluency (WIF), ORF | Decoding |
2-3 | ORF | Fluency & decoding |
4+ | Maze | Comprehension |
Setting Goals for Tier 2 Students
When setting goals for students receiving Tier 2 support, use either benchmark norms or growth rate formulas.
Example (Using Growth Rate Formula):
Initial Median Score = 55 WRC (words read correctly)
Weekly Growth Rate (Grade 3) = 0.8 WRC/week
Instructional Weeks = 30
Goal = 55 + (0.8 × 30) = 79 WRC
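If you want to double-check that arithmetic or drop it into a spreadsheet or script, here is a minimal sketch in Python. The function name is just for illustration; the 0.8 WRC/week growth rate is the example figure above, drawn from published weekly growth norms.

```python
def tier2_goal(baseline_median, weekly_growth, instructional_weeks):
    """End-of-year goal = baseline median + (expected weekly growth x weeks)."""
    return baseline_median + weekly_growth * instructional_weeks

# Example from above: a 3rd grader reading 55 WRC, growing 0.8 WRC/week over 30 weeks.
print(tier2_goal(55, 0.8, 30))  # 79.0 WRC
```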
How Often Should You Administer CBM Probes?
Benchmarking: 3 times/year (Fall, Winter, Spring)
Progress Monitoring for At-Risk Students: 1–2 times/week
Frequent monitoring allows for early identification of stagnant growth and timely instructional adjustments. Most research recommends once a week, but anyone who has worked in a classroom knows that pace is rarely sustainable. I usually tell my teachers one to two times a month, depending on the tier. Keep in mind, though, that the more you spread out your progress monitoring, the longer it will take to gather enough data points to make a decision.
What is a probe?
In the context of MTSS (Multi-Tiered System of Supports), CBM (Curriculum-Based Measurement), and progress monitoring, a probe is a short, timed assessment that measures a specific academic skill—most commonly in reading, math, or writing.
Probe: A brief, standardized assessment tool used regularly (often weekly or biweekly) to measure student performance in a targeted skill area. Probes are part of CBM and are used within MTSS frameworks to track students' progress toward grade-level goals over time.
In reading, for example:
A reading fluency probe might be a one-minute oral reading passage used to count words correct per minute (WCPM).
These probes are quick, repeatable, and sensitive to small changes in performance—making them ideal for identifying whether interventions are effective.
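To make the scoring concrete, here is a minimal sketch of how a one-minute fluency probe is typically scored. This is a generic illustration, not any particular vendor's scoring rules; the function name and the numbers are hypothetical.

```python
def words_correct_per_minute(words_attempted, errors, seconds=60):
    """Score a timed oral reading probe as words correct per minute (WCPM)."""
    words_correct = words_attempted - errors       # misread or skipped words don't count
    return words_correct * 60 / seconds            # scale to a one-minute rate

# Hypothetical example: the student reads 68 words in one minute with 5 errors.
print(words_correct_per_minute(68, 5))  # 63.0 WCPM
```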
Why Probes Matter:
They help educators make data-based decisions.
They allow for early identification of students who are not responding to instruction.
They track whether students are making adequate progress toward grade-level standards.
Where to Find CBM Probes
DIBELS – https://dibels.uoregon.edu
EdCheckup – www.edcheckup.com
Monitoring Basic Skills Progress (PRO-ED) – www.proedinc.com
Vanderbilt CBM Resources – Vanderbilt Education
Assessments that are not probes: diagnostic, formative, and summative assessments (including running records) still provide useful information, but they are not brief, standardized, or repeatable enough to serve as probes for frequent progress monitoring.
Using Aimlines in Progress Monitoring to Drive Instruction
Once you've identified a student who needs targeted intervention, the next step is to track their progress over time. One of the most effective ways to do this is by graphing scores and creating an aimline—a visual guide that helps you decide whether your instruction is working or needs adjusting.
Step 1: Set Up Your Graph
You can use two main methods to graph progress monitoring data:
Option 1: Paper and Pencil
Vertical Axis (Y-axis): Plot the range of possible scores—this could be Words Read Correctly (WRC) or another CBM metric. Make sure it covers from 0 to the highest expected score.
Horizontal Axis (X-axis): Label the number of instructional weeks. Make space for 1–2 data points per week.
Create a template and make one graph per student.
Option 2: Use a Digital Tool
Chart Dog (www.interventioncentral.org) offers free, web-based graphing tools.
These tools allow you to input scores, automatically calculate trend lines, and save digital copies of your progress monitoring data.
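If you are comfortable with a small script instead of (or alongside) a web tool, here is a minimal sketch using Python's matplotlib that sets up the same kind of graph: WRC on the Y-axis, instructional weeks on the X-axis, one graph per student. The scores are hypothetical.

```python
import matplotlib.pyplot as plt

weeks = [1, 2, 3, 4, 5, 6]           # instructional weeks monitored so far
scores = [55, 54, 58, 57, 60, 62]    # hypothetical weekly WRC scores

fig, ax = plt.subplots()
ax.plot(weeks, scores, marker="o", label="Weekly probe (WRC)")
ax.set_xlabel("Instructional week")
ax.set_ylabel("Words Read Correctly (WRC)")
ax.set_ylim(0, 100)   # cover 0 up to the highest expected score
ax.set_xlim(0, 30)    # leave room for a full year of data points
ax.set_title("Progress monitoring graph (one per student)")
ax.legend()
plt.show()
```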
Step 2: Create an Aimline
An aimline represents the expected rate of growth from a student's current performance to their end-of-year goal. To draw one:
Take the median of the first 3 data points (baseline scores).
Plot that point on the graph.
Mark the target score (e.g., spring benchmark).
Draw a straight line connecting the baseline to the target. This is your aimline.
This visual helps you quickly determine if the student is on track.
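Numerically, the aimline is just a straight line from the baseline median to the end-of-year target. Here is a minimal sketch of that calculation with hypothetical baseline scores; the goal of 79 WRC matches the earlier Tier 2 example.

```python
from statistics import median

baseline_scores = [52, 55, 58]     # first three probes (hypothetical)
baseline = median(baseline_scores)
goal = 79                          # end-of-year target, e.g., the spring benchmark
total_weeks = 30                   # instructional weeks between baseline and goal

def aimline(week):
    """Expected score at a given week if the student grows at the needed rate."""
    return baseline + (goal - baseline) * week / total_weeks

for week in (0, 10, 20, 30):
    print(week, round(aimline(week), 1))   # 55.0, 63.0, 71.0, 79.0
```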
Step 3: Use Aimline Data to Make Instructional Decisions
Once your aimline is established, you’ll compare the student’s weekly scores (data points) to the aimline to evaluate progress.
Use the “Rule of 3” to guide your next steps:
Pattern | What It Means | What to Do |
3 or more scores on or near the aimline | Student is making expected progress | Continue with current instruction |
3 or more scores below the aimline | Student is not responding adequately | Adjust instruction (e.g., increase intensity, change materials) |
3 or more scores above the aimline | Student is exceeding expectations | Raise the goal while continuing current instruction |


This approach allows you to respond quickly to student needs and ensures that interventions are data-driven—not just based on instinct.
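The decision rules above translate directly into a simple check you could run each time you update the graph. This is a sketch only; the "near the aimline" tolerance of 2 WRC is my own assumption, so adjust it to whatever your team treats as meaningfully above or below the line.

```python
def rule_of_three(scores, aimline_values, tolerance=2):
    """Compare the three most recent probes to the aimline and suggest a next step."""
    recent = list(zip(scores[-3:], aimline_values[-3:]))
    if len(recent) < 3:
        return "Collect more data before changing anything."
    if all(score < aim - tolerance for score, aim in recent):
        return "Not responding adequately: adjust instruction."
    if all(score > aim + tolerance for score, aim in recent):
        return "Exceeding expectations: raise the goal."
    return "Making expected progress: continue current instruction."

# Hypothetical: three probes in a row fall well below the aimline.
print(rule_of_three([60, 58, 61], [64, 65, 66]))  # adjust instruction
```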
Be Consistent
Progress monitoring works best when it's done regularly—ideally once per week for students receiving Tier 2 or Tier 3 support. The more data you have, the clearer the picture of student growth.
Aimlines aren't just lines on a graph—they're powerful tools that help you fine-tune instruction in real time, helping students get exactly what they need to succeed.
Final Thoughts
When addressing reading difficulties within MTSS, identifying the student’s reading profile is crucial. A one-size-fits-all intervention won’t work. Understanding whether a student struggles with decoding, comprehension, or both is the first step in delivering effective Tier 2 support.
So, back to that teacher’s original question: What should the goal be, and how often should I monitor progress? The answer depends on the student’s profile, but in general:
Set a goal based on benchmark norms or growth expectations.
Monitor weekly (or biweekly) to assess intervention effectiveness.
Match instruction to the student’s specific reading needs.
The more precise we are with assessments and interventions, the more likely we are to move students forward in meaningful, measurable ways.
References
AIMSweb. (2006). Oral reading fluency norms [Data file]. Available at http://www.aimsweb.com.
Capizzi, A. C., & Barton-Arwood, S. M. (2009). Using a CBM graphic organizer to facilitate collaboration in reading. Intervention in School and Clinic, 45(1), 14-23.
Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on student achievement, pedagogy, and student awareness of learning. American Educational Research Journal, 21, 449-460.
Fuchs, L. S., & Fuchs, D. (2004). Determining adequate yearly progress from kindergarten through grade six with curriculum-based measurement. Assessment for Effective Intervention, 29(4), 25-38.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989). Effects of instrumental use of curriculum-based measurement to enhance instructional programs. Remedial and Special Education, 10(2), 43-52.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.
Good, R. H., III, Simmons, D. C., & Kameenui, E. J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5(3), 257-288.
Hosp, M. K., & Hosp, J. L. (2003). Curriculum-based measurement for reading, spelling, and math: How to do it and why. Preventing School Failure, 48(1), 10-17.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM. New York: Guilford.
Spear-Swerling, L. (2015). Common Types of Reading Problems and How to Help Children Who Have Them. The Reading Teacher, 69(5), 513-522. doi:10.1002/trtr.1410