Many educators are aware of the importance of promoting students’ social-emotional skills through well-coordinated, well-implemented social-emotional learning (SEL) programs and practices. But determining whether those approaches are being implemented effectively, and how much impact they are having on student outcomes, can be more difficult.
Many districts and schools now regularly collect data assessing students’ social-emotional and behavioral skills. Data from assessments and screeners are typically used to identify students who need additional support, while other data, such as behavior monitoring, are commonly used to track the progress students are making toward their goals.
You may have heard the old saying, “Many make decisions by guessing or using their gut. They will be either very lucky or very wrong.” Heeding these wise words, educators have worked diligently over the last decade to learn how to gather, compile, analyze, and use data so that student success is not left to being “very lucky” or “very wrong.”
Within an MTSS (Multi-Tiered System of Supports) framework, educators are asked to collect and interpret different kinds of data to drive their decision-making. A successful MTSS implementation relies on a wide range of data, from screening and progress monitoring assessments to tier movement and benchmark growth.
I can still hear my students groan every time I announce “pop quiz time!” My countless hours of learning about secondary education had taught me that a solid instructional strategy was rooted in tests, tests, tests. Test the kids before they learn, test the kids while they learn, and test them after they learn. And then again—test the kids the next day, too—just to make sure they remember what we did yesterday.
As a teacher, I always sought to have some form of assessment embedded throughout every lesson because that was the foundation of good teaching, right? However, I was never taught what to do with the results of all that testing. I had all this great data at my fingertips, but I was drowning in data points, multiple-choice scores, and whether or not spelling should count in a short answer. So how did that help me help my students?
A robust Multi-Tiered System of Supports (MTSS) relies on a systematic data collection process. We are told to ensure that we have a universal screener and progress monitors, but it’s just as vital that we know what to do with our assessment data after we go through the process of gathering it.
To make smart, data-driven decisions that support our MTSS process, we need a clear understanding of the role each assessment plays in an MTSS model. That way, we aren’t drowning in data without any idea of where it is supposed to take us.
Cue: this handy-dandy assessment table and breakdown of assessments.
Before becoming a professional development consultant with Branching Minds, I spent 34 years in the roles of teacher, interventionist, and instructional specialist; and I’m currently supporting a school district as they continue to improve their MTSS practice. My roles allow me to spend time with teachers and administrators from all over the country. And while fall has everyone drinking, eating, and smelling all things pumpkin... for those in education, this season also ushers in a time of data and stress.
With the arrival of fall comes the arrival of student scores from the Beginning of Year administration of universal screeners. Universal screeners are assessment tools for identifying students who struggle to learn even when provided a scientific, evidence-based general education core curriculum (Jenkins, Hudson, & Johnson, 2007). Typically, these assessments are administered to all students three times per year: at the beginning, middle, and end of the year.
After administering the universal screener, we as educators would expect to see roughly 80% of students in Tier 1, indicating they are meeting grade-level expectations; 10% to 15% in Tier 2, indicating performance below grade-level expectations; and 5% to 10% in Tier 3, indicating performance well below grade-level expectations.
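To make the tier breakdown above concrete, here is a minimal sketch of how screener results might be sorted into tiers and compared against that expected distribution. The function names and the percentile cut points (40th and 20th) are illustrative assumptions, not any screener vendor's actual scoring rules; real programs set cut scores empirically.

```python
# Illustrative sketch: classify students into tiers from universal screener
# percentile scores, then report the schoolwide distribution so it can be
# compared to the expected ~80% / 10-15% / 5-10% pattern.
# NOTE: the cut points below (>= 40th percentile -> Tier 1, 20th-39th -> Tier 2,
# below 20th -> Tier 3) are hypothetical, chosen only for demonstration.

def tier_for(percentile):
    """Map a single screener percentile score to a tier (hypothetical cuts)."""
    if percentile >= 40:
        return 1
    elif percentile >= 20:
        return 2
    return 3

def tier_distribution(percentiles):
    """Return the percentage of students falling into each tier."""
    counts = {1: 0, 2: 0, 3: 0}
    for p in percentiles:
        counts[tier_for(p)] += 1
    total = len(percentiles)
    return {tier: round(100 * n / total, 1) for tier, n in counts.items()}

# Example: screener percentiles for a class of 20 students
scores = [85, 62, 45, 91, 38, 22, 55, 70, 15, 48,
          77, 33, 41, 88, 52, 19, 66, 29, 74, 58]
print(tier_distribution(scores))  # -> {1: 70.0, 2: 20.0, 3: 10.0}
```

In this example, only 70% of students land in Tier 1, which would prompt a team to look at strengthening core (Tier 1) instruction rather than only adding interventions.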
In 2001, motivated by the desire to make US education rankings more competitive globally, the G.W. Bush administration pushed through an initiative called No Child Left Behind (NCLB). Under this initiative, schools were held accountable for student success as determined by state testing. Schools that did not make adequate yearly progress (AYP) on state exams could be penalized, placed under state supervision, and required to make significant improvements to their programming.

Alongside NCLB came Response to Intervention (RTI), a practice designed to help educators apply teaching best practices to proactively identify and intervene on behalf of students needing additional support. Whereas state tests served as an accountability measure to determine whether students had made adequate progress for NCLB’s purposes, RTI practices pushed educators to seek out more proactive data, such as benchmark assessments (tri-annual broad outcome measures) that sampled students’ mastery of grade-level skills. Using adaptive measures that adjusted the level of difficulty based on previous responses, these assessments could identify every student’s ability level and compare it to local and national samples. School teams analyzed these data early in the academic year to identify the students at highest risk and ensure they received more, and more targeted, instruction in deficit areas.

Students identified as needing intervention were then briefly assessed once a week or twice a month to get small samples of their growth in a specific skill area. This “progress monitoring” was designed to help educators evaluate the quality of a student’s response to the intervention they received. If students showed growth, they could graduate from the additional support. If students struggled to progress, teachers used tracking graphs to determine whether to change or intensify what they were doing to support the student.
As a professional development consultant for Branching Minds, I work with teachers and administrators from all over the country. I frequently get asked how progress monitoring should look at the middle school and high school levels.
Effective progress monitoring is critical for a successful MTSS/RTI practice. In addition to universal screening assessments, which are given to all students three times a year, students receiving Tier 2 or Tier 3 support should be given a progress monitoring assessment every other week or weekly, respectively. These data give us better visibility into whether our support is working for a given student, and, more importantly, flag when it is not, so that we can quickly adjust the intervention approach to better meet that student’s needs.
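The "is the support working?" question above is often answered by comparing a student's growth trend against the growth they would need to reach their goal on time. The sketch below shows that idea under illustrative assumptions; the function names, the decision labels, and the simple slope-versus-goal rule are hypothetical, not a specific program's published decision protocol.

```python
# Hedged sketch of a common progress-monitoring idea: fit a least-squares
# trend line to a student's weekly probe scores and compare its slope to the
# per-week growth needed to reach the goal on schedule. The decision rule and
# names here are illustrative only.

def slope(values):
    """Least-squares slope of scores against their 0-based week index."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def evaluate_progress(scores, baseline, goal, weeks_to_goal):
    """Compare actual weekly growth to the growth needed to hit the goal."""
    needed = (goal - baseline) / weeks_to_goal  # growth needed per week
    actual = slope(scores)
    return "continue" if actual >= needed else "adjust or intensify"

# Example: six weekly oral reading fluency probes; goal of 90 words correct
# per minute in 24 weeks, starting from a baseline of 42
probes = [42, 44, 43, 47, 49, 52]
print(evaluate_progress(probes, baseline=42, goal=90, weeks_to_goal=24))
# -> adjust or intensify  (trend of ~1.97 words/week vs. 2.0 needed)
```

In practice, teams would also consider how many data points they have and whether scores sit above or below an aim line before changing an intervention, but the slope comparison captures the core reasoning.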
Assessments used for progress monitoring should be quick, skill-based (not content-based), and valid and reliable (i.e., demonstrated to measure accurately and consistently what they are intended to evaluate). The National Center on Intensive Intervention has a helpful chart that evaluates and compares these qualities for common progress monitoring assessments.
Among many of the COVID-19 and remote learning struggles for educators, understanding students’ assessment data has been one of the most common challenges. Interpreting student scores from universal screeners and benchmarks, and using the data to inform instruction and support, is an essential component of any MTSS framework.
Without this information, educators must rely solely on their own observations of students to determine who is keeping up and who is falling behind. And of course, this becomes even more of a struggle when teachers aren’t able to observe and work with their students in person.
These types of issues will likely stick around for a while, and as long as students continue learning remotely, it is essential to figure out how to work with the data and information that are available. Below are common concerns educators have about assessment data from their remote learners, along with suggestions for how to address them.