Are you spending too much time on assessments without getting the clarity you need? In this episode, we break down which reading assessments actually matter, which ones may be wasting your time, and how to make smarter, more efficient decisions about the data you collect.
Show Notes
Guests in this Episode:
- Andrea Setmeyer – National Chapter Director at The Reading League and former school psychologist. Andrea works with literacy leaders and educators across the country to support evidence-aligned assessment and instruction.
- Dr. Adrea Truckenmiller – Associate Professor at Michigan State University and a school psychologist. Her research focuses on assessment, writing development, and improving how educators use data to inform instruction. She is the principal investigator of the federally funded Writing Architect project.
Resources & References Mentioned:
- The Reading League Journal (Jan/Feb 2026 issue) Subscribe.
- Article: “Balancing Cost and Accuracy: How to Select Reading Assessments for Universal Screening” - Intervention in School and Clinic (Special Issue on Assessment), guest edited by Adrea Truckenmiller & Jessica Toste
- Writing Architect Project (assessment tool for identifying key writing components to support student progress)
- Book: Reading Assessment Done Right by Stephanie Stollar & Kate Winn
- Common Assessment Tools Referenced:
- DIBELS: Dynamic Indicators of Basic Early Literacy Skills
- Acadience Reading Assessments
- FastBridge
- easyCBM
- MAP Growth
- Amira
💬 Want more insights like this?
Subscribe to the Literacy Talks Podcast Digest for episode recaps, resources, and teaching takeaways delivered straight to your inbox!
Do you teach Structured Literacy in a K–3 setting?
Sign up for a free license of Reading Horizons Discovery® LIVE and start teaching right away—no setup, no hassle. Sign Up Now.
Coming Soon: Reading Horizons Ascend™
From Pre-K readiness to advanced fluency, Ascend™ offers a consistent, needs-based reading experience across every grade, tier, and model—so every student can build mastery, one skill at a time. Learn More.
View Transcript
Narrator:
Welcome to Literacy Talks, the podcast for literacy leaders and champions everywhere, brought to you by Reading Horizons. Literacy Talks is the place to discover new ideas, trends, insights, and practical strategies for helping all learners reach reading proficiency. Our hosts are Stacy Hurst, a professor at Southern Utah University and Chief Academic Advisor for Reading Horizons; Donell Pons, a recognized expert and advocate in literacy, dyslexia, and special education; and Lindsay Kemeny, an elementary classroom teacher, author, and speaker. Now let's talk literacy.
Stacy Hurst:
Welcome to this episode of Literacy Talks. I'm Stacy Hurst, and I'm joined today by Lindsay Kemeny. We are going to excuse Donell; she is traveling. And we have two very awesome guests today: Andrea Setmeyer and Adrea Truckenmiller will be joining us. If you listen to the podcast, you've heard a series of episodes now about assessment, and this is all leading up to The Reading League Summit in May. We get to expand that conversation today by talking to Andrea and Adrea. So welcome, and welcome back, Lindsay. Thanks so much for having us. This is great. Yeah, we love that you're here. We'll just start out by asking you to share with our listeners a little bit about your background. We have had the privilege of having Andrea on a few other episodes, but we will likely have some listeners who have never heard from you, Andrea; this might be their first episode, so go ahead and reintroduce yourself. How about we start with Andrea and then Adrea?
Andrea Setmeyer:
Great. I'm so excited to be back. My literacy journey is as a school psychologist; that's how I came into this work. I had a really fantastic training experience and felt like I was really grounded in research and knew a lot about assessment. Then I went and worked in schools, and felt like I was speaking a different language than the types of assessment and the types of data that the teachers I was working with were interested in. It was really a challenging experience to try to find common ground, and I was more aware than ever of the challenge of time, right? How much time good assessment takes, and the lack of time that many teachers felt like they had for it. So that's what got me passionate about assessment from the beginning. I then started the Indiana chapter of The Reading League, and came to The Reading League as the National Chapter Director, where I have the privilege of supporting all kinds of literacy leaders and educators across the country.
Stacy Hurst:
Great. Well, welcome back. And if you haven't heard our other episodes, go back and you'll hear even more from Andrea and her experience as a school psychologist. And Adrea, we have been so looking forward to talking with you. So can you share with our listeners a bit about your extensive background?
Adrea Truckenmiller:
Sure, I'll try to give an abbreviated version. I'm Adrea Truckenmiller, an associate professor at Michigan State University in the special education program. I'm also affiliated with the school psychology program. Like Andrea, I'm a school psychologist; I have my PhD in school psychology, and I'm a nationally certified school psychologist. But I am obsessed with the intersection between school psychology and special education, and they fit really well in our multidisciplinary special education program here at MSU. I also like to be multidisciplinary with speech-language pathologists, and have lots of those master's students and PhD students in my lab as well. We're a pretty eclectic group in this research area, so I have several lines of research. My biggest one right now: I'm the principal investigator of a federally funded study called the Writing Architect. We're in the final year of that study, which is an assessment to make visible the components of writing that will move students forward in writing. Somewhat similar to Andrea, when I was in grad school, I had a student that I was working with, doing a reading intervention, and I couldn't figure out what the student needed. I'd been trained, and I felt like I knew what I needed to do: I just needed to sit and read with this student and work with them. But they were struggling with the words, and I had no idea what to do and how to help her. And so it really drove me to ask, okay, what do we need to do to make visible what it is the students need next that will be most effective in moving them along? After graduate school, I spent several years at the Florida Center for Reading Research as a research associate. I mostly coordinated Dr. Barbara Foorman's grant projects; there's an interesting recording with The Reading League on that one. I also directed a statewide assessment while I was in Florida.
I also did a little bit of work with Mississippi. Back in 2013, I worked for the Regional Educational Laboratory Southeast, and we were consulting with them when they first started their reading planning and needed to know what assessments to use. So I came over and helped talk with the excellent, excellent group there. And then I came to Michigan State in 2016.
Stacy Hurst:
Wow. Well, thank you, and that does help us see how you got here, focusing on assessment and intervention. It seems like we all have in common that our experiences probably provided that cognitive dissonance at one point or another: wait a minute, I thought I should know how to do this, and I'm not seeing it. As our listeners know by now, the focus of the summit this May is on assessment. Why do you think that's such an important topic to address right now?
Andrea Setmeyer:
I think there are a lot of things happening in the field, good things, in terms of our understanding of reading development and some of the models that we call the science of reading. I think we're really moving the field forward in our understanding of evidence-aligned instruction and intervention, and some of those really important things that we need to do in classrooms, intervention, and special education all the time. But I'm still seeing a disconnect in terms of our understanding of assessment. We're mandating things that serve similar purposes to what we're already doing in schools, as an example, or we are using the wrong data point on a report to make a decision. There's still a lack of understanding of some of those basic principles of really good-quality, evidence-aligned assessment. And I'm really hopeful that the summit and a focus on assessment this year will help us catch up in our understanding of assessment, so that we can move our instruction even more quickly.
Stacy Hurst:
That's great, very important. Adrea, how would you answer that question?
Adrea Truckenmiller:
Yeah, it's been "fun," quote unquote, to see people come ask these really good questions. I get principals and literacy coaches and state leads coming to ask which data points really are the most important to look at, and it's coming from a new place. Like Andrea pointed out, before, it was about the mandates: okay, we just have to figure out how to administer this assessment. And just figuring out how to administer the assessments was a big lift for many places. Now people have been administering assessments for a while, and they want to appropriately reduce the time spent in assessment, so they need to understand the assessments more deeply, and they want to know which ones they can use most effectively.
Stacy Hurst:
Yeah, yeah, which is important. And I know we brought this up on other episodes too, but I think perception around assessment is so varied, and some teachers might just view it like you mentioned those mandates, right? We have to do it, so we're going to do it. But how do you shift from that kind of perspective, or maybe idea about what assessment is to actually using data in a way that will help your instruction. What does that shift look like?
Adrea Truckenmiller:
It's been interesting watching it. When I was in Florida, I was lucky enough to run around the state and sit in on a lot of data team meetings, and that was really eye-opening: understanding how so many different people come to the table, right? They're all sitting at the same conference room table looking at assessment. And what are they looking at? To Andrea's point, which data points are they looking at? We all have our own training. A lot of people in the education field have been trained in informal-type inventories, and so they come at it from the perspective of what's the text level for the student, and they're zeroed in on that piece of information. Whereas in my training, I was trained in CBM, curriculum-based measures, so I would zero in on that data. But then, when I went to the Florida Center for Reading Research, we were working on brand-new, really cutting-edge computer adaptive assessments, and my eyes were opened to the strengths that those could bring and what those could actually do. So it's understanding that everybody comes from their own knowledge base, and having a team approach, I think, is key to that shift from compliance exercise to practice. It's very exciting in schools to see the literacy specialist, the literacy coach, the principal, the gen ed teacher, the school psychologist, and the speech-language pathologist all bringing their expertise. And something that I've been really excited about is empowering each other to understand the other person's role, the expertise that they bring to this, and what they see.
So it was a lot of working to come to a team perspective, instead of a top-down one: I'm the principal, and I need this kind of information, and once I have that, you can do whatever you want with the rest of the information. It's now more about empowering the team to do something that will move student outcomes. And I think having that empowering perspective is key.
Stacy Hurst:
That really does portray empowerment, as you're talking about all those people being involved with assessment in the same way. And Lindsay, you and I have talked about this a lot, but what is your perspective on assessment? Lindsay, you're in the trenches of this. Based on what Adrea and Andrea just shared, what's your lived experience?
Lindsay Kemeny:
Well, yeah, I've had a big shift myself with assessment. The first thing was just really understanding the purpose: it's not a gotcha, and it's not, oh, this teacher's better than that teacher, or this class is doing better. It's all about support and how we can help the students better. I can think back to myself not really understanding, and hearing little rumblings about how terrible a certain assessment is. Like, why in the world are we doing nonsense words? And you're kind of like, yeah, that's true, I want them to read real words, you know, and you get kind of caught up. Until now, I feel like I have a lot of clarity, not about everything with assessment, but with certain aspects, like the nonsense words. I totally understand why we're doing that, so I know the purpose. So understanding the purpose is, I guess, the first thing, but then it's knowing what to do and how to use the data. And if we don't do anything with the data, then there was no point in giving the assessment. So don't make it a waste of time; use that data. And that's something, I think, that can take a little time to learn. And then, as everyone was mentioning, we all come at it with different backgrounds. Some teachers have had really negative experiences, where administration or something might have made them feel really bad about data, you know. And so I think it's a whole other thing to learn how to separate yourself from the data, not to take it personally, and truly use it to move your students forward.
Stacy Hurst:
Yeah, I love that. I'm also thinking, as you were talking: I had the same response to nonsense words originally, but they're so powerful as a form of assessment. And then, Adrea, you mentioned that team involved, and the way we apply that, right? If we're thinking of assessment as something that is going to be judgmental of our teaching, then it is tempting to teach to the test. As a literacy coach, I saw many teachers literally teaching nonsense words or sending lists of nonsense words home, and that's where I think that team approach is so beneficial, because, Lindsay, like you mentioned, now you understand the purpose, and you have this knowledge of what that means for your instruction. We all need to come together to support teachers in getting to that point, which is one reason why I'm so excited about this summit. And Adrea, you mentioned screeners; I think most states have a mandate around literacy screeners at this point, and in one of your responses, you mentioned different types of screeners. Could you tell us about the types of screeners, maybe their limitations and benefits, and what you're seeing most in application?
Adrea Truckenmiller:
Yeah, what a great question. We've been trying to label assessments and what they're doing, using different terms and trying to come up with the best term for different types of assessment, and it's been a wrestling match to figure out what makes it clearest. My goal is always what makes it most effective, given how they function. One of the ways we've described it is based on working really closely with a school district that asked us: hey, we've been administering these three different types of screening assessments for a long time now, and we get these three pieces of data, and we all sit down and look at them, and we want to make sure we're looking at the things that make the most sense. Can you help us? And I was like, wow, you're using three different screeners. That's a lot. But working with them made me understand why; it's very logical why they were using three different screeners, and they fall into three categories that we then defined. So this school, and many schools, used a curriculum-based measure. There's a whole bunch of curriculum-based measures out there: DIBELS, Acadience, FastBridge, easyCBM, a whole slew of them. But they all are pretty similar in concept, in that they're measuring oral reading fluency, for example, at third grade, or word reading fluency at first grade. So they're a measure of decoding, and they're pretty sensitive in those early grade levels at picking up, especially, word reading difficulties, which we need to pick up earlier. So they do a good job of picking that up. They're also brief. But a lot of people were complaining that administering a curriculum-based measure was adding time to assessment, and that's kind of contradictory to how we think about curriculum-based measures. We're like, what do you mean it's adding time to assessment? They're the briefest ones out there.
But what we came to realize is that the curriculum-based measure was the new kid on the block. They had already been administering two other assessment types to every single student. The first of those was a computer adaptive test. There are tons of computer adaptive tests out there: Star, NWEA MAP Growth, and Amira is a new one on the block. And then there are a lot of researcher-created ones; we've created one at the Florida Center for Reading Research. They had been using these assessments for a long time, but teachers and educators didn't really know what they were measuring. It was kind of a black box, because they're not administering those assessments; the computer is administering them. And we tend to see that the psychology of educators is that when they don't see what's happening, they tend to be dubious of what is happening and not sure what information they're getting. Plus, those assessments are designed to be pretty unidimensional, meaning they're measuring overall reading ability, so they're not super helpful for telling you what to teach the next day. They're helpful for the screening purpose of identifying who's at risk or not at risk, but not really helpful for instruction. So the district wanted to use something that is helpful for instruction, and they used an informal reading inventory, which most teacher preparation programs have been teaching their students to use since the dawn of time. This was just considered standard practice, as opposed to contributing to the amount of time it was taking for them to do assessments. And so we took all three sets of data, starting with the informal reading inventory information, which resulted in giving teachers a level of books.
And that was really helpful for people who were using leveled book libraries in their school as their intervention. They're like, the results of this assessment are so clear. When I get the result, it says my kids read level N books; I can point them to level N books; I'm doing a great job, because the assessment was so clear. And that's something I so appreciate and think we should learn from: when you have a clear connection between assessment and instruction, it makes that utility and that information so much more relevant to the educators. As an assessment person, I try to take that on myself: this is my fault, I'm not making this assessment usable enough, or giving clear enough, useful information. What we found in that study is that the informal reading inventory was taking 40 minutes per student for the teacher to individually administer. And when you multiply that by an entire class, you're missing days of instruction time. So in this study, my lovely colleague, Dr. Courtney Barrett, who is an implementation scientist, counted how much time and resources each assessment was taking and put it on a scale of dollars, so that it would really resonate with the superintendent we were working with. We completely calculated, all on one scale, how much time and money the computer adaptive assessment was taking, the CBM was taking, and the informal reading inventory was taking.
Then we looked at the accuracy of each of those three, and the accuracy of the three of them in combination. What we found was that the computer adaptive assessment and the CBM were giving actually kind of poor accuracy by themselves, but when we put them together and changed the cut point to be more responsive to that local school district, we got the best accuracy; really high accuracy by combining the two of them. But the informal reading inventory, not only was it not accurate, and not only did it not contribute any information above and beyond the CBM and the computer adaptive assessment, it was four times as expensive in time and resources. So, super compelling information.
Stacy Hurst:
Yeah.
Adrea Truckenmiller:
So that was kind of answering the time issue and the accuracy issue. One last really important thing that we learned, which fits with what we know from other research, is that no assessment can do all things really well. The curriculum-based measures, we're finding, do a really nice job of measuring decoding fluency, especially up until grade four, and for those kids who are still not fluent after that, the CBM does that really well. But it doesn't do as well with the language comprehension component of Scarborough's Rope. Computer adaptive assessment, on the other hand, does a really good job with the language comprehension and reading comprehension components, and does a much better job there than curriculum-based measures. In an ideal world, we would have one type of assessment that captured both pieces; you can capture decoding with a computer adaptive as well. But right now, the best ones for decoding are the CBMs, and the best ones for language comprehension and reading comprehension are the computer adaptive assessments.
Stacy Hurst:
So you answered my question, because right when you said those two combined were most accurate, I was wondering: why combined? And that explains it: we have measures of both. When you were talking about informal reading inventories, Lindsay and I were totally trained in those, and I agree there was something very comforting about the systematic nature of that, right? I had a whole plan in my first-grade classroom. I'm going to say the R word: I would do a quick running record every day on one of my students in my group, and when they got three at 95% or higher, then I could move them to the next level. Parents understand A, B, C, D, E, F, G. But it very quickly unraveled when you want to really get those reading outcomes for your students. So I think what I'm hearing you say, too, is that no system is perfect at this point. And I will say, I learned a lot from being able to analyze students' reading. No, the miscue analysis is not the way to go with it, but once you learn more about what goes into the brain when we're learning to read, you can start to identify those places for students, right? Andrea, what do you think is an outcome of having all of these assessments? Adrea, you mentioned some teachers were just giving them as a matter of course, like this is what we do, those informal reading inventories, which did take forever. But Andrea, what can be the result when schools are using so many assessments? Is there a system to how they look at them?
Andrea Setmeyer:
It really devalues, I think, teachers' experiences and their time, right? If they're spending so much time administering assessments... I've seen teachers have to spend a lot of their time inputting scores into a spreadsheet so somebody else can see it. Those are all things that are very costly from a time perspective. And sometimes there's a mandate. In the state that I'm in, we had a mandate to administer a spelling inventory, a spelling test, to kindergarten students to screen for dyslexia. It was part of our mandated dyslexia screener in the first 60 days of kindergarten. And if you've ever taught kindergarten, you can guess how that went. Across our entire district, the average kindergarten score was zero, which is not helpful, from a screening perspective, for finding out who's at risk and who's not. So think about being really intentional about listening to teachers, listening to the people closest to the work, about what data they're using and why. I love Lindsay's point about getting really clear about the purpose. At the district level, somebody needs to own making decisions about the purposes, making sure that all of the purposes are covered but not duplicated. And then I just also want to lift up Adrea's work. Sometimes when we talk about assessment research, we think of it in laboratories and really far-away studies that are done just to create an assessment, norm it, publish it, and then walk away from it.
And I love what Adrea and her team and some of her colleagues are doing in terms of these really practical, nitty-gritty questions: what happens if we have three assessments, what is the cost of that, and what is the most accurate in this setting, at this grade level, at this time? They have another really fantastic article out, on uses and misuses of commercial reading assessments, and it walks you through the really common barriers an elementary school faces when making choices about assessment, and then what to do about it, what to think about next. So I love this applied research, and I just want to make sure that you get a shout-out for that.
Adrea Truckenmiller:
Oh, I appreciate that. Thanks, Andrea.
Narrator:
For over 40 years, Reading Horizons has helped educators build strong literacy foundations for students. Now, with Ascend, they're supporting every learner in every tier through one unified solution: Ascend Mastery, a comprehensive pre-K through 5 core literacy program, and Ascend Focus, an adaptive K through 12 intervention. Learn more and explore how you can bring Ascend to your schools at readinghorizons.com/ascend. Implementations begin in the 2026 school year.
Lindsay Kemeny:
Well, I love this article, and it's in The Reading League Journal. Adrea, you and Courtney Barrett, I think you said, do such a good job of explaining these three different types and your findings. For our listeners, it's called "Balancing Cost and Accuracy: How to Select Reading Assessments for Universal Screening," and it's in the January/February 2026 issue. When you're talking about the reading inventories, it's the most expensive of the three; when you figure everything in, you estimate it's almost $51 a student. And then you share this study about accuracy: Parker and colleagues in 2015 found the accuracy is about the same as flipping a coin. We've probably heard Dr. Matt Burns talk about this, where it's like, oh, save yourself a lot of money and flip a coin about whether that child is going to have reading difficulties or not, and the accuracy is about the same. So that's eye-opening.
Adrea Truckenmiller:
Yeah, thanks for highlighting that article. Parker was one of Matt Burns's grad students, I think, a long time ago. When I read that in the article, it was the first time a journal article made me do a spit take; much more compelling that way. So, yeah, thanks for bringing up The Reading League Journal issue. That was the study I just described.
Lindsay Kemeny:
But how heartbreaking that we're wasting all this time. I did it too, taking, you know, 40 minutes a student. With the accuracy level so low, it's definitely something we want to de-implement.
Adrea Truckenmiller:
Yes, and that was the exciting thing about this school. As soon as we showed them the results, they dropped it that day, which was really impressive, because usually de-implementation is a process. But I'm kind of curious; I'm going to turn the table on both of you, especially Stacy, since you spoke earlier about using those reading inventories. One of the things that I struggle with capturing is that listening to a child read provides teachers with lots of useful information. You're getting it implicitly and explicitly, depending on your growth and your knowledge across time about how students develop their reading; you're getting a ton of information out of that. Some of the oldest studies conducted on curriculum-based measurement talked about how effective it was for progress monitoring, and I think the effectiveness in those studies was really that the teachers were listening to their kids read more regularly, listening to individual students read. Intuitively, you are such an expert in figuring out what that kid needs next. I can't quantify that mechanism, and I can't quantify that information you're getting and what you're doing with it. So anything you all can do to bottle that magic sauce... I think, Lindsay, you're doing that in your books, and Stacy, you're doing this in this podcast and other dissemination. But it'd be really interesting to see how to replicate that.
Stacy Hurst:
Yeah, I am trying with my pre-service teachers. I'm teaching at the university level now, undergrad pre-service teachers, and this is what we do, right? I am teaching them from the beginning with CBMs: listening to your students. And I share stories of my lived experience. When our state started mandating it, I had a fourth-grade teacher come to me and say, you know, every year you kind of imagine that one or two students aren't really reading well, but you don't know. This is really telling, and I think it's a good indicator for teachers who may not have the opportunity to listen to every student. But you're right, there's a lot of information there. Another thing I heard frequently from myself and teachers who gave more of an informal reading inventory, and I'm trying to steer my pre-service teachers in this direction: they would say, oh yeah, their accuracy was like 75%, but they understood the story; their comprehension was fine. Well, now we have things like the Simple View of Reading to say we know what they were doing: they were over-relying on their language comprehension. So it helps explain those things that we just dismissed. But I do tell my students: whenever you listen to a student read, you will notice patterns. You will notice things that they need. Put it in perspective, going back to what Lindsay was saying about purpose, of how the brain learns to read and what goes into that, and then, when you have the right lens, I think it's a little bit easier to contain and apply. Lindsay, what would you say?
Lindsay Kemeny:
Yeah, well, let me go to Andrea first, because I think, Andrea, you had something you wanted to say.
Andrea Setmeyer:
I just wanted to add on to that. I'm so glad we're talking about the value of listening to students read aloud. At the summit, Dr. Mark Shinn is going to talk about that in terms of oral reading fluency and that really quick one-minute piece, because yes, the score is valid and reliable for decision making, and it's important to calculate, but there's an added value to it. So I'm really glad this conversation is going in that direction, that we want to affirm a knowledgeable listener listening to a student read aloud and making those inferences about what instructional moves to take next.
Lindsay Kemeny:
Yeah, and I'm a huge advocate of the teacher doing the testing and the progress monitoring because of this. I get that it takes time and training to train the teachers, or whoever is giving the assessment, but I feel like when we take it away from the teachers, there's a little bit of a mental checkout. Do you know what I mean? You're not quite as invested in using the results of the assessment. I experienced that myself. I remember sitting and listening to one of my students doing the phoneme segmentation, and I was really surprised by how many of the consonant clusters, or blends, he missed. Overall (I think this was when I was teaching kindergarten) he still ended up, I can't remember if it was green or blue, but I thought, oh my gosh, we've got to work on that. Do you know what I mean? To me, that was an example of why it was so valuable for me to give that assessment, because maybe I wouldn't have dug a little deeper; I would have just seen, oh, they were blue. But because I gave it, I knew we had to work on segmenting blends. Then it also worries me a little: I hope that's not the only time the teacher is listening to the student read, when they give the progress monitoring, because that should be an ongoing thing. Students need that immediate feedback as they're reading aloud, and so hopefully we are including that purposeful practice in our classrooms.
Stacy Hurst:
Yeah, and Lindsay, I've heard you say this before too: don't ever take that quantifiable number at face value. Look into it and see what that score represents. As you were talking, I was thinking about my students giving the CORE Phonics Survey. I remember one student of mine said, "They did not do well on consonant blends, they scored strategic, so that's where my instruction is going to be." And we got looking at the data and realized, actually, they read the consonant blends accurately; they're struggling with vowel sounds. That was the pattern. So, Andrea, I love that you said a knowledgeable listener, because a knowledgeable listener and a reader is a good combination. Adrea, we've talked a lot about the purpose of assessment and maybe the importance of putting that in a framework, and the MTSS framework is a really good way to communicate this, at least for me to communicate it to my pre-service teachers. So how does assessment fit within that framework?
Adrea Truckenmiller:
Yeah, great question. One of the ways that I've tried to simplify it, and hopefully not oversimplify, is thinking about the grain size of what we're measuring, and therefore the amount of time we need to put into it. MTSS as a framework came from a public health perspective, and I like to point to that public health nature: in public health, we put fluoride in the water as a tier one intervention, because everybody gets access to it, and it helps prevent cavities across the board and reduces the number of people who need expensive dental procedures at a tier two or tier three type of intervention. I like taking that perspective with MTSS at a school: what do we need to administer to everybody to reduce the number of time-consuming pieces of information we need at tier two and tier three? The universal screeners are meant to be brief, everybody gets them, and they're at this larger grain size of measuring overall reading, or measuring one aspect of reading that's super predictive at that particular developmental level. So we're getting a low amount of information for a teacher, but at the right grain size of overall reading. Then we identify the kids who are not getting what they need from the universal instruction, and we go deeper in examining what they need in reading. A lot of screeners are starting to do this now, which I appreciate: they have the next grain size, the medium grain size, which picks up, is it the decoding strand? Is it the language comprehension strand? Is it both? We can pick that up with another assessment that might take a little bit longer, but not as long as some of the other information we need to get.
And then we have this finer grain size: if that intervention in decoding or language comprehension or both is not working, let's dig really deep and figure out what that student needs. And so we have these much more in-depth assessments. Luckily, we did a special issue this past year that was published in Intervention in School and Clinic. I guest edited that issue alongside The Reading League Journal editor Jessica Toste; she and I guest edited this special issue asking, what's the cutting edge in more fine-grained assessment that's going to give you the really good, detailed information people are seeking, and that could probably replace some of the things we miss, like the information about reading comprehension that we got from the informal reading inventories? Well, hey, there's a new assessment out called MOCCA that's published in that journal issue. We talked about the Writing Architect, which digs deep into writing. And for language comprehension, the CUBED assessment now has fine-grained information you can get. So I think about it in those three pieces. Then something else that's a little more unusual in the way I think about multi-tiered systems of support and assessment, which we teach in the master's program here at MSU, is that I teach them not only to use the data to inform what instruction happens, but also to evaluate how well it's going once you implement, say, a new evidence-based tier two intervention. So you do everything right: we use the right screener, quote unquote, the screener with the most evidence of validity, and we use the intervention with the most evidence of effectiveness. But then, is it working the way we're implementing it within our context?
And how do we figure out what the barriers are to better implementation, or how it's working for the groups of students in our class? So we use assessment in that way, as program evaluation. And again, to your excellent points, it's not about evaluating teachers; it's more about evaluating the tools: we selected these tools, and are they working the way they need to in our context?
Stacy Hurst:
I love that, and that addition is fantastic. I actually heard Jessica mention that at The Reading League coffee chat, and I immediately downloaded the whole thing. I will say I've only skimmed one article, but it is on my list for when I have time, and there's a lot of really good information. I think from our conversation today, Adrea, you have demonstrated that research is not boring; it's very applicable. I imagine a number of listeners were on the edge of their seats when you were talking about the three types of assessment and what the outcomes were, right? And that's how the articles read as well. So, Lindsay already mentioned The Reading League Journal for January/February, but we will also put a link in the show notes, Adrea, to that issue you're talking about. Name the journal again?
Adrea Truckenmiller:
Sure, it's Intervention in School and Clinic, and this is great timing, because we had previously published the Writing Architect article in there and didn't pay for it to be open access, so it was behind a paywall. But we found grant money and paid for it to be open access, so it's now open access. Everybody can get it.
Stacy Hurst:
Yay. That's it! I say I teach at the university level to support my habit of getting research. Maybe I shouldn't mention that; hopefully my dean isn't listening. But yeah, Andrea, I was wondering too: you do a lot of work with schools and systems. What have you seen when all of these things are aligned, the things that Adrea was talking about with the MTSS model? What do you see when districts, schools, maybe even a state are aligned with all of this?
Andrea Setmeyer:
That's such a good question. First, just a word of encouragement: it can happen. When people really do the work to align their assessment to what they're learning about the science of reading, good things happen. I see more students reach reading proficiency sooner and get the targeted intervention support they need earlier. I see efficient, effective reading instruction happen across an entire district. I see teachers less frustrated; I see them excited about their next data cycle and able to have really robust conversations about what the data means for their students. And I see school leaders able to make decisions that really align to where their students, teachers, and system are at. So good things happen when you really take the time to align your assessment system to what you know about reading and what's going on with your instruction.
Stacy Hurst:
And who wouldn't want those things? Lindsay, we hold you up as an example all the time, but how onerous would you say assessment is for you now, with your understanding and the way you implement it?
Lindsay Kemeny:
It's just vital for me now. I use it all the time. I'm just such a nerd, because I like looking at the data, and I like doing my progress monitoring so I can look and re-evaluate. Even just this morning, I progress monitored one of my students, and I thought, hmm, okay, that's interesting. I guess I like the challenge of figuring out, okay, what can I change? Is there something I need to do more of for this student so they can be better supported? So I use it all the time.
Stacy Hurst:
Just, yeah, we've made this point in previous episodes: it is very beneficial to think of assessment as part of teaching, not separate from it. That's where it gets a little bit arduous for people. I know I was in that boat too, thinking, oh, we have to do this, and it's separate from what I do. But it shouldn't have to be. And speaking of the summit, Adrea, you're going to be leading some workshops. Can you tell us about those?

Adrea Truckenmiller:
Yeah, I'm excited. It's been a fun exercise for me, pulling real schools' data and going through it: how do we use this for each of the purposes it was intended for and will be effective for? I'm pulling evidence from across assessment research, reading research, and psychology research about why this particular data point is good and helpful for this particular purpose, for this particular user. And I am excited to be partnering with Andrea on this one; it's nice to have a really thoughtful partner on board too. So we're going to walk through it: here's the data, let's talk through how you use it, step by step. I think it will be fun and empowering and thought provoking.
Stacy Hurst:
Yeah, that's really exciting. And Andrea, you've mentioned this before, but about the format of this summit: if you're listening to Adrea describe that and you think you want to go to the summit, and you're thinking, oh, I'm going to have to choose that workshop, you're not; everybody will do it. It's so great.
Andrea Setmeyer:
We've designed the summit to have everyone in one room learning all together, so you really hear different perspectives on the data, right? We start with that classroom perspective: what types of data are a teacher most interested in when we think about reading and language coming together in planning small groups and differentiating instruction? That's really important. But school leaders have different data needs; they're looking through a different lens. And interventionists, or people who are working with students on individualized plans, need a different level of data. So we're really taking all of that into consideration and planning these different panels around those different lenses, and I hope really lifting up that those are legitimate, unique perspectives. Just because the district-level data doesn't answer the question "what do I teach tomorrow?" doesn't mean it's not important. It just serves a different purpose.
Stacy Hurst:
I love that. And what would you say for teachers who are listening to this and maybe not able to attend the summit, but asking the question: what is one thing they could do to change the way that data informs their instruction?
Andrea Setmeyer:
I think talking to a coach, and I use the word coach loosely. When the elementary school I was in started rolling out our science of reading initiative, we had a few teachers right away who started to think about their data differently. I remember one time they brought their end-of-week spelling assessments down to our office at the other end of the hallway, where the speech-language pathologist and I were, and they laid them all out on the floor and said, help me look for patterns. I taught this, and now I'm seeing this. But what about this student? This is really different. So find somebody in your building at your grade level, or a literacy coach, or somebody in your Reading League chapter that you can look at data with, ask all those really important questions you have, and get somebody else's perspective to help you move along a little bit in your understanding of data.
Stacy Hurst:
I love that. I've even seen it at the pre-service level: just have a conversation, and you'll notice things you didn't, because other people bring that perspective. I love that. And I'm going to end with a question about the future. So Adrea, Andrea, Lindsay, you can chime in if you want to as well: what do you see in the field of assessment in the future? What are some areas of research you think educators should be paying attention to right now? Let's just start there with that question.

Adrea Truckenmiller:
Yeah, I'm so glad you made this a safe space, Stacy, for talking about research things, because my nerd level on the research side lies in the statistics. They tell...

Stacy Hurst:
...us a story. They tell compelling stories.
Adrea Truckenmiller:
They sure do. And there are some exciting things I'm hopeful for in how we can produce data using these really cool new statistical approaches. One particular example I'm very excited about in multilingual assessment is a statistical procedure that allows for conceptual scoring. This is where students can answer in any language, and even switch back and forth between languages within the same answer, and the scoring can take that into account. So that's one thing I'm really excited about. Another thing I hope to see assessment companies do, and I think they're starting to come along and listen to new approaches, is to find ways to combine computer-adaptive assessment and curriculum-based measures. We had a major statistical problem where we couldn't put those scores in the same model because they're so different, so we weren't able to leverage both pieces of information to create a predictive model. There are some new statistical procedures that hopefully people will adopt to leverage those types of things. So those are two things I'm excited about. And on the practice side, I'm most excited about some of the work we're doing to make a more transparent connection between what teachers want to see and what makes it absolutely clear that the next thing you should do is not teach kids to memorize a list of nonsense words, but rather teach them to decode, and what program to use to teach them to decode. So I like that through line.
Stacy Hurst:
What great examples. And in case you're questioning the power of assessment, it all starts there. The example you gave with MLs being able to switch between languages is powerful, right? Because how many of us have taught ML students who don't meet the benchmark, so we put them in intervention, or actually don't address their needs, and then we realize, oh, actually, in their first language they're very proficient? That changes the way we address their instruction. Assessment supports equity, for sure. Andrea, what do you look forward to in the future related to assessment?
Andrea Setmeyer:
One of the things I'm really excited about is the attention that language is getting within the science of reading field, which includes language assessment, right? It's just everywhere right now: language comprehension and how we're assessing it. I think we have a long way to go in terms of getting that knowledge into schools and used in meaningful ways, but I'm really excited to have that conversation at the summit and more and more in the field, so we can see whether it's similar to our model of those early developing literacy indicators, right, the word recognition, or whether there's something different about how we need to approach assessment when it comes to things like vocabulary and syntax. I think the tendency is to want to apply what we know from the word recognition skills, and I'm really excited for us to learn more about the research on language assessment. And again, this is a great time to bring in your speech-language pathologists and unite some multidisciplinary experts within your district to look at this.

Stacy Hurst:
Yeah, I love that, especially when you think about it. I've already had conversations with educators about language assessment, and been part of them. And the very next question, even if they have an assessment, is: so then what do we do to guide instruction? We're going to point out that these students may need help here, but how are we going to address it? So I can see that having a very comprehensive impact. Lindsay, how would you answer that question?
Lindsay Kemeny:
Well, I just want to plug a book before we end, because there's a great new book that recently came out called Reading Assessment Done Right by Stephanie Stollar and Kate Winn. I am just really excited; when I first read through this book, it made me giddy. I think it's so practical. It walks you through it, and it's not boring, if you are a teacher thinking, I want to understand this better. I think it'd be really powerful for schools to do book studies on it, as a school, to help understand the purpose of assessment and then how to use it in their instruction. I think it's great.
Stacy Hurst:
Yeah. What a powerful combination. So if you can't make it to the summit, then make sure to read that book. And we will do follow-ups on the summit; we'll have an episode about that, so you can look forward to it. Andrea, Adrea, thank you so much for joining us today. This has been a great conversation that I could keep going with, Lindsay, and you probably could too, but we do have parameters for our episodes. Thank you for joining us. I'm looking forward to this summit in May. If you are listening and you're still interested in going, see if you can make it. I just converted somebody to come last week, actually, and she's going to bring data, and we're going to work through it together. So I'm so excited.
Unknown:
That's the best way to learn. I love that, Stacy.
Stacy Hurst:
I know. And I'm in a position now where I actually kind of have to beg people for data, right? Because I'm not in a school. So it's fun; my students do generate data that we look at, so there's that. I'll be looking forward to applying this in all kinds of settings. And Adrea, we look forward to your future research and its outcomes. I know we'll be waiting with bated breath.
Unknown:
Thanks so much for having us. It was really fun, and I learned a lot.
Stacy Hurst:
Yeah, thank you, likewise. And to our listeners, thank you for joining us, and we hope you'll join us for future episodes of Literacy Talks.
Narrator:
Thanks for joining us today. Literacy Talks comes to you from Reading Horizons, where literacy momentum begins. Visit readinghorizons.com/literacytalks to access episodes and resources to support your journey in the science of reading.