Show Notes
Early reading screening is one of the most powerful tools educators have—but only if it’s used well. In this episode of Literacy Talks, we unpack what screening actually measures, why timing matters, and how to use data to support students before small gaps become major challenges.
We explore the difference between screening and diagnosis, why “wait and see” can be harmful, and how early data should drive immediate, targeted instruction. You’ll also hear how screening results can reveal more than individual student needs—they can highlight strengths and gaps in your core instruction and overall system.
This conversation is especially timely as schools and states continue to emphasize dyslexia screening and early literacy outcomes. The key takeaway: screening is just the starting point—what you do next makes the difference.
Key Topics Discussed
- What reading screening measures across grade levels
- Why screening is not the same as diagnosis
- How screening connects to MTSS and Tier 1 instruction
- The role of persistence in identifying dyslexia risk
- Why early intervention matters more than waiting
- How to interpret and act on screening data
Resources Mentioned
The Reading League Journal (Subscribe)
Learn more about Dr. Patrick Kennedy
💬 Want more insights like this?
Subscribe to the Literacy Talks Podcast Digest for episode recaps, resources, and teaching takeaways delivered straight to your inbox!
Do you teach Structured Literacy in a K–3 setting?
Sign up for a free license of Reading Horizons Discovery® LIVE and start teaching right away—no setup, no hassle. Sign-up Now.
Coming Soon: Reading Horizons Ascend™
From Pre-K readiness to advanced fluency, Ascend™ offers a consistent, needs-based reading experience across every grade, tier, and model—so every student can build mastery, one skill at a time. Learn More.
View Transcript
Welcome to Literacy Talks, the podcast for literacy leaders and champions everywhere, brought to you by Reading Horizons. Literacy Talks is the place to discover new ideas, trends, insights, and practical strategies for helping all learners reach reading proficiency. Our hosts are Stacy Hurst, a professor at Southern Utah University and Chief Academic Advisor for Reading Horizons; Donell Pons, a recognized expert and advocate in literacy, dyslexia, and special education; and Lindsay Kemeny, an elementary classroom teacher, author, and speaker. Now let's talk literacy. Welcome to this episode of Literacy Talks. I'm Stacy Hurst, and I'm joined today by Donell Pons. Lindsay is teaching. She's doing the thing, as we say, so she couldn't join us today, and we have a very special guest, Dr. Patrick Kennedy. We're so happy you're with us. Thanks for joining us today. Thank you so much for having me. Yeah, and we have been talking. If you've been listening to the podcast, you know that we've had several episodes on assessment, and today we invited Dr. Kennedy because he wrote an article in The Reading League Journal, the January/February edition, and it is all about dyslexia screening. So as you can imagine, this is right up Donell's alley. But I was just saying before we started to record, I've read it twice, and I'm so excited to talk about it. But before we talk about the article and all things screening, Dr. Kennedy, could you give our listeners just an idea of your background? Yeah, I tell people, in some ways, I was maybe destined to end up somewhere in the field of education. Both of my parents are lifelong career educators. My dad was an elementary school principal for years and years. My mom was a high school English teacher and then eventually a school psychologist. And so in some ways, I really didn't have much choice.
I mean, obviously, I did, but I was certainly exposed from a very early age to the idea of education, and I saw firsthand, even as a young child, the anecdotal stories around the dinner table, right, about, well, here's how we're impacting kids today, kind of a thing. But I guess my journey toward becoming, I don't even know, I don't necessarily consider myself an assessment researcher, but I guess that's sort of what I am. So my journey there really started when I got into college. Again, having this background with educator parents, reading, obviously, was a huge thing. I loved reading, loved literature, and was really inspired by so many authors. And so I was like, well, I'm going to go into English literature; that's a really useful field, right? Yeah, like, there's so much I can do with that. I won't have to be a teacher that way. And so again, there was this early sense of, well, there's this real power and value in reading, but not really knowing where to take it, and not wanting to be, you know, a classroom educator. And so I think it really started when I ended up in an Intro to Psychology class. That really opened my eyes to the cognitive side of things, to what's going on when we're reading, to the idea of the scientific method, and just, how do we learn, and how do we make improvements for kids? One thing led to another, and that led into working as a research assistant on an educational research project, and that just continued to pique my interest. I learned about formative assessment, and so I went back to school, got my PhD, and now for the last, gosh, almost two decades, I've been working on DIBELS and other sorts of early literacy screening work.
So, yeah, I guess that's the short version. That's a great path to where you are. And I personally, I'm a fan of DIBELS. In our state right now, we're using Acadience, but we did use DIBELS, and I've heard rumors that we might be going back to DIBELS, which I secretly kind of hope. I hope everybody from Acadience is listening. I mean, you know, Acadience is great too. Yeah. One thing that I'll preface all of this with is, obviously, I'm biased, right, given my experience with DIBELS, but I really care passionately about making a difference for kids, and all these systems do many of the same things. So I don't want this to be an advertisement for DIBELS, necessarily, although I do think there are some benefits that we offer. But I think the key takeaway is that early screening, regardless of what system you use, is a critical aspect of improving kids' lives. That's a great point, because that has been happening, and I do remember, I've been in our state long enough to remember when we didn't have those screenings in place, and I can tell you my experience varied greatly once we started using those screenings, and the way that we were able to serve children. So, Patrick, I'm just going to lead with this: when our listeners go to Google your name, be sure to include that you're in Oregon, because otherwise you'll get a different Patrick Kennedy. So just know that you share a name, and when you do put in Oregon, you get a lot of great information about your background. I had that happen to me. I'm just putting that out there for our listeners. It is a very common name. Some associations are better than others, but yes, not all Patrick Kennedys are people that you want to Google, and you'll be richly rewarded.
There's a lot of information, including this information about the article, and I'm glad Stacy mentioned that, because it is an article you can share with, and should share with, others who might be in the arena of working around screening. So it could be within a school, it can be legislators. And I'm thinking about this, and boy, is this a really good time to be talking to you, because it has been a legislative session, and screening comes up during legislative sessions, as it should. And there's conversation around, how can we improve it? If we're not really doing it, how can we engage? And so what I really wanted you to do for us, because I love your article, but you also do a really good job of breaking things down to understand why we screen, and why we screen at different ages. Because I think people all have an idea in their head of what they think screening is. So that might be one thing you could address with us: what screening is, and then maybe how it changes as a student gets older. Maybe start there for us. Sure, yeah. I thought that the opportunity to write this article was really a rich experience for me, just because it really made me sit down and break it down, sort of the way that you're asking. I mean, obviously I have a lot of experience in it, but you're right that people tend to get a sense that screening is this one thing or another thing. And I think it's really important to realize that, although at the surface or at the base level the concept is the same, right, you're administering an assessment and you're trying to make decisions around what kind of needs a student has, what that process actually looks like.
Really, it can vary quite a bit as you go through the educational experience from kindergarten on through. Most schools don't administer screenings past middle school, but conceptually, you still could, right? And so in the article, I start by really laying out the case for why universal screening is important. I think most of your listeners probably already have a lot of that background, but I think it is really important that you're thinking critically about what specific skills you're assessing at each time point and how that progression builds. And obviously it maps pretty closely to the way in which we talk about teaching reading, right, starting with those foundational skills, letter sound knowledge, phonological awareness, and then building into knowledge around decoding, and then real word reading, and eventually fluency and comprehension. But obviously there's a time and a place, and something like DIBELS is really sequenced in such a way as to align the assessment to typical or accepted educational practice about which of those skills are taught and when. And I think it's really important that as schools are making decisions around what kind of screening tool they're going to use, they're making sure that what they're choosing maps onto what they're doing. Yeah, in fact, the D in DIBELS stands for Dynamic because of that, right? It changes depending on where the students are developmentally, and I think that's a critical point to lift up, for sure. Yeah. So, Patrick, you addressed a couple of things right there in the beginning: having an understanding of how you're going to go about screening, and then what it is that you're screening, based upon the student, the student's age, and your setting and what you're doing, all those very important things.
What are some other hurdles that you run into sometimes when you're working, maybe with a site, or when, say, a school wants to start screening? What are some hurdles they run into beyond these? I mean, I think one that we hear constantly, and that I don't necessarily have a really good answer for, is the challenge of the time commitment, right? It's just how much time it takes to do the assessments, and it's a complicated trade-off, right? There are only so many hours in the day and so many days in a year to do instruction, and you don't want all of that time to get eaten up with assessment, which is one of the reasons why DIBELS and many of the assessments like it are fluency-based, right? It is time-constrained. So you're not administering a set number of items or a set number of questions to a student; you're essentially measuring how fluent or how skillful they are within a specified period of time. And we've put a lot of thought into that, particularly around the most recent version of DIBELS, which, gosh, has now been out seven or eight years. When we were thinking critically about what we wanted that to do and look like, we were very intentional about which skills are assessed when. DIBELS has a long history, as most formative assessments have, going all the way back to, you know, Stan Deno and colleagues at the University of Minnesota, gosh, probably close to 50 years ago now, of doing things efficiently. And so it's been a standard for a long time that individual subtests, which typically each measure one discrete skill, are administered for just one minute at a time.
But then we were also very thoughtful about how many times a year screening should happen, and about ways in which you can set up the system such that once an individual is trained in a formative assessment tool, they can deliver it efficiently. There's not a lot of time spent with instructions and examples, so it's just get in, see how the kid is doing, and move on to the next thing. So I think timing is a huge piece. There's also the idea around training and knowledge. Obviously, these tools are most useful when they're implemented with fidelity. So there's a lot of effort that needs to be put into making sure that the administrator or the assessor has the skills, has practice, and, again, is fluent with administering them, so that you have confidence in the kinds of decisions you're making around the scores that you're getting from students. Those are probably the two biggest kinds of challenges that we often deal with. Yeah, and how faithfully, Patrick, are people then taking the data and doing something about the data? Well, that is the million-billion-dollar question, one of those, I don't know what that is, sort of the critical question, and it varies. I think schools have certainly gotten better over the last decade or two as they have gotten more familiar with these kinds of processes. You guys had an episode, I don't know, was it last fall, maybe, around MTSS? And so, similar to that, you can tie it back. You can see that implementation has definitely improved over the years. Another challenge is leadership and leadership turnover. We often see schools that do this sort of wave of, well, we build some momentum, we have some leadership, we get some training, we sort of get on the ball and get things figured out.
And then, you know, the leader moves or leaves, and then perhaps a lot of the momentum and also institutional knowledge might go with that person. So I think it's a constant battle. And I think the episode, particularly around MTSS, is a really important one, just the emphasis that assessment is sort of the foundational bedrock. It's just one piece of the full MTSS system, but of course, if you don't do that one piece with fidelity, then the whole system sort of falls apart. So, yeah. Well, you've touched on some really critical key things already. Just listening to you, Patrick, describing what goes into thinking about a screener, that ought to give us a touch of appreciation, because, I mean, you didn't even get into the nitty-gritty, where we all could have gotten lost very quickly, at least the rest of us could have, but just about how carefully these are designed. Because you are thoughtful about what it is you're measuring and how you're measuring it, in order to get, as you say, results that you can have faith in, that these results are telling me what I need to know, right? Yeah, for sure. And there's a whole world of statistical jargon and mumbo jumbo and psychometric kinds of terminology that, you know, I try not to get too much into in the article, but most folks have at least heard of the terminology around reliability, which is the extent to which you're measuring the same thing over and over again, and validity, the extent to which you're measuring the thing that you think you want to measure, the thing that you say you're measuring.
And so, yeah, as you said, Donell, we put a ton of time and effort and thought into ensuring that the content itself, but even the whole system around it, like the administration guidelines and the training materials and all of those peripheral pieces, are all really designed to maximize the chances that someone with good intentions, at least, and a modicum of training can implement this in a way that they do have trust that if they were to give this same assessment to the same student five days in a row, they should get pretty much the same score all five days, because they should be getting a consistent measure. And if you're not, you can't have faith in which one of those data points is the right one, right? So that all gets back to all of the work we've done around making sure that DIBELS, and all of these kinds of tools, are reliable and also valid, in that, again, it goes back to those scope and sequences, the kinds of things that schools teach. We're assessing letter sound knowledge and letter name knowledge because that's the foundation of what you have to know before you can start putting meaning to those symbols on the page. And so, yeah, it's many, many more people than just me. There's a whole team, and of course there were years and years of work before me; there's a 25- to 30-year legacy at the University of Oregon around DIBELS. And all of that work has been toward ensuring that the number that a teacher sees at the end of the day is both a reliable and valid indicator of what that student, hopefully, is capable of doing. At the very least, it's a valid indicator of what that student did that day. Yeah, yeah.
Just another plug for the article, because you do such a good job of really laying it out. And you don't get too hung up in jargon, but what you do describe about validity and reliability is very clear. In fact, as somebody who is responsible for teaching these concepts to pre-service teachers, I definitely took notes. I'm like, oh yeah, if I say it this way, I think they'll understand it a little bit easier. You can tell you've been at this for a long time, but also the way you translate it is fantastic. And as I was reading it, you mentioned a couple of things based on research, like screening is one of the best ways to identify students who struggle with things like dyslexia, right? And I also think the way that we use that data, how we interpret it, matters. And so you also mentioned that it is a screener, right? Then you need to be more diagnostic once you have identified somebody who's not meeting the benchmark, so to speak. One thing you also mentioned that a screener can be used for, and I know DIBELS used to have this as an element in its reporting system, was how effective your tier one instruction is in relation to, you know, all of your students, in relation to where they should be, perhaps, or the majority: how many of your students are hitting those benchmarks? And as a literacy coach, I kind of leaned into that, but I think my teachers weren't in a space to really understand that, because sometimes screening can be used as such a high-accountability measure that we don't get all of the things we can out of it. And I don't know if you've seen that in your work, but I think a screener is a screener, and as such, it by definition has some limitations, but it also has more uses than I think we're used to thinking of, right? Right? Oh, yeah, for sure.
And I think you hit on a really critical point, right, that we can use these tools not just to inform decisions around individual students, or even a classroom, but that the data can be aggregated up to provide an indication of the health of your MTSS system, or your tier one instruction specifically, or all of these various things. And we do still have a report that does that. That's definitely, I think, going back to your point, Donell, one of the challenges. Schools, I think, tend to do a pretty good job of training teachers, or whoever's doing the assessment, to deliver it, and maybe make a base decision around, well, this information tells us we should do something about this student; they're struggling with decoding, so we need to deliver a decoding intervention, or something like that. And it also goes back to my point earlier about transitions in leadership, right? But I understand it; it's hard as an educator to turn the glance critically on yourself and say, well, this is actually telling us about what we're doing, not just what the kids are doing. And so I think there's a lot of potential there to use this kind of aggregated information to inform the health of the overall system. And that goes back to your point, Stacy, around dyslexia. The idea around screening for dyslexia isn't just that the kids struggle with these skills, because kids can struggle with foundational reading skills, on up to reading comprehension, for a whole host of reasons. Maybe it's just because they haven't gotten the instruction that they need, or maybe it's that they're a language learner, and, you know, they're learning English, and that's a whole different system than what they've experienced in Spanish or whatever other language they're familiar with.
So, not to focus too heavily on dyslexia, although I think the popularity, if you will, of the term in legislation around the country has really given us an opportunity, right, to talk about these things. But dyslexia is really characterized by persistence in difficulty, right? Just about every state has some dyslexia legislation. Most of it requires screening, and most of them require you to start screening as early as the beginning of kindergarten. Well, I think it's fantastic to screen for literacy difficulty at the beginning of kindergarten, because it's a strong predictor. But guess what? For lots of kids coming into kindergarten, it's the first time they've ever seen print, and most of those students probably don't have dyslexia, but they're going to look a lot like kids who have dyslexia, because both groups struggle and perform poorly on DIBELS or other similar measures. So it's an opportunity, really, to reframe the conversation, or think critically about the conversation, in that it's not just, well, this kid can't do this particular task at this time; it's an indication that we need to make sure that they've had the opportunity to learn to do this task, to learn how to decode, or go further back, right? So dyslexia screening is not just about poor literacy performance; it's about sustained poor literacy performance over time. I think I might have veered us a little bit off topic there, but I think that's one of the things that these kinds of systems like DIBELS and others do, right? They give you a framework for assessing students at specified intervals, typically two or three times a year. And so if you're screening kids at the beginning of kindergarten, probably you don't want to make too strong a point about dyslexia risk at that point.
Well, there's definitely literacy risk, but we don't have any evidence yet that they have, one, gotten any instruction (probably they haven't), and two, that they haven't responded to whatever that instruction is. So dyslexia risk really should be indicated by the extent to which students aren't responsive. And if they're not responding, first, probably, again, to go back to your point, look at your system: is the tier one instruction really doing what it's supposed to be doing? And go from there. Yeah, and I love that we have a screener to draw our attention to those things. The word persist is important when we're talking about that. I also think, at that point, most of our screeners have been proven to be valid and reliable, but then what we do with that data matters. So if we see a kindergartner who does not meet the benchmark, we don't just wait; we intervene. So a screener is that indicator for intervening, no matter what, if a student is not meeting the benchmark. But I'm also thinking about, and Donell, you have been so adamant about this in the past too, when we say, oh, just wait, we don't worry about that yet, we'll just wait. But what if we're not screening, or even attending to the results, until a student is in third grade, say, which is usually the grade level we focus on in legislation especially? I think that's another reason why the good use of a screener is really important. I also like how you pointed out: look at your systems, right?
Because if you have a kindergartner who's not meeting benchmark, the parent might be notified of that, but until they have conversations with the school, they may not know how the peers are doing in relation. And it may be that they're all not meeting the benchmark, or most of them; at that point, of course, you need to look at your systems. But I also know that if we didn't have those things in place, we wouldn't be intervening anyway. And Donell, I'm thinking about Curtis and your children; what was their experience? Were they screened? I know Curtis... Yeah, my husband was not screened. So he had absolutely no idea. And he came from a family of folks who all had dyslexia, and not one of them was ever screened or picked up for anything. He got into high school, and as you've mentioned, Patrick, sometimes we might have a random thing occur in middle or high school where suddenly you're given a test because you're struggling somewhere, and someone has said, oh, I don't think they can even read this passage. And here we are in the senior year of high school, and he had one of those, where he's pulled into a classroom, they give him a random reading test, and it's all, oh dear, you can barely read this, and now we're in the senior year of high school. And he says, I don't remember anything happening after that. I remember them rubbing their foreheads, going, this isn't good. And then nothing happened. So again, that is indicative of: if we're going to do something, then be prepared to back it up with something, right? The MTSS. If we're going to screen and get numbers, then be prepared to do something with those numbers, right? Yeah, completely. You know, I focus heavily day to day around assessment, but I work closely with folks who are very familiar with the entire fabric of MTSS, and I think it's critically important. In some ways it's sort of patently obvious, right?
But it's also worth stating over and over again that the assessment is the first indicator. But if all we do is assess, then all we've done is meet some base threshold of compliance, maybe, right? Or, at the very best, maybe we're wringing our hands a little bit: oh gosh, maybe something's wrong here, we've got a problem. But it really is just that: the assessment is the foundational building block that gives you the information you need to make a change. And so you have to make the change, and you have to stick with it, and you have to evaluate in an ongoing way: was that change effective? Are we making a difference now? If yes, great. If not, well, now we've got more data that says whatever we've done so far isn't good enough. Teaching literacy shouldn't mean juggling multiple programs. Ascend Mastery by Reading Horizons brings it all together: a core, comprehensive Pre-K through five literacy curriculum that connects word recognition, language comprehension, oral language, and writing, built on decades of proven foundational expertise. Ascend Mastery simplifies instruction and helps every student build lasting literacy gains. Ascend Mastery will be available for district and school implementation beginning with the 2026 school year. Visit readinghorizons.com/ascend to learn more and sign up for updates. One thing I think is important, too, is that we don't just hang all of our hats on one assessment either. And as you're mentioning, Patrick, assessing is part of teaching. I know teachers, and it's easy to think, especially if it's an accountability measure, oh, that's separate from what we do; we have to do that, so we're going to do it. But I'm trying to train my students early: that number does not mean anything unless you analyze it, unless you look at what that means for that student holistically, not just on that one measure.
And one thing you did mention in the article, too, that I was so, literally, this will show you what a literacy nerd I am, but I was giddy about: you brought up spelling, because I think that is such a great indicator and is overlooked. Yeah, screeners don't typically include that, but as a teacher, I would recommend utilizing it. That's a really good indicator and something to compare to what might show up in a screener, too. So you did mention spelling; can you talk about that a little bit? Sure, yeah. It has certainly been a little bit overlooked, although fortunately, at least in the most recent definition of dyslexia from the IDA, it does emphasize spelling, and I think that states in particular are starting to tune into that, and have actually started asking for it. Well, you know, encoding is a complementary piece of decoding, right? Not just, can they take the page and make meaning out of it, but can they put the meaning back onto the page? And so we are working on it, you know, not to put too much pressure on ourselves, but there is definitely value in spelling, and we see that. We have a study underway now around validating DIBELS specifically as a screener for dyslexia. And one of the things that we're finding, we're still collecting data, but one of the things that we're finding is that that encoding piece, the spelling piece, really is a key predictor above and beyond all of the decoding pieces. So, yeah, I wish I could say that you could go to the DIBELS web page today and download an encoding measure. I guess if you want the development version, maybe it might be up there. But yes, I think encoding is a critically important piece of early literacy screening. And that's exciting. That is exciting.
And, yeah, I had the thought as I was reading your article, when you were talking about encoding: there are so many reasons why I think that is a powerful indicator. But one of them is when we put these kinds of results in the context of the simple view of reading. I know the students I work with can really lean into that oral language to figure out what the words are on a page, even if they're reading isolated words. You have a lot of clues, right, that you can put together. But I think what spelling does, especially at the word level, is it removes all of that; you have to generate that spelling in an accurate way. And I've said for a long time, and it's not just me, I mean, researchers have determined this: if you're teaching spelling, if you're including that, and as much as I hate the word hack, I tell my pre-service teachers, if you want a simple hack for reading progress for your students, always include writing, always include spelling with your reading instruction, every day, everything. And that is the way, I think, to get to proficient reading a little bit faster, but neurologically it's more sound as well, because, like you mentioned, they're inverse processes, decoding and encoding, so those pathways are stronger. Very much so. I mean, in some ways it seems sort of obvious, right? But it really has not gotten the attention that it deserves. And you're right that there's just no substitute for that practice of applying the knowledge, right? Not just using the clues around you to extract or guess, but really, can I, in my brain, get from a sound that I'm thinking of to the physical representation of that sound that goes onto a page, that then Stacy or Donell can look at and say, yep, I know what you were trying to communicate there? Just thinking of the possibilities.
And I'm also thinking, Patrick, what are your thoughts on the third grade retention bills that are coming across the country? Do you have any thoughts on that? Because I can see it's become a trend where folks are going back to saying, okay, then maybe we do third grade retention. And then I see that they put a bunch of things inside the bill: well, you can do third grade retention, but under these circumstances, maybe you can bypass it. And some of that is collecting a history on the student, because, as we've just talked about here on the podcast, thinking of one subset of students, these issues and challenges with reading are going to be long term, and they're going to be very difficult and challenging. So what are some of your thoughts on that, if you've thought about it at all? I have, and I don't know how much I want to say. I sort of get the urge to push for some of that third grade retention, right? Because there is this sort of fundamental shift in the experience for students, and if you don't have that foundational skill set at the end of third grade, everything else is a hard struggle, right? Everything else gets harder for the rest of your schooling career. At the same time, there are so many other things that go into it. I think back to the episode you had last summer with Tim Odegaard, right, and his lived experience as an individual with dyslexia. There's so much stigma around retention, and it worries me that we're just swapping one problem for another, right? So for me, I really struggle with the idea of retention generally, and I think it really goes back to the point that this is just another argument for why we have to screen early. We have to do it right away, but we ought to be intentional about it.
We can't just wring our hands and say, well, maybe we've identified a problem, let's wait six months or a year and see if it goes away, and then it doesn't. But again, those early screening data points are really powerful in terms of your tier one instruction, your core instruction, because if half of your kindergarten students score below benchmark, or well below benchmark, on DIBELS, at that point it doesn't really matter if they're scoring below benchmark because they have dyslexia or because they've never seen a word on a page before. Either way, the solution is to intervene: provide systematic, explicit, intensive instruction on the things that we have decades of evidence showing improve those outcomes. And then when you get to the end of kindergarten or the middle of first grade, and 15 to 20% of those students who were in the 50% at the beginning of kindergarten still aren't responding, you can look back and say with confidence: yes, we delivered a really intensive, systematic intervention. If the issue was a lack of exposure or a lack of knowledge, that intervention would have solved it, and it didn't. Now we need to intervene for these students before we get to the point of third grade, because if we're retaining them in third grade, I don't know exactly what we're telling them, but nothing we're telling them is good at that point. Yeah, that's not positive. I guess that's my soapbox on that. You did a fantastic job, by the way, Patrick, I'm going to tell you, of being very diplomatic while at the same time pointing out some of the challenges, things we ought to be thinking about, some hiccups that could occur that maybe we haven't thought about. So I really appreciate it.
That was a very thoughtful answer to that question. Yeah, it's great. I'm wondering, since you have such an involved history with screeners, and specifically DIBELS, just looking ahead, let's say we're having a similar conversation in 10 or 15 years. How do you think screening for literacy is going to evolve? Or maybe, how would you like to see it evolve? Those are probably not the same question, so you can answer both, if you want, or one or the other. Yeah. What a great question. I think we've touched on a little bit of it. Obviously, we've got to get encoding in there, right? And we have to do it early, which is particularly challenging, especially at the beginning of kindergarten. For students who have never held a pencil, how do you assess that? They obviously can't write letters. But we've got to think about it even that early, right? Another piece: I think as developers of these assessment systems, we could do a better job of building capacity to evaluate this data, again, not just at the individual student level, because that's critical, but most schools don't have just one or two students who are below benchmark. You've got five or 10 or 15 in a classroom, and so then you've got to think about what the systems-level changes are that are actually going to make a difference for those students. You're obviously not doing one-on-one tutoring, or even small group tutoring, with half your classroom. So I think assessment systems could do a better job of giving schools the tools to do some of that work around systems-level evaluation and change. There are certainly things we've been talking about, but I think we could do better. I suspect, whether we want it to or not, technology and AI is going to play a big role.
I mean, we already see systems that are moving towards completely computerized assessment, where the student reads to the computer and the AI system, or whatever is under the hood, is actually evaluating whether the student is reading individual words correctly or not. Right now, those systems probably aren't quite good enough for me to say with confidence that it makes sense to do that instead of having an individual teacher do the assessment. And again, there's no replacement, as a teacher, for knowing how your individual students read or decode words, or whatever the skill you're working on is. So I think there's a bit of a danger there around relying too much on technology to do the work for us. It gets back to the idea that everything is a time suck and we don't have enough of it. So of course, if we can offload the assessment to the computer, doesn't that just free up more time for teaching? And it does, but at what expense? As in every aspect of the world, right, AI has the potential to bring great gains, but we have to be very intentional and thoughtful about how those things get implemented. And I'd like to see it continue to where, on these kinds of benchmark assessments, these critical beginning-, middle-, and end-of-year assessments that most of these systems do, you still have the human who's ultimately responsible for making sure the student learns hearing the student read, so that you know what the kid can do.
And then these AI systems can be great practice tools, right, for immediate feedback for students. I know my kids are on one of those things, where they sit there and read to it, and then it says, well, you made seven errors, and these are the seven words you got wrong. And that's helpful, because otherwise it's practice they might not otherwise be doing, or feedback they wouldn't be getting even if they were practicing. But again, I think we have to be really thoughtful about making sure we do it with intentionality, and not just because we can. Yeah. Okay, Dr. Kennedy, I'm going to ask you a question that I ask a lot of our guests. In my particular role right now, and many others across the country are doing this too, I'm teaching pre-service teachers, and we have a limited number of classes. We know assessment is important. I personally am just weaving it into everything I teach, even though I don't teach an assessment class specifically. What would you say might be the minimum thing that pre-service teachers need to come away with, say on graduation day? What's the most important thing for them to know and understand about assessment? That's a great question. I think the most important thing you can do for pre-service teachers, because, as you just got done saying, there are so many competing demands on their time and knowledge, and there's only so much you can fit in, is to instill in them an appreciation for the process and the potential value of what assessment can provide to them in the context of multi-tiered systems of support, for providing support and actually changing outcomes for students.
If your pre-service teachers can come away with the knowledge that, even if I can't give a given assessment today, I understand why it's important and what kind of a tool it can provide in terms of impacting students long term. I love that, and what the results represent and can mean for their teaching. Thank you so much. That's very helpful. And actually, it makes it a little less overwhelming for me too, because with an appreciation of it, they don't have to know every little thing, but they have some exposure and experience and background knowledge. But the appreciation piece, I'm going to lean on that. And a willingness to learn, right? I mean, it's probably true of your entire sequence of courses, but as a pre-service educator, you have to come to terms with the fact that you don't know everything, and you can't possibly know everything. But if you can come away with a little bit of a sense of, yeah, these are the things that are important that I'm going to need to pay attention to and figure out eventually, then I think you've done them well. Yeah, good point. We say that to our listeners often: let's give ourselves grace. We're all learners first. Yeah, thank you for answering that question, Patrick. This has been fantastic. I have thoroughly enjoyed it. And like I said when we started out the conversation, this is the time of year, if you've been through a legislative session, when screening comes up a great deal when you're talking about reading. At least, hopefully it's coming up. If it hasn't been coming up in your state, oh dear.
But it probably has in some form or another, and it's just nice to come back and make sure that we have a good baseline understanding of what we mean when we say screening, talk about some of the tools that are available, and hear about the effort that goes into making sure a tool is reliable when you use it. That's so important. This has just been a really timely conversation. I really appreciate it. Yeah. So as we're winding down, Patrick, I do wonder, if you had one key takeaway to share with educators about screening, what would you share with them? Seems like a lot of pressure. Okay, we can expand to one or two. I think the key takeaway that I would want an educator to come away with is really that assessment is not the end goal, right? It's this critically important and sort of irreplaceable tool. There's really no other way to efficiently know whether your instruction has accomplished what you wanted it to accomplish. That's why I'm so passionate about it. But again, knowing that doesn't actually change anything. It certainly doesn't change outcomes for kids, which, again, is why most educators are probably in the business: they want to change outcomes for kids. They want them to have those opportunities that they might not otherwise have if they don't have this skill set. And so I think it's realizing that assessment is an incredibly powerful tool, but then you've got to do something with it at the end of the day. Yeah, oh, I love that. In fact, speaking of my pre-service teachers, they will be assigned your article at some point, and based on the answer you just gave, they'll be listening to this podcast episode as well. Don't take it from me; trust the experts, right? That's the takeaway. Well, thank you. It's been a real honor being here. I appreciate the opportunity.
Well, it has been fantastic, and we are looking forward to the work you do in the future and to implementing it in our respective situations. And we would remind our listeners, if you do not subscribe to The Reading League Journal, we recommend it. Patrick's article is fantastic, especially if you've been in the position, like I have, where you're asking, what's the difference between reliability and validity, and how does that fit into an assessment ecosystem, screening and all the rest? Honestly, you do such a good job of communicating that in a simple, accurate, and understandable way. So we'll recommend it again. Well, thank you, and I will say the opportunity to write this for The Reading League was a great one. And it's not just my article, right? Every article I've ever read in The Reading League Journal, I've come away thinking, that was really well stated, in a way that isn't entirely non-technical, because you're talking about technical things, but it's really clearly communicated. And I always come away with at least one or two moments of, why didn't I think of that? It sort of ties back to why we read in the first place, right? It doesn't just give you information; it helps form these mental models and structure how you think about the world. And I think, compared to a lot of other academic journals, The Reading League Journal, and it's intentional, right, this is their core mission, does a really nice job of conveying important information clearly, succinctly, and accessibly. Very practically. Yeah, that's very well said. So thank you again so much for joining us today. And to our listeners:
Thank you for joining us for this episode. We know you'll get a lot out of listening to it and re-listening to it, and we hope you'll join us for future episodes of Literacy Talks. Thanks for joining us today. Literacy Talks comes to you from Reading Horizons, where literacy momentum begins. Visit readinghorizons.com/literacytalks to access episodes and resources to support your journey in the science of reading.