Sen. Leising has introduced SB 58, which appears to be a reaction to the revelation that last year's ILEARN test did not produce reliable metrics. Under this legislation, a school's performance rating for 2018-2019 can't be any lower than its rating for 2017-2018, "additional consequences" for school improvement may not be applied for the 2018-2019 school year, and ILEARN scores can't be used in a teacher's performance evaluation. "Additional consequences" refers to provisions under IC 20-31-9 providing for escalating consequences for each year a school remains in the lowest performance category (e.g., state takeover or closure in the fourth year).

Note: I'm going off the digest for the proposition that the "additional consequences" are paused. I can't quite parse the language of the bill to reach the conclusion that the clock stops running for the 2018-2019 school year – my reading suggests that the 2017-2018 performance will be substituted unless the school wants the 2018-2019 performance to be applied. But the language is pretty dense, so I might be misreading something.
Sen. Leising has also introduced SB 59, which provides that ILEARN scores (and other "objective measures of student achievement and growth") may not account for more than 5% of a teacher's total performance evaluation. I should temper my critique here – this is an improvement on the status quo. That said, I would strongly argue that ILEARN is not an objective measure of student achievement or growth. The fact is, we don't know what the hell it is measuring. Its metrics – which, lest we forget, cost the State $40 million – are unreliable. ILEARN suggests that West Lafayette, the 20th best STEM school in the nation, has 40% of its students lacking proficiency in math or English. If the test missed that badly in West Lafayette, it can't be trusted in other schools either. Using this test as even 5% of a teacher's evaluation is too much. Furthermore, even if we had a standardized test that reliably measured student growth and achievement, using it as a measure of teacher performance would be problematic: there are too many confounding variables, and it is too easy to mistake correlation for causation. Those test results usually end up being pretty well correlated with a student's socioeconomic status.
Not a bad start, but it seems like there is room for improvement here. I'm an enthusiastic amateur when it comes to education law – I'd love to hear the opinions of people who know the ins and outs of this stuff.