
The Legal Consequences of Mandating High Stakes Decisions Based on Low Quality Information: Teacher Evaluation in the Race-to-the-Top Era

Education Policy Analysis Archives. 2013;21(0). DOI: 10.14507/epaa.v21n5.2013

 


Journal Title: Education Policy Analysis Archives

ISSN: 1068-2341 (Online)

Publisher: Arizona State University

LCC Subject Category: Education

Country of publisher: United States

Language of fulltext: English, Spanish, Portuguese

Full-text formats available: PDF

 

AUTHORS


Bruce D. Baker (Rutgers University)

Joseph O. Oluwole (Montclair State University)

Preston C. Green III (The Pennsylvania State University)

EDITORIAL INFORMATION

Double-blind peer review

Editorial Board

Instructions for authors

Time From Submission to Publication: 20 weeks

 

Abstract

In this article, we explain how overly prescriptive, rigid state statutory and regulatory policy frameworks governing teacher evaluation, tenure, and employment decisions outstrip the statistical reliability and validity of proposed measures of teaching effectiveness. We begin with a discussion of the emergence of highly prescriptive state legislation on the use of student testing data within teacher evaluation systems, specifically for purposes of making employment decisions. Next, we explain the most problematic features of those policies: a) requirements that test-based measures carry a fixed, non-negotiable weight in final decisions; b) requirements that test-based measures be used to place teachers into categories of effectiveness by applying numerical cutoffs that exceed the precision or accuracy of the available data; and c) the removal of professional judgment from personnel decisions by legislating (or regulating) the specific actions to be taken when teachers fall into certain performance categories. In the subsequent section, we point out that different types of measures are being developed and implemented across states, and we explain that while value-added metrics are, in fact, designed to estimate a teacher's effect on student outcomes, descriptive growth percentile measures are not designed to support such inferences and thus have no place in determinations of teacher effectiveness. We also explain that, because of the statistical properties of value-added estimates, they too have no place in high-stakes decisions made under rigid policy frameworks like those described herein. Finally, we evaluate the legal implications of rigid reliance on measures of teaching effectiveness that a) lack reliability and b) may be entirely invalid.