Report points to problems in using SAT and ACT to assess high school achievement
CEEP study notes issues in stretching intended purpose of college admission and placement tests
FOR IMMEDIATE RELEASE
Aug. 23, 2010
A special report issued by the Center for Evaluation and Education Policy (CEEP) at Indiana University's School of Education cautions that the growing trend of using college admission and placement exams to measure high school student achievement may in some cases produce misleading and inappropriate results.
"College Admission Tests as Measures of High School Accountability" notes several problems in using such exams, commonly the ACT or SAT, in determining student performance. The just-issued report is authored by independent educational consultant and former ETS vice president and director of the SAT Program Richard J. Noeth, CEEP assistant research scientist David J. Rutkowski, and CEEP graduate research assistant Bridget A. Schleich.
"These tests were developed as college admission and placement tests, and they're excellent in that regard -- they have no peers," said Noeth, whose experience also includes working at ACT and for the College Board. "But these tests are traditionally used for students going to college. So it's a subset of the high school population."
Three states -- Illinois, Maine, and Michigan -- use ACT or SAT scores as part of what the report classifies as "high-stakes assessments." High-stakes assessments have direct consequences for graduation or a school's Adequate Yearly Progress (AYP), the measure under the federal No Child Left Behind law that determines whether a school is meeting achievement goals for its students. The report identifies at least three other states as using the tests as part of "low-stakes assessments," which do not have as direct an impact outside the school.
Rutkowski said that more states may consider moving to an augmented ACT or SAT assessment design as they begin to adopt the Common Core State Standards, the learning outcomes guidelines issued by the National Governors Association and the Council of Chief State School Officers.
"As states adopt similar national standards, it would be logical to have one test that assesses the core of these standards," he said.
The authors note that in 2008, the Commission on the Use of Standardized Tests in Undergraduate Admission (formed by the National Association of College Admission Counseling) went so far as to label using the ACT or SAT to evaluate secondary school or student performance as "test misuse." The report notes that admission tests are not directly linked to any particular instructional curriculum, and therefore lack an explicit connection to a state's own measures of learning.
"So the issue is how well do the standards of a state mesh with what these tests measure and are these tests, then, by themselves the most appropriate measure of that or should these tests be combined with other assessments?" Noeth said.
The report notes that although these tests have met federal guidelines, previous alignment studies found that the Illinois assessment did not align with all of the Illinois Learning Standards for English language arts. A review of Maine's assessment likewise found that its language arts and mathematics standards were not completely covered by the testing.
Other issues include the changing nature of state standards. While the study finds that the ACT and SAT undergo important but infrequent changes over time, state standards can change considerably. Because the tests do not change to reflect evolving standards, assessment results can be further skewed.
Underserved populations, which traditionally score below the national average on college admission tests, may be particularly ill-served by having these tests alone measure school and student achievement. Additionally, these populations may lack access to SAT and ACT preparation programs. While studies indicate score gains for students who receive test preparation, those data have been gathered from college-going populations, not all students. Further research is needed to determine the effect on overall student performance.
"We have some understanding of how test-prep affects scores," Rutkowski said. "But do all students have the same advantage? We should do as much as possible to level the playing field for high-stakes test examinees."
Because the SAT and ACT are designed for one purpose and one population, the report notes that scores may lack a consistent meaning when applied to secondary achievement. In 2006, for example, nearly 1,750 Illinois students who earned reading scores in the state's two lowest performance categories simultaneously had ACT reading scores at or above the ACT Reading College Readiness Benchmark (College Readiness Benchmarks are the minimum scores the ACT projects as required for probable success in college-credit courses).
Noeth said the use of college admission tests as assessment tools is rooted in the noble goal of raising standards to make all of a state's students college-ready.
"That's not a bad idea -- it's a good idea," he said. "But the issue is, are states able to match their standards with what these tests measure so that they get a fair assessment of every student, not just traditional college-bound students."
The full report is available for download at http://ceep.indiana.edu/projects/PDF/Special_Report_Test_08_2010.pdf.
CEEP, one of the country's leading nonpartisan education policy and program evaluation centers, promotes and supports rigorous evaluation and research primarily, but not exclusively, for educational, human services and nonprofit organizations. Center projects address state, national and international education questions. CEEP is part of the IU School of Education. To learn more about CEEP, go to http://ceep.indiana.edu.