Media Contacts

Chuck Carney
Dir., Media Relations and Communications
ccarney@indiana.edu
812-856-8027


Proper course evaluations could lead to improved teaching and learning

FOR IMMEDIATE RELEASE
April 23, 2008

BLOOMINGTON, Ind. -- Research at the Indiana University School of Education reveals how properly designed course evaluations can help instructors gauge how well their teaching supports student learning.

Findings from ongoing work by Ted Frick, associate professor of Instructional Systems Technology, will be published in an upcoming edition of the journal Educational Technology Research and Development. Frick has found that course evaluations that ask about specific principles of effective instruction, in addition to the usual global questions (student satisfaction with the course, overall course and instructor quality), better predict how well students learn the material and, in turn, show instructors which elements of a course might need adjustment.

"There is some validity to the way we're now doing it, if we look at the global items," Frick said of current course evaluations used at most universities. "But the rest of that stuff hardly has any bearing on student learning. It may be important for other reasons, but doesn't have much of a bearing there."

Frick's research team and co-authors on this study include Carol Watson, program manager at the Eppley Institute for Parks and Public Lands at Indiana University, and education Ph.D. students Rajat Chadha, Ying Wang and Pamela Green. Emilija Zlatskovska, a Ph.D. student in language education, joined the research team this past year and will be conducting a similar study in Macedonia, where she is on leave from her position as a professor and instructional consultant.

Through a Web survey of 140 graduate and undergraduate students in 89 different courses at several institutions across the country, Frick found a high correlation between what are called "First Principles of Instruction" and how well students thought they learned. The First Principles are five aspects of teaching and learning, developed by Brigham Young University Hawaii Professor M. David Merrill, that indicate a positive learning experience.

The principles include: (1) authentic problems or tasks that people do in the real world; (2) activation, in which the student's past learning or experience is connected to new learning; (3) demonstration, where students see examples of what they are expected to learn or do; (4) application, allowing students to try out what they've learned with coaching or feedback from the instructor; and (5) integration, where students incorporate new learning into their lives.

Frick reported a high correlation between First Principles and "academic learning time" -- instructional time in which students engage successfully in tasks or activities related to instructional objectives. He also found a very high correlation between student ratings of First Principles and their ratings on global items about overall course quality.
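For readers unfamiliar with how such findings are computed, the following is a minimal, purely illustrative Python sketch. The release does not include Frick's data or analysis code, so the per-student scale scores below are invented solely to show the kind of calculation a "high correlation" claim rests on.

import statistics

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student scores (e.g., 1-5 agreement averages) -- not study data.
first_principles_ratings = [4.5, 3.0, 2.5, 4.0, 3.5, 4.8, 2.0, 3.8]
academic_learning_time   = [4.2, 3.1, 2.8, 3.9, 3.3, 4.6, 2.2, 3.5]

print(round(pearson_r(first_principles_ratings, academic_learning_time), 2))
# A value near 1.0 would correspond to the "high correlation" described above.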

In a more recent study that is still in progress at IU, students completed the new course evaluation, and their instructors also rated how well the students mastered course objectives. Preliminary results from 190 students indicate that "they were three times more likely to report success if the instructor used First Principles," Frick said. "And if they agreed that they experienced academic learning time, they were almost four times as likely to be rated by their instructors as 'high masters of the course objectives.'"

He also said the converse is true. Instructors were eight times more likely to rate students as "low masters of the course objectives" if students did not agree that they were frequently engaged successfully in tasks, assignments and projects in the course. The implications for instruction are clear, Frick said.
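The release gives only the resulting ratios, not the underlying counts or the statistical model, but figures such as "four times as likely" are commonly expressed as odds ratios from a two-by-two cross-tabulation. The Python sketch below uses entirely hypothetical counts to show how such a ratio is calculated.

def odds_ratio(a, b, c, d):
    # Odds ratio for a 2x2 table:
    #                     high mastery   low mastery
    #   agreed (ALT)           a              b
    #   did not agree          c              d
    return (a / b) / (c / d)

# Hypothetical counts only -- the study's actual numbers are not in the release.
agreed_high, agreed_low = 60, 20
disagreed_high, disagreed_low = 25, 35

print(odds_ratio(agreed_high, agreed_low, disagreed_high, disagreed_low))
# 4.2 -- students who agreed would be roughly four times as likely (in odds terms)
# to be rated high masters, the kind of relationship described above.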

"We think the important implication from this is that the First Principles are something that the instructors can do something about." He added that he'd like to conduct future surveys to determine the effect of an instructor changing a course based on such findings.

Frick said his interest in course evaluations came from time spent on a committee to reward excellence in teaching. A literature review revealed few items that are associated with student learning achievement in college. The best predictors are global items associated with student perceptions of overall quality, and these are only moderately correlated with student learning. Most of the other items are not. Nonetheless, these course evaluations are tied to teaching awards, raises, tenure and promotion.

"Our long-term goal is to come up with better ways of doing course evaluation at the university level that have [improved] validity," Frick said. "And then encourage departments or universities to adopt them. Already some have started to consider these."

The results being published this spring are part of a continuing look at the issue. Frick and his research group are currently conducting a similar study in 12 courses at IU in business, computer science, history, kinesiology, philosophy, nursing and social work, involving as many as 700 students.

Media Outlets: The following comments are available as MP3 files on the IU School of Education Web site at https://education.indiana.edu. Look for this news release under "News" on the home page. Each sound bite below includes a clickable link to hear and to save the file.

Frick says including "First Principles" in a course survey instrument is important for determining the effectiveness of a class:

"From the point of view in thinking about this in teaching, if you use the First Principles, you're going to increase the likelihood of students' successful engagement by a factor of three. And if they're engaged successfully, they're four times as likely to be high masters, according to their instructor's ratings of their performance than if they do not [become engaged successfully]. We think the important implication from this is that the First Principles are something that the teachers can do something about, instructors can do something about. That's the big thing in my mind. And so, for example, if you get a low rating on the application phase in terms of the students having the opportunity to try something out, or you get a low rating on the integration phase because they're not applying it or using it in any way in their own lives, well, what might you change in the class so that could go up?"

A better evaluation that is tied to student learning could help on many levels, Frick says:

"We want to have convincing data that we can share with other people that says, 'if you use these, you'll get better indicators of quality, in terms of teaching and learning, in your classes.' We're starting with higher ed now, but it could be done in other settings as well -- high school, for example. So I look at it as a point of potential leverage in the sense of it's something that gets done regularly at institutions. I'm mostly concerned about Indiana University, but this would be any higher education institution, that items that don't have hardly any relationship to student learning -- just shove those aside and put some in that do -- then we would get more valid measures. That's what we would hope to argue for. And if so, and then those were tied to faculty raises, tenure and promotion cases, then that ought to improve the quality of teaching at an institution over time. If the items are valid, and people respond to it by changing in the ways that I just described, for example, then achievement should go up. And that's what we would hope to see. "

Frick explains why he became interested in course evaluations:

"Is there some way, though, that we can evaluate teaching at the university level that has validity? There is some validity to the way we're now doing it, if we just look at the global items. But the rest of that stuff doesn't have hardly any bearing on student learning. It may be important for other reasons, but doesn't have much of a bearing there. That's what got me started on this."