Teaching

Authentic Assessment

Assessments are often categorized as formative (providing feedback, "forming" an opinion, or just-for-practice) or summative (evaluative, for a grade, or a "summary" of what students know). But there are other parameters of assessment that matter as well. Whether a given assessment is direct or indirect may matter for its impact, for instance. An indirect assessment is one that asks students to self-report, such as asking them whether they believe they know the material. A direct assessment, by contrast, asks them to demonstrate their comprehension by answering a question drawn from the discipline being studied. Direct assessments carry more weight and show more concretely that students actually understand (as opposed to merely thinking, rightly or wrongly, that they understand).

Authentic assessments operate on yet another parameter. An assessment's authenticity indicates how closely the task resembles the "real" skill. If the skill is repairing refrigerators, an authentic assessment asks student-electricians to repair a broken refrigerator. An inauthentic assessment in that same course would ask students to take a multiple-choice exam. The latter might provide some indication that students understand, but the evidence is less direct and less authentic than having them actually fix a refrigerator.

Authentic assessments are "better" in that they provide the most direct and convincing evidence that students have mastered the learning outcomes. However, they are often at odds with the economies of scale of the university setting. In many classes it is difficult to imagine students completing truly authentic assessments when there are too many participants for the instructor to grade. This pushes many faculty members down a path toward automation. While multiple-choice tests can be written to target higher-order thinking skills and should, in most contexts, be designed to move beyond rote learning, they still reduce complex objectives and outcomes to simple four-choice answers, leaving the assessment out of alignment with a more authentic one.

There are often options to explore beyond fully automated tests. Authenticity in assessment is a spectrum, and moving along that spectrum toward a partly authentic experience provides benefits without over-taxing the instructor. Perhaps part of a test could move to short-answer or essay questions. Or student writing could be reviewed first by peers and then by faculty or TAs, as part of an overall strategy to increase the authenticity of tasks without abandoning the economies of scale (and benefits of automation) possible in large classes.

ATLE would be delighted to brainstorm with USF instructional faculty about ways to alter the assessments in their courses and move along the authenticity spectrum away from full automation, even if only slightly.