Accreditation

Assessment Standards

0. General

Mission Statements

This section should contain the degree program’s mission statement. Mission statements can usually be found on the degree program’s website. Double-check that what is entered in Xitracs matches the department or degree program’s mission. If both align, no edits are required.

Program Goals

Program goals should comprise the knowledge, skills, and competencies each program expects its graduates to have mastered by graduation. Program goals are broad, overarching statements that are central to each program’s curriculum. They are not intended to be, and should not be, measurable outcomes. The Florida BOG requires undergraduate programs to have program goals related to at least the following three areas:

  1. Mastery of content/discipline-specific knowledge and skills
  2. Demonstration of critical thinking skills
  3. Demonstration of communication skills

Undergraduate programs are free to add more program goals. Graduate programs must provide their own program goals, with at least one program goal per graduate program. Each certificate program, both graduate and undergraduate, must also have at least one program goal. Each program goal should be aligned with at least one PLO.

1. Program Learning Outcome (PLO) Statements

The PLO Statement is the first subsection of the five-part student learning outcome section and is a specific statement about what students will know and be able to do. PLOs are organized under a program goal and are measurable outcomes of that goal. In turn, each PLO has a specified method of assessment. For example, under the ‘Communication’ program goal, a program may have the following program learning outcome statement: “Students will be able to orally present and defend their original research projects.”

PLO Assessment Standards

a. Describe an expectation for students' knowledge, attitude, and/or behavior.

The assessment process looks at what the program does to facilitate learning and knowledge acquisition for students, not simply what students do in the program. The PLO should describe the set of skills, beliefs, and knowledge students hold after completing the program. In other words, measuring PLO achievement will answer the question: how effective is the program at what it claims to do?

In the PLO section, do not state what students will do in the program, such as write theses or take exams. These are assessment instruments and belong in the Method of Assessment section. In the PLO section, state skills students will acquire from the program that they will demonstrate through their theses or exams.

  • Incorrect PLO statement: Students will write a thesis.
  • Correct PLO statement: Students in the (name of the program) will be able to present defensible conclusions based on an investigation of pertinent primary and secondary sources. This PLO refers specifically to an ability students will acquire from the program: “the ability to present defensible conclusions.” Then, in the Method of Assessment section, you may state that students will write a thesis to demonstrate the above PLO.

Keep in mind that students achieve different learning outcomes and skills at different points in their educational careers, and some learning outcomes are stepping stones for others. For example, there is a difference between assessing graduating students and assessing students entering their junior year. If you assess graduating students, you obtain only a summative snapshot of their progress through the program; you may learn only that, say, 20% of graduates could not apply some specific skill. If, on the other hand, you assess students entering their junior year, that is, students who are about to take upper-level courses, you will get a different kind of information, one that allows faculty to intervene, make changes to the lower-level curriculum, and make sure that students understand the basics before taking upper-level courses. Therefore, it is important to indicate which students the stated PLO is directed toward: for example, sophomore students, graduating students, students taking a required capstone course, students completing the core sequence of courses, students entering their senior year, etc.

b. Align to the program mission and goals.

The PLO should be specifically tied to the program. If two programs have identical sets of PLOs, the implication is that they are identical programs: if students learn identical things in programs A and B, then program A is identical to program B, and one of the programs should be eliminated.

Each program is designed to give students a unique set of skills and abilities. For example, although undergraduate degrees in biomedical science and chemistry share a majority of courses, these degrees prepare students for different careers; therefore, they should have different content-knowledge learning outcomes. Communication skills PLOs and critical thinking skills PLOs may be shared.

c. Are clear, observable, and measurable.

The PLO should be stated in a manner that facilitates measurement through students demonstrating some skill, behavior, and/or knowledge.

  • Incorrect PLO statement: Students will be good citizens. This is an example of a program goal: it is a desired outcome, but it is stated without specificity. Consider instead the following example.
  • Correct PLO statement: Students will be able to apply the Amendments to the Constitution of the United States in various situations. This skill is part of being a good citizen, but it is also observable and measurable. As a result, this outcome naturally lends itself to an assessment method. You may ask students to write an essay applying their knowledge of the amendments to their lives, or you may design embedded exam questions that present a case and ask students how the amendments apply to that case.

d. Employ an action verb.

Because PLOs are meant to depict a student’s knowledge, skills, and attitudes at some point in time, they must include an action verb that specifies the intended outcome. See the example below of a poor verb choice for a PLO:

  • Incorrect PLO statement: Undergraduate students in physics will be able to understand the basic laws of electricity and magnetism. The word “understand” is inappropriate in a PLO because it does not include sufficient detail about what a student who knows the laws referenced can do. To clarify this PLO, the faculty and program staff should consider what someone who does understand the laws would be able to do with that understanding; that is, they should select an action verb.

A resource that many faculty members rely on to identify appropriate action verbs is Bloom’s taxonomy, a hierarchical model that classifies cognitive skills by complexity. See, for example: https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/. Faculty rely on Bloom’s taxonomy because its hierarchy ranks cognitive skills: being able to apply knowledge is more complex than recalling facts about the material, and so on. These levels can serve as a guide for choosing the most appropriate cognitive skill for the level of your program (Bachelor’s vs. Master’s vs. Doctorate) and for the level of the class within the program (introductory course vs. capstone course). Students at the beginning of their education may focus more on knowing and explaining the material, while more advanced students should apply, analyze, and synthesize it.

  • Correct PLO statement: Undergraduate students in physics in their freshman year will be able to list and define the basic laws of electricity and magnetism.
  • Correct PLO statement: Undergraduate students in physics in their senior year will be able to apply laws of electricity and magnetism to a wide range of situations.
  • Correct PLO statement: During their third year, doctoral students in physics will be able to produce a scientific paper of a publishable quality that constitutes an original contribution to their chosen field of specialization.

Note that as students advance through their educational careers, they are required to demonstrate cognitive skills of increasing complexity, advancing from knowledge to application and eventually to the synthesis of new knowledge.

2. Methods of Assessment

This section describes how students are assessed on the learning outcome. Each PLO must have a clearly stated method of assessment. The Method of Assessment section can be considered similar to a Methods section in an educational research study wherein measures are employed to determine the achievement of the PLOs.

Method of Assessment Standards

a. State and describe the assessment instrument.

In as much detail as possible, describe the assignment, activity, etc., that will be used to assess the PLO. Common assessment instruments include essays, written student work including discussion board responses, theses/dissertations, presentations, oral reports, performances, portfolios, open-ended (or multiple choice) embedded test questions, lab reports, internship or practicum evaluation forms, exams, or standardized tests.

b. Indicate how the instrument specifically measures the stated PLO.

Justify how the selected assessment instrument specifically addresses the stated PLO. Is the PLO a criterion in the rubric? Are there specific embedded questions? Example: the PLO states that students will be able to do x, y, and z. If the assessment instrument is a multiple-choice test, please provide a statement aligning x, y, and z to specific questions on the test. If the assessment instrument is a rubric used to assess a term paper, please specify which rubric components measure x, y, and z.

c. Are distinct from cumulative grades or overall passing rates.

Course grades are inappropriate for continuous quality improvement: they summarize the overall performance of the student (and may include extraneous information such as attendance and class participation), so they will not necessarily yield data that can be used for improvement. A student with a 70% overall test score may have failed on one specific objective that needs improvement, and that weakness is invisible in the overall score. One option is to measure each aspect separately, report those ratings, and then average them together.
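
As a minimal illustration of that option, the sketch below (in Python, with hypothetical criterion names and ratings, not drawn from any program) computes a mean for each rubric criterion separately, exposing a weak area that a single overall average would hide:

    from statistics import mean

    # Hypothetical ratings: each student scored 1-5 on three separate criteria.
    ratings_by_criterion = {
        "writing":        [4, 5, 4, 3, 5, 4],
        "research_skill": [4, 4, 5, 4, 4, 5],
        "argumentation":  [2, 3, 2, 2, 3, 2],  # weakness a course grade would hide
    }

    # Per-criterion means show where the curriculum needs attention...
    for criterion, ratings in ratings_by_criterion.items():
        print(f"{criterion}: mean = {mean(ratings):.2f}")

    # ...while one overall average (akin to a cumulative grade) obscures it.
    overall = mean(r for ratings in ratings_by_criterion.values() for r in ratings)
    print(f"overall: mean = {overall:.2f}")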

Example of an unacceptable assessment: students will write theses, and the professor will then assign a grade to each thesis. A year later, you will see the following distribution: 20 students received an “A,” 40 students received a “B,” and 15 students received a “C.” What can you do with this information? How can you improve the program? What are the common problems? What difficulties do students encounter while writing a thesis? Which skills are underdeveloped: writing, research skills, the ability to defend an argument, or discipline-specific knowledge? If there are gaps in discipline-specific knowledge, in what area? Letter grades do not give useful information that can be used to make adjustments to the curriculum.

However, you may use some existing graded assignments if you can link PLOs to specific grading criteria (rubric components or specific exam questions). For example, you may state that a PLO is measured using specific test questions on a chapter test, or that a PLO is measured with scores from a specific column of a grading rubric. The key distinction between compliant and noncompliant use of grades is specificity and alignment with the PLO.

d. Describe the assessment context, including evidence for the instrument’s accuracy and precision (validity and reliability).

The context for the assessment can be divided into two types: (1) course-embedded assessment and (2) assessment outside of a course. Examples of the former include a project in a capstone course or a final exam in one of the core courses; examples of the latter include a qualifying oral exam at the end of the program, a portfolio of student work drawn from multiple classes, internship evaluation forms, or a licensure exam administered outside of the coursework. Whichever type is selected, please explain the assessment context. If the assessment is administered outside of a course, describe the circumstances under which it was administered and address the motivation for students to participate. Detail any variations in the method and/or context of assessment within a course and/or across multiple sections of a course.

  • Example (1) of a course-embedded assessment: “The final project in the ABC XXXX capstone course will be used as an assessment instrument. The rubric used to score the final project includes a criterion for conciseness of argument and is scored on a 4-point scale from Capstone to Benchmark. The course instructor provides one rating of each final project, and one other member of the department scores a random sample of final projects to measure inter-rater agreement.”
  • Example (2) of an assessment outside of the course: “The assessment instrument is a rubric used to rate an oral qualifying exam that students have to pass to complete a degree program. An Oral Examination Committee comprised of three professors drawn from the student’s core courses [ABC XXXX, ABC XXXX, ABC XXXX, ABC XXXX] and elective [varies year to year] coursework will conduct the evaluation.”

Regardless of the context, detail which students are involved in the assessment: first-year students, graduating seniors, all students in the program, etc. Remember that assessment methods cannot rely on external determinants, such as acceptance to a journal or conference proceedings, and must be designed so that all students within the program are represented in the assessment.

Additionally, describe faculty participation in choosing, developing, validating, administering, and analyzing the assessment. The key to accurate methods of assessment is the involvement of subject matter experts in the discipline or outcome area. These experts, most often faculty but sometimes professionals in the field, have the greatest insight into all possible occurrences of the learning outcome in instruction, so their input on multiple-choice questions, essay prompts, the quality of research projects, and the like is a strong indication of accuracy (sometimes referred to as validity).

Precision, or reliability, is established in different ways for different types of assessments and is covered in a Methods Supplement in this Handbook. While not every measure of reliability is required, there should be some indication that the method of assessment can be expected to produce similar results in similar situations: two students with the same level of knowledge should achieve the same score on a multiple-choice exam, regardless of other context. Reliability can be threatened by biased questions, such as word problems that all refer to a context covered in one section of a course but not in another.

e. Indicate the sample.

Will you assess all students or a sample of students? Please provide all relevant statistical information and describe the sampling technique employed. When drawing a sample, be sure to include all modes of course delivery (e.g., online, hybrid, traditional classes) and all instructional sites. Note: if the Plan does not indicate the use of a sample, the Report should not include one; doing so creates misalignment between the Method and Results sections. If the method of assessment was altered, the Report can be edited to reflect the enacted practice.
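
As one way to document a sampling technique, the sketch below (in Python, with hypothetical rosters and an assumed 25% sampling rate, using simple stratified random sampling by delivery mode) draws a proportional sample from each mode so that online, hybrid, and traditional sections are all represented:

    import random

    # Hypothetical rosters grouped by delivery mode (the strata).
    rosters = {
        "online":      [f"online_{i}" for i in range(40)],
        "hybrid":      [f"hybrid_{i}" for i in range(25)],
        "traditional": [f"trad_{i}" for i in range(60)],
    }

    SAMPLE_FRACTION = 0.25   # assumed rate; set per your assessment plan
    random.seed(2024)        # fixed seed so the draw is reproducible

    sample = []
    for mode, students in rosters.items():
        k = max(1, round(len(students) * SAMPLE_FRACTION))
        sample.extend(random.sample(students, k))
        print(f"{mode}: sampled {k} of {len(students)} students")

    total = sum(len(s) for s in rosters.values())
    print(f"total sampled: {len(sample)} of {total}")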

f. Address all the rubric's requirements (if used).

If employing a rubric, provide specific information on how it was developed and evidence for the accuracy of the resulting data. We encourage the use of established, properly cited rubrics for which there is evidence to support using the data collected (such as the American Association of Colleges & Universities VALUE Rubrics). If developing a rubric internally, include a statement on how evidence for accuracy and precision was or can be collected, especially evidence that more than one subject matter expert was involved in writing the rubric before data collection and evidence of inter-rater agreement after data collection.

  1. What are the criteria of the rubric? For example, the criteria in the Oral Communication VALUE Rubric are Organization, Language, Delivery, Supporting Material, and Central Message. When a student gives a presentation, each of these criteria is assessed using an individual row of the rubric, yielding an individual score for each criterion as well as the ability to calculate an overall score. Rubrics designed to score various types of student work typically include distinct criteria; for example, a rubric for scoring a student research paper may include criteria for grammar, thesis statement, quality of evidence, and structure. Each PLO refers to a specific component of student knowledge, skills, or attitudes and, as such, should be assessed using rubric criteria that align with the action verb in the PLO. In most cases, a whole-rubric score is inappropriate. In all cases, alignment between the text of the PLO and the method of assessment is required.
  2. What is the range of the rubric score, and what does each rating mean? For example, if you state that students will be evaluated on a scale from 1 to 5, please elaborate on what is meant by each rating (a 1 means “unacceptable,” a 2 means “emerging,” and so on). A rubric is a subjective assessment instrument and should be rigorously defined. This can be facilitated by copying and pasting the rubric directly into Xitracs, although descriptions with sufficient detail are allowed.
  3. Who will be evaluating the students? Compared to test questions that are either right or wrong, rubrics involve some degree of subjectivity: one rater may think a student deserves a 3 out of 5 on a criterion while another thinks the student deserves a 4 out of 5, and different raters may interpret the scoring criteria differently. Multiple raters are needed to improve the reliability of the measurement. Please state who the raters are and how many there are (at least two). Because rubrics are inherently subjective, only individuals who know the subject area well should evaluate the students. Usually, these individuals are faculty members or, sometimes, professionals outside the university. Students are not permitted to serve as evaluators.
  4. How is inter-rater agreement addressed? Inter-rater agreement (IRA) is the degree of agreement between raters. In other words, the method of assessment needs to state how drastic differences in scores between two reviewers (if any arise) will be addressed. Common methods include: (1) the raters discuss until they reach an agreement on the rating, or (2) if the raters cannot agree, a third rater is brought in. SACSCOC requires percent agreement stemming from rubric calibration to ensure reliability in rubric scoring across multiple raters (a minimal sketch of computing percent agreement appears after this list). Almost all assessment types require multiple raters and therefore also require IRA: for example, oral presentations, portfolio reviews, and performances all need multiple faculty raters to review each student’s submission, and a final score must be produced from these independent scores. The Method of Assessment section should include a statement on IRA, and data supporting IRA should be included in the Assessment Results section of the Report. Note: the following assessment types do not need inter-rater agreement:
    • Standardized tests;
    • Embedded test questions that are multiple-choice or otherwise structured so that only one correct answer exists.
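
As a minimal sketch of one common IRA statistic (simple percent agreement between two raters, in Python, using hypothetical ratings and treating a gap of more than one point as a drastic difference), the following illustrates the kind of calibration evidence that could accompany a Report:

    # Hypothetical rubric ratings (1-5) from two independent raters.
    rater_a = [4, 3, 5, 2, 4, 4, 3, 5, 1, 4]
    rater_b = [4, 3, 4, 2, 4, 5, 3, 5, 3, 4]

    # Simple percent agreement: share of students given identical scores.
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    percent_agreement = 100 * matches / len(rater_a)
    print(f"percent agreement: {percent_agreement:.0f}%")

    # Flag drastic differences (assumed here: more than 1 point apart)
    # for resolution by discussion or a third rater.
    for i, (a, b) in enumerate(zip(rater_a, rater_b)):
        if abs(a - b) > 1:
            print(f"student {i}: scores {a} vs {b} -> route to a third rater")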

3. Performance Target(s)

Performance targets are internal predictions made by the program regarding the level of student achievement for that learning outcome. This section may be short, and it need only include a numerical prediction stated in terms of the instrument’s parameters. For example, if a rubric rates students on a scale of 1-5 for that learning outcome, the performance target should include a predicted score and the percentage of students expected to achieve it.

Performance Target Standards

a. Is quantifiable.

The performance target should be stated in terms of the assessment instrument. For rubrics, it should be stated in terms of the overall rating or criterion score. For embedded questions, it should be stated in terms of the number of questions answered correctly.

Setting performance targets is up to the program; however, the benchmark should be meaningful and appropriate for making decisions about the program. Saying that 100% of students will reach the threshold may not be realistic; stating that only 30% of students should reach the threshold may not be appropriate. Please note that the assessment process should produce results that help improve curriculum and/or instruction. The goal is continuous quality improvement.

b. Specifies the threshold of success and indicates the percentage of students that will reach the threshold.

Performance targets are internal predictions made by the program regarding the level of student achievement for that PLO. For example: “Program implementation will be considered a success if 90% of the sample achieves a final score of 4 or higher on this assessment.” This extends the previous standard: in addition to specifying the benchmark result, specify what percentage of students will reach that threshold.
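
As a minimal sketch tied to the example above (in Python, with hypothetical final scores; the score-of-4 threshold and 90% rate come from the example target), this checks whether the target was met:

    # Hypothetical final rubric scores (1-5 scale) for the assessed sample.
    final_scores = [5, 4, 4, 3, 5, 4, 5, 4, 4, 5, 2, 4, 5, 4, 4, 5, 4, 5, 4, 4]

    THRESHOLD = 4        # threshold of success from the performance target
    TARGET_RATE = 0.90   # predicted share of students reaching the threshold

    reached = sum(score >= THRESHOLD for score in final_scores)
    rate = reached / len(final_scores)
    print(f"{reached} of {len(final_scores)} students ({rate:.0%}) scored >= {THRESHOLD}")
    print("target met" if rate >= TARGET_RATE else "target not met")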

c. Aligns with the PLO and method of assessment.

The performance target should be related to the method of assessment and the learning outcome. Please verify that there is a common thread throughout your Assessment Plan: this is what we want students to know; here is how we will measure it; here is the numerical target that would indicate the program is successful in providing knowledge and skills to its students.

4. Assessment Results (Report Only)

The Assessment Results section should mirror the wording of the Performance Target section but include the results of the assessment. Indicate the total number of students assessed on each learning outcome. If a sample was used, indicate the final number of students included in the sample as well as the percentage of the total number of students in the program that it represents. For assessment methods that require multiple raters, final scores are sufficient for this section; the independent rater scores, statistical analysis, and per-student calculations need not be included.

Assessment results can be reported in terms of the percentage of students achieving each category of the rubric. For example, a program that used a rubric assessing students on a scale of 1-5 might report the results as follows:

  • Approximately 75% of students (n = 30) achieved a final score of 5/5.
  • 20% of students (n = 8) achieved a final score between 4 and 4.9 out of 5.
  • 5% of students (n = 2) achieved a final score between 3 and 3.9 out of 5.
  • No students achieved a final score lower than 3.
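
As a short sketch (in Python, with hypothetical raw scores constructed to match the example distribution above), here is how such a breakdown can be tabulated from final scores:

    # Hypothetical final scores on a 1-5 rubric for 40 assessed students.
    scores = [5.0] * 30 + [4.5] * 8 + [3.5] * 2

    bands = {"5/5": (5.0, 5.0), "4-4.9/5": (4.0, 4.9), "3-3.9/5": (3.0, 3.9)}
    total = len(scores)
    for label, (low, high) in bands.items():
        n = sum(low <= s <= high for s in scores)
        print(f"Final score {label}: {100 * n / total:.0f}% (n = {n})")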

Assessment Results Standards

a. Align with the PLO, Method of Assessment, and Performance Target.

Results should be worded in terms of the performance target.

b. Include the total number of students assessed and the percent of the total population assessed.

How many students were assessed? If a sample was used, what proportion of the total population does the sample represent?

c. Include the number of students that reached the benchmark.

How many students reached the stated benchmark?

d. Provide sufficient statistical information about the results.

Provide all statistical information that is needed for meaningful interpretation (mean, median, standard deviation, etc.).
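
For instance, a minimal sketch (in Python, with hypothetical scores) producing the descriptive statistics named above from the standard library:

    from statistics import mean, median, stdev

    # Hypothetical final scores for the assessed students.
    scores = [4.5, 3.0, 5.0, 4.0, 4.5, 3.5, 5.0, 4.0, 4.5, 4.0]

    print(f"n = {len(scores)}")
    print(f"mean = {mean(scores):.2f}")
    print(f"median = {median(scores):.2f}")
    print(f"standard deviation = {stdev(scores):.2f}")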

5. Use of Assessment Results (Report Only)

The Use of Assessment Results section is very important and is the portion of the Assessment Plan that is most commonly completed incorrectly. This portion describes intended improvements at the program level; it is important to note that this is an assessment of the program, not of its participants. Recall that the Use of Assessment Results section should contain reflection on the data and potential routes for improvement in the Year One Report, action items in the Year Two Report, and reflection on the success (or lack thereof) of the changes as implemented in the Year Three Report.

Use of Assessment Results Standards

a. Interpret and analyze the results; provide reflection.

This is the most important section of the assessment: it is the reason assessment is required by institutional and specialized accreditors throughout the world. As a university, we should continuously improve; each program should look for weak areas in its curriculum and address them. This is an assessment of the program, not of its participants or its instructors. For this section, please look at your results and interpret them. What do the data show? Are there any anomalies? Does anything stand out? Provide as much narrative as needed for meaningful interpretation. What insights arose from this process? What did you learn? This reflection is required in all three years of the process.

b. Include actionable “next steps” the program will take; include “evidence of seeking improvement…” (Specific to Year Two)

In Year Two of the three-year assessment process, the program is required to identify action steps based on its interpretation of the results. Programs should consider what curricular or pedagogical improvements or developments will be implemented at the program level in light of the assessment results. This section is not for describing how the program will change its assessment plan to yield greater levels of student achievement, nor for elaborating on the assessment results. It is also not the place to relay how the program will “fix” students to achieve greater results (e.g., advising students to seek tutoring, limiting access to program courses).

  • Example 1: If the Critical Thinking assessment resulted in significantly fewer students achieving the performance target, then the Use of Results section could describe how and where the curriculum will reinforce critical thinking skills, what adjustments will be made to the curriculum, how the program faculty will address the deficiency, and other planned improvements or developments.
  • Example 2: If the Critical Thinking assessment resulted in scores sufficient to indicate that the measured learning outcome had been met, then the program should include a statement that the program is functioning well in this area along with a statement of the projected area of concentration for the subsequent year’s assessment. Ask: “What about the program makes the environment effective for student learning? How can we continue to promote this environment?”

c. Refrains from phrases such as “we will continue to monitor…”; if results are positive, includes reflection on effective practice.

The phrase above violates the continuous quality standard found in most accreditation principles. Assessment is not linear and finite; it is continuous and cyclical. If all performance targets within a plan have been met, the program is asked to develop new ways of assessing PLOs, to disaggregate results, or to increase the rigor of the curriculum, improving new areas beyond what has already been “perfected.”