Assessment for Accountability
Assessment of a unit, such as a department, program, or entire institution, used to satisfy a group of external stakeholders. Stakeholders might include accreditation agencies, state government, or a Board of Regents. Results are often compared across similar units, such as peer programs, and are always summative. An example of assessment for accountability is ABET accreditation of engineering schools: ABET defines a set of standards that an engineering school must meet to receive ABET accreditation status.
Assessment for Improvement
Assessment activities designed to feed results directly, and ideally immediately, back into revising the course, program, or institution with the goal of improving student learning. Both formative and summative assessment data can be used to guide improvements.
Direct Assessment of Learning
Direct assessment bases measures of learning on student performance or on demonstrations of the learning itself. Scoring performance on tests, term papers, or the execution of lab skills are all examples of direct assessment of learning. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or across courses or years (e.g., comparing writing scores from sophomore to senior year).
Course Embedded Assessment
A means of gathering information about student learning that is integrated into the teaching-learning process. Results can be used to assess individual student performance or can be aggregated to provide information about the course or program. Course-embedded assessment can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).
External Assessment
Use of criteria (a rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams.
Formative Assessment
Formative assessment refers to the gathering of information or data about student learning during a course or program that is used to guide improvements in teaching and learning. Formative assessment activities are usually low-stakes or no-stakes; they do not contribute substantially to the final evaluation or grade of the student and may not even be assessed at the individual student level. For example, posing a question in class and asking for a show of hands in support of different response options would be a formative assessment at the class level. Observing how many students responded incorrectly would guide further teaching.
Indirect Assessment of Learning
Indirect assessments use perceptions, reflections or secondary evidence to make inferences about student learning. For example, surveys of employers, students’ self-assessments, and admissions to graduate schools are all indirect evidence of learning.
Institutional Assessment
Uses the institution as the level of analysis. The assessment can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, institution-wide goals and objectives would serve as a basis for the assessment. For example, to measure the institutional goal of developing collaboration skills, an instructor and peer assessment tool could be used to measure how well seniors across the institution work in multi-cultural teams.
Program Assessment
Uses the department or program as the level of analysis. The assessment can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem? This might be assessed through a capstone project, by combining performance data from multiple senior-level courses, by collecting ratings from internship employers, etc. If a goal is to assess value added, some comparison of the performance to that of newly declared majors would be included.
Rubric
A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.
Summative Assessment
The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, the results impact the next cohort of students taking the course or program. Examples: examining student final exams in a course to see whether certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.
Value Added
The increase in learning that occurs during a course, program, or undergraduate education. Value added can focus either on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshman papers). To measure value added, a baseline measurement is needed for comparison. The baseline measure can come from the same sample of students (longitudinal design) or from a different sample (cross-sectional design).
Adapted from Assessment Glossary compiled by American Public University System, 2005