What is assessment?
Where do we want students to be at the end of a course or a program? And how will we know if they get there? Those two questions are at the heart of assessment.
Although there is a lot of buzz about assessment these days, assessment itself is nothing new. If you’ve ever given an exam, led a discussion, or assigned a project – and used what you discovered about student learning to refine your teaching – you’ve engaged in assessment. Assessment is simply the process of collecting information about student learning and performance to improve education.
At Kansas State University, we believe that assessment should be designed to expose learning needs that can lead to program improvement. The process should be thoughtful and systematic, driven by faculty:
- Reflecting the goals and values of particular disciplines.
- Guiding teaching practices that can enhance student learning.
- Guiding departments and programs in the refinement of curriculum.
Course Level Student Learning Outcomes
Similar to the college and degree program levels, student learning outcomes at the course level identify the knowledge, skills, and dispositions students are expected to acquire by the end of the course. Some of the learning outcomes should be connected to those for the degree program, while others may be very specific to the course, such as learning a set of techniques for conducting an experiment.
Student learning outcomes are statements of things that students will know, understand, or be able to do at the end of a course. Student learning outcomes:
- Are the basis for assessment of student learning at the course, program, and institutional levels.
- Provide direction and focus for all teaching and learning activity.
- Inform students about what they are expected to learn in each course, degree program, or student service program.
Effective statements of student learning outcomes:
- are student-focused rather than professor-focused.
- focus on the learning resulting from an activity rather than the activity itself.
- are in alignment at the course, academic program, and institutional levels.
- focus on important, non-trivial aspects of learning.
- focus on skills and abilities central to the discipline and based on professional standards of excellence.
- are general enough to capture important learning but clear and specific enough to be measurable.
Source: Huba, M.E., & Freed, J.E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Boston, MA: Allyn and Bacon.
What is the difference between formative and summative assessment?
The goal of formative assessment is to monitor student learning to provide ongoing feedback that can be used by instructors to improve their teaching and by students to improve their learning. More specifically, formative assessments:
- help students identify their strengths and weaknesses and target areas that need work
- help faculty recognize where students are struggling and address problems immediately
Examples of formative assessments include asking students to:
- draw a concept map in class to represent their understanding of a topic
- submit one or two sentences identifying the main point of a lecture
- turn in a research proposal for early feedback
The goal of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against a standard or benchmark.
Summative assessments are often high stakes, which means that they have a point value that often leads to a grade. Information from summative assessments can be used formatively when students or faculty use it to guide their efforts and activities in subsequent courses. Examples of summative assessments include:
- a midterm exam
- a final project
- a paper
- a senior recital
What is the difference between assessment and grading?
Assessment and grading are not the same. Although course grades are sometimes treated as a proxy for student learning, they are not always a reliable measure because course grades include scores for multiple learning outcomes, thus often allowing specific learning needs to remain hidden in the average. Moreover, they may incorporate criteria – such as attendance, participation, and effort – that are not direct measures of learning.
The goal of assessment is to improve student learning. Although grading can play a role in assessment, assessment also involves many ungraded measures of student learning. Moreover, assessment goes beyond grading by systematically examining patterns of student learning across courses and programs and using this information to improve educational practices.
Common Assessment Terms
Assessment for Accountability
The assessment of some unit, such as a department, program, or entire institution, used to satisfy a group of external stakeholders. Stakeholders might include accreditation agencies, state government, or a Board of Regents. Results are often compared across similar units, such as peer programs, and are always summative. An example of assessment for accountability is ABET accreditation in engineering schools, whereby ABET creates a set of standards that an engineering school must meet in order to receive ABET accreditation status.
Assessment for Improvement
Assessment activities designed to feed results directly, and ideally immediately, back into revising the course, program, or institution with the goal of improving student learning. Both formative and summative assessment data can be used to guide improvements.
Direct Assessment of Learning
Direct assessment occurs when measures of learning are based on student performance or on demonstrations of the learning itself. Scoring performance on tests, term papers, or the execution of lab skills are all examples of direct assessment of learning. Direct assessment can occur within a course (e.g., performance on a series of tests) or across courses or years (e.g., comparing writing scores from sophomore to senior year).
Course-Embedded Assessment
A means of gathering information about student learning that is integrated into the teaching-learning process. Results can be used to assess individual student performance, or they can be aggregated to provide information about the course or program. Course-embedded assessment can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).
External Assessment
Use of criteria (a rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams.
Formative Assessment
Formative assessment refers to the gathering of information or data about student learning during a course or program that is used to guide improvements in teaching and learning. Formative assessment activities are usually low-stakes or no-stakes; they do not contribute substantially to the final evaluation or grade of the student, or may not even be assessed at the individual student level. For example, posing a question in class and asking for a show of hands in support of different response options would be a formative assessment at the class level. Observing how many students responded incorrectly would be used to guide further teaching.
Indirect Assessment of Learning
Indirect assessments use perceptions, reflections or secondary evidence to make inferences about student learning. For example, surveys of employers, students’ self-assessments, and admissions to graduate schools are all indirect evidence of learning.
Institution-Level Assessment
Uses the institution as the level of analysis. The assessment can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, institution-wide goals and objectives would serve as a basis for the assessment. For example, to measure the institutional goal of developing collaboration skills, an instructor and peer assessment tool could be used to measure how well seniors across the institution work in multi-cultural teams.
Program-Level Assessment
Uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem? This might be assessed through a capstone project, by combining performance data from multiple senior-level courses, by collecting ratings from internship employers, etc. If a goal is to assess value added, some comparison of the performance to newly declared majors would be included.
Rubric
A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.
Summative Assessment
The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, summative assessment impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see whether certain areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.
Value-Added Assessment
The increase in learning that occurs during a course, program, or undergraduate education. It can focus either on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate, in the aggregate, more sophisticated writing skills than freshman papers). Measuring value added requires a baseline measurement for comparison. The baseline measure can come from the same sample of students (longitudinal design) or from a different sample (cross-sectional design).
Adapted from Assessment Glossary compiled by American Public University System, 2005
Adapted from the Eberly Center for Teaching Excellence and Educational Innovation, Carnegie Mellon University