Speakers & Sessions

Keynote Speaker

Paul L. Gaston

Trustees Professor of English, Kent State University, and co-author of The Degree Qualifications Profile.

An experienced scholar with a focus on higher education reform, public policy, and the humanities, Paul L. Gaston, III, serves Kent State as its sole Trustees Professor. Dr. Gaston has been a principal speaker in venues around the world. In 2007-09, he offered major addresses on health care legislation, the Italian novel, computer-dominated futures trading, the future of the book, the Bologna Process, and U.S. higher education legislation.

He is the author of three books and of more than 40 scholarly articles on subjects ranging from interart analogies, the poetry of George Herbert, and the fiction of Walker Percy, to academic strategic planning, the Higher Education Act, and the assessment of educational outcomes. His most recent articles concern George Herbert and the British hymn tradition, the risks of excessive reliance on computer program trading in futures markets, Il Gattopardo (the Italian novel), the Bologna Process (European higher education reform), and regional campus administration. His book on general education reform, co-authored with Jerry Gaff, was published in 2009 by the Association of American Colleges and Universities. His book on the Bologna Process, The Challenge of Bologna, was published in January 2010 by Stylus Publishing, LLC.

As a meeting facilitator, conference planner, and presenter, he has worked with the Council on Higher Education Accreditation, Academic Impressions, the Association of Specialized and Professional Accreditors, the Association of American Colleges and Universities, the Planning Accreditation Board, the American Speech-Language-Hearing Association, the Council on Accreditation for Health Informatics and Information, the Commission on Accreditation of Allied Health Education Programs, and with institutions such as Keuka College, Southern Illinois University Edwardsville, Providence College (RI), Bishop’s University (Quebec), the University of Nebraska-Kearney, the University of Wisconsin-Eau Claire, John Jay College of Criminal Justice, Eastern Washington University, Bradley University, and Hampton University.

Best Practices of Assessment for University Level Issues

Kansas Board of Regents: Foresight 2020's Expectations for Student Learning

Gary Alexander - Kansas Board of Regents
Level: Intermediate

Assessment/Performance Reporting: a discussion with Gary Alexander, Vice President for Academic Affairs, of KBOR's interest in, and expectations for, institutional reporting on student learning assessment. Time permitting, we will also consider issues related to the Board's criteria for Performance Reporting.

Creating a strategic vision for implementing an assessment management or e-portfolio technology

Ida Asner, LiveText
Level: Beginning

Learn some best-practice approaches to implementing a system on your campus or in your program. Many are attracted to technology in the hope of making existing processes easier to manage. Drawing on 13 years of experience and case studies from hundreds of campuses, we will explore two items in this session: 1) potential approaches and potential pitfalls, and 2) what assessment management/e-portfolio systems such as LiveText offer.

Degree-Level Learning Outcomes: Who's Responsible?

Paul Gaston, Lumina Foundation

"It takes a university." Every faculty member contributes to student
accomplishment of degree-level outcomes. (For some, the contribution is
negative.) This provocative, perhaps occasionally irritating discussion
considers why faculty members should often move beyond their disciplinary
focuses in order to model and share with their students the values of
liberal learning.

The Thread of Intentionality: The ELOs, Tuning, the DQP, and Assessment

Paul Gaston, Lumina Foundation

Major efforts to enhance the quality of student learning have one thing in common: an emphasis on the singular scholarly value, intentionality. All of these initiatives bring to the classroom and the academic department a traditional scholarly respect for stating objectives clearly, pursuing them strategically, measuring the results, and using the measurements to implement refinements. Discussion will consider making responsible and pragmatic choices from this developing toolkit.

Best Practices of Assessment for Faculty

Assessing Students Through Student Response Systems

Brandon Keck - Turning Technologies
Level: Beginning

We will discuss how student response systems can be used to instantly assess students' understanding of classroom content, as well as to gauge individual students' understanding of particular topics or subjects before it is too late to intervene. The information our software provides allows you to work closely with each individual student before they take a test, assessing their needs or areas of concern. In addition to in-class assessments, our software can be used to administer formal assessments or tests, which allows you to collect student data immediately and return it to students more quickly and efficiently. This in turn gives students more time to prepare and study for future exams. We will also discuss how clickers can be used for student engagement and share best pedagogical practices for incorporating clickers in your classroom for the first time.

Measuring Student Learning

John Fliter and Raju Dandu – Kansas State University
Level: Beginning

This session will focus on measuring student learning. At the end of this session, participants will be able to develop plans to effectively measure student learning outcomes, implement strategies to ensure an efficient and encompassing measurement process that is integrated in their work, and utilize various techniques to ensure reliability of measurements. Participants will also discuss how their measures tie into program, department, college, and institution outcomes.

What a Blooming Good Question!

Dr. Sheri H. Barrett - Johnson County Community College
Level: Intermediate

In 1956, Benjamin Bloom headed a group of educational psychologists who developed a classification of three domains: cognitive, or mental skills (knowledge); affective, or growth in feelings and emotional areas (attitude); and psychomotor, or manual and physical skills (skills). Within the cognitive domain, Bloom identified six levels, from the simple recall or recognition of facts at the bottom, through increasingly complex and abstract mental levels, to the highest order, classified as evaluation. Bloom found that over 95% of the test questions students encounter require them to think only at the lowest possible level: the recall of information. As educators, we tend to ask questions in the "knowledge" category 80% to 90% of the time. This session will focus on using Bloom's taxonomy to write assignments and questions that assess higher-order thinking skills.

Building, Assessing, and Advancing Foundational Skills: First-Year Seminars and General Education Outcomes

Sarah Crawford-Parker and Andrea Greenhoot - University of Kansas
Level: Beginning

In Fall 2012, the University of Kansas launched a First-Year Seminar (FYS) program as part of a coordinated strategy to enhance and document student learning, engagement, and academic success. The FYS program is organized around a set of common learning outcomes, including critical thinking and written communication, with assessment built into the course design. In this interactive session, we will share our methods for assessing students’ achievement of these learning outcomes and our results from the pilot year of the program. Discussion will address how we are using our assessment evidence to shape future development of FYS courses and to benchmark first-year student learning in key academic skill areas. The audience will be invited to consider the pros and cons of our approach, to share their own campus practices for assessing these foundational skills, and to brainstorm possible strategies to scale up assessment and evidence use as the program expands.

Continual Improvement of Student Learning

Sally Yahnke and Anne Philips – Kansas State University
Level: Beginning

This final session focuses on using the knowledge, skills, and abilities developed in sessions I through III to put student-centered learning concepts into practice. At the end of this session, participants will be able to utilize evidence to improve student learning within programs and curricula, enhance departmental/unit student learning improvement plans, and persuade others to incorporate a student-centered mindset. Particular emphasis will be placed on high-impact practices and utilizing multiple sources of evidence to continually improve student learning.

Re-Designing Art's Assessment of Student Learning

Rhona Shand, James Oliver, & Patricia Lindley - Pittsburg State University
Level: Beginning

The purpose of this presentation is to share the work of the faculty in the Department of Art at Pittsburg State University in re-designing and implementing a plan for improving student learning in the degree program. The collaborative process began with the program mission and moved to the student learning outcomes, which were leveled across the four years of the program and multiple courses, both core to art majors and unique to their chosen emphasis areas. Lessons learned from this process and suggestions for replicating it in other programs will be offered.

Authentic Assessments for Critical Thinking: An Overview of Strategies

Melissa Mallon - Wichita State University; Leo Lo & Jason Coleman - Kansas State University
Level: Intermediate

Critical thinking occupies a prominent spot on many institutional lists of core learning outcomes and is widely touted as a societal good best fostered through higher education. Despite its centrality and import, faculty often struggle to both teach and assess these skills. In this session, we propose that much of this struggle can be overcome through use of authentic assessments requiring students to create real world products or performances. As a first step to making this case we present definitions of critical thinking and authentic assessment and contend that the nature of critical thinking makes it difficult to validly assess through indirect, non-authentic means. We then explore approaches to authentically teaching and assessing critical thinking by sharing practical examples of assignments, products and evaluation methods drawn from several disciplines. Along the way, we highlight ancillary benefits inherent to this form of assessment (e.g., enhanced student motivation, better preparation for grad school or careers) and provide strategies for avoiding common pitfalls encountered when designing authentic assessments. We conclude by highlighting some emerging technologies that promise to make authentic assessments easier to design and evaluate.

Questioning the Text and Each Other: Using Student-created Quizzes to Assess Reading Assignments

Phillip Payne & Matthew McCoy - Kansas State University

Currently under development is a model of student-created quizzes used to assess student recall and comprehension of assigned readings in the majority of the courses we teach as part of the undergraduate music education program at Kansas State University. In this poster session, we will provide an overview of the strategies we are currently using to employ student-created quizzes within our courses, model assessments created by the students, and data collected in response to this project. We welcome feedback and opportunities to collaborate with other professionals at Kansas State University or other institutions of higher learning. Based on our own experiences with students and a preliminary examination of the literature, we are seeking to address the following questions:

  1. How can pre-service educators be provided opportunities to learn about assessment preparation and administration without significantly adding course content?
  2. What is the impact of student-created quizzes on individual and group learning?
  3. What is the impact of student-created quizzes on instructional practices?
  4. What feelings are experienced by the students in relation to the act of creating quizzes on assigned readings?

Best Practices of Assessment for Program Assessment Coordinators

Assessing Student Success: The Why, the What, and the How-To

Crystal Lenz, Lauren Edelman, Molly Milota - Kansas State University
Level: Beginning

This introductory session will outline the student learning assessment process from the importance of assessment to steps following data collection and analysis. Equipped with an understanding of how K-State First assesses first-year student success using student learning outcomes and institutional goals as a guide, participants will discuss best practices of student learning assessment and generate ideas to implement on their own campuses/units.

General Education Assessment: How do other institutions handle this?

Ying Xiong - University of Kansas
Level: Intermediate

General education is probably the most challenging university-level assessment area due to its unique issues surrounding ownership, coordination, and alignment with multilevel learning goals. This study surveyed about 30 institutions to identify best practices and challenges in general education assessment. Questions such as "What do other institutions do for their general education assessment?", "What leads to a successful general education assessment?", and "What are the top 10 challenges for assessing general education?" will be addressed. We will also discuss with the audience how KU plans to overcome challenges in our KU Core assessment process. Information shared at this session can be used by administrators, faculty, and professionals to develop general education assessment frameworks and models that better capture new meanings of quality in general education, which is in transition nationwide.

Short Session: Rating Behavior: Have You Thought About Inter-rater Reliability?

Sally Yahnke - Curriculum and Instruction/College of Education/Kansas State University
Level: Beginning

This session will examine techniques for ensuring consistency across raters in making high stakes decisions based on ratings of behavior. These decisions determine success for undergraduates in the College of Education as they complete field experiences and portfolios across their professional program.

Short Session: The Utilization of the ETS Mathematics Major Field Test at Pittsburg State University

Tim Flood - Pittsburg State University
Level: Beginning

The math department at Pittsburg State University has been administering the ETS Major Field Test (MFT) to all of our graduates since the spring of 2005. A brief history of the test at PSU will be provided along with specific details about the MFT, including the various discipline tests available. In addition, the format of the results from the MFT will be presented along with the ways the department analyzes and uses this data. Although the presentation will focus on the Math MFT, the information presented will be applicable to other disciplines as well.

Ten years of an Award-winning Assessment Program at Neosho County Community College: One system for Course, Program, and General Education Assessment

Sarah Robb - Neosho County Community College
Level: Beginning

During this presentation, audience members will be exposed to the culture of assessment at Neosho County Community College through an explanation and demonstration of our assessment processes. This will include a discussion of course-level student learning outcomes that feed into a credible system of assessment at the program and general education levels. We will demonstrate how assessment data are collected, and then describe, with examples, how we use the data to improve student learning, inform the institutional budget, and inform the board of trustees and our community by answering the question: Are students learning?

Computerizing a Test Using a Learning Management System

Christina Kitson - Kansas State University
Level: Beginning

The population of international students continues to grow at US universities. How can we make the placement process easier? This session will explain one program's process of moving from a paper-and-pencil placement test to a computer-based test (CBT) for remote testing. Due to a recent special program in which we must pre- and post-test students placed at partnering institutions, it has become important to test in the most efficient and secure way possible. According to Dolan and Burling in the Handbook on Measurement, Assessment, and Evaluation in Higher Education, one of the key features of a CBT is "increasing the efficiency of the assessment process". The increased efficiency and security of using a CBT on a Learning Management System (LMS) is what drove the decision to move from paper and pencil to the computer. Some benefits of using a CBT on an LMS, which will be discussed, are scoring, efficiency, security, and information control. The process started with the basic idea of moving the paper-and-pencil test online. As the test was moved, the process evolved to include short pre-test practice, tutorials for the partner institutions on the LMS being used, full-length practice tests, a short tutorial to help test takers gain familiarity, visits to partner sites to confirm lab setups, and the work of moving, piloting, and revising a final computerized form. Session participants will learn about the process used and how to manage a project like this.

The SLS Process of Creating New Student Learning and Development Outcomes

Kyle van Ittersum & Dr. Irma O'Dell - School of Leadership Studies at Kansas State University

In the fall of 2012, the School of Leadership Studies, prompted by K-State 2025 and concerns with the current model, decided to re-evaluate its Student Learning Outcomes (SLOs). To begin, a student learning assessment committee (SLSARC) was formed to oversee the development of the new model. The original model contained eight SLOs, which were global and could be applied to more than one class where appropriate. SLSARC was then charged with developing one SLO per core course; each SLO was to be tied to a high-impact practice already in place in the course, in addition to being grounded in theory. After several discussions within SLSARC and the SLS faculty as a whole, a finalized logic model of student learning was developed. The end result was one assessable student learning outcome (SLO) and one student development outcome (SDO) for each core course. Additionally, each SLO and SDO was tied to the high-impact practice for the course and grounded in student learning theory, namely Bloom’s Taxonomy and the Leadership Identity Development model.

Using a Quality Improvement Process to Assess and Support Students Who are At Risk for Failure in a BSN Program

Betty Elder, Debra Pile, Brandy Jackson, and Yvonne Fast - Wichita State University

Measures of student performance can be used to track and evaluate quality improvement initiatives in higher education. Continual measurement and comparison of key results are essential to identify best practices within an organization and to evaluate current practice over time. This initiative outlines an example of how the quality improvement process is used to support students at risk for failure in a BSN nursing program. Students’ performance scores in key areas are assessed, and students who may be at risk for failure are identified. Once identified, students are assigned a faculty member who assists them in a self-evaluation process to identify key areas of need. Students are presented with a variety of support mechanisms at the department and university levels. Measurable key performance indicators are used in the individual student evaluations and to determine program performance over time.

Best Practices of Assessment for Student Life

Best Practices in Assessing Learning that Occurs in Student Affairs Areas

Carla Jones, Dorinda Lambert, Julie Gibbs, Kevin Cook - Kansas State University
Level: Intermediate

When assessing student learning outcomes that target the out-of-class experience, student affairs professionals have a wealth of knowledge about the process of identifying specific learning goals and creating learning outcomes to measure progress toward those goals. The session will include members of the student affairs assessment committee, as well as other colleagues, who will speak about and demonstrate the documentation of learning in students' out-of-class experiences. Participants will leave with information on techniques and trends. Access to electronic information will be provided.

Integrating the co-curricular units into the classroom

Shawna Jordan, Melia Fritch, Cindy Logan - Kansas State University
Level: Beginning

The Athletic Training program and the Library Instructional staff have developed and implemented an instructional plan that incorporates assessment across both units throughout the curriculum. The instructional plan is structured to infuse library instruction each year within the four-year academic plan. This allows for assessment of writing, research skills, and general library knowledge across the various course areas.

For advisors: a student self-assessment

Dorinda Lambert - Kansas State University
Level: Beginning

The College Learning Effectiveness Inventory (CLEI) is a student self-assessment instrument designed to measure individual attitudes, behaviors, and dispositions that may impact student learning success, and it can help target steps to improve academic performance. Studies indicate significant relationships between the CLEI scales and academic performance and retention. The CLEI can function as a tool of exploration for students, increasing their awareness of personal strengths and weaknesses as learners and enabling changes that may improve academic performance. It may be most effective when used with professionals trained to assist students with academic problems (counselors, advisors, learning center staff). This presentation will outline what the CLEI is and how it can be used to assess students’ needs and direct them to resources that can help them be more successful in school.

Leadership for Life: Case Study of a Pilot Mentoring Program

Kerry Priest, Lori Kniffin, Sarah Donley - K-State School of Leadership Studies; Amanda Cebula - Chair, Wildcats Leadership for Life

The Wildcats Leadership for Life (WLFL) Pilot Mentoring Program was designed in response to current students who voiced a desire for personal and professional mentorship. As part of the School of Leadership Studies’ ongoing commitment to program development, assessment, and improvement, we initiated an evaluative case study of the pilot program. The overall purpose of the study was to describe and evaluate the mentoring program’s impact on students. The data consist of pre- and post-interviews conducted in the spring of 2012 with four students and three mentors. Our findings demonstrate the importance of mentorship for students’ professional growth and development in three main areas: growth in their preparedness for transitioning from school to their careers; development in confidence, planning, and goal-setting in relation to the future; and, through their relationships with their mentors, the ability to translate leadership skills (beliefs and practices) into their everyday lives. This qualitative assessment research is important to K-State, and in particular to Student Life, because it demonstrates the need for and benefit of mentors in students’ lives, while simultaneously informing strategies to enhance continued connections between current students and alumni.