Effective Faculty Evaluation:
Annual Salary Adjustments,
Tenure and Promotion
Chapter 3. Evaluation for Annual Salary Adjustment
General guidelines for the evaluation system at Kansas State University were laid down in 1974, when the Faculty Senate approved a policy statement regarding the annual evaluation of unclassified personnel for merit salary increases. The policy mandated that each department should create a system including the three specific features listed below. Policies and procedures have been elaborated over time, but these three points remain fundamental:
- Criteria and procedures are to be developed jointly by faculty, department heads, directors, and/or deans.
- Unclassified personnel will provide an update of relevant data on a yearly basis pertaining to whatever merit salary criteria are established within their unit.
- Unclassified personnel will be provided the opportunity to review the final written evaluation being used as the department head's recommendation for merit salary increases before it is submitted to the dean.
-
Creating Or Revising an Evaluation System
-
Bases of Evaluation
Evaluation for annual salary adjustment is based jointly on the excellence and productivity of work and on the extent to which that work contributes to unit and University missions. Therefore, within departments having strong priorities among programs, annual salary adjustments may and should reflect institutional needs as well as the excellence and amount of work.
Evaluation for annual salary adjustments may also be based secondarily (and separately) on considerations of supply and demand. Thus, in a department having significantly different supply, demand, and cost factors operating among the specialties within its discipline(s), annual salary adjustments may reflect these differences as well as individual excellence of work and its relevance to the missions of the institution. This does not mean, however, that a person who could command a larger salary elsewhere should necessarily be favored over other members of the department in the annual salary recommendation. Supply and demand factors properly enter into consideration for salary adjustments only if the person (a) could command a significantly larger salary elsewhere, (b) could not likely be replaced at the level of the present salary, (c) is distinctive within the department in these respects, and (d) demonstrates individual excellence in endeavors that contribute to the unit's missions.
Fortunately, the similarities within many departments' programs render relatively minor both differences in mission relevance to the institution and supply and demand factors. In the common case where these factors are essentially equal, annual salary adjustments can be based solely on individual accomplishments. This is both simpler and better for faculty morale.
-
Context and Constraints
The context of annual evaluation is established by the University's guidelines for conducting this process as contained in the Faculty Handbook and by the institution's goals and standards. These provide guidance on procedures to be followed in evaluations and broad definitions of the goals toward which every unit should direct its efforts.
The constraints are more varied. Some have their origin in policies established by the Legislature, the Board of Regents, or the University, while others are simply "realities" that may be founded solely on circumstance. Nevertheless, whatever their origin, these constraints impose certain parameters within which any evaluation system must be developed and implemented.
An example of a policy constraint is the legislative requirement that all salary adjustments be distributed on the basis of merit. This has several effects. One is that no salary adjustment may be justified as a "cost-of-living" increase. Moreover, this policy demands that salary adjustments in which equity, mission relevance, and supply and demand considerations play a role must still be based on the demonstrable merit of the recipient.
Another element influencing the functioning of any system is the University's practice of assigning salary adjustments as a percentage of an employee's base salary rather than as a dollar increment. One result of this practice is that two people having the same rating in an evaluation will receive the same percentage raise, but the one with the higher salary will receive more dollars.
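For readers who prefer a concrete illustration, the brief sketch below (in Python, with invented salaries and an invented 3% raise that carry no policy weight) shows the arithmetic of percentage-based raises:

```python
# Illustrative only: two faculty members with identical ratings receive the
# same percentage raise, but different dollar amounts.
base_salaries = {"Faculty A": 50_000, "Faculty B": 70_000}  # assumed figures
raise_rate = 0.03  # same rating, same assumed 3% raise

for name, salary in base_salaries.items():
    print(f"{name}: {raise_rate:.0%} of ${salary:,} = ${salary * raise_rate:,.0f}")
# Faculty A: 3% of $50,000 = $1,500
# Faculty B: 3% of $70,000 = $2,100
```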
An example of the "non-policy" realities that affect the functioning of evaluation systems is the often sizable variation in the amount of money provided by the Legislature for salary adjustments from one year to the next. Given that the State's budget represents a balance of competing demands for resources, and that the resources themselves differ significantly from year to year, these variations are unavoidable. Yet they create problems for evaluators charged with the equitable distribution of rewards within their units.
Another constraint is the "fixed-sum" system by which salary raises are allocated. Evaluations are relative because they are based on a comparison of the performance of the unclassified persons within an administrative unit. The results of the evaluation do not indicate an individual's absolute status or even status with respect to a large external reference group, but only his or her standing within the department. Therefore half of all the members of the unit must be below the median. This situation is difficult for many faculty members to accept because they are accustomed to excelling in most endeavors. Indeed, if they had not been among the top performers at every level in their education, they would not have attained the positions they hold today. This makes it extremely difficult for them to contemplate being "average" (let alone "below average") regardless of the peer group with which they are compared.
These problems associated with annual evaluation are exacerbated by the assignment of raises. The unit head is allotted a fixed sum of money for salary increases, usually expressed as a percentage of the unit's present salary pool. That money must be distributed to the members of the department in accordance with their individual evaluations. As a result, about half of the members of the unit will receive salary increases below the average percentage increase approved for the University by the Legislature, and this is true even for people whose evaluations indicate that they are meeting or even exceeding the unit's expectations.
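The sketch below illustrates this constraint with invented salaries and ratings and one of many possible merit-weighted allocation schemes (none of which is prescribed by University policy): when a fixed pool is divided according to merit, roughly half of the unit necessarily falls below the average percentage increase.

```python
# Illustrative sketch (all numbers invented): a fixed raise pool, set as a
# percentage of the unit's current salary base, is distributed in proportion
# to merit ratings.
salaries = [50_000, 55_000, 60_000, 65_000]   # assumed base salaries
ratings = [3, 4, 5, 6]                        # assumed merit ratings
pool = 0.03 * sum(salaries)                   # assumed 3% average increase

weighted_base = sum(s * r for s, r in zip(salaries, ratings))
for s, r in zip(salaries, ratings):
    increase = pool * (s * r) / weighted_base  # merit-weighted share of the pool
    print(f"rating {r}: {increase / s:.2%} raise on ${s:,}")
# Those rated below the unit average receive less than the 3% average raise.
```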
-
Weighting the Elements of Evaluation
The department must also consider the contribution of each of the areas of evaluation toward the determination of the final overall assessment. As in defining the areas themselves, the weight assigned to each area will depend upon the responsibilities and goals of the unit. Thus, for example, a department with a doctoral program might reasonably demand greater scholarly accomplishments from its faculty than a department that offers only baccalaureate degrees, and this difference could be expressed by the weight assigned to research in the overall evaluation. Similarly, a unit might examine the various activities that contribute to each broad area of evaluation to determine whether further specification of weighting is appropriate. Thus, a department with a graduate program should not evaluate teaching in a way that ignores the contributions of those faculty who supervise graduate students.
Some units adopt a single scale of weighting that applies to all faculty members. For example, a department might evaluate its members on their contributions to teaching, research, and non-directed service, giving respective weights of 65%, 30%, and 5% to the areas. Another might assign a value of 40% to teaching, 45% to creative endeavors, and 15% to service.
A system based on a single scale is workable in a unit where all faculty have similar assignments, but it is not appropriate for a department having differentiated assignments. Where the responsibilities and goals of the unit dictate specialization of effort among the faculty, the weighting of various areas of endeavor must be sufficiently flexible to accommodate the individual assignments. This is necessary both to permit the unit to make the best use of its resources in performing its tasks and to offer its members an equal opportunity to receive recognition and reward for their accomplishments.
Units having substantial differentiation of faculty assignments often set the relative weight of the areas of professional activity separately for each faculty member. This should be agreed to by the head and each faculty person at the beginning of each evaluation year. Should the head and faculty member be unable to reach agreement, then the head, after consulting with the dean, should prevail. In either case, it is important for the weights to be clearly established before the evaluation period begins. Of course the weights should be subject to re-negotiation in the event of significant changes in assignment during the year.
Other units employ a variable scale of weights in which the individual employee can select, within a predetermined range, the percentage that each area will contribute to the overall evaluation. This practice allows for some variation in individual assignments. Whether the unit specifies that the percentages should be selected at the beginning of the evaluation year as a part of a planning process or at the end, this approach permits individuals to emphasize the areas in which they expect to accomplish or have accomplished the most in a given year. When adopting a system of variable weighting, a unit must establish maximal and minimal values for each area to assure that the individuals' activities accord with the unit's responsibilities, goals, and expectations.
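A minimal sketch of such a variable-weighting scheme follows; the area names, allowed ranges, ratings, and seven-point scale are assumed for illustration and are not drawn from any unit's actual document.

```python
# Illustrative sketch: validate an individually selected weighting against the
# unit's allowed range for each area, then combine component ratings into an
# overall evaluation.
ALLOWED = {"teaching": (0.30, 0.60), "research": (0.25, 0.55), "service": (0.05, 0.20)}

def overall_rating(weights: dict, ratings: dict) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    for area, w in weights.items():
        low, high = ALLOWED[area]
        assert low <= w <= high, f"{area} weight {w:.0%} is outside the allowed range"
    return sum(weights[a] * ratings[a] for a in weights)

# A faculty member emphasizing research, within the allowed ranges:
print(overall_rating({"teaching": 0.35, "research": 0.50, "service": 0.15},
                     {"teaching": 5.0, "research": 6.0, "service": 4.0}))  # 5.35
```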
-
The Evaluation Period
Each unit must also determine the period to be considered in each evaluation. Each unclassified employee must be evaluated every year for the purpose of salary adjustment, and the results of the evaluation must be available in the spring when the University determines its budget for the coming year. The University's budgetary process establishes certain deadlines by which evaluations and the accompanying salary recommendations must be completed or reviewed at various administrative levels. Therefore, many units adopt the calendar year as the period covered in each evaluation. This has the effect of minimizing the time between the end of the evaluation period and the beginning of the fiscal year in which the salary adjustment based on that evaluation takes effect. Other units use the fiscal year as their evaluation period, but since this begins and ends in the middle of the summer, it may not be well suited to departments having many nine-month employees.
Units may also consider including in the annual review accomplishments from more than one year. This is particularly relevant in departments where responsibilities or goals make it desirable to encourage faculty to undertake long-term projects that do not lend themselves to an "in progress" evaluation. In some disciplines, for example, the most respected form of scholarly communication is the book, and it is in the interest of these departments to encourage their faculty to write books. Yet between the inception of a book and its publication (a period that includes research, writing, refereeing, editing, printing, and reviewing) a number of years may pass. Authors who receive full credit for such publications in only one annual evaluation may suffer in salary raises in comparison to colleagues who produce smaller and less significant studies more frequently. Moreover, evaluations of book authors would reveal peaks and valleys in spite of uniform productivity. The inequities in this situation may be exacerbated by variation in the range of salary increases from one year to the next. Therefore, a department that wished to promote book production among its faculty might well establish a system of annual evaluation that included consideration of publications not only from the immediately preceding year but from several earlier years as well. The same logic could apply to a unit wishing to encourage long-term projects in any area.
-
The Rolling Average
Departments may also wish to adopt a mechanism to hinder the development of salary inequities deriving from external circumstances. In the simplest systems for assigning salary adjustments, there is a direct relationship between the annual evaluation and the adjustment for the next fiscal year. Under such a system, the unit head might assign salaries for fiscal year 1995 (beginning July 1, 1994) based entirely on the evaluation of individuals' performance in the calendar year 1993. The simplicity of this approach is certainly a virtue, but the system is not without problems. The amount of money available for salary increases depends on legislative appropriations that vary greatly from year to year. Evaluators tend to assign a wider range of raises in years when the average percentage increase is high than when it is low. Performance evaluation is also an annual process, and for most people the evaluation differs somewhat from one year to the next. Thus inequities can easily arise between the person who receives a high evaluation in a year when the range of salary increases is large and the one whose high evaluation comes in a year when the range is small.
One way to mitigate the effects of such variations is to use a rolling average as the basis for assigning salary increases. This method of averaging does not affect the process of evaluation. Rather it influences the way in which the results of the evaluation are used to assign salary increases. The simple system described above employs a single year's evaluation, which is, in effect, weighted at 100%. A rolling average is based on the use of several years' evaluations with a weight assigned to each of them. Thus, for example, the weighting might be 50-25-25. In this case, salaries would be distributed to individuals based on an average of the last three years' evaluations, to which the immediately preceding year (e.g. 1993) would contribute 50% and each of the two years before that (1992, 1991), 25%. A wide range of schemes is possible (e.g., 60-40, 50-35-15, 40-30-20-10), but the unit must agree on a distribution consonant with its goals and other criteria. A unit moving from a single-year system to a rolling average might also wish to consider phasing in the new system over several years.
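The sketch below implements the 50-25-25 scheme described above; the scores are invented, and the handling of faculty with fewer than three years of evaluations is only one possible convention.

```python
# Illustrative sketch of a 50-25-25 rolling average: the most recent year's
# evaluation counts 50%, and the two years before it count 25% each.
def rolling_average(evaluations, weights=(0.50, 0.25, 0.25)):
    """evaluations: most recent year first, e.g. scores for 1993, 1992, 1991."""
    recent = evaluations[:len(weights)]
    used = weights[:len(recent)]          # renormalize if fewer years exist
    return sum(w * e for w, e in zip(used, recent)) / sum(used)

# Invented scores of 6.0 (1993), 4.0 (1992), and 5.0 (1991) on a 7-point scale:
print(rolling_average([6.0, 4.0, 5.0]))   # 0.50*6 + 0.25*4 + 0.25*5 = 5.25
```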
-
Varying Expectations
Some departments establish different expectations for faculty at different stages of their careers. For example, greater versatility might reasonably be expected of full professors than of assistant professors. Or senior faculty might be expected to provide more broadly based institutional service (e.g., serving on University committees rather than mainly on departmental ones). Or beginning faculty might not be expected to have fully developed, ongoing research programs. Thus units that vary their expectations might evaluate the same performance at different levels for persons at different stages of their careers. For example, if an assistant professor and a professor published articles of equal quality in the same journal, the assistant professor's accomplishment might be judged the more meritorious.
-
Pitfalls in Developing Systems for Annual Evaluation
The goals of summative evaluation are positive, but the practices have certain dangers. Many of these can be avoided by sensitive implementation of the system, but some of them are best circumvented in the creation of the system itself.
One very serious threat is that the evaluation system may encourage undesirable behavior. It seems self-evident that evaluation influences behavior, and because the goal of evaluation is to recognize and reward excellence, it would appear that its influence would be salutary. But such is not always the case; either way, evaluation has an impact. Some evaluation practices serve to channel faculty improvement efforts into highly constructive activities. Requiring appropriate sources of data can contribute to positive changes, as in the use of student ratings of their learning or peer ratings of instructional materials or tests. Creating appropriate procedures can also contribute to achievement, as in devising a means to grant a fair amount of credit to long-term undertakings.
Other evaluation processes can divert efforts from fruitful pursuits into counterproductive ones. Consider some examples concerning the assessment of instruction. A plan for evaluating instruction that placed too much emphasis on the number of students taught could encourage a dilution of standards to increase enrollments. An overemphasis on student ratings of how well they like the content of the course (which is ordinarily desirable) could result in instructors trying to "sell" students in survey courses on inappropriate majors. (The purpose of some exploratory courses is to see if one likes the field; deciding that one does not like a field can reflect attainment of this goal.) A system that judged quality of instruction by student performance on specified achievement tests could promote a narrowing of the curriculum to that which is easily assessed with paper-and-pencil measures. In addition, such a focus can prompt conscious or unconscious teaching to the tests. An evaluation narrowly limited to classroom performance would discourage faculty service on graduate committees. It would also fail to take account of such important activities as maintaining office hours and advising students.
In evaluating research, narrow attention to quantity of publications (which is tempting to naive evaluators because it is both easy and objective) would encourage people to produce numerous "quick-and-dirty" works rather than fewer significant ones. Exclusive attention to works published in a single year would discourage long-term research projects unless the results would lend themselves to piecemeal publication. Overreliance on citation counts of published works could fail to account for the nature of the citations; some works are often cited because of their grave limitations! Evaluation requires professional judgment; counting alone does not suffice.
In service, looking primarily at the number of assignments people accepted rather than the quality of their contributions would place a premium on purely nominal participation. For example, it is far easier to count committee memberships than to assess effectiveness (or even counterproductivity) on the committees. But to limit the evaluation to mere counting is to abdicate a mandated administrative responsibility.
Clearly, all those taking part in the creation of evaluation systems should be mindful of the impact that evaluation can have upon the behavior of those evaluated. In devising effective systems, both faculty and administrators must carefully identify the activities and achievements they wish to encourage and then make sure that the system promotes and rewards those contributions. Likewise, all interested parties should identify possible outcomes they wish to avoid and then devise the system to avoid rewarding such behavior.
In addition, those who draft evaluation systems must consider the virtue of simplicity and the danger of "overkill." Because evaluation is an important and ego-involving activity, some departments develop systems that are extremely elaborate and detailed. As a consequence, the process becomes so demanding that its participants lose patience, its potential benefits are threatened, and the normal work of the unit suffers. For example, procedures which require colleagues to rate the quality of each individual publication are probably not worth the required investment of time. A global rating of each person's publications would probably suffice. In any event, the evaluation process should not be so elaborate and time consuming that its participants develop needlessly negative reactions to it.
Yet a certain amount of complexity is necessary to achieve the desired ends. For example, it is necessary to stipulate the relative importance of the major areas of professional activity. Likewise, it is necessary to use computational methods that ensure that the desired weights are, in fact, achieved.
The goal of achieving objectivity in evaluation can also lead to the establishment of systems characterized by excessive precision. Professional activities are too diverse to be accommodated by totally objective means of assessment. Thus it is not ordinarily appropriate to establish a hard and fast system of points for sole authorship, senior authorship, or junior authorship in national refereed journals, regional refereed journals, unrefereed journals, etc. There is simply too much diversity within any category of activities to make a rigid, predetermined point system defensible. For example, single-authored articles within a given journal often differ greatly in significance and importance so that some would typically merit several times as much credit as others. Effective faculty evaluation must be based on sound professional judgment, and the system of evaluation must allow for the exercise of judgment.
The necessity for professional judgment introduces a degree of subjectivity into evaluation that makes many people uncomfortable. The subjectivity can, however, be reduced. One way to limit the effects of subjectivity on the process is to make use of multiple judgments. Competent persons will ordinarily arrive at similar, although not identical, judgments regarding the merit of professional activities, and the pooled judgment of several competent professionals tends to be more reliable than the sole judgment of any one. Use of multiple raters clearly enhances the reliability with which such things as professional publications, instructional materials, student ratings of teaching effectiveness, and various service activities are evaluated. But there is a tradeoff; use of multiple judges increases both the cost and complexity of the enterprise. Used in moderation, this is likely to be a desirable tradeoff.
A department can organize its evaluation system to provide for multiple raters either by having selected individual faculty members offer advisory judgments to the head or by having a committee perform that task. The use of individual raters has the advantage of offering several independent judgments for comparison, while the committee provides the benefit of shared insights.
Employing multiple judgments does enhance the reliability of the evaluation, but it also involves costs. These include the professional time used and the reduction of confidentiality. When, as is normally the case, the additional raters come from within the department, the possible influence of personal friendships and animosities on the evaluation is a factor to be considered. These problems are exacerbated when the use of multiple raters is carried to an extreme, as in the occasional proposal to have every member of a department rate every other member. Such a process could consume significant amounts of time without any proportional improvement in the result.
-
Implementing the System
Evaluation for salary adjustment is an annual procedure, and University policies specify the most important responsibilities of the people to be evaluated and the initial evaluator for the unit, usually the department head, as well as those who review the evaluations at higher administrative levels. The purpose of this section is to provide some practical advice for department heads and faculties on implementing evaluation at the initial and most important level.
Unit heads must be aware that the University regards effective personnel evaluation as one of the most important administrative responsibilities. It requires informed, thoughtful judgment and should never be done superficially, carelessly, or with undue haste. At the same time, it is a task that often must be performed relatively quickly. The University's budgetary calendar imposes limits on the time available, but equally important is the strain that the process places on working relationships within a unit. Despite all efforts to emphasize its positive aspects, evaluation is a stressful experience for the people subject to it. It is, therefore, important that the evaluator carry out the process with all deliberate speed in order to conclude it in the shortest time consonant with accuracy and fairness.
Also related to timing is the need to set clear and reasonable deadlines for the submission of materials for evaluation. Because the process is an annual one, all but first-year faculty members should have a general idea of when the materials are due. Nevertheless, both professional courtesy and practicality dictate that the exact deadline for a given year be announced early enough to give people ample opportunity to assemble and submit the required materials. If any person, in spite of reasonable notice, fails to provide the necessary information, the head should send a written reminder. If, after being informed of the possible consequences, the person still does not make the materials available, the head may reasonably assign that individual an unsatisfactory evaluation. Since annual evaluation provides the basis for salary recommendations, a faculty member who fails to submit materials gives the head no justification for recommending a salary increase.
-
Communication
One of the most fundamental elements of effective evaluation is maintaining clear communication. There are two major communication requirements associated with the evaluation process. The first is to create a mutual understanding as to what the individual will be held accountable for in the coming year, the relative importance of each assignment, and the specific methods that will be used to assess performance. The second is to communicate the evaluation clearly and constructively.
Near the beginning of the evaluation period, each unclassified person will meet with the unit head to establish personal goals and objectives for the new evaluation period and to discuss their relative importance within the context of the unit's goals. The head should incorporate the results of this discussion into a statement of expectations for the individual for the coming year. This statement is intended to guard against misunderstandings regarding work assignments and expectations. Of course, it may be necessary to modify the statement as the year unfolds, since it is impossible to anticipate all contingencies, and modifications should be communicated in writing, again to avoid misunderstandings. The individual should also be reminded to consult the department's written evaluation document regarding the unit's common policies and procedures.
The written evaluation itself must be carefully prepared. Many department heads find it helpful to compose a draft for each individual early in the process and then to edit these drafts rigorously at a later date. To ensure that each evaluation receives adequate care and attention, it is best not to attempt more than two or three written evaluations in one sitting.
The written evaluation should contain four distinct parts: (1) a review of the individual's assignment and the weight attached to each type of responsibility, (2) a summary of the substantive evidence used to arrive at evaluative judgments, (3) succinct assessments of effectiveness in performing each responsibility and a statement of the overall evaluation, which must be consistent with the weights assigned to the individual ratings, and (4) where appropriate, formative suggestions for improvement.
After the evaluations have been drafted and edited, it is desirable to arrange an interview with each unclassified person to review the draft. The faculty member should be invited to correct any errors of fact and to supply additional documentation to correct possible errors of judgment. The purpose of the interview is to ensure that the "final evaluation," prepared after the interview, represents the most valid and fair statement of professional achievement possible.
Once the final evaluation has been prepared, the department head will recommend a salary adjustment for each person evaluated. The recommended percentage increases based on the annual evaluations for persons with higher levels of accomplishment shall exceed those for persons with lower levels of accomplishment. If merit salary categories are used, then the percentage increases recommended for persons in the first category will be higher than those for the second category, which in turn shall exceed those for the third category, etc.
Each unclassified person will review and must have the opportunity to discuss his or her final written evaluation with the individual who prepared it. Before the unit head submits it to the next administrative level, each unclassified person must sign a statement acknowledging the opportunity to review and to discuss the evaluation and his or her relative position within the unit. Because the amount of funds available for merit increases is generally not known at this time, specific percentage increases should not normally be discussed at this stage.
-
Temporary Assignments
A problem may arise when faculty members are asked to assume an unusually difficult assignment on a temporary basis. Examples include directing an important search, temporarily accepting administrative responsibilities, temporarily assuming the duties of a colleague, chairing a key committee, and directing an accreditation study. It is natural to want to express appreciation for performing these "extra duties" by recommending a larger merit salary increase, and this would often be appropriate. However, this may not always be the most appropriate method of reward because of the one-time nature of such duties. It may be preferable in certain cases to find some other way to reward performance of temporary duties on a temporary basis (e.g., payment to a Developmental Reserve Account, a summer appointment, support of travel, purchase of equipment, or provision of part-time student employees for a year).
-
Potential Errors
-
The Error of Unintended Weighting
This problem arises at the stage of integrating the evaluations of individual areas of effort into an equitable overall assessment. There are two major steps in combining separate ratings of component parts (e.g., teaching, research, service) into total ratings. First, a decision or judgment must be made as to how much weight each part is to bear, and these weights should be specified either in the unit's written guidelines or in the head's assignments of individual responsibilities. Second, the evaluator must apply the agreed upon weights in a competent fashion. The second step is purely mechanical, but it is not as simple as it may appear.
Although it seems counterintuitive to some people, the weight of each part of a total score is not a function of possible points. Nor is it a function of average scores awarded for the respective parts. The weight that the respective parts contribute to the total is approximately proportional to their standard deviations.
An elementary example will illustrate the problem and a procedure for dealing with it. This simple procedure approximates the results that could be achieved with more sophisticated statistical tools, and for the purpose of annual evaluations, it will generally yield quite reasonable results. Suppose that an evaluation system includes only two component areas, research and teaching. Further suppose that the department wishes them to contribute equally to the overall evaluation. Finally, suppose that the department head rates each component on a seven-point scale, with a score of 7 representing the highest possible ranking. On completing the evaluations of the component areas, the head discovers that the ratings assigned in research have a much wider range than those in teaching:
Component    Range
Research     1-7
Teaching     4-6
Each of two faculty members, Dr. Brown and Dr. Green, is rated at the top of the department on one of these components and at the bottom on the other. Since the department intended the two elements to count equally, the two professors ought to receive an equal number of points, indicating that they have been evaluated at the same level. But this is not the case. Dr. Brown is at the top of the department in research, with a score of 7 points, and at the bottom in teaching, with a rating of 4, yielding a total of 11 points. Dr. Green, on the other hand, is at the bottom in research (1) and at the top in teaching (6), giving him a total of only 7 points.
             Research    Teaching    Total
Dr. Brown    7           4           11
Dr. Green    1           6           7
Contrary to intentions, the two components did not count the same. Instead, the component with the larger range of ratings has clearly weighed more heavily in determining the final result.
In order to give equal weight to each component, the ranges need to be made equal. The ratings for the research component have a range from 1 to 7, so the difference between the top and bottom ratings is 6. In teaching, the range is from 4 to 6, and the difference is only 2. In other words, the range of ratings in research is three times the range of ratings in teaching. Thus, if each person's rating in teaching is multiplied by 3, the range will be 12 to 18, and the difference will be 6, i.e., equal to that of the unadjusted scores in research. A recomputation of the rankings using the new scores for teaching gives each professor a total of 19.
             Research    Teaching    Total
Dr. Brown    7           12          19
Dr. Green    1           18          19
By rendering the range of the two distributions equal, their relative contribution to the total has been made equal as well.
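The short sketch below carries out the adjustment just described, using the ratings from the example above and applying the intended 50-50 weights after rescaling. (A unit could equally well rescale by standard deviations rather than ranges; the logic is the same, and the Python code is offered only as an illustration.)

```python
# Illustrative sketch: rescale each component's ratings so that all components
# have the same spread before the intended weights are applied.
ratings = {
    "research": {"Dr. Brown": 7, "Dr. Green": 1},
    "teaching": {"Dr. Brown": 4, "Dr. Green": 6},
}
weights = {"research": 0.5, "teaching": 0.5}    # intended equal weighting

spreads = {c: max(r.values()) - min(r.values()) for c, r in ratings.items()}
target = max(spreads.values())                  # bring every component to this spread

totals = {}
for component, by_person in ratings.items():
    scale = target / spreads[component]         # research x1, teaching x3 here
    for person, score in by_person.items():
        totals[person] = totals.get(person, 0.0) + weights[component] * score * scale

print(totals)   # both professors receive the same weighted total (9.5)
```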
Many elementary texts in statistics or educational measurement address this topic and offer solutions. Persons responsible for implementing a department's evaluation system have a responsibility to become familiar with this topic. Assistance can also be obtained through the Office of Planning and Evaluation services.
-
The Error of the Permanent Doghouse
Most individuals display some level of inconsistency in their performance, and sometimes a faculty member has an exceptionally bad year or performs a particular assignment very poorly. If, in evaluating performance in subsequent years, the department head keeps referring to (or silently keeps recalling) these disappointing performances, she or he is committing the error of the permanent doghouse. The merit policy focuses on performance in the specified evaluation period. It intentionally provides individuals with the opportunity to "turn a new page." At the same time, it discourages individuals from resting on their laurels. The effective evaluator is able to set aside memories of the past and focus exclusively on performance during the preceding year. It is, of course, equally important to avoid the inverse error of the permanent halo in which an evaluator's rating of current productivity is favorably biased by the faculty member's past achievements.
-
The Error of Disproportionality
Evaluative evidence needs to be placed in perspective, and no single finding should override the conclusion suggested by the bulk of the data. The error of disproportionality occurs when one aspect of the evaluation is given excessive weight in arriving at the overall evaluation. This would be the case if a faculty member were rated as "needs improvement" in teaching after receiving a low assessment on one of the six courses taught during the year, even though the evaluations of the other five were all above average. The error of disproportionality would also be present if a head granted a high overall rating based primarily on an individual's exceptional contributions in non-directed service to the department when that had been designated as a minor responsibility.
-
The Error of the Unsupported Judgment
This error occurs when evaluations are made without reference to supporting data. Simple decrees that an individual is in the "top category" or "needs improvement" do not ensure that the evaluation is fair and valid. Because evaluations require human judgment, it is unlikely that any two observers will agree precisely. Nevertheless, if this error is to be avoided, enough relevant evidence should be presented that any two credible observers would arrive at similar evaluative conclusions.
-
The Error of Non-evaluation
It sometimes happens that those responsible for preparing evaluations fail to offer judgments of effectiveness or of the "value" of a given performance. Instead, the written "evaluation" consists exclusively of a thorough description of professional responsibilities. Details of what activities the person engaged in do not describe how well those activities were performed. When description is confused with evaluation, the error of non-evaluation occurs.
-
The Error of Bashing the Beginner
New faculty members, particularly those at the beginning of their careers, cannot reasonably be expected to be as productive as they will be later. If a department makes no provision for the need for startup time in evaluating the work of beginners, these persons are placed at unfair and potentially demoralizing disadvantages. One way or another, adequate systems of annual evaluation provide time for newcomers to "get up to speed." For example, a department could have less ambitious publication expectations for people in their first two or three years out of graduate school. Or a department that employs a rolling average could specify that average "ratings" shall be entered into the data for the years preceding the faculty member's employment by Kansas State University (and possibly also, where it would be advantageous, for the first year of employment). Or a department could routinely move any below-average ratings of beginners some specified amount toward the mean, e.g., two-thirds of the way for the first evaluation year and one-third of the way for the second year.
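A sketch of the last option appears below; the fractions follow the example above, while the seven-point rating scale and the unit mean are assumed for illustration only.

```python
# Illustrative sketch: move a beginner's below-average rating part of the way
# toward the unit mean, two-thirds of the way in the first evaluation year,
# one-third in the second, and not at all thereafter.
def adjusted_rating(rating: float, unit_mean: float, year_at_university: int) -> float:
    shrink = {1: 2 / 3, 2: 1 / 3}.get(year_at_university, 0.0)
    if rating < unit_mean:
        return rating + shrink * (unit_mean - rating)
    return rating

print(adjusted_rating(3.0, unit_mean=5.0, year_at_university=1))   # about 4.33
print(adjusted_rating(3.0, unit_mean=5.0, year_at_university=2))   # about 3.67
```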
-
Persistent Inequities
Systems of annual evaluation are intended to result in equitable salaries, but inequities do sometimes appear. Therefore, each year salaries should be reviewed in light of the cumulative relative contributions of the faculty. If such a review reveals significant inequities that cannot reasonably be rectified within the limits of a single year's allocation, these inequities should be brought to the attention of the dean and handled within the guidelines for justifying and making salary adjustments on bases outside the annual evaluation.