Evaluation Definition: What is Evaluation?
==What is evaluation?<ref>[http://www.ed.gov/offices/OUS/PES/primer1.html Understanding Evaluation: The Way to Better Prevention Programs], a publication written by Lana Muraskin, a consultant to Westat Inc. under contract to ED.</ref>==
As defined by the American Evaluation Association, evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness.
Evaluation is the systematic collection and analysis of data needed to make decisions, a process in which most well-run programs engage from the outset. Here are just some of the evaluation activities that are already likely to be incorporated into many programs or that can be added easily:
- Pinpointing the services needed: for example, finding out what knowledge, skills, attitudes, or behaviors a program should address
- Establishing program objectives and deciding the particular evidence (such as the specific knowledge, attitudes, or behavior) that will demonstrate that the objectives have been met. A key to successful evaluation is a set of clear, measurable, and realistic program objectives. If objectives are unrealistically optimistic or are not measurable, the program may not be able to demonstrate that it has been successful even if it has done a good job.
- Developing or selecting from among alternative program approaches: for example, trying different curricula or policies and determining which ones best achieve the goals
- Tracking program objectives: for example, setting up a system that shows who gets services, how much service is delivered, how participants rate the services they receive, and which approaches are most readily adopted by staff
- Trying out and assessing new program designs: determining the extent to which a particular approach is being implemented faithfully by school or agency personnel or the extent to which it attracts or retains participants.
Through these types of activities, those who provide or administer services determine what to offer and how well they are offering those services. In addition, evaluation in education can identify program effects, helping staff and others to find out whether their programs have an impact on participants' knowledge or attitudes.
The different dimensions of evaluation have formal names: process, outcome, and impact evaluation.
Rossi and Freeman (1993) define evaluation as "the systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of ... programs." There are many other similar definitions and explanations of "what evaluation is" in the literature. Our view is that, although each definition, and in fact, each evaluation is slightly different, there are several different steps that are usually followed in any evaluation. It is these steps which guide the questions organizing this handbook. An overview of the steps of a "typical" evaluation follows.
Process Evaluations describe and assess program materials and activities. Examination of materials is likely to occur while programs are being developed, as a check on the appropriateness of the approach and procedures that will be used in the program. For example, program staff might systematically review the units in a curriculum to determine whether they adequately address all of the behaviors the program seeks to influence. A program administrator might observe teachers using the program and write a descriptive account of how students respond, then provide feedback to instructors. Examining the implementation of program activities is an important form of process evaluation. Implementation analysis documents what actually transpires in a program and how closely it resembles the program's goals. Establishing the extent and nature of program implementation is also an important first step in studying program outcomes; that is, it describes the interventions to which any findings about outcomes may be attributed.
Outcome Evaluations study the immediate or direct effects of the program on participants. For example, when a 10-session program aimed at teaching refusal skills is completed, can the participants demonstrate the skills successfully? This type of evaluation is not unlike what happens when a teacher administers a test before and after a unit to make sure the students have learned the material. The scope of an outcome evaluation can extend beyond knowledge or attitudes, however, to examine the immediate behavioral effects of programs.
Impact Evaluations look beyond the immediate results of policies, instruction, or services to identify longer-term as well as unintended program effects. They may also examine what happens when several programs operate in unison. For example, an impact evaluation might examine whether a program's immediate positive effects on behavior were sustained over time. Some school districts and community agencies may limit their inquiry to process evaluation. Others may have the interest and the resources to pursue an examination of whether their activities are affecting participants and others in a positive manner (outcome or impact evaluation). The choice should be made based upon local needs, resources, and requirements.
Regardless of the kind of evaluation, all evaluations use data collected in a systematic manner. These data may be quantitative, such as counts of program participants, amounts of counseling or other services received, or incidence of a specific behavior. They also may be qualitative, such as descriptions of what transpired at a series of counseling sessions or an expert's best judgment of the age-appropriateness of a skills training curriculum. Successful evaluations often blend quantitative and qualitative data collection. The choice of which to use should be made with an understanding that there is usually more than one way to answer any given question.
Why Do Evaluation?
Evaluations serve many purposes. Before assessing a program, it is critical to consider who is most likely to need and use the information that will be obtained and for what purposes. Listed below are some of the most common reasons to conduct evaluations. These reasons cut across the three types of evaluation just mentioned. The degree to which the perspectives of the most important potential users are incorporated into an evaluation design will determine the usefulness of the effort.
Evaluation for Project Management
Administrators are often most interested in keeping track of program activities and documenting the nature and extent of service delivery. The type of information they seek to collect might be called a "management information system" (MIS). An evaluation for project management monitors the routines of program operations. It can provide program staff or administrators with information on such items as participant characteristics, program activities, allocation of staff resources, or program costs. Analyzing information of this type (a kind of process evaluation) can help program staff to make short-term corrections ensuring, for example, that planned program activities are conducted in a timely manner. This analysis can also help staff to plan future program direction such as determining resource needs for the coming school year.
Operations data are important for responding to information requests from constituents, such as funding agencies, school boards, boards of directors, or community leaders. Also, descriptive program data are one of the bases upon which assessments of program outcome are built; it does not make sense to conduct an outcome study if results cannot be connected to specific program activities. An MIS also can keep track of students when the program ends to make future follow-up possible.
Evaluation for Staying on Track
Evaluation can help to ensure that project activities continue to reflect project plans and goals. Data collection for project management may be similar to data collection for staying on track, but more information might also be needed. An MIS could indicate how many students participated in a prevention club meeting, but additional information would be needed to reveal why participants attended, what occurred at the meeting, how useful participants found the session, or what changes the club leader would recommend. This type of evaluation can help to strengthen service delivery and to maintain the connection between program goals, objectives, and services.
Evaluation for Project Efficiency
Evaluation can help to streamline service delivery or to enhance coordination among various program components, lowering the cost of service. Increased efficiency can enable a program to serve more people, offer more services, or target services to those whose needs are greatest. Evaluation for program efficiency might focus on identifying the areas in which a program is most successful in order to capitalize upon them. It might also identify weaknesses or duplication in order to make improvements, eliminate some services, or refer participants to services elsewhere. Evaluations of both program process and program outcomes are used to determine efficiency.
Evaluation for Project Accountability
When it comes to evaluation for accountability, the users of the evaluation results likely will come from outside of program operations: parent groups, funding agencies, elected officials, or other policymakers. Be it a process or an outcome evaluation, the methods used in accountability evaluation must be scientifically defensible, and able to stand up to greater scrutiny than methods used in evaluations that are intended primarily for "in-house" use. Yet even sophisticated evaluations must present results in ways that are understandable to lay audiences, because outside officials are not likely to be evaluation specialists.
Evaluation for Program Development and Dissemination
Evaluating new approaches is very important to program development in any field. Developers of new programs need to conduct methodical evaluations of their efforts before making claims to potential users. Rigorous evaluation of longer-term program outcomes is a prerequisite to asserting that a new model is effective. School districts or community agencies that seek to disseminate their approaches to other potential users may wish to consult an evaluation specialist, perhaps a professor from a local university, in conducting this kind of evaluation.
Three Levels of Evaluation<ref>Evaluation Handbook, W.K. Kellogg Foundation (PDF)</ref>
Project-level evaluation is the evaluation that project directors are responsible for locally. The project director, with appropriate staff and with input from board members and other relevant stakeholders, determines the critical evaluation questions, decides whether to use an internal evaluator or hire an external consultant, and conducts and guides the project-level evaluation. The Foundation provides assistance as needed. The primary goal of project-level evaluation is to improve and strengthen Kellogg-funded projects.
Ultimately, project-level evaluation can be defined as the consistent, ongoing collection and analysis of information for use in decision making.
Consistent Collection of Information
If the answers to your questions are to be reliable and believable to your project’s stakeholders, the evaluation must collect information in a consistent and thoughtful way. This collection of information can involve individual interviews, written surveys, focus groups, observation, or numerical information such as the number of participants. While the methods used to collect information can and should vary from project to project, the consistent collection of information means having thought through what information you need, and having developed a system for collecting and analyzing this information.
The key to collecting data is to collect it from multiple sources and perspectives, and to use a variety of methods for collecting information. The best evaluations engage an evaluation team to analyze, interpret, and build consensus on the meaning of the data, and to reduce the likelihood of wrong or invalid interpretations.
Use in Decision Making
Since there is no single, “best” approach to evaluation which can be used in all situations, it is important to decide the purpose of the evaluation, the questions you want to answer, and which methods will give you usable information that you can trust. Even if you decide to hire an external consultant to assist with the evaluation, you, your staff, and relevant stakeholders should play an active role in addressing these questions. You know the project best, and ultimately you know what you need. In addition, because you are one of the primary users of evaluation information, and because the quality of your decisions depends on good information, it is better to have “negative” information you can trust than “positive” information in which you have little faith. Again, the purpose of project-level evaluation is not just to prove, but also to improve.
People who manage innovative projects have enough to do without trying to collect information that cannot be used by someone with a stake in the project. By determining who will use the information you collect, what information they are likely to want, and how they are going to use it, you can decide what questions need to be answered through your evaluation.
Project-level evaluation should not be a stand-alone activity, nor should it occur only at the end of a program. Project staff should think about how evaluation can become an integrated part of the project, providing important information about program management and service delivery decisions. Evaluation should be ongoing and occur at every phase of a project’s development, from preplanning to start-up to implementation and even to expansion or replication phases. For each of these phases, the most relevant questions to ask and the evaluation activities may differ. What remains the same, however, is that evaluation helps project staff and community partners make effective decisions to continuously strengthen and improve the initiative.