Evaluation Research Essay

Evaluation research originally emerged in the education and human services arenas as a means for improving social programs, but it now focuses more on determining the overall effectiveness of programs. Both approaches are valuable in that they yield evidence-based findings that can be used to secure funding or to develop and implement effective programs.

One type of evaluation research, formative evaluation, informs the early stages of program development. The evaluation researcher collects and interprets data about how a program operates in its early stages and then translates this information into concrete suggestions for improvement to be shared with program staff. For example, if a local school board requests a formative evaluation of a new Holocaust Education program, the researcher would carefully review the program’s goals, objectives, curriculum, and instructional materials, and would also collect data from students, teachers, and administrators about how the program functions. After analyzing and interpreting the data, the evaluator could offer suggestions for improvement in such areas as curriculum, instructional materials, and overall program management.

The second type of evaluation research, process evaluation, determines whether a given program was implemented as designed. Understanding the sources of variation in program delivery allows evaluators to better understand how a program works. Process evaluations are especially important for multisite programs such as the Gang Resistance Education and Training (G.R.E.A.T.) program. The focus is not to determine whether a program works, but rather to determine how it works and what variations in its delivery could contribute to its overall effect.

Summative evaluation, the third type of evaluation, focuses on determining a program’s outcomes. In other words, did the program work? Did it meet its stated goals and objectives? This type of evaluation research resonates strongly with funding agencies. For example, in the 1990s, the National Cancer Institute funded several health promotion programs that reached out to Hispanic women to raise awareness, increase knowledge, and improve access to cancer screening opportunities. To be eligible for funding, each proposed program had to include a summative evaluation component, and proposal designs that compared the intervention group with a comparison group held special appeal for the funding agency.

Program evaluations, regardless of type, typically proceed through a sequence of stages. The first two stages involve understanding the essence of the program: stage 1 involves formulating the program’s goals and objectives, while stage 2 involves developing an understanding of the program’s delivery, setting, and participants. Stage 3 focuses on designing the evaluation. The design can include both qualitative and quantitative research methodologies, and the choice depends on several factors, including the purpose of the evaluation and the type of information needed. The fourth stage involves collecting, analyzing, and interpreting the data. The fifth and final stage involves using the results to improve the program or to verify that it works.

Evaluation Research Designs

Evaluation researchers typically use a variety of research designs when evaluating a program. Design choices depend on several aspects of the evaluation, including the intended audience, available resources, and ethical concerns. Three main types of evaluation research designs exist: experimental designs, quasi-experimental designs, and qualitative research designs.

Experimental designs are typically used in summative evaluations to assess whether a program worked. Because this design involves the random assignment of participants to an experimental or a control group, researchers can be reasonably certain that any outcome differences between the two groups result from program participation rather than from preexisting group differences. For example, if an elementary school principal applies for funding for a social-skills training program for fifth-grade students, she must include an evaluation design in her proposal. Recognizing that experimental design is a stringent, respected means of evaluating program effects, she proposes that her school’s fifth-grade students be randomly assigned to either an experimental group or a control group. She hypothesizes that students assigned to the experimental group will demonstrate more positive social skills than those in the control group.
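
The following minimal sketch illustrates that logic with invented data: the roster size, the 0-100 score scale, and the built-in 8-point “program effect” are all hypothetical, and a real evaluation would use a validated social-skills instrument and a formal significance test rather than a bare comparison of means.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical roster of 40 fifth-grade students, identified by ID.
students = list(range(1, 41))

# Random assignment: shuffle the roster, then split it in half.
random.shuffle(students)
experimental, control = students[:20], students[20:]

# Simulated post-program social-skills scores (0-100 scale); the
# +8-point "program effect" is invented purely for this sketch.
scores = {s: random.gauss(70 + (8 if s in experimental else 0), 10)
          for s in students}

exp_mean = statistics.mean(scores[s] for s in experimental)
ctl_mean = statistics.mean(scores[s] for s in control)

# Because assignment was random, the difference in group means is an
# unbiased estimate of the program effect (a real analysis would add
# a t-test or similar to rule out chance differences).
print(f"Experimental mean: {exp_mean:.1f}")
print(f"Control mean:      {ctl_mean:.1f}")
print(f"Estimated effect:  {exp_mean - ctl_mean:.1f} points")
```

The key point the sketch makes concrete is that randomization, not any statistical adjustment, is what licenses attributing the mean difference to the program.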

Quasi-experimental designs are more commonly used than the more stringent experimental design. Examples include the nonequivalent control group design and the time-series design. Like experimental designs, the nonequivalent control group design involves both an experimental and a control group. However, because random assignment is not possible, the researcher must find an existing “control” group that is similar to the experimental group in terms of background and demographic variables. Despite this matching process, the two groups may still differ on important characteristics, so pretests are administered to measure baseline differences on the outcome variable. Controlling for these differences allows the researcher to isolate program effects. The other quasi-experimental design, the time-series design, involves measurements made over a designated period of time, such as a study of traffic accident rates before and after lowering the speed limit. Time-series designs work in situations in which a control group is not possible.
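
One common way to control for baseline differences is a gain-score (difference-in-differences) comparison, sketched below with invented pretest and posttest data; the five-person groups and all scores are hypothetical.

```python
import statistics

# Invented pretest/posttest scores for a nonequivalent control group
# design. The groups were not randomly assigned, so their baselines
# (pretest means) differ.
experimental = {"pre": [52, 55, 49, 60, 58], "post": [68, 70, 63, 75, 72]}
control      = {"pre": [58, 61, 57, 64, 60], "post": [62, 66, 60, 69, 64]}

def mean_gain(group):
    """Average posttest-minus-pretest change for one group."""
    return statistics.mean(post - pre
                           for pre, post in zip(group["pre"], group["post"]))

# Difference-in-differences: the experimental group's gain beyond the
# control group's gain. Subtracting the control gain adjusts for the
# baseline difference between the nonequivalent groups.
effect = mean_gain(experimental) - mean_gain(control)
print(f"Experimental gain: {mean_gain(experimental):.1f}")
print(f"Control gain:      {mean_gain(control):.1f}")
print(f"Adjusted effect:   {effect:.1f} points")
```

Analysis of covariance is another common adjustment; either way, the correction accounts only for measured baseline differences, not for unmeasured ones, which is why quasi-experimental results are read more cautiously than experimental ones.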

Qualitative research methodologies are another means of obtaining evaluation data. They are especially useful because they can provide in-depth information about program processes, which in turn can inform program modifications. Focus groups, in-depth interviews, and content analyses of program materials are all useful tools for the program evaluator. For example, suppose that in a process evaluation of a school-based parent-involvement program, Spanish-speaking parents are found to be less likely to attend sessions than English-speaking parents, even though sessions are available in Spanish. The evaluator decides to find out why by conducting focus groups with Spanish-speaking parents. The results indicate that although Spanish-speaking parents want to be involved in their children’s schooling, they view their limited English proficiency as a barrier to effective involvement. Program staff can use this information to modify recruitment strategies, communicating to Spanish-speaking parents that involvement encompasses more than helping with schoolwork.

From Research to Practice

It might appear that evaluation research would have a significant impact on whether a program continues. However, this is not always the case, particularly when a program is immensely popular and taps into the public’s perception of what should work. The Drug Abuse Resistance Education (D.A.R.E.) program is a good example of a program that receives continued funding despite a large body of evaluation research indicating that it has no long-term effect on preventing or reducing adolescent drug use. The D.A.R.E. case illustrates that program evaluation can be effective only when program stakeholders, those with a vested interest in the program’s success, actually use the results either to modify the existing program or to design a more effective one.

