How does evaluation help in HIV prevention?

what is evaluation?

Most HIV prevention service providers conduct evaluation and data collection activities on a regular basis, although they may not consider it evaluation. Writing case notes during a prevention case management session, discussing client feedback on program services, watching needle exchange in action, taking notes at staff meetings: these are all examples of “informal” data collection that happen every day. Evaluation provides systems for collecting data and then helps providers make sense of the data they collect so that they can use it in their work. Evaluation can help providers increase their knowledge, better understand the populations they serve, improve programs, and make decisions. Evaluation is a way to identify program strengths and areas for improvement.1 It is a way for service providers to be responsible to the communities in which they work, to show accountability to funders, and to ensure that programs have the intended result. Evaluation can be integrated into all phases of planning and implementing an intervention:

  • Before (formative evaluation, needs assessment): To understand the context of the lives of community members and what puts them at risk, how they avoid risk, and what kinds of resources they need to reduce risk and maintain health and wellness. This can help shape the program and provide baseline data to help measure any change.
  • During (process monitoring and evaluation, quality assurance): To find out what actually occurs in practice and if the program is operating as planned; document interactions with participants; discover which components work best and if the program meets the needs of the participants. This can help develop any changes to the program.
  • After (outcome and impact evaluation): To determine what, if any, effect (short and long-term) the program has had on the participants, their partners/families, program staff, and the community. At this time, program staff can reassess their objectives and use findings to further refine their programs.

why do evaluation?

Evaluation can help agencies work more efficiently and improve their programs. It is not a substitute for staff and providers’ experience and knowledge, but it can offer complementary information. Using systematic data collection to design an intervention or program can help agencies make smart choices about what elements to incorporate and what behaviors, influences, and life issues to address.1 Evaluation can help agencies compete successfully for funds and be precise in seeking funding. Funders often require agencies to show that they have systematically thought through current and proposed interventions.2 Evaluation can also help agency staff know exactly what services best serve their program participants, so that they respond not to every funding opportunity but only to those that specifically relate to their participants’ needs.1

what’s a good evaluation question?

Good evaluation questions come from well-written program objectives that are both realistic and measurable. When agencies design program activities based on the desired outcome (for example, conducting outreach to increase the number of women tested for HIV), designing an evaluation is much easier.3 It is hard to evaluate programs that lack direction. The mark of a good evaluation question is that you care about the answer. Is it more helpful to know the number of condoms handed out in a month, or what clients do with the condoms after they take them? A good question should also be answerable. Many agencies may want to know if their programs are effective, but such broad questions may require more time, money, and staff than are available. For example, instead of asking, “Has our agency lowered HIV rates among gay men?” a more helpful and more easily answerable question might be, “Have the men who attended our living room discussion program reduced their use of methamphetamine or increased their use of condoms?”

what are perceptions of evaluation?

A common perception of evaluation is that it is only used by funders to pass judgment, to “prove” that a program worked or failed. It is no wonder that many agencies are wary of the idea of evaluation.4,5 Yet evaluation is a way for providers to know for themselves what works, what doesn’t work, what adjustments might improve their program, and whether what they are doing makes a difference for their clients and community. Another perception is that evaluation is often “done to” an agency. However, evaluation can be incorporated into regular program planning, and all agency members can be active participants. This strengthens the process and ensures that results will be understood and used. Evaluators, front-line staff, and representative community members or program participants should be included in all phases: designing evaluation questions, reviewing forms or guides, discussing results, and brainstorming action points.6

how is an evaluation conducted?

There are many ways for agencies to conduct an evaluation. One is to train staff in evaluation or hire an internal staff person with research experience to be in charge of evaluation and data. This approach may work best at a large agency with many resources. For example, AIDS Project Los Angeles’s evaluation team worked with their Commercial Sex Venues (CSV) initiative to design and evaluate risk-reduction activities in nine CSVs in Los Angeles County. Program and evaluation staff collaboratively developed and implemented formative research, pre/post evaluations, outreach forms, program evaluation, and an annual needs assessment of patrons. CSV patrons reported decreased unprotected sex in CSVs at a one-year follow-up.7

Another approach is for an agency to hire an external evaluator, either for a one-time or an ongoing evaluation. This can be less expensive than hiring a staff person, can build agency capacity to conduct some aspects of evaluation internally, and may be perceived by funders as having less bias than an internal evaluator. Agencies have had both positive and negative experiences with external evaluators. These evaluations are more likely to succeed when there is a strong partnership; agreement about roles, responsibilities, and expectations; and a dedication to collaboration between the evaluator and the agency. When hiring an evaluator, agencies can look for someone with a history of successful collaboration, extensive skills and experience with evaluation in a service setting, and knowledge of or experience with the program population.8

A third approach is for an agency and a local evaluator to work collaboratively to develop evaluation approaches and build agency capacity. The Chicago HIV Prevention and Adolescent Mental Health Project (CHAMP) is a long-term collaboration between researchers at the University of Chicago and parents, schools, and community agencies. Together, they have designed, implemented, and evaluated an HIV prevention program for Black youth and families. The collaboration began in 1995 and continues today.9

As HIV prevention service providers have become more involved in evaluation, technical assistance and capacity building assistance (CBA) programs for providers have increased. The Centers for Disease Control and Prevention (CDC) has funded capacity building that integrates program planning, monitoring, and evaluation activities in agencies. A national network of CBA providers builds organizational, HIV prevention program, and evaluation capacity in agencies that serve Asian & Pacific Islanders, Latino/as, American Indians/Alaska Natives, and African Americans.10

what still needs to be done?

Without specific time and money set aside, evaluation can get lost in the crisis-oriented world of client services. Agencies should be encouraged to foster an atmosphere of learning alongside service provision. Agencies can write evaluation time into job descriptions and allow time for staff to read and discuss what they’ve learned in regular meetings. Funders need to cover all costs related to evaluation so that it can be appropriately staffed; often-overlooked costs include staff time and training, data entry, data analysis, write-up of findings, and dissemination. Making sure that evaluation findings are shared is crucial. Agency staff need to write up their findings and present them at regional and national conferences. Funders need to share reports with agencies and other funders and synthesize the lessons learned for all their grantees. Health departments can organize regional report-backs and encourage networking among agencies with similar evaluation needs.


Says who?

1. Gandelman AA, DeSantis LM, Rietmeijer CA. Assessing community needs and agency capacity—an integral part of implementing effective evidence-based interventions. AIDS Education and Prevention. 2006;18:32-43.
2. Holtgrave DR, Gilliam A, Gentry D, et al. Evaluating HIV prevention efforts to reduce new infections and ensure accountability. AIDS Education and Prevention. 2002;14SA:1-
3. Nu’Man J, King W, Bhalakia A, et al. A framework for building organizational capacity integrating planning, monitoring, and evaluation. Journal of Public Health Management and Practice. 2007;Suppl:S24-32.
4. Kegeles SM, Rebchook GM. Challenges and facilitators to building program evaluation capacity among community-based organizations. AIDS Education and Prevention. 2005;17:284-299.
5. Napp D, Gibbs D, Jolly D, et al. Evaluation barriers and facilitators among community-based HIV prevention programs. AIDS Education and Prevention. 2002;14:38-48.
6. Gilliam A, Davis D, Barrington T, et al. The value of engaging stakeholders in planning and implementing evaluations. AIDS Education and Prevention. 2002;14:5-17.
7. Mutchler M, Colemon L. A model for community-based participatory evaluation: benefits, challenges, and lessons of evaluating HIV prevention in commercial sex venues. Presented at the 2005 National HIV Prevention Conference, Atlanta, GA. #M3-D0601.
8. Center for AIDS Prevention Studies. Working Together: A Guide to Collaborative Research in HIV Prevention. 2001.
9. Baptiste DR, Paikoff RL, McKay MM, et al. Collaborating with an urban community to develop an HIV and AIDS prevention program for black youth and families. Behavior Modification. 2005;29:370-416.
10. Taveras S, Duncan T, Gentry D, et al. The evolution of the CDC HIV Prevention Capacity-building Assistance Initiative. Journal of Public Health Management & Practice. 2007;13S:S8-S15.

Evaluation resources:

Manuals

  • Good Questions, Better Answers: A Formative Research Handbook
  • Program Evaluation, NMAC

Researchers

  • Behavioral and Social Science Volunteer Program (BSSV)

Training

  • STD/HIV Prevention Training Centers

Tools

  • AETC National Evaluation Center
  • American Evaluation Association
  • CDC Program Evaluation Resources
  • The Community Toolbox
  • Virtual Program Evaluation Consultant (VPEC)

*All websites accessed 10/2007


Prepared by Dara Coan,* Oscar Macias,* Janet Myers,** Kevin Khamarko*** *San Francisco Public Health Department, **CAPS, ***AETC Evaluation Center December 2007. Fact Sheet #44ER Special thanks to the following reviewers of this fact sheet: Carl Bell, Alice Gandelman, Ellen Goldstein, Carol Kong, Gene Shelley, Stacy Vogan, Duane Wilkerson. Reproduction of this text is encouraged; however, copies may not be sold, and the University of California San Francisco should be cited as the source. Fact Sheets are also available in Spanish. To receive Fact Sheets via e-mail, send an e-mail to [email protected] with the message “subscribe CAPSFS first name last name.” ©December 2007, University of CA.