Evaluation Use
Use of an evaluation's findings (i.e., lessons learned) and process use (i.e., evaluation use that takes place before lessons learned are generated and feedback is initiated) are two of the clearest, simplest examples of how evaluations get used. (Fleischer and Christie (2009) offer other examples, but because those lack clear definitions, they won't be discussed here.) By now there is broad agreement that the evaluation process itself generates a great deal of useful information, information that can increase involvement and learning.
Instituting practices that foster involvement in the evaluation process will lead to increased evaluation use, right? This idea seems like common sense, but why are common sense concepts so often hard to implement or forgotten altogether?
I'd offer that common sense ideas often sound reasonable but remain a bit too abstract to act on. For these ideas to move from the abstract to implementation, some preliminary questions should be answered, including:
- Does process use play a role in my outcomes (e.g., through learning)?
- How is each stakeholder going to use the evaluation?
- Is the evaluator going to be in charge of involving stakeholders and facilitating the process?
Fleischer and Christie's (2009) study found that nearly three quarters of evaluation participants considered nonuse of evaluation findings a concern. They felt that a lack of understanding of, and transparency about, the evaluation process made it difficult for people to place much value in the results. Stakeholders would reject findings based on personal beliefs and values rather than on the data the evaluation produced.
Creating understanding and a transparent evaluation process takes resources and planning, not to mention that key stakeholders need to regard evaluation use as a priority. Because evaluation use matters to any evaluation, these concepts should be made explicit from the beginning so the evaluator can properly orchestrate the dissemination of different reports to the right users. For example, suppose you have a program whose overall anticipated outcome is reducing youth recidivism. The funders and directors of the program will be especially interested in the primary findings, while those implementing the program and working more intimately with the families may be more interested in the process use outcomes.
This approach doesn't mean generating numerous lengthy reports and dramatically increasing costs as a result. It just means more planning and organization at the beginning: setting a timeline for disseminating key pieces of information to relevant stakeholders throughout the program, and breaking a larger report into manageable chunks for dissemination to specific stakeholders, depending on the needs they identified at the start of the evaluation.
Breaking a topic or a request into manageable chunks can help clarify what is involved and promote evaluation use.
Resource: Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. _American Journal of Evaluation, 30_(2), 158-175.