What constitutes a good evaluation?
After a bit of a hiatus from the blog, we are happy to be back! We plan to post a new blog entry each month and hope our entries spark some discussion in the comments section.
So to get things rolling, we have a question to pose to you...
What constitutes a good evaluation?
Think about that for a minute.
Good questions? Good design? Good data? Good analysis? Good report writing? Good goals?
Should I even give the shallow answer "it depends" or "it is a combination of them all"? Why do these answers satisfy us (or at least get us to move on to the next question)? Probably because they are reasonable and make some sense; however, a response to a question is different from an answer. Giving an answer to the overall question of what makes a good evaluation will take some thought.

An article by Pullin and Knight (2009) in the New Directions for Evaluation journal discussed data credibility. Their opening paragraph offers that collecting and analyzing data is necessary to answer what works and what doesn't, with the caveat that the data need to be credible. That sounds straightforward enough. It is like Snapple: better stuff means a better drink.

In an ideal world, an evaluation would start with good goals and questions. Then a design appropriate to those questions would be developed. The data would be immaculately collected and maintained, with analyses conducted and reports generated concurrently.
But in reality, we are more likely to start asking the questions once we already have the data than we are to start with our goals and questions. The method will be modified to fit budgetary and timeline constraints, and as a result will likely no longer address the original questions exactly. As the program begins and the data are collected, it will become clear that a few changes are necessary. The changes will be made with the short-term goal of keeping the program moving forward, not with the original questions and outcomes in mind. The report will be generated with caveats and suggestions for next time. Is this likely reality still a good evaluation?
Pullin and Knight (2009) suggest implementing high standards for the methods and determining the necessary strength of evidence. I'd have to agree that determining the capabilities of your methods at the outset may create more realistic expectations and attainable outcomes. Above all, a good evaluation has to meet the stakeholders' needs in a useful way, and if you have strong methods, you will be starting from a credible base.
Resources: Pullin, A. S., & Knight, T. M. (2009). Data credibility: A perspective from systematic reviews in environmental management. In M. Birnbaum & P. Mickwitz (Eds.), _Environmental program and policy evaluation: Addressing methodological challenges. New Directions for Evaluation_, 122, 65-74.