
Happy New Year! And AEA 2013 Recap, Part 2

Happy New Year from all of us at CRC to our friends, evaluation colleagues, clients, and miscellaneous blog followers!

If you’re like us, the transition period spanning the end of one year and the beginning of the next is a very busy time masquerading as a relaxing one. But even as we’d all like a bit more down time, this busy transition is a fertile time for reviewing lessons learned from the departing year and for setting goals for the new one…

and, with that in mind, we’d like to share (with apologies for the delay) the second half of CRC’s recap of our experiences at the 2013 AEA conference! So, without further ado…

Tracy, Research Associate, provided the following reflections on her experience at a session on the topic of being mindful of when evaluation is appropriate:

There seems to be a push to evaluate everything these days, but in some cases a program may not be ready for evaluation. Maybe the program is not fully operational, or maybe there are no efficient ways to collect the necessary data. Whatever the reason, it is important that we as evaluators consult with our clients about the best approach to obtaining the most meaningful information. In some cases, this may entail conducting an “evaluability assessment” to examine whether the program is suited for evaluation. The timing was a bit uncanny: a few days after hearing this discussion at an AEA plenary session, our office was presented with an RFP to assess the feasibility of conducting a randomized controlled trial (RCT) for a national learning initiative. I was left thinking, “Bravo!” to this organization for its willingness to invest time and money in determining whether this approach is appropriate to pursue.

Sarah, Database Analyst, took an Eval 101 course to brush up on fundamentals:

The course was really informative and focused on some of the broader issues facing evaluators, a big change for me, since I usually focus on the minutiae of collected data. In particular, the instructor raised some questions of ethics in designing evaluations: for example, if an experimental learning program is tested in public schools through a longitudinal study, and it is determined that children receiving the experimental services are far more successful than those in the control group, is it fair to keep children in the control group from accessing these services? I knew that similar issues come up in clinical studies (e.g., if a new experimental drug proves to be vastly more effective than the current standard of care, the option of taking this drug should be offered to the control group), but I had never thought about how this might apply in social services, where the measure of success is often less cut-and-dried than in experimental drug trials. It got me thinking: where do we draw the line? Virtually any experimental program aims to improve some aspect of human life, so who decides which improvements are significant enough?

Sheila, Research Associate, and Jill, Research Assistant, attended several informative sessions, but most enjoyed the annual business meeting of the AEA’s Data Viz topical interest group (TIG). They were excited to be part of the discussion about how the TIG will focus its efforts in 2014, and even more pleased to continue their roles as “support crew” for Taj’s Ignite presentation, informally known as WE LOVE MAPS. Taj had just 5 minutes to demonstrate why data nerds (like us at CRC) need to be using more maps…and how to tell a good map from a bad one.

AEA Ignite 2013: WE LOVE MAPS

Check it out! And stay tuned for Taj’s next engagement at Ignite Baltimore this spring!
