Experimental Design versus Applied Research Design in Evaluations
Experimental design, a major component of pure (i.e., basic) research, is considered the gold standard for research. The premise of experimental design is that participants are randomly assigned to treatment or control groups. This random assignment is intended to limit pre-existing differences between the groups. Additionally, the participant and/or experimenter is often blind to which group the participant belongs to. With this type of design, you can effectively compare outcomes across groups at the end of a program. Presumably, the group that received your intervention will show the expected outcomes, while the group that didn't receive the intervention will not. Conclusions that the intervention did or didn't work can be treated with confidence because all other factors that could influence the outcomes were controlled, either through random assignment or blinding.
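To make the mechanics concrete, here is a minimal sketch (in Python, purely for illustration) of random assignment followed by a simple comparison of group outcomes. The participant list and scores are made up, and every name in the snippet is hypothetical rather than drawn from any real program.

```python
# A minimal sketch of random assignment and a simple outcome comparison.
# All names and numbers here are illustrative, not from any real evaluation.
import random
import statistics

participants = [f"participant_{i}" for i in range(20)]

# Randomly assign each participant to the treatment or control group,
# so that group membership is unrelated to participant characteristics.
random.shuffle(participants)
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

# After the program, compare average outcomes across the two groups.
# In practice these scores would come from real measurements; here they
# are simply generated for the sake of the example.
outcome_scores = {p: random.gauss(75, 10) for p in participants}

treatment_mean = statistics.mean(outcome_scores[p] for p in treatment_group)
control_mean = statistics.mean(outcome_scores[p] for p in control_group)

print(f"Treatment group mean outcome: {treatment_mean:.1f}")
print(f"Control group mean outcome:   {control_mean:.1f}")
```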
The use of experimental design in evaluation research has been debated for years. Given the control and causal inference associated with experimental design, it is often considered the best research design. However, most evaluation projects take place in real-world settings where control is difficult and the environment is ever-changing, making them better suited to an applied research design. Applied research operates well in the real world: it looks at the data or evidence and tells the story of that specific situation. It is flexible and follows less stringent guidelines than those required in pure research, such as random assignment. Unlike experimental design, where the controlled experiment allows one to generalize the findings, applied research produces results that apply to the program being studied.
The goal of applied research is to fully explore or solve community-level problems. Rigid protocols, such as those required in experimental design, are often not practical or even possible in an applied research setting. For example, when evaluating an out-of-school time program, how can an evaluator randomly assign students to groups that do or do not receive the program?
In our evaluation work, we routinely take an applied research approach to telling the story of a program. We collect quality quantitative data to measure the implementation of the program along with any effect or change the program may be having on participants. We also gather qualitative data from participants and program staff to learn about the direct effects the program has had on them. These mixed methods allow us to present outcomes, just in a different way than a randomized, purely experimental design would.