Rigor for the Rest of Us
What part of “Y*i1 = Ziπ + ε1i, where Yi1 = Y*i1 if Y*i1 > c; Yi1 = c if Y*i1 ≤ c; and Yi2 = Yi1δ + ε2i” do you not understand?
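For the curious, the paired equations describe a censored (Tobit-style) regression: a latent outcome Y*i1 driven by Zi, observed only above a threshold c, feeding a second-stage outcome Yi2. A minimal simulation sketch follows; the sample size, coefficients, and threshold are all chosen purely for illustration, not taken from the study in dispute.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
c = 0.0                  # censoring threshold (illustrative)
pi_, delta = 1.5, 0.8    # illustrative coefficients

Z = rng.normal(size=n)   # regressor Z_i
eps1 = rng.normal(size=n)
eps2 = rng.normal(size=n)

y_star = Z * pi_ + eps1                # latent outcome Y*_i1
y1 = np.where(y_star > c, y_star, c)   # observed Y_i1, censored at c
y2 = y1 * delta + eps2                 # second-stage outcome Y_i2
```

The point of the structure is that Y_i1 is never observed below c, which is exactly the feature that makes naive estimation of δ biased.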
If you are like me, you aren’t quite certain why these paired equations have touched off a tsunami of discussion in the world of microfinance. It seems that one set of researchers got the formula wrong (including a mismatched sign, bad dog), and the other set of researchers is ticked off. But if you are following the dogfight, the wronged party is also wrong, assuming Wrong (causality) is more wrong than wrong (that little ol’ mismatched sign) … show that to your 6th-grade teacher.
The scuffle opens up a larger issue: the growing discomfort among practitioners about evidence. Is there one path to the truth (randomized controlled trials)? Are there several? And what if we don’t do RCTs – are we a pack of feckless losers?
If you help organize savings groups, take heart … we are offering up practical ideas from “researchers from the real world”, who care about rigor but are familiar with our circumstances.
Recently, Microfinance USA held a well-attended conference that included Chris Dunford, CEO of Freedom From Hunger. Dunford takes on three titans (video) of randomized controlled trials (Jonathan Morduch, Dean Karlan, and David Roodman), each with deep experience in microfinance research and each committed to the RCT as the surest way to discover evidence. Chris, with much experience in leading his organization “through more RCTs than any other NGO,” challenges the titans. Among his points:
- RCTs often tell us what we already know if we are in the least bit clued in.
- RCTs are hugely expensive.
- RCTs often cannot be generalized from one very specific set of circumstances to another. Microfinance, including savings groups, varies by product, place and people.
I wish to add a fourth point: the validity of RCTs is challenged, and vigorously, by scientists themselves.
A recent Scientific American article, An Epidemic of False Claims, written by a respected medical researcher, points out the failures of controlled trials. Thwarting their success are professional jealousy, competition, sloppiness, secrecy, and a poor set of incentives to motivate cooperation, data sharing, rigor, and honesty.
And there’s more. A December New Yorker article entitled The Truth Wears Off adds this ingredient to the witches’ brew of misleading evidence: the sheer mystery of flukes. Sometimes scientific results not only confound, they cannot be repeated, even under similar conditions and in the care of the most earnest researchers. As a friend of mine at the Harvard School of Public Health says, disturbingly, the last thing a researcher should do is try to replicate his findings. Get the prize and move on.
If scientists themselves question the scientific method, then what are we to think about its application to the world of dynamic human interaction in its infinite variability (our world)? Hint: It doesn’t look good. Do we abandon RCTs in program evaluation or product development? Or do we use RCTs, if we are lucky enough to have the time and money, along with other research methods, in much the way Oxfam’s Savings for Change does? Or do we try as best we can to incorporate their message of non-biased, controlled experimentation into less expensive forms of study?
The next few posts in this series will be by researchers and practitioners who take the issue of program design and impact seriously, and who believe different approaches can help our services become more relevant, powerful, and efficiently deployed.
Reader Comments (2)
David Roodman responds to this post on his blog. See here: http://blogs.cgdev.org/open_book/2011/06/rcts-are-people-too.php
And Jonathan Lewis writes on The Huffington Post in a related thread on RCTs: http://www.huffingtonpost.com/jonathan-lewis/social-impact-evaluation-_b_881296.html
Wed, June 22, 2011 | Caitlin McShane
Thanks, Kim, for bringing this up!
As a small, humble, community-based organization, we have a snowball's chance in....to carry out an RCT. The cost of the research would dwarf our operational budget to the point of being ridiculous. How could we justify that spending? AND we would never be able to sustain that type of research. On the other hand, we firmly believe in M&E to 'keep us honest'; in other words, we believe in being held accountable to what we say we will achieve. Does this make us less worthy? Does not old-fashioned M&E still have a place for organizations like ours? Can we not leave the heavy lifting to those with the resources, time, and manpower?
Thu, June 23, 2011 | Jill Thompson (email@example.com)