Two problems plague academic research: a lack of nimbleness and questionable quality. The upshot is that research is not especially relevant to the decisions practitioners must make on an ongoing basis.
I found myself considering the limits of academia at a conference at MIT. Alex Counts, CEO of Grameen Foundation, challenged a panelist about calling RCTs - randomized control trials - the gold standard of research. Alex suggested I take a look at his blog post from last fall, which inspired my response below.
Academics made, I believe, a big contribution to microfinance by pointing out that the outsized effects so splashily advertised by suppliers were paler than imagined, and that in some instances, interventions caused harm. Microfinance needed to be taken down a peg and made real if it were to improve. The researchers helped to do that.
However, the time has come for practitioners to take on the task of building internal research capability. The university with which I am associated is besieged by research requests of all kinds - marketing, operational, impact - and I often wonder why. There are two problems with delegating research to universities.
First, research is crucial to the development of institutional knowledge. MFIs and other financial inclusion organizations (including those that promote savings groups) must continually gather intelligence in order to make informed tweaks to their value proposition. Research should be accompanied by constant adjustment, with an institution’s full grasp of the idea that what is perfect today will be imperfect tomorrow. No outsider, especially a large university complex, can do this nimbly or efficiently.
Second, the quality of university research is in question. I have supervised many students who have interned at esteemed university research organizations - the big brands. What actually goes on, and what counts as data, is eye-opening, if not alarming. It only gets worse in the analytical phases, where data is plunged through various sieves until the desired findings are shaken out and held to the light. Studies are anything but rigorous. In fact, they are biased in favor of the outcome an external researcher wishes to find, usually influenced by a motivation to publish, to raise money, or to build a brand.
Academic research is anything but objective. Researchers often seek to discover experimental proof of their own hypotheses (“aren’t I smart?”), disproof of someone else’s intervention (“X: Miracle or Myth?” - though X never claimed to be a miracle), or, more recently, benign inconclusivity (“I need to say slightly positive things about X to keep my research funded”). To repeat, these biases reflect a researcher’s need to get published, to get funded, or to build a brand.
This should come as no surprise, as the medical field has understood the problem of research quality for years (see works by Dr. John Ioannidis - e.g., An Epidemic of False Claims in Scientific American, or this in Nature). Medicine is considered a softer science than physics, and economics far softer than medicine. So if medical academics have begun to question the work of peers, what does this say for the very soft science of economics? And the equally soft science of anthropology? It says that the studies are not to be trusted unless we know the individual researcher, have seen him or her in action in the field, and can ascertain his or her competence. There is nothing generically solid about economic or anthropological study. It’s an individual matter.
If the purpose of research is to learn and make decisions, then financial inclusion entities must bring economic and anthropological talent (with full understanding of the limits of these disciplines) into the heart of their organizations, and bolster such talent with indwelling organizational knowledge. Key, of course, is leadership committed to using research to drive internal change. Practitioner research would limit the hype that surrounds many academic studies, which make claims rarely verified by additional studies (in the same place) or by follow-on studies (of the same population and intervention). Think of all those “interesting findings” that were never re-proven over time. The essence of science is to prove and reprove hypotheses; verification is part of the scientific endeavor. You won’t find that in academia in the area of financial inclusion. To publish, academics must produce something flashy.
What would universities then do were they to cease with their natural and unnatural experimenting? Universities might train practitioners in practical ways to collect and analyze data on important behavioral and cultural concepts. And it would be beneficial to all if universities continued to study the socio-political environment that surrounds and shapes financial inclusion.
The trick will be for any research effort, whether practitioner-based, university-based, or third party-based, to include the dissemination of real information even when the findings disappoint.
Originally published 16th February 2014 by Kim Wilson
Reader Comments (8)
Nice post and thanks for highlighting my question and my blog. My question during the MIT conference was to ask Christina Riechers of Evidence Action to clarify whether, by calling RCTs "the gold standard" in her PowerPoint, she was saying that RCTs were inherently superior to all other research methodologies (e.g., qualitative research, quasi-experimental design, etc.). She did not really address the question in her response.
I think the idea of building internal research capacity within practitioner organizations is important. Complementing that could be more practitioner-friendly external research. I have been suggesting a paper titled "Lessons for Practice" that culls the findings of recent research on microfinance to identify what works best in terms of poverty reduction and related positive outcomes.
An op-ed in today's New York Times by Nicholas Kristof echoes some of the ideas on your blog: http://www.nytimes.com/2014/02/16/opinion/sunday/kristof-professors-we-need-you.html?hp&rref=opinion&_r=0
Sun, February 16, 2014 | Alex Counts
I have done work in developing countries since the 1970s. Previously I worked in the corporate world in various management capacities including being a CFO. My academic training was in engineering, then economics and finally accountancy.
In my experience, the use of data for decision making in the corporate environment is light years ahead of what academics and other experts are doing in the socio-economic development space, including everything to do with microfinance. I am glad that Alex Counts called the academics on the idea that a randomized control trial (RCT) is a gold standard for analysis of performance. In my view RCTs are expensive without being at all effective.
It amazes me that after more than 60 years of development experience, organizations like the World Bank and the United Nations have surprisingly little understanding of the performance characteristics of development initiatives. But it is also understandable. These organizations ... as well as academia ... want to use sophisticated algorithms to figure out what is going on, when pretty basic management accounting would tell them a whole lot more, and might even be understandable to the average interested person who is neither academic nor expert.
There need to be metrics ... but metrics should look more like accounting than what is going on at present.
Peter Burgess - TrueValueMetrics
Multi Dimension Impact Accounting
Sun, February 16, 2014 | Peter Burgess
This is an excellent observation. How often are papers touted as research little more than PR stunts? But equally, how often is the conclusion of a research document clear simply from reading the name of the author? Conflicts of interest, biases and ulterior motives are present in research as much as in any other area. Indeed, is true objectivity possible, particularly in the softer social sciences? Sir Karl Popper suggested decades ago that there is no logic of proof, only of disproof. And we can always resort to immunizing stratagems - any hypothesis is veiled in hundreds of assumptions, some stated, others implied, others barely visible. Refutation of a hypothesis can always be blamed on a flaw in one of the (possibly infinite) assumptions, and perhaps this is entirely valid. Or perhaps not. RCTs were touted as the cure-all for microfinance research, but have been criticised. Are they the best method we have, or merely the latest fad?
And were there any doubt over the interpretation of a paper, one should look at the Pitt/Khandker/Duvendack/Roodman spat. What may initially appear clear can in fact be extremely complex, and the entire conclusion altered accordingly.
Even if a finding is rigorously demonstrated by credible authors and subject to peer review etc., to what extent are findings in Mexico relevant in Mali? Indeed, to what extent can findings in Peru in 2010 be relevant even in Peru in 2014?
I examined a recent paper on the impact of Compartamos (link below). Technically I thought the paper was sound, and the results clearly presented. Where I was concerned was in the subjective interpretation of these results. One could almost feel the frustration of the authors in finding such modest results (albeit presented rigorously in tables, etc.). It seemed to me that they extracted whatever positive findings they could from a relatively mediocre set of outcomes. But how can a researcher truly detach him- or herself from the project? It must be so frustrating to spend months and months in the blistering desert only to conclude "we didn't discover much". In fact, I find this an equally important finding to some startling discovery. Sometimes finding nothing is as important as finding something. But I can understand that those directly involved might not share this view.
I therefore applaud the concluding remark in this post: "The trick will be for any research effort, whether practitioner-based, university-based, or third party-based, to include the dissemination of real information even when the findings disappoint".
Mon, February 17, 2014 | Hugh Sinclair
Dear Alex - Thank you so much for the NYT piece by Nicholas Kristof. Interestingly, he calls it Professors, We Need You. If you read the entire piece, it seems a better title would have been Professors, We Don't Need You. It's true that Christina did not answer your question, except to say that RCTs worked best in health. (Read: not so good for financial inclusion.)
Dear Peter - Yes, indeed. I have often found that an inner drive to research and innovate, coupled with simple accounting tools and a good dose of honesty, would do about 80% of the heavy lifting needed for financial inclusion entities to effect important changes.
Dear Hugh - The Compartamos study has to be the most bewildering yet. There were few financial effects but, deus ex machina, there was an effect on empowerment. The researchers could hardly keep trashing the industry that is feeding (funding) them. At some point, they have to say something nice, even when the research does not warrant it.
Mon, February 17, 2014 | Kim Wilson
Great thoughts. I often struggle with the fact that all non-profits in our industry are expected to share research results or "lessons learned," intended to help the industry as a whole, although these lessons or research findings are often proprietary information that no private company in another industry would be expected to share. This tension leads to many conversations about "lessons learned" that are, in fact, thinly disguised PR messages supporting the organization itself, and which hide any real challenges or negative findings.
I would hope that using external researchers and/or academics would help fight this bias; however, you make a good case as to why it doesn't. So, building internal research capacity at organizations is a great idea - as long as it is coupled with the capacity to disseminate research findings across an organization, in order to ensure that lessons, both positive and negative, are at least absorbed across different departments within the same organization. I can see how academics could be a strong part of supporting the process of building capacity for internal research and learning (while leaving the research to them entirely could make this worse, as they have no capacity to understand the internal dynamics of the organization). I definitely think it's a difficult but necessary conversation for the development industry, and it's great that you're bringing it up.
Mon, February 17, 2014 | Chrissy
The point on enhancing organizations’ internal research capacity is excellent, because in an ideal world this would occur. My concern is that under the current industry structure, it may be untenable because most research is still aimed at pleasing the donor. The perverse incentives go both ways - for the institution, under pressure to show program impacts, and for the academic, who wants to be called back for a second research study.
An institution conducting its own research will still need to share the findings in order to get more funding, thus dashing any hopes of honest reflection. As Chrissy above describes, should the institution be able to hold this information as proprietary, perhaps it would be more honest and ‘true’ research, adaptive and for the purpose of better design and implementation. This is what we can hope for. But then what would the institution point toward in order to obtain more funds?
Another good point: “Research should be accompanied by constant adjustment, with an institution’s full grasp of the idea that what is perfect today will be imperfect tomorrow.” This is where the line is drawn between impact evaluation and other forms of more ‘traditional’ social program evaluation (e.g., action, theory-based, utilization-focused evaluation), and/or monitoring. Practitioners need to be very aware of this line, and push back when what they really need is not an impact evaluation. For example, traditional evaluations can provide valuable information as to how and why a change occurred, something far outside the realm of an RCT. The term impact ‘evaluation’ has blurred the line between research and evaluation.
The pendulum in international development broadly has swung to a point where all evaluation efforts considered rigorous use an RCT conducted by outside academics. Donors must reclaim and promote the value of traditional evaluation methods, which are much more in reach for practitioner organizations to conduct internally, and can be structured to do just what you describe – build institutional capacity, adjust with the times, be nimble and (if capacity is there), high quality to boot.
Mon, February 17, 2014 | Rebecca Furst-Nichols
Dear Rebecca -
True, the implementing organization will always want to put its best foot forward (as proven by the PR in mf), but there is no reason for it to hype anything other than impact. Impact research does not need to be done by academics; it could be done by credible third parties who have no intention of publishing. The key is that impact researchers not be paid by the institution doing the implementing. Again, this is to eliminate bias.
Tue, February 18, 2014 | Kim Wilson
Kim et al
What a fascinating topic. I very much agree with what Kim has put forth about developing the internal capacity of an organization to conduct research, especially for the purposes of improving (as opposed to proving) impact. One of my favorite papers (written long ago...by Hulme maybe?) about impact research challenged readers with the question: "Whose reality counts?" And the answer to this from a purely practitioner point of view is our savings group members.
Is that naive? Maybe. But I don't care if it is.
I only know that the most illuminating research that I have ever done was completely qualitative. I'm talking about MicroSave's PRA-adapted tools for market research. And it was illuminating because I learned more about microfinance clients' perspectives and experiences from these tools than from any other research I've been involved with. Later I used the tools with savings group members to find out what was working with our methodology and what needed adaptation. Thus we came up with a good approach and slight modifications that worked better for our largely urban township communities.
And that is what we need to maintain our integrity as an organization... to be sure that we are delivering the service that we promise.
I for one applaud any effort by any university to train us (practitioners) in the use of research tools that are affordable, relatively easy to apply and that enable our members to speak to us in a way that allows us to improve our program's impact on them.
Wed, April 23, 2014 | Jill Thompson