What Makes for a Good Research Project? Be Concrete.

June 19, 2013

Recently we’ve been thinking a lot about research projects at BCLC. With the launch of our new framework, we’re busy coming up with eight applied research projects – one for each of 2013’s Issue Networks. With research projects on the brain, now seems like a good time to share some insight about what makes for a good corporate responsibility (CR) research project.

Every day I see the results of CR studies coming across my news reader. When I click on one, I frequently ask myself: why this one? Why didn’t I ignore it like the dozens I skip daily? The ultimate aim of all research is to be read, and there are lots of reasons why some studies get clicked and some don’t. Some of it is personal preference, and marketing is a big part, but marketing can only do so much. Research is designed to prove something, and what makes a study interesting and effective is largely the kind of proof you show.

Many of the articles that I skip make grand claims like: “Green Scoreboard finds $4.1 T in Private Green Investments.” One might think that such provocative claims would grab my attention, and yet these are often the ones that I skip.

The problem is that I see lots of big, abstract claims. Worse, I know from experience that a claim this abstract must rest on some serious estimation. There’s no way to directly measure every dollar going to green investments. Instead, you have to estimate the total amount of investment in green projects and the portion of each investment that served a green purpose. You can be really clever and rigorous in your estimations, but they still remain several steps removed from directly demonstrating an outcome that happened in the world.

Big, abstract estimates are not bad. We need them to make the best possible guess at what’s going on out there. We have to run our economy somehow, and if these big estimates come with careful documentation of their limits, they are great for planning purposes. But they are not very good at convincing the educated public.

They seem tempting – you think to yourself: “What if I could show the total investment in green projects? Then the American public would have to recognize how big and important they are.” But when it comes down to it, we know there is an error term in that estimate. You can’t really say that there was a total of $4.1 trillion in green investments – you can probably say there was $2.6 to $5.6 trillion, if you assume x and y and z. Vary those assumptions, and the estimate ranges from $1 trillion to $20 trillion. As a member of the public, I am automatically suspicious of the conditions behind your evidence.
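The way assumptions drive the headline number can be sketched with a quick sensitivity check. All figures below are made up for illustration (the function, its inputs, and the baseline values are hypothetical, not taken from any real scoreboard):

```python
# Hypothetical sensitivity check: how a headline estimate swings with
# its underlying assumptions. Every number here is illustrative only.

def green_estimate(total_investment_t, green_share, counting_multiplier):
    """Estimated green investment (in $ trillions) under three assumptions:
    total private investment, the share counted as 'green', and a
    multiplier for how partial/indirect green spending is counted."""
    return total_investment_t * green_share * counting_multiplier

# One set of baseline assumptions produces one tidy headline number...
baseline = green_estimate(20.0, 0.25, 0.82)

# ...but plausible variations in each assumption yield a wide range.
scenarios = [
    green_estimate(total, share, mult)
    for total in (15.0, 20.0, 25.0)
    for share in (0.10, 0.25, 0.40)
    for mult in (0.6, 0.82, 1.0)
]

print(f"headline: ${baseline:.1f}T")
print(f"range:    ${min(scenarios):.1f}T to ${max(scenarios):.1f}T")
```

Under these invented inputs, the single headline figure of $4.1T dissolves into a range of roughly $0.9T to $10.0T once each assumption is allowed to vary – which is exactly why a skeptical reader discounts the point estimate.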

What’s the alternative? The issue of proof comes down to trust. To get people to buy a big claim, you need results that are definitive and concrete. Readers need to be sure they understand what you did in your study, and that its results undeniably support your claim. In this way, a small concrete study can, paradoxically, be more effective at proving a big claim than a big abstract one.

Consider a famous example. In 1961, Stanley Milgram conducted his infamous obedience experiment, in which participants were goaded into administering electric shocks to a “learner.” The shocks were never really administered to anyone, but the volunteers playing the role of “teacher” never knew that. Two-thirds of the teachers (65%) administered what they believed was enough electricity to kill the learner, because an authority figure in a white lab coat told them to. Milgram’s colleagues, polled before the experiment, guessed that around 1.6% of participants would go that far.

This is compelling proof of Milgram’s claim that people everywhere are subject to the power of authority. The case also highlights the importance of being concrete: it’s very easy to understand the conditions of the experiment, and very hard to deny the meaning of the results. If Milgram had chosen another path – say, surveying 2,000 people on their willingness to administer a shock – we wouldn’t be talking about it. His study of just 40 people was far more compelling, because it was concrete.

While Milgram’s methods were extreme, we can learn from his design. When we set out to do research into CR, it’s important to find designs that make a big point with concrete results. In a way, this is good news – with limited budgets, we might only be able to work with 40 people. We need to be rigorous when we study those 40 people, but if we do it right, our findings will inspire clicks and confidence in our claims.