Wednesday, June 25, 2008

Review of UnChristian: Critiquing research about Christianity

(Part 5 in a series)

I am writing a critique of UnChristian, and let me preface it by saying that I am mildly uncomfortable doing so. Not because of the book itself, but because I get no particular joy out of pointing out the shortcomings and limitations of other people's research. I certainly have plenty in my own research to worry about, so why should I bother with other people's?

A somewhat unique situation has arisen in American Christianity in which much of the data we get about the Church comes from organizations that, to my knowledge, don’t go through any peer-review process. With academic journals, in contrast, when I submit a paper for publication, they send it to several reviewers. These anonymous reviewers, fellow experts in the field, read the manuscript and suggest improvements as well as make recommendations for publication. (They always find a lot to fix with my research.) Certainly the anonymous review system has its own problems, but any academic journal article has been vetted by at least several researchers.

I don't think that this is the case with research such as that published by Barna, Reveal, Lifeway, and various other popular Christian data sources. Instead, there's an element of "trust us" in accepting their material. I have no doubt that these organizations seek to be as honest and rigorous as possible with their data, but that doesn't mean that they (or anyone) always get it right. This is compounded by what seems to be a general reluctance to publish the specific methods of their research. When reading material from these sources, I rarely find the survey instrument itself or basic methodological information such as response rates. On top of this, these organizations rarely share their data with others, which makes their analyses difficult to replicate.

The end result is almost a blind faith in the quality of the analyses being presented. Certainly these researchers are devout Christians dedicated to advancing the Kingdom (of that there is no doubt), and their work is often cited by experts, so shouldn’t we just accept what they say at face value? It feels like bad form to question or critique what they are doing—as if we’re hindering someone else’s ministry.

Instead of faith, I would like to promote a model of informed consumption of research about Christianity. Some of it is outstanding, some of it is off the mark, and there’s a lot in between. As readers of it, we should critically evaluate it. Rather than having faith in all data about Christianity, we should first separate the wheat from the chaff.

Ultimately, a more critical engagement of Christian statistics should identify those findings from which the church has the most to learn, that can best guide us in advancing the Kingdom.

Next: Compared to whom?


Anonymous said...

Interesting observation, though one with its own bias.

You want to apply an academic standard to church-based research, but why is this the standard?

Why is business-based or market research not the model? Or more rigorous scientific methods? Or military/government standards?

It seems to me that, unlike the academic world, churches are looking for real-time actionable data, something that is more readily found in the marketplace or even the military.

Are these less valid means of gathering information?

Isn't there a place for both?

jeremy said...

Brad, hear, hear. Well said. You may find this article informative--or, at the very least, entertaining!

Anonymous, I think we can all agree that the church should first and foremost be interested in *truth*, over and above "real-time actionable data." The problem with "real-time actionable data" is that it has heightened odds of being wrong, and then, should we follow it, we may find ourselves on the wrong course. Truth trumps expediency.

jeremy said...

Following up on my previous comment...

This is not to say that studies like UnChristian are without value or have no place. Only that evangelicals should not take them as infallible gospel truth, which they often do. And, all else being equal: Peer-reviewed study > Barna study.

Brad Wright said...

Anonymous, you make a good point about there being different models of research.

I would say that much of academic research is not that valuable for the Christian church, not because it's academic and therefore peer-reviewed, but rather because most academics are not interested in pursuing topics that would help the church.

Peer review need only add 6-12 months to research, which seems a fair price to pay for the increased care with the data.

This doesn't seem like too much time. For example, the issues raised in UnChristian seem just as relevant this summer as they were at the end of last year.

This doesn't address the issue of Christian shops being proprietary in their methods--why not release methods and even data? There's no cost there in terms of time.

Brad Wright said...


Thanks for the link. I love that article and have already cited it once or twice in articles I'm working on.

Glen Davis said...


I've been going nuts wondering if I'm the only person asking these questions.

I eagerly await your analysis of the data.

Peer review is like capitalism and democracy: it's flawed but it's the least flawed method we've found so far.

Brad Wright said...

Thanks, Glen. I too am a fan of peer review (though my like for it might just be a case of Stockholm syndrome).

Corey said...

I'm puzzled by the assertion that there is a difference between academic standards and "business-based or market research" (well, at least good market research).

There is nothing mysterious about the methods of good social science. One asks tough questions and then seeks out the best evidence to answer them. Good social science works hard at challenging taken-for-granted assumptions and received wisdom.

Good market research does the same. If the goal is to find out what it takes for consumers to purchase a product, it behooves the analyst not to fool him- or herself. Of course, there are lots of practitioners of what I would call bad market research who will create measurements to reinforce the client's preconceptions...

I believe Brad's point about Barna and the lack of peer review is that no one (outside of Barna, that is) really knows what their sampling fractions are, their response rates, their non-response bias, and so forth. If they did a random-digit dial of all U.S. phone numbers, how often did people hang up? Or terminate the interview? How do we know that the people who supplied data are representative of the rest of us? (In academic social science, the answers to these questions are forced through the peer-review process; in good market research, they are forced by the consequences of having bad data... "Dewey Defeats Truman!") I'm not convinced that Barna's shop and its competitors face these consequences for having bad data. Thus, goofy statistics are circulated and sustained (see Christian Smith's article linked by Jeremy).
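The non-response bias Corey describes can be made concrete with a toy simulation. All numbers below are hypothetical (they are not drawn from any actual Barna or other survey); the sketch only illustrates the mechanism: even a large sample produces a skewed estimate when the people who answer differ systematically from the people who hang up.

```python
# Hypothetical toy simulation of non-response bias.
# Assumption (for illustration only): people who hold some view are
# twice as likely to complete the interview as people who don't.
import random

random.seed(42)

POPULATION = 100_000
TRUE_RATE = 0.40  # true share of the population holding the view

# Build the population: holders and non-holders of the view.
people = [True] * int(POPULATION * TRUE_RATE) + \
         [False] * int(POPULATION * (1 - TRUE_RATE))

# Simulate who actually responds to the survey.
responses = []
for holds_view in people:
    p_respond = 0.30 if holds_view else 0.15  # differential response rates
    if random.random() < p_respond:
        responses.append(holds_view)

response_rate = len(responses) / POPULATION
estimate = sum(responses) / len(responses)

print(f"response rate:   {response_rate:.1%}")  # roughly 21%
print(f"survey estimate: {estimate:.1%}")       # roughly 57%, vs. true 40%
```

Without the response rate and some handle on who refused, a reader has no way to tell the inflated 57% figure from the true 40%, which is exactly why the methodological details matter.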