Monday, July 21, 2008

Review of UnChristian: Should research on Christianity be primarily useful or accurate?

(Part 13 in a series)

I think I'll wrap up this series today, and I want to use this post to raise an issue that I have been mulling over for a while now. What is the purpose of applied research on Christianity? Is it first and foremost to be useful, or is it to be accurate?

In giving the backstory to writing UnChristian, the authors recount Gabe Lyons's initial interest in pursuing this project. As told on page 13, Lyons writes that before the project, he believed that the "image young people have of the Christian faith is in real trouble." These perceptions are "incredibly negative."

Fast-forward: Kinnaman and Lyons collect a bunch of data and conclude that, lo and behold, young people hold a series of negative images of Christians. They use these data to prompt the church and its members to do better on various counts.

Frankly, I'm a little suspicious of someone "knowing" the answer to their research question before they even collect data, and then finding data that confirm their expectations. That almost never happens to me, for I am constantly surprised by what I find. I suppose that's why I'm in the business.

As such, the emphasis of UnChristian is on using data to illustrate ideas the authors already held, and on using these data to bring about useful change. The emphasis is on being useful.

What about another approach? Say Lyons had started out with the question, rather than the answer, of whether Christians have an image problem. This empirical agnosticism might have led UnChristian to a different survey with a different conclusion--one perhaps more in line with the existing research literature. Here the emphasis is on accuracy.

Obviously we would like both usefulness and accuracy, but if we had to let one go, which would it be?

I can understand why people want to emphasize the useful. Why not use statistics, as well as anything else we can find, to advance the Kingdom?

And yet... if we're not 100% accurate in our creation or use of research, then that starts to eat away at the credibility of our work.

(This is not to imply that UnChristian is not accurate or that its authors do not care about accuracy; rather, it's a difference in emphasis.)

Here's an example of how this might play out. Suppose an author is concerned about Christians having some moral problem. S/he then finds all the statistics consistent with this "problem" hypothesis, ignoring ones that might contradict it. The end result: a skewed presentation of how the world works, but a presentation designed to get Christians to do the right thing.
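To see why this kind of cherry-picking skews the picture, here's a toy simulation (mine, not from the book, with made-up numbers). Two groups have the exact same true rate of some behavior, so there is no real "problem." But if you run many surveys and report only the ones that happen to show a gap in the expected direction, the reported gap looks real:

```python
import random

random.seed(42)

TRUE_RATE = 0.50  # same true rate in both groups: no real difference
N = 200           # respondents per group in each simulated survey

def survey_gap():
    """Run one simulated survey; return the observed gap between the groups."""
    group_a = sum(random.random() < TRUE_RATE for _ in range(N)) / N
    group_b = sum(random.random() < TRUE_RATE for _ in range(N)) / N
    return group_a - group_b

gaps = [survey_gap() for _ in range(1000)]

# Honest summary: average across *all* surveys (hovers near zero).
honest = sum(gaps) / len(gaps)

# Cherry-picked summary: keep only surveys that "confirm" the problem.
confirming = [g for g in gaps if g > 0]
skewed = sum(confirming) / len(confirming)

print(f"average gap, all surveys:        {honest:+.3f}")
print(f"average gap, confirming surveys: {skewed:+.3f}")
```

The honest average sits near zero, while the cherry-picked average reports a gap of several percentage points that exists only because of which surveys were kept. The same data, filtered by the hypothesis, yield a "useful" but inaccurate conclusion.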

I suppose this issue revolves around questions of the ends justifying the means. I would even say that some of the most egregious misuses of statistics about Christianity are done with the best of intentions. Here's an example, and here's another one.

Me, I want to go wherever the data lead me, though I realize that I have my own biases and limitations that can get in the way. Ultimately, if it is truth we're after, cutting corners on our means of getting there isn't going to help.

8 comments:

Unknown said...

Great series. It motivated me to post this series of statistics from a book about the American church in crisis.

Have you looked at these statistics? I think your post here drives at the questions I was asking; that the value of stats is sometimes determined by usefulness to an end goal.

Ultimately, I appreciate your work here and I am learning a lot on how to read and apply statistics.

Unknown said...

Bradley, you also have me thinking now, "how do these stats fit in with George Barna's larger goal for the church?" It seems all his stats are leaning this way of late, and now I wonder, "are they accurate or useful?"

Anonymous said...

Great series, Brad. I just linked to it over at my blog. I think it is too easy for us to take what is published in the Christian world for granted, and I hope your posts will help us to be a little more discerning. I know I have learned a lot from them.

Anonymous said...

Great series...

BUT:

You say: "And yet... if we're not 100% accurate in our creation or use of research, then that starts to eat away at the credibility of our work."

But no research is 100% accurate -- that is a straw man you are making.

If sampling and observation and human beings are involved, it won't be 100% accurate.

If that was the standard, no research would pass your test.

I'm not sure how accurate it needs to be, but I am certainly comfortable with some margin of error, as long as you know that going in...

Brad Wright said...

Thanks JR and Ray. Good suggestion on a book, JR.

Anonymous, yeah, you're right... I think that I meant being as accurate as we can be or something like that.

I'm well aware of how my own research is far, far from 100% accurate.

Sid said...

Great series!
Very helpful in so many ways, and one of the reasons I no longer read anything by Barna. When the assumptions and agenda begin to influence the way the questions are asked, it begins to seem fairly obvious and transparent.
As you said it begins to raise questions as to their usefulness.

Anonymous said...

I try to teach my science students about the error of pre-determining the outcome of their research and find it harder each year. Data mining to support a researcher's postulated outcome has become way too acceptable. Great job in pointing out the obvious weakness in the apparent "research" design that is the basis for the book.

I think that what we are seeing is an appeal to the presumed trustworthy nature of a claim if it is supported by data that is presented as being acquired in a scientific manner. As a scientist, I'd rather have no data for a contention (then I know that it is opinion) rather than pseudoscientific "data" that may actually be misleading the reader.
Keep up the good work.

Brad Wright said...

Thank you... yes, it isn't entirely clear that the conclusions in the book flow directly from the data.