Public Science Fail?

Some time ago, a big science news story received the following response from a fellow researcher. For now, it doesn’t matter what the story was (we’ll come to that shortly).

There’s a difference between controls done to genuinely test your hypothesis and those done when you just want to show that your hypothesis is true.  The authors have done some of the latter, but not the former.  They should have mixed pregrown E. coli or other cells with the arsenate supplemented medium and then done the same purifications.  They should have thoroughly washed their DNA preps (a column cleanup is ridiculously easy), and maybe incubated it with phosphate buffer to displace any associated arsenate before doing the elemental analysis.  They should have mixed E. coli DNA with arsenate and then gel-purified it.  They should have tested whether their arsenic-containing DNA could be used as a template by normal DNA polymerases.  They should have noticed all the discrepancies in their data and done experiments to find the causes.

I don’t know whether the authors are just bad scientists or whether they’re unscrupulously pushing NASA’s ‘There’s life in outer space!’ agenda.

The body of scientific knowledge attacked here is thoroughly scarred; it’s hard to imagine it taking any further steps toward wider scientific consensus. If what Professor Redfield claims in these paragraphs is true, and I think it is, we have a number of problems: pushing a bias, setting out to prove a preconceived hypothesis, running bad controls or none at all, and failing to test alternative hypotheses.

[Image: the stereotypical picture of “science”.]

She was responding to claims made by Felisa Wolfe-Simon et al. in a paper, and in a TED Talk on the paper’s findings. The researchers investigated a microbe from California’s Mono Lake which, according to Wolfe-Simon (reporting back from TED), was entirely “uninteresting” except for one thing: arsenic appeared to be helping its growth, which isn’t “supposed to happen”.

“Diversity of life is actually unified,” she explained, before illustrating that no matter how physically different lifeforms appear to be, they are biochemically very similar. The microbe, GFAJ-1, she said, was doing something “just a little different.”

Wolfe-Simon said that this discovery poses a new question: “Could we be missing alternatives to biochemistry here on Earth?” The findings, she added, could also help the study of astrobiology by indicating to researchers “more of what we need to look for and how.”

This sounds like amazing stuff. I would be excited, were it not for the crickets chirping from most of the rest of the scientific community and the science journos.

I don’t claim to be a scientist, but I am aware of errors in thinking. The scientific method, grounded for me mainly in uncertainty, self-criticism and reflexivity, is the benchmark for engaging with reality as a whole. When judging arguments (“I heard a noise, ghosts make noises, therefore it is a ghost”; “Lots of people believe in god, therefore god exists”), we must test them against bias, controls, alternative hypotheses and so on before we can claim any form of knowledge. That is why it is so troubling when, within the halls of science itself, most of these safeguards appear to be left sleeping at the gates. A spy in the house of knowledge must surely be blind, since he usually ends up rummaging in the garbage heap.

I won’t pretend to understand a fifth of what Professor Redfield’s post is about. I am not a scientist (which is actually no excuse!), so I had to read her calculations carefully. The story is interesting because we must ask not only whether the scientists’ research is right or wrong, but whether their conduct in engaging with the public was right or wrong.

I find refusing to respond to criticism, or withholding aspects of one’s research, to be suspicious behaviour. Rebecca Boyle of PopSci reports:

Critics say Wolfe-Simon et al. are wrong, however, and that their own methods prove it — arsenic breaks down in water, so if it really was in the microbe’s DNA, it should have broken apart when the researchers washed it to remove other contaminants. Wolfe-Simon said the bacteria did not have enough phosphorus to account for all its growth, but critics say it might have been enough after all.

Ideally, all this should have been resolved in the peer-review process; journal referees should have raised these questions before accepting the paper for publication. Wolfe-Simon told Zimmer that critics’ questions in the blogosphere “do not represent the proper way to engage in a scientific discourse and we will not respond in this manner.”

Boyle correctly says: “The controversy goes beyond the scientific debate. It’s also a lesson in how _not_ to inform the public about something so complicated and potentially profound.” (My emphasis.)

FURTHER PROBLEMS

There are further problems, as has been noted with academic nonsense occurring in journals in my own field. I don’t claim to be a science writer, a scientist or anything special. What I do claim is that our engagement should be twofold: firstly, toward our fellow researchers, colleagues and educators; secondly, and just as importantly, toward the wider public (if our work is to mean anything; here’s looking at you, literary theorists!). This is complicated stuff.

The difficulty, of course, is that scientists are not usually trained in, or particularly interested in, translating their work into a digestible format. This is not their fault: it is what we expect from various bridge-builders, those who can both write and understand the field (like Carl Sagan and Richard Dawkins); we also expect science reporters to know something about the science (like the consistently excellent Carl Zimmer, who uses his own curiosity and “non-expertise” to guide his thorough explanations for others). Yet we must remember that we, as an audience, are also expected to meet some standard of engagement. We must be willing to think critically about reports and findings. Yes, science is the most powerful, most important engagement with reality that we have, but it is performed by fallible human beings who have interests and biases. The method of science attempts to weed these out, but it often fails. Reverence for the peer-review process cannot be used as the definitive filter of nonsense or failed research: peer review is done not by computers but by people. Things slip through the cracks, no matter how narrow those cracks are.

Yet consistency, above all else, matters to me. If you engage publicly with your research and then retreat into evasive responses, that is worrying. If you chose the public space as the maternity ward for your research, you can’t suddenly claim that we must deal with it elsewhere. It is there in the public sphere, overshadowed by indulgent headlines and the NASA bias of “life, life everywhere! We still matter!”. If your research is in the public sphere, indeed if anything is presented in the public sphere (willingly and with your knowledge), then I see no reason to suddenly claim it is the wrong avenue. You put it there: you can’t expect the public to ignore it when your fellow scientists start poking and prodding it in broad public light. Next time, keep it in the journals and let it face the fire there, before wheeling out a half-formed monstrosity and declaring its definitive beauty.

H/T Kenneth Lipp.


2 thoughts on “Public Science Fail?”

  1. I look forward to the day when the peer-review system does use computational mechanisms to review academic work. Deep Thought gets my vote.

    Seeing as we are speaking about fields in which we are hardly experts (or which, in my case, I barely understand), this week I came across the idea of privileging the importance of null-hypothesis testing when running experiments. It was first mentioned on the SGU podcast; then, while researching my philosophy essay, I came across a paper by Randy Gallistel that also addresses the idea.

    DISCLAIMER: I don’t know nearly enough about statistics (Bayesian analysis in particular).

    But if I’m following the argument correctly, it seems to make an important point. I bring it up because it occurs to me that if the researchers you are talking about had defined a clear null hypothesis, one that they were actively testing for, a lot of this might have been resolved by simply sticking to the method (see the sketch after this comment).

    Cool post though. Even though I think you take an unfair swing at the literary theorists ;)
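
    To make that concrete, here is a minimal sketch (in Python, with invented numbers) of what pre-specifying and actively testing a null hypothesis might look like. None of this is from the actual study; the growth readings and the comparison are purely hypothetical.

    ```python
    # Hypothetical sketch of a pre-specified null-hypothesis test.
    # All numbers are invented for illustration; they are NOT data
    # from the Wolfe-Simon et al. study.
    from scipy import stats

    # Imagined optical-density readings for cultures grown with and
    # without arsenate (both phosphate-limited):
    growth_with_arsenate = [0.41, 0.38, 0.44, 0.40, 0.39]
    growth_without       = [0.12, 0.15, 0.11, 0.14, 0.13]

    # Null hypothesis: arsenate makes no difference to growth.
    # A two-sample t-test asks how surprising the observed difference
    # would be if that null were true.
    t_stat, p_value = stats.ttest_ind(growth_with_arsenate, growth_without)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Note what rejecting the null does and does not license: it supports
    # "arsenate affects growth", not "arsenic is incorporated into DNA".
    # Choosing the null carefully constrains what a positive result can claim.
    ```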
