mistakes were made

“No one really knows how the game is played
The art of the trade
How the sausage gets made
We just assume that it happens”


A while back Elena Rivas and I posted a response to a bioRxiv preprint (Tavares et al., 2018) that challenged some of the conclusions in our 2017 Nature Methods paper on R-scape, a method for detecting support for conserved RNA secondary structure in sequence alignments by statistical analysis of base pair correlations. At the Benasque RNA meeting last summer, Zasha Weinberg told us we’d made a mistake in our description of how the Weinberg and Breaker R2R program annotates “covarying” base pairs. We’ve just updated our PDF with a correction, and I’ve added an update section to my July blog post describing the mistake and our revision. Thanks, Zasha!

Meanwhile, the Tavares et al. manuscript is still up at bioRxiv with no response to our comment there, even though we argue that the manuscript is based on an artifact: one of the more spectacular artifacts I have seen in my career. The manuscript describes a method that finds “covariation” support for RNA base pairs in sequence alignments that have no variation or covariation at all.

I’m told that peer review is broken, and that what science really needs is preprint servers and post-publication review. How’s the preprint and open review culture doing in this example? I bet there are people out there citing Tavares et al. as support for an evolutionarily conserved RNA structure in the HOTAIR lncRNA, because that’s what the title and abstract say. If people are even aware of our response, I bet they see it as a squabble between rival labs, because lncRNAs are controversial. I bet almost nobody has looked at our Figure 1, which shows Tavares’ method calling “statistically significant covariation” on sequence alignments of 100% identical sequences. I dunno that you’ll ever find a crisper example of post-publication peer review.
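To see why identical sequences are such a decisive test: any pairwise covariation statistic of the usual family is identically zero on an alignment of 100% identical sequences, because every column contains a single residue and there is no variation to correlate. Here is a minimal sketch using plain mutual information between columns (a simpler measure than what R-scape actually uses, on an invented toy sequence, just for illustration):

```python
from collections import Counter
from math import log2

def column_mutual_info(col_i, col_j):
    """Mutual information (bits) between two alignment columns,
    a standard (if simplistic) pairwise covariation statistic."""
    n = len(col_i)
    fi = Counter(col_i)                 # residue frequencies, column i
    fj = Counter(col_j)                 # residue frequencies, column j
    fij = Counter(zip(col_i, col_j))    # joint residue-pair frequencies
    mi = 0.0
    for (a, b), nab in fij.items():
        pab = nab / n
        mi += pab * log2(pab / ((fi[a] / n) * (fj[b] / n)))
    return mi

# A toy "alignment" of four identical sequences: every column is
# invariant, so every pair of columns has zero mutual information.
aln = ["GGCACUUCGGUGCC"] * 4
cols = list(zip(*aln))
print(max(column_mutual_info(cols[i], cols[j])
          for i in range(len(cols))
          for j in range(i + 1, len(cols))))   # prints 0.0
```

For every column pair the joint frequency table has a single cell with probability 1, so each term is 1 · log2(1) = 0. A method that reports statistically significant covariation on such an alignment is measuring something other than covariation.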

In my experience, this is just the way science works, and the way it has always worked. It is ultimately the responsibility of authors, not reviewers, to get papers right. No amount of peer review, pre-publication or post-, will ever suffice to make the scientific literature reliable. Scientists vary in their motivations and in how much they care. Reviews help, but only where authors are willing to listen. A pre-publication peer reviewer can recommend rejecting a paper; the authors can send it elsewhere. A post-publication review can point out a problem; the authors don’t have to pay attention. And who has time to sort out the truth, if the authors themselves aren’t going to?

Most of the literature simply gets forgotten. What matters in the long run is the stuff that’s both correct and important. It’s mostly pointless to squabble about the stuff that’s wrong; point it out politely, sure, but move on. You can’t change people’s minds if they don’t want to hear it.

I guess what I want you to know is that I’m on the side of wanting to get our work right. If you ever see something wrong in one of our papers, or our code, or any product of our laboratory, or whatever comes out of my mouth, I want to know about it. Mind you, I won’t necessarily be happy about it (I’m thin-skinned; my second grade report card said “DOESN’T TAKE CRITICISM WELL”), but I care deeply about making things right, both in the big picture and in every detail.

So again: thanks, Zasha.

A computational biologist walks into a museum

I’ve written parts of HMMER’s code in the shadow of a massive Tyrannosaurus, and this week I’ll get to do it again. I’m on an advisory committee for the Field Museum of Natural History in Chicago, surely one of the few places you can sit in a cafe amongst dinosaurs. We’re meeting this Thursday and Friday at the museum. Getting a backstage pass to one of the great museums of the world is an awesome perk, but we’ve also got a serious job to do. In an age of iPads and ubiquitous information and entertainment, what should the future of a great natural history museum be?

A man for our season

Peter Lawrence and Michael Locke wrote an essay that made an enormous impression on me (“A Man for Our Season”, Nature, 1997). For a long time a copy hung on the wall of the lab. I was reminded of it last week when I read a recent interview with Lawrence (“The Heart of Research is Sick”, Lab Times, 2011).

When it’s hard to reach me because I’m busy with my own research work; when I have to decline to travel to give seminars; when postdocs in my lab publish their own independent work without my name on their papers; when our papers go to open-access journals that do a good job of delivering substantive content regardless of that journal’s supposed “impact”; when I spend time on the details of a constructive peer review; when I help HHMI recruit and mentor younger scientists — and indeed when I moved to Janelia Farm, to be part of the idealistic culture that we want to build here — it’s principles much like Peter Lawrence’s that I’m aspiring to.

Real lives and white lies in the funding of scientific research
PLoS Biology, 2009

Retiring retirement
Nature, 2008

The mismeasurement of science
Current Biology, 2007

Men, women, and ghosts in science
PLoS Biology, 2006

The politics of publication
Nature, 2003

Rank injustice
Nature, 2002

Science or alchemy?
Nature Reviews Genetics, 2001

A man for our season
Nature, 1997