mistakes were made

No one really knows how the game is played
The art of the trade
How the sausage gets made
We just assume that it happens

A while back Elena Rivas and I posted a response to a bioRxiv preprint (Tavares et al., 2018) that challenged some of the conclusions in our 2017 Nature Methods paper on R-scape, a method for detecting support for conserved RNA secondary structure in sequence alignments by statistical analysis of base pair correlations. At the Benasque RNA meeting last summer, Zasha Weinberg told us we'd made a mistake in our description of how the Weinberg and Breaker R2R program annotates “covarying” base pairs. We've just updated our PDF with a correction, and I added an update section to my July blog post describing the mistake and our revision. Thanks, Zasha!

Meanwhile, the Tavares et al. manuscript is still up at bioRxiv, with no response to our comment there, even though we argue that the manuscript is based on an artifact, one of the more spectacular artifacts I have seen in my career. The manuscript describes a method that finds "covariation" support for RNA base pairs in sequence alignments that have no variation or covariation at all.
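To see why that is self-evidently an artifact, here's a minimal sketch (this is mutual information, a simple stand-in for a covariation statistic, not R-scape's or Tavares et al.'s actual measure): in an alignment of 100% identical sequences, every column is constant, so any pairwise covariation measure is exactly zero for every pair of columns. There is no signal there to detect.

```python
# Sketch: mutual information (MI) between alignment columns as a
# simple covariation measure. NOT R-scape's statistic -- just an
# illustration that identical sequences carry zero covariation signal.
from collections import Counter
from math import log2

def column_mi(col_i, col_j):
    """Mutual information (bits) between two alignment columns."""
    n = len(col_i)
    pi = Counter(col_i)    # marginal counts, column i
    pj = Counter(col_j)    # marginal counts, column j
    pij = Counter(zip(col_i, col_j))  # joint counts
    mi = 0.0
    for (a, b), nab in pij.items():
        p_ab = nab / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# An "alignment" of five identical sequences: every column is
# invariant, so MI is exactly 0.0 for every column pair.
aln = ["GGACUUCGGUCC"] * 5
cols = list(zip(*aln))
assert all(column_mi(cols[i], cols[j]) == 0.0
           for i in range(len(cols)) for j in range(i + 1, len(cols)))

# Contrast: two perfectly covarying columns (A:U <-> C:G) give 1 bit.
print(column_mi(("A", "A", "C", "C"), ("U", "U", "G", "G")))  # 1.0
```

Any method that reports "statistically significant covariation" on the first alignment is measuring something other than covariation.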

I'm told that peer review is broken, and that what science really needs is preprint servers and post-publication review. How's the preprint and open review culture doing in this example? I bet there are people out there citing Tavares et al. as support for an evolutionarily conserved RNA structure in the HOTAIR lncRNA, because that's what the title and abstract say. If people are even aware of our response, I bet they see it as a squabble between rival labs, because lncRNAs are controversial. I bet almost nobody has looked at our Figure 1, which shows Tavares' method calling "statistically significant covariation" on sequence alignments of 100% identical sequences. I dunno that you'll ever find a crispier example of post-publication peer review.

In my experience, this is just the way science works, and the way it has always worked. It is ultimately the responsibility of authors, not reviewers, to get papers right. No amount of peer review, pre-publication or post-, will ever suffice to make the scientific literature reliable. Scientists vary in their motivations and in how much they care. Reviews help, but only where authors are willing to listen. A pre-publication peer reviewer can recommend rejecting a paper; the authors can send it elsewhere. A post-publication review can point out a problem; the authors don't have to pay attention. And who has time to sort out the truth, if the authors themselves aren't going to?

Most of the literature simply gets forgotten. What matters in the long run is the stuff that's both correct and important. It's mostly pointless to squabble about the stuff that's wrong; point it out politely, sure, but move on. You can't change people's minds if they don't want to hear it.

I guess what I want you to know is that I'm on the side of wanting to get our work right. If you ever see something wrong in one of our papers, or our code, or any product of our laboratory, or whatever comes out of my mouth, I want to know about it. Mind you, I won't necessarily be happy about it -- I'm thin-skinned; my second-grade report card said "DOESN'T TAKE CRITICISM WELL" -- but I care deeply about making things right, both in the big picture and in every detail.

So again: thanks, Zasha.