Sick Science, Bro: How to Talk About COVID-19 Research
A recent article in Mother Jones with the clickbait headline “Science has an Ugly, Complicated Dark Side. And Coronavirus is Bringing It Out” (DUN DUN DUN) plays right into the burgeoning mistrust of science surrounding our current situation.
A couple of months ago, it started to dawn on me how unprecedented it would be to watch a global pandemic unfold over social media. Between then and now, we’ve seen just about every form of half-baked, unsupported misinformation, sometimes alarmingly posted by friends who otherwise seem relatively level-headed. My personal experience has been an almost daily encounter with the weirdest and most far-fetched hooey, with just about every new day bringing some fresh unsupported, hilarious yet sad nonsense. The litany is vast, most recently represented by claims that wearing surgical masks causes hypoxia, which, when I first saw it posted in earnest, I thought had to be satire. The pandemic is playing out on social media as full confirmation that we live in the Age of Poe’s Law. It’s *almost as if* there is a concerted campaign to spread the silliest misinformation and sow as much discord and mistrust in science and expertise as possible.
Aside from this probably astroturfed circus of conspiracy theories, pseudoscience, bullshit “likely stories,” and all the ways we are seeing a widespread failure of not only fundamental science education but also plain common sense, there’s a deeper problem. The Mother Jones article wants to address it, but, in my opinion, fails to do so in any effective or enlightening way. It’s not a bad outline of the problem, but it misses some crucial elements and bypasses some relatively painless solutions.
To understand how to deal with the daily onslaught of science-y sounding “studies,” you need a little background on the current realities of scientific publishing.
What many people outside the arcane world of publishing in peer reviewed science journals don’t know is that the industry is shifting rapidly. One of its noble efforts, which may well cloak some borderline unethical profit motives, is to make the vast body of published science available to the public for free. This is referred to as “open access publishing.” It is a highly laudable goal, since, otherwise, access to primary-source, original, peer reviewed journal articles is often blocked by outrageous paywalls of $40 or more per article. Researchers with institutional affiliations have free access to almost all science publications, access that used to be funded exclusively by the subscription fees paid by their institutions. Open access publishing started largely in “society journals,” that is, journals published by organizations funded by donations and membership fees. For example, articles published in the American Journal of Botany have long been open access, since that journal is published by the Botanical Society of America and is funded by membership dues.
However, large journal publishing houses such as Elsevier and Wiley have recently started requiring authors to participate in open access publishing, which comes with “article processing charges” that, for higher-impact journals, range from $1,500 to $4,000 per article, fees paid by the authors. It used to be only junk, low-impact “predatory” journals that charged authors to be published. And while the emerging system of open access publishing makes peer reviewed research available to the general public, there are obvious potential conflicts of interest and inequities in a system that funds peer reviewed publication with money provided by the authors of those publications.
Connected with this relatively recent change, there has been a radical shift toward what are called “pre-prints.” A pre-print is an online upload of a draft of a journal article that has not yet been peer reviewed. The noble ideal behind pre-prints is similar to that behind open access: let the public in on the arduous process of peer review, and engage the “larger scientific community” in aspects of peer review. Nice.
However, this means that a huge number of articles that have been accepted for review by other scientists are now available to the public and to science journalists prior to peer review. For example, my co-authors and I currently have an article in pre-print that has been submitted to the journal Ecology and Evolution. It has been assigned a “DOI” (digital object identifier), and the pre-print will always be accessible through that DOI. If the article passes a couple of rounds of reviewer comments and is accepted for publication, the peer reviewed, officially published article will have a different DOI.
But here’s the thing: it was accepted and put into pre-print after only a cursory editorial check to be sure it met journal formatting guidelines and was worth sending to reviewers. Sometimes articles get what are called “desk rejections,” where they are not even accepted for review. But in the case of our current pre-print, as you can see from the history at the top of the web page, it was submitted in January, returned to me with extensive suggestions for major revisions in March, and then sent back for a second round of review on April 27. If you merely click on the pre-print, you don’t see the reviewer comments, and you have no idea what flaws in my original article I am now endeavoring to correct, thanks to the arduous process of peer review. If you squint, you can see, in light grey type, “This is a pre-print and has not been peer reviewed. Data may be preliminary.” Personally, I think the disclaimer should be much stronger. If it were up to me, I would change it to “This is not even real science yet. Until it is peer reviewed, it is about as reliable as your unhinged ranting uncle. DO NOT report results from this study at all. If you feel you must, please do so with extensive caveats.” And I would say this about *my own* pre-print.
So, circling back to what this has to do with COVID-19, and understanding scientific publishing during the emergency of a pandemic:
Here’s a list of suggestions for encountering new studies: follow the links, check out a few basic aspects of the science, and put the information in context when you post a link on social media. I’ll use a specific recent article that got some press as an example: a non-peer reviewed pre-print of a study suggesting that eyes are a transmission pathway for SARS-CoV-2. I saw some examples of journalists writing about it posted on Facebook. Note that this was an article written for the employment-advice website TheLadders, perhaps by an author with little to no science background. Note the sensationalism of the headline in particular: “COVID-19 found to be spread through eyes and is 100 times more infectious than SARS.” In that link, you’ll see the following phrase: “a new study published in bioRxiv and affiliated with the Department of Pathology, Johns Hopkins University School of Medicine has revealed that the pathogen can even be transmitted through the eyes. Even worse, SARS-Cov-2, which causes the COVID-19 infection, is almost 100 times more effective at infecting the eye as well as airways than SARS.” Notice that the author doesn’t even mention that the “new study” is in fact a pre-print and has not yet been peer reviewed. Which leads to the A-number-one suggestion:
- Always find out if a “new study” is actually peer reviewed and published, or is only a pre-print. Many journalists mention that the study they are referencing “has not yet been peer reviewed” or that it is a pre-print. I think a lot of people, including journalists and even some scientists, fail to realize how significant that really is. In science, if it hasn’t been peer reviewed and revised, and officially published, it is simply not a publication, nor is it reliable. Yes, peer review has issues, which I won’t get into here. It is still largely a rigorous, working system of quality control. I personally never post links to any pre-print studies. Period. YMMV.
- If the article you’re reading has a link to the actual study (which it always should, although lately I’ve been finding that writers don’t even link to the original research), *always* click on that link and check out the original study.
- What are you looking for? A few key things. Some of this may seem incredibly dull, but you can usually scan very quickly and find some basics. How big a sample was involved in gathering the data? In the case of the eye-transmission pre-print, one of the red flags is that the sample size is not mentioned in the summary of the methods. As a reviewer, I would suggest they state the sample size up front. If you have to dig to discover how many samples they had, that’s not the greatest sign. It turns out that they sampled 10 individuals. Their entire study is based on a *minuscule* sample size. This immediately casts the strength of their conclusions into question. It doesn’t mean the study sucks, but it calls for a low level of inference and cautious interpretation.
- Jump to the discussion, and take note of the usually somewhat cautious inference. The analysis they conduct finds that the necessary proteins to which SARS-CoV-2 is suspected to bind are present in all of their samples. The authors write: “In summary, the presence of ACE2 and TMPRSS2 in conjunctival and corneal epithelial cells supports the ocular surface as a secondary site of infection following respiratory tract, or possibly even as the initial portal of entry to an individual. Infection of ocular surface cells could lead to the eye as being an important carrier, with ocular virus shedding constituting a significant mechanism for infection of other individuals. Our study therefore highlights the importance of safety practices in the general community to prevent infection and spread (hygiene, face masks) and need for extra caution among ophthalmologists.” Note the words “possibly,” and “could.” But also note that the authors jump from “possibly” and “could” to the public health implications of their findings.
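Incidentally, the first suggestion above, checking whether a “new study” is a pre-print, can even be partially automated, since a DOI resolves to machine-readable metadata. Here’s a minimal Python sketch, assuming the Crossref metadata convention in which pre-prints are typed “posted-content” and peer reviewed papers “journal-article”; the `is_preprint` helper and the trimmed example records are my own illustration, not drawn from any study discussed here:

```python
# Sketch: classify a work's DOI metadata as pre-print vs. published article.
# Assumes the Crossref convention that pre-prints carry type "posted-content"
# (subtype "preprint") while peer reviewed papers carry "journal-article".

def is_preprint(record: dict) -> bool:
    """Return True if the metadata record describes a pre-print."""
    return record.get("type") == "posted-content"

# Example records, trimmed to the one field we inspect. A real record would
# come from resolving the DOI, e.g. via https://api.crossref.org/works/<DOI>.
preprint_meta = {"type": "posted-content", "subtype": "preprint"}
published_meta = {"type": "journal-article"}

print(is_preprint(preprint_meta))   # True
print(is_preprint(published_meta))  # False
```

None of this replaces reading the study, of course, but it shows how little effort the “is it actually peer reviewed?” check requires.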
Now, contrast the experience of taking even a quick look at the pre-print to the article in Ladders. I think, even without much or any science training, one can clearly grasp the difference.
When posting COVID-19 research, I’d recommend always mentioning if it is a pre-print, and adding the caveats found in the discussion section of the original article, which the writer of the secondary article has probably left out in order to build interest. If you read a lot of peer reviewed science articles, you’ll notice that the claims and conclusions are often far more tentative and hedged than how they are reported. I’m noticing that happening a lot with COVID-19, which makes sense, since we are all caught up in a daily storm of information and misinformation.
We ordinary consumers and promulgators of information play a central role in helping safeguard the reliability and relevance of well-researched science. The last thing our current situation needs is for good, peer reviewed, thoughtful, well-designed and well analyzed, clearly reported science to lose credibility, especially as we try to wade through a sometimes overwhelming flood of bullshit pseudoscience in a post-fact world.
Before you post, dig a little. Provide some caveats for your friends. Take seriously that a study is a pre-print, because it makes a world of difference. Create an atmosphere of inquiry and critical thinking. Avoid unintentionally spreading panic or misinformation. Wear a facemask, wash your hands, stay six feet apart, and assiduously fend off bullshit: the new requirements for a sane and happy life.