Transparency in science [1] is a problem (please hold your pitchforks until the end). I’m not concentrating here on the various problems transparency in science has [2], but on a problem it constitutes: how do we sort through a mountain of information to get to the parts we need, the parts which are as close to the raw data as we have time and expertise to assess? This is a problem. It’s not unsolvable (search engines have done a terrific job on similar problems outside of academia), and it’s not even new (the key questions of how scientists communicate what to whom haven’t changed), but it does need taking seriously.

Scientists are responsible for finding out new (true) [3] things about the world, and communicating those findings so that they can be used to make life better for people [4]. Here we’ll assume the first bit, finding out new things about the world, is being done properly: Nelson, Simmons, and Simonsohn (2012) provide a reasonable argument against transparency in science if this first condition isn’t met. Assuming, then, that we’re doing good science, to whom are we trying to communicate that science?

There are many different kinds of people to whom scientists should communicate their research; a non-exhaustive list might highlight:

  • fellow experts in our field, whose work will be influenced by what we find, who may be able to comment on the appropriateness of particular methodologies and analysis techniques [5], and who may be able to incorporate data we produce into their own work
  • colleagues in related fields, who may want to use analogous reasoning for their own areas, who might want to support arguments in their own research with our findings, or who might be interested in specific methodologies or analyses shared between fields
  • government, business, and other organisations, who might want to turn our findings into practical policies or technologies which will influence the lives of many people
  • journalists and professional science communicators, who can form a bridge between experts and non-experts
  • amateur members of the public and students, who will integrate our findings into a wider understanding of naturalistic mechanisms
  • hostile amateurs, who may have particular reasons for resisting our findings (e.g. members of anti-vaccination, climate change denial, flat-earth, or anti-genetically modified organism groups)

It is not necessary that all scientists communicate their findings to all of these (and other) target audiences [6]. It is important, however, that the findings of all scientists are communicated appropriately so that they can form part of the wider ecosystem of understanding and exploiting scientific knowledge.

Currently, specific solutions exist for most of these communication niches: seminars and preprints for immediate colleagues; lectures and academic papers for other scientists; reports for organisations; press releases [7] for journalists; public lectures and social media engagement for members of the public; and debates for hostile audiences. The many non-expert audiences may be underserved by a world where science sharing focuses less on interpretation and more on showing the data.

Even as a working scientist, I can approach academic articles with a variety of goals: to get a quick overview of a foreign area of research; to get a sense of how techniques are used in a given area; to understand the evidence underlying a particular phenomenon; to understand a particular piece of evidence; or to reanalyse a particular finding for myself (etc.). Only some of these goals are well served by moving academic publishing away from the current peer-reviewed journal article format.

The tempting solution to this is for academics to produce an assortment of communication outputs starting with well-annotated open data and ending with a 280-character summary interpretation of the findings, or as far in that direction as is appropriate. Which magic time tree the academics are going to harvest from in order to do this is not clear.

Practical solutions require some kind of trade-off: time spent communicating research is not time spent doing research, and it is not even necessary that the person who did the research do all of the communicating. If we consider the communication of science important, then we need to invest in it, and in the open and transparent future this means giving appropriately trained people the time to offer increasingly interpretation-heavy summaries of research at various levels of complexity for a variety of audiences. And we need to keep that work away from the metrics which have driven the sensationalisation of press releases.

In the ideal world we’ll have something like: open data and a detailed description of the methodology; which links to an analysis of the data answering a well-posed question; which links to an academic article summarising that process and interpreting the results in the context of a research area; which links to a broader overview of the research area that puts the findings in appropriate context for non-experts; which links to a simple summary of the phenomenon of interest; which gets advertised on social media [8]. We’ll need to invest in communication explicitly to achieve this.
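To make the shape of that chain concrete, here is a minimal illustrative sketch in Python. It is purely hypothetical (the OutputLayer structure, layer names, and audiences are my own invention, not an existing standard or tool), but it captures the intended property: each layer links to the more detailed layer beneath it, so a reader can start at whatever level suits them and drill down towards the raw data.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class OutputLayer:
        """One layer in the chain of communication outputs (hypothetical sketch)."""
        audience: str                               # who this layer is written for
        artefact: str                               # what gets published at this level
        links_to: Optional["OutputLayer"] = None    # the more detailed layer beneath

    open_data = OutputLayer("expert reanalysts", "open data plus a detailed description of the methodology")
    analysis  = OutputLayer("fellow experts", "analysis of the data answering a well-posed question", open_data)
    article   = OutputLayer("scientists in the field", "academic article summarising and interpreting that analysis", analysis)
    overview  = OutputLayer("interested non-experts", "broader overview putting the findings in context", article)
    summary   = OutputLayer("the general public", "simple summary of the phenomenon of interest", overview)
    post      = OutputLayer("social media followers", "short post advertising the summary", summary)

    # Walk from the most accessible layer down to the raw data.
    layer: Optional[OutputLayer] = post
    while layer is not None:
        print(f"{layer.artefact}  ->  written for {layer.audience}")
        layer = layer.links_to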


References

Nelson, L. D., Simmons, J. P., & Simonsohn, U. (2012). Let’s publish fewer papers. Psychological Inquiry, 23(3), 291–293. https://doi.org/10.1080/1047840X.2012.705245

Morgan Jones, M., Manville, C., & Chataway, J. (2017). Learning from the UK’s research impact assessment exercise: A case study of a retrospective impact assessment exercise and questions for the future. The Journal of Technology Transfer. https://doi.org/10.1007/s10961-017-9608-6

Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C. A., Davies, A., … Chambers, C. D. (2014). The association between exaggeration in health related science news and academic press releases: Retrospective observational study. BMJ, 349, g7015. https://doi.org/10.1136/bmj.g7015

Notes

  1. Also other areas of academia, but science is my focus here. 

  2. Examples include: protection of private data; management of large datasets; accreditation of reused data; abuse of transparency mechanisms (e.g. spurious Freedom of Information Requests to climate scientists). 

  3. Perhaps we’ll tackle philosophy of science another time; if ‘true’ bothers you, perhaps you’ll allow me ‘increasingly predictive of unobserved data’.

  4. I include improvements to life from technologies and policies as well as the broader improvement that comes from an understanding of the mechanisms of the world and our place in it. 

  5. It is important to the process of science that the results are tentative, and subject to reinterpretation by a better theory or refutation by an appropriate quantity and quality of new evidence. 

  6. The insistence that every scientific discovery be communicable to every taxpayer is as unhelpful as the insistence that every scientific project indicate in advance the practical impact it will have (Morgan Jones et al. 2017). 

  7. Which have their own problems (Sumner et al. 2014). 

  8. The same structure applies to videos: a 7-second GIF sometimes says more than a 20-minute mini-lecture, especially to some audiences.