Transparency Is the Key to Quality

A workshop held last June by the National Institutes of Health (NIH) Director's Office, Nature Publishing Group, and Science focused on the role that journals play in supporting scientific research that is reproducible, robust, and transparent. The “Principles and Guidelines for Reporting Preclinical Research” (http://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research) that emerged from the workshop have since been endorsed by nearly 80 societies, journals, and associations. 
 
Rigorous, objective peer review was widely acknowledged as the key to publication of high-quality science. The expert, dedicated members of the JBC Editorial Board provide an invaluable service to the community of JBC authors and readers by ensuring the rigor, reproducibility, and transparency of research reported in the Journal. Over the past year, the JBC Associate Editors have been working to ensure that JBC reviewing editors, and ultimately our readers, have the information they need from authors for rigorous evaluation of the scientific content of JBC manuscripts. This effort has led to extensive revisions of our Instructions for Authors (http://www.jbc.org/site/misc/ifora.xhtml) covering the reporting of experimental uncertainty, animal studies, biological materials, immunoblot data, and imaging results.
 
The JBC has identified three major gaps in overall data reporting. We expect that filling these gaps will have an immediate impact on improving transparency in our journal. These gaps, which will be readily recognized by much of the bioscience community, include the need for (i) more complete disclosure of experimental design and reporting of experimental uncertainty and reproducibility, (ii) improved statistical and graphical presentation of quantitative data, and (iii) revised guidelines for the presentation and quantitation of immuno (“Western”) blots.

Experimental Design and Reporting of Experimental Uncertainty and Reproducibility
Editors, reviewers, and the general JBC readership want to know about the reproducibility of, and the variation among, observations from multiple experiments. To this end it is critically important that the number of independent biological replicates, the number of technical replicates, and the number of repeated experiments be clearly distinguished and specified. Surprisingly, this simple rule is often broken or misrepresented. Sometimes, the value of N is, and can only be, one. An example is an experiment comparing drug treatments in a line of patient-derived stem cells: although there may be numerous technical replicates for such an experiment, and the experiment could be repeated several times, the patient is one independent biological sample. As another example, if six metatarsals were harvested from the front paws of a single mouse and cultured as six individual explant cultures, the number of biological replicates (again, the value of N) is equal to one, but with six technical replicates. The JBC now requires authors to include explicit information describing N and its value in the figure legends and to distinguish between the biological (independent) and technical replicates for each experiment. Greater variability is generally seen among biological replicates than among technical replicates.
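The counting rule above can be sketched in code. The following is a minimal Python illustration with hypothetical measurements (the explant values and dictionary layout are invented for the example): technical replicates are collapsed to one value per biological replicate before any between-group statistics, so N counts animals, not cultures.

```python
# Hypothetical example: each mouse (biological replicate) contributes
# six explant cultures (technical replicates). Statistics comparing
# groups must be computed on one value per mouse, so N = number of mice.
from statistics import mean

# One entry per mouse; each inner list holds that mouse's technical replicates.
cultures_by_mouse = {
    "mouse_1": [0.82, 0.79, 0.85, 0.80, 0.83, 0.81],
    "mouse_2": [0.91, 0.88, 0.90, 0.93, 0.89, 0.92],
    "mouse_3": [0.76, 0.80, 0.78, 0.77, 0.79, 0.75],
}

# Collapse technical replicates to one value per biological replicate.
per_mouse_means = [mean(v) for v in cultures_by_mouse.values()]

N = len(per_mouse_means)                          # biological replicates
n_technical = len(cultures_by_mouse["mouse_1"])   # technical replicates per N

print(f"N (biological) = {N}, technical replicates per N = {n_technical}")
print(f"values entering between-group statistics: {per_mouse_means}")
```

A figure legend for these data would therefore report N = 3 mice, with six technical replicates each, not N = 18.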

Statistical and Graphical Presentation of Quantitative Data
Many experiments published in the JBC yield qualitative data that are not amenable to statistical analysis, e.g. electrophoresis, histology, chromatography, and electron microscopy. In these cases authors need to indicate clearly the number of independent replicates that the figure represents. When quantitative data are presented, however, appropriate statistical analyses are needed to portray experimental reproducibility and to support the interpretation that experimental manipulations yield significant differences. The JBC encourages the use of the 95% confidence interval (CI) as error bars because it is easier to interpret; the 95% CI is defined such that, upon repeated sampling of a population, 95 out of 100 such intervals will contain the true population mean. Use of the standard deviation (S.D.) or standard error of the mean (S.E.) is also permitted. Both the S.E. and the CI are inferential statistics, used to make inferences about the underlying population; when N = 3, the 95% CI is approximately the mean ± 4 S.E., but when N ≥ 10, the 95% CI is approximately the mean ± 2 S.E.
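The quoted relationship between the 95% CI and the S.E. follows from the t-distribution. A minimal Python sketch with hypothetical values (the critical values 4.303 and 2.262 are the standard two-sided t-table entries for N = 3 and N = 10, i.e. 2 and 9 degrees of freedom):

```python
# Sketch of the CI/S.E. relationship quoted above, on hypothetical data.
# 95% CI = mean ± t(0.975, N-1) × S.E.; the t critical values are
# standard t-table entries (4.303 for df = 2, 2.262 for df = 9).
from math import sqrt
from statistics import mean, stdev

def ci95(values, t_crit):
    """Return (mean, S.E., half-width of the 95% CI) for a sample."""
    n = len(values)
    sem = stdev(values) / sqrt(n)   # standard error of the mean
    return mean(values), sem, t_crit * sem

# N = 3: the CI half-width is ~4.3 standard errors.
m3, sem3, hw3 = ci95([10.1, 9.8, 10.4], t_crit=4.303)
print(f"N=3:  mean={m3:.2f}, S.E.={sem3:.3f}, CI half-width={hw3:.3f}")

# N = 10: the CI half-width shrinks to ~2.3 standard errors.
m10, sem10, hw10 = ci95(
    [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7, 10.1, 10.0], t_crit=2.262
)
print(f"N=10: mean={m10:.2f}, S.E.={sem10:.3f}, CI half-width={hw10:.3f}")
```

The practical point is that at small N the 95% CI is roughly twice as wide, relative to the S.E., as many readers assume from the large-sample "±2 S.E." rule of thumb.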
As a complementary measure to improve transparency graphically, the JBC now strongly encourages the use of scatter plots for small data sets (<30 independent samples) and box-and-whisker plots for comparing larger data sets. These plots, inclusive of appropriate error bars, provide more transparent information about the variability within the data than the ubiquitous dynamite-plunger plots (bar graphs) that have historically dominated scientific publications, including the JBC (1). Statistical analyses of variation and precision for establishing differences between experimental groups can be reported in the same plot, preferably using the S.D. or 95% CI. The JBC now requires authors to include specific information describing the experimental uncertainty and reproducibility of each data set in the figure legends.
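A box-and-whisker plot encodes a data set's five-number summary: minimum, first quartile, median, third quartile, and maximum. A minimal Python sketch with a hypothetical data set (quartiles computed by the median-of-halves convention, one of several conventions in common use):

```python
# What a box-and-whisker plot encodes: the five-number summary.
# Hypothetical data; quartiles use the median-of-halves (Tukey-style)
# convention, one of several quartile definitions in common use.
from statistics import median

def five_number_summary(values):
    s = sorted(values)
    n = len(s)
    lower = s[: n // 2]         # values below the median position
    upper = s[(n + 1) // 2:]    # values above the median position
    return min(s), median(lower), median(s), median(upper), max(s)

data = [4.1, 5.0, 4.7, 6.2, 5.5, 4.9, 5.8, 5.1, 4.4, 5.3]
lo, q1, med, q3, hi = five_number_summary(data)
print(f"min={lo}, Q1={q1}, median={med}, Q3={q3}, max={hi}")
```

Unlike a bar graph, which shows only a single height, these five values reveal the spread and skew of the underlying observations; a scatter plot goes one step further by showing every point.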

Presentation and Quantitation of Western Blots
Western blots have become a standard technology in the tool kit of most biology and biochemistry laboratories, particularly because commercial antibodies are now available for many proteins, even those that have barely appeared in the literature. The JBC requires users of Western blot technologies to define the species of origin and source of all antibodies used, including catalogue and lot numbers, in the "Experimental Procedures" section of their manuscripts. A description of the data supporting the specificity of all antibodies is required. In cases where novel antibodies are used, we ask authors to describe how the antibody was made, including preparation and purification of the epitope/antigen, and to provide data validating the specificity of the antibody. Wherever possible, data showing loss of immunoreactivity in samples following genetic or other molecular modifications to the antigen are a welcome addition to confirm monospecificity of the antibodies. The specificity of antibodies designed to detect post-translational modifications, e.g. methylation, oxidation, phosphorylation, glycosylation, or neoepitopes (2), should also be validated as appropriate and reported.
An increasing number of journals, including the JBC, do not allow surreptitious splicing of Western blots. If it is essential to remove lanes from an original blot for presentation purposes, then the splice positions must be clearly marked and explained in the figure legend. Of course, splicing together lanes from more than one blot is not allowed under any circumstances.
Authors should also be careful to avoid "overcropping" sections of Western blots for presentation in figures. Sufficient surrounding background regions should be retained, including the positions of at least one, and preferably more, molecular weight markers above and below the band of interest.
Quantitation of Western blots is not always required, but it can be fraught with traps for the unwary investigator and often sparks lively debate among scientists. It is not uncommon to "correct" Western blot signals for protein loading by normalizing to a second Western blot for a housekeeping protein, e.g. β-actin, α-tubulin, transferrin, GAPDH, or HPRT1. The problem with this approach is that a linear relationship between signal intensity and the mass or volume of sample loaded must be confirmed for every antigen. This is further complicated by the fact that some detection methods, in particular enhanced chemiluminescence using x-ray film, have a very restricted linear range, and careful attention to the experimental conditions is necessary to ensure linearity. It is typically better to normalize Western blots using total protein loading as the denominator (3–11). To avoid potential pitfalls and with a focus on improving transparency, the JBC strongly recommends that authors describe the methods used to quantify signal intensity, how the linearity of signal intensity with antigen loading was established, and how protein loading was normalized between lanes. We prefer that signal intensities be normalized to total protein by staining membranes with Coomassie Blue, Ponceau S, or other protein stains, and we strongly caution against the use of housekeeping proteins for normalization unless there is a clear demonstration that expression of the housekeeping protein is unaffected by the experimental treatments.
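The two checks recommended above, confirming that signal is linear in the amount loaded and normalizing each band to total protein in its lane, can be sketched as follows. This is a minimal Python illustration with hypothetical densitometry numbers; the Pearson correlation stands in for a fuller regression-based linearity assessment.

```python
# Hypothetical densitometry sketch of the two recommended checks:
# (i) linearity of signal vs. amount loaded, (ii) normalization of each
# antigen band to the whole-lane total-protein stain (e.g. Ponceau S).

def pearson_r(xs, ys):
    """Pearson correlation coefficient, used here as a crude linearity check."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# (i) Loading series: densitometry signal vs. micrograms of lysate loaded.
loaded_ug = [5, 10, 20, 40]
signal = [1100, 2050, 4150, 8300]
r = pearson_r(loaded_ug, signal)
print(f"linearity of loading series: r = {r:.4f}")

# (ii) Per-lane normalization: antigen band / total-protein stain intensity.
band_intensity = [8300, 6100, 9900]   # antigen band, one value per lane
total_protein = [4150, 3000, 5100]    # whole-lane total-protein stain
normalized = [b / t for b, t in zip(band_intensity, total_protein)]
print("normalized signals:", [round(v, 3) for v in normalized])
```

The linearity check should span the full range of loadings actually used in the experiment; a detection method that saturates (e.g. film-based chemiluminescence at high signal) will fail this check even for a well-behaved antibody.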
EDITORIAL: Transparency Is the Key to Quality • December 11, 2015 • Volume 290 • Number 50 • Journal of Biological Chemistry, p. 29693
Authors should be prepared to submit raw data showing original Western blots, or validating their reagents and quantitative analyses, during the review of a manuscript upon request from the reviewers or an Associate Editor. The JBC also reserves the right to digitally analyze all figures for undisclosed splicing of gels or other inappropriate image manipulation (http://www.jbc.org/site/misc/ifora.xhtml#manipulation).
It is worth reiterating a point made by other transparency advocates (12–15): while statistics is a necessary mainstay for data interpretation by clinical researchers, psychologists, and epidemiologists, whose conclusions depend wholly on statistics, the interpretation of data in papers published in the biological sciences, including the JBC, does not always require sophisticated statistical analyses. JBC papers are selected because they provide novel and important mechanistic insights into cellular or biological processes at the molecular level. To this end, the JBC requires diligent data reporting and transparency so that readers, reviewers, and journal editors can identify sound papers with reliable data.