JBC on Journal Ranking

Over the past two decades, there has been a marked shift in the way scientific journals are published and disseminated. Gone is the reliance on the print versions of most journals; everything is now available online. Journals have also multiplied, with highly regarded journals like Nature and Cell spinning off new publications in selected areas of research interest, such as genetics, medicine and metabolism. Increased competition for the publication of scientific research has led to an increased emphasis on determining the perceived "quality" or "status" of a specific journal. After all, scientists, like everyone else, want to publish papers in journals where their work is likely to have the highest impact. Thus, the ISI Impact Factor has achieved widespread use as a system for rating the quality of a journal and, by inference, the quality of the papers published in that journal. The Impact Factor has the advantage of being a very simple method of evaluating the relative importance of a journal: the number of citations, across a database of more than 6,000 scientific journals, to papers published in a given journal over a two-year period is divided by the number of articles that journal published over the same period. As one would immediately predict, any journal that publishes primarily review articles will be more highly ranked by this metric. Notably, the Annual Review of Immunology had the highest Impact Factor in 2005, with the Annual Review of Biochemistry ranking second. This raises the question of whether citations in reviews should, in fact, be included in the database used to calculate Impact Factors. After all, the actual data are in the research papers, not in the reviews. High Impact Factor journals, such as Science and Nature, publish letters, commentaries, and even retractions, and citations to all of these count in the numerator of the Impact Factor while the items themselves are excluded from the denominator.
This also raises the question of whether citations in letters, commentaries and retractions should be included in the database used to calculate Impact Factors. In addition, a key element in generating a high Impact Factor is selectivity. Since the denominator in the Impact Factor equation is the number of articles published over a two-year period, it pays for a journal seeking a high Impact Factor to be very selective about how many articles it publishes. This has led to a system of review in which manuscripts submitted to some journals are triaged rather than reviewed for their scientific content. It also leads to a system that favors areas of research considered "trendy" and excludes others that are not, with the effect of leaving a large segment of scientific research to journals with lower Impact Factors. For example, the Journal of Biological Chemistry, which publishes papers across broad areas of biochemistry, could increase its Impact Factor by excluding papers in sub-disciplines of biochemistry with low citation frequency.
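The arithmetic behind the selectivity point above can be sketched in a few lines. The figures in this example are entirely hypothetical, chosen only to show how a large, broad journal's ratio is diluted by its denominator even when its total citation count is far higher:

```python
# Illustrative sketch of the two-year Impact Factor calculation.
# All numbers below are hypothetical, not taken from any real journal.

def impact_factor(citations_to_prior_two_years, items_published_prior_two_years):
    """A year's Impact Factor: citations received that year to papers
    from the previous two years, divided by the number of articles
    published in those same two years."""
    return citations_to_prior_two_years / items_published_prior_two_years

# A highly selective journal: few papers, many citations per paper.
selective = impact_factor(3000, 200)    # 15.0

# A large, broad journal: far more total citations, but an even
# larger denominator, so a much lower ratio.
broad = impact_factor(24000, 6000)      # 4.0

print(selective, broad)
```

The same arithmetic shows why uncounted items (letters, commentaries, retractions) inflate the ratio: their citations enter the numerator while they never enlarge the denominator.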

All of this would be a matter of passing interest, were it not for the now pervasive and inappropriate practice of evaluating the quality of an individual's research, and hence promotions and appointments within institutions and the awarding of research grants, on the basis of the Impact Factor of the journals where the work is published. Why is it assumed that if a journal has a high Impact Factor, a research paper it publishes must also have a high impact (!)? This has the potential of greatly limiting the number of journals where scientists attempt to publish their research. There is an even more potentially damaging consequence of the over-reliance on the Impact Factor as an indicator of the quality of a scientific journal. In their desire to improve their Impact Factor, journals may change their mode of operation, lest they suffer the fate of becoming a "B" journal with a low Impact Factor. The Journal of Biological Chemistry has faced this problem over the past decade. The Journal has been in existence since 1905 and has always been a repository for a broad cross section of research in the biochemical sciences. Like the New York Times, it publishes "all the news that's fit to print," without consideration of its relative trendiness. As a result of this policy, the Journal has grown over the past 20 years in parallel with the growth of research in the biological sciences, to the point that today it is the world's largest and most cited journal. This is not, however, necessarily a good thing for the presumed status of the Journal; it may be highly cited, but in 2006 it ranked only 260th among the 6,164 scientific journals evaluated by Impact Factor metrics. Consequently, the leadership of the Journal of Biological Chemistry today faces the dilemma of being the victim of its own success. The result has been an effort to be more stringent and to accept fewer and, hopefully, better papers for publication.
However, nothing short of abandoning our traditional policy of a fair review for all manuscripts submitted is likely to change the Journal's Impact Factor substantially. For those of us who have grown up with the Journal of Biological Chemistry and consider it to be the first choice for our manuscripts, this is something of a "Faustian Bargain"; it is somewhat akin to the New York Times trying to emulate the New Yorker (not a good idea). Surely there are other, fairer ways to evaluate the quality of scholarly journals.

With this in mind, we were delighted to read the paper by Johan Bollen and colleagues, entitled "Journal Status", which appeared in Scientometrics in December 2006. It would have been easy to miss this article, except for the fact that it was the subject of a short piece in Nature. The paper describes a new metric for the assessment of journal quality based on PageRank, the algorithm named after Larry Page, one of its creators, and now used by the Google search engine to provide "order on the web". PageRank uses the Perron-Frobenius theorem to provide the relative ranking of various web sites. This theorem is now used to rank football teams, to generate complex schedules for professional sports teams and to establish tennis ladders. As Bollen et al. point out, "Google's PageRank algorithm computes the status of a web page based on a combination of the number of hyperlinks that point to the page and the status of the pages that the hyperlinks originate from. By taking into account both the popularity and the prestige factors of status, Google has been able to avoid assigning high ranks to popular but otherwise irrelevant web pages." In actuality, PageRank discriminates among web sites based on their relative "prestige" as compared to their "popularity."

When PageRank was applied to 5,709 scholarly journals, using the ISI citation database for 2005, the Journal of Biological Chemistry ranked first. Notably, review journals lagged far behind. Why does PageRank rate the Journal so much higher than does the Impact Factor? Part of the reason is that the Impact Factor is calculated simply on the basis of static citation rates and publication numbers, whereas PageRank is an iterative algorithm that converges upon what is termed a "stationary probability distribution." A more detailed description of PageRank and its application to assessing the quality of scholarly journals can be found in a recent article, 'Impact Factor PageRankled', in the July issue of ASBMB Today. We subscribe to the comments of Bollen et al. that "as an ever growing collection of scholarly materials becomes available on the web, and hence becomes searchable through Google and Google Scholar, our perception of article status (and hence of journal status) will change as a result of the PageRank-driven manner by which Google lists its search results. In the future, PageRank, not the ISI's Impact Factor, may very well start representing our perception of article and journal status."
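The iterative character of PageRank, and the way "prestige" flows through a citation network, can be illustrated with a toy example. The four-journal citation graph and damping factor below are our own illustrative assumptions, a minimal power-iteration sketch rather than the weighted variant Bollen et al. actually apply to the ISI data:

```python
import numpy as np

# Toy citation graph among four hypothetical journals:
# each journal maps to the journals it cites.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
journals = sorted(links)
n = len(journals)
idx = {j: i for i, j in enumerate(journals)}

# Column-stochastic transition matrix: M[i, j] is the probability of
# moving from journal j to journal i by following one citation.
M = np.zeros((n, n))
for j, targets in links.items():
    for t in targets:
        M[idx[t], idx[j]] = 1.0 / len(targets)

d = 0.85                      # damping factor, the usual assumption
rank = np.full(n, 1.0 / n)    # start from a uniform distribution
for _ in range(100):          # iterate toward the stationary distribution
    rank = (1 - d) / n + d * (M @ rank)

# "C" is cited by every other journal (including the well-ranked "A"),
# so prestige accumulates there; "D" cites but is never cited.
print({j: round(float(rank[idx[j]]), 3) for j in journals})
```

Unlike the Impact Factor's single division, the ranking here is defined only as the fixed point of the iteration: a citation from a highly ranked journal is worth more than one from an obscure journal, which is precisely the popularity-versus-prestige distinction drawn above.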

Vincent C. Hascall, Associate Editor, JBC

Richard W. Hanson, Associate Editor, JBC