Journal Impact Factor

Can apply to: Journals selected for indexing in Science Citation Index Expanded and/or Social Sciences Citation Index are eligible to have a Journal Impact Factor.

Metric definition: The Journal Impact Factor is a measure reflecting the annual average (mean) number of citations to recent articles published in that journal. A Clarivate Analytics essay describes the measure as a ratio, stating “The impact factor…is a measure of the frequency with which the ‘average article’ in a journal has been cited in a particular year or period. The annual JCR impact factor is a ratio between citations and recent citable items published.”

Metric calculation:  From Wikipedia: “In any given year, the impact factor of a journal is the number of citations received in that year by articles published in that journal during the two preceding years, divided by the total number of [research, proceedings, and review] articles published in that journal during the two preceding years.”
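The two-year calculation described above can be sketched in a few lines of code. The figures below are hypothetical; real counts come from the Web of Science Core Collection.

```python
# Sketch of the two-year JIF calculation (illustrative values only).

def journal_impact_factor(citations_this_year, citable_items_prior_two_years):
    """Citations received this year to items published in the two preceding
    years, divided by the number of citable items (research articles,
    proceedings articles, and reviews) published in those two years."""
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal: 250 citations in 2016 to items from 2014-2015,
# which published 60 + 65 = 125 citable items across those two years.
jif_2016 = journal_impact_factor(250, 60 + 65)
print(jif_2016)  # 2.0
```

A JIF of 2.0 is read as: the "average" recent article in this journal was cited twice in the year in question.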

There is a discrepancy between the numerator and denominator used to calculate each journal’s impact factor: the numerator counts citations for all items published in a journal, whereas the denominator is calculated based upon the number of “citable items” within the journal, defined by the creators of the Journal Citation Report (Clarivate Analytics) to be research articles, proceedings articles, and reviews.
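The effect of this numerator/denominator mismatch can be shown with hypothetical numbers: citations to non-citable items (editorials, letters, news items) still count in the numerator, while those items are excluded from the denominator, so the reported ratio sits above the citation rate of the citable items alone.

```python
# Hypothetical illustration of the numerator/denominator discrepancy.
# The numerator counts citations to ALL items; the denominator counts
# only "citable items" (research articles, proceedings articles, reviews).

citations_to_citable_items = 200   # citations to articles and reviews
citations_to_other_items = 50      # citations to editorials, letters, etc.
citable_items_published = 100      # denominator counts only citable items

jif = (citations_to_citable_items + citations_to_other_items) / citable_items_published
per_citable_item_rate = citations_to_citable_items / citable_items_published

print(jif)                    # 2.5 -- inflated by citations to non-citable items
print(per_citable_item_rate)  # 2.0 -- rate for citable items alone
```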

Data sources: Citation data used to calculate the JIF is sourced from all citations found in the Web of Science Core Collection.

Appropriate use cases: The JIF can be useful in comparing the relative influence of journals within a discipline, as measured by citations. As Haustein (2012) notes: “Mean citation rates, such as the impact factor, normalize citation counts by the number of documents which received them in order to enable comparison of periodicals of different output size…the impact factor was developed as a size-independent measure to compare journal impact for identifying and monitoring the most influential scientific periodicals.” Used appropriately and in conjunction with other metrics, the JIF can be useful in collection development decisions made by librarians. As with all metrics, the JIF should be presented with appropriate context.

Limitations: The JIF has been published annually since 1975, and an extensive literature on its characteristics, limitations, and common misunderstandings related to its use is available. Scholars should be aware of the following limitations for their own use. There is debate about the degree of transparency related to guidelines for the journals and article types selected for inclusion. Some claim there is not enough transparency while others feel that Clarivate’s documentation on the selection process is sufficient. A related debate surrounds the differing scope of the numerator and denominator used for calculating the Journal Impact Factor.

Authors have expressed concern with the broad application of the JIF, in part because it is not available for all journals (especially in the arts and humanities) and also because in some fields of the social sciences, arts, and humanities, the journal article is not the primary research product. The journals included in Clarivate Analytics’ Citation Indexes, used to calculate the JIF, are not necessarily representative of the articles published in a particular field or region,[1,2,3] or of those published in languages other than English.

The usefulness of the JIF varies by field. The JIF is influenced by disciplinary differences in how researchers cite and write (e.g., frequency of publication, typical number of citations per article, average length of article, etc.). Coverage varies by discipline, which makes context such as citation density and rates crucial for interpreting the JIF. Thus, field-normalized metrics are more appropriate for making comparisons between disciplines. Review articles have a higher average citation rate than other types of articles and will influence the net value of the JIF accordingly.

The JIF has been the subject of an ongoing and rather technical debate for its influence on editorial policies. For an introduction to these issues, we recommend reading Measuring Research: What Everyone Needs to Know by Sugimoto and Lariviere and a pre-print from the same authors on the history of the JIF.

Inappropriate use cases:

As a journal level metric, the JIF should not be used as an indicator for the quality of particular articles or authors. Put another way, the JIF is not statistically representative of (the citations to) individual articles and cannot summarize the quality of an author’s entire body of work.

As a retrospective measure of past citations to a journal, the JIF is not a good predictor of whether an individual article will be highly cited. Because the distribution of citations is skewed (relatively few articles receive most citations, sometimes described as “the long tail”), the JIF’s use of the mean rather than the median number of citations per article does not reliably indicate how many citations a typical article can expect to receive.
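The mean/median gap on a skewed distribution is easy to demonstrate with a made-up set of per-article citation counts, where one highly cited paper dominates:

```python
# Hypothetical per-article citation counts for one journal: a long-tail
# distribution where one paper receives most of the citations.
from statistics import mean, median

citations_per_article = [0, 0, 1, 1, 1, 2, 2, 3, 4, 86]

print(mean(citations_per_article))    # 10 -- pulled up by the one outlier
print(median(citations_per_article))  # 1.5 -- closer to a typical article
```

A mean-based metric like the JIF reports 10 here, even though the typical article in this set received only one or two citations.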

Caution should be exercised when using bibliometric indicators like the JIF without sufficient context. These indicators “should not be taken at face value and need to be interpreted against the background of their inherent limitations, such as the language of research papers.”

Available metric source: Journal Citation Reports

Transparency: The formula for calculating the JIF is public, though there is some debate about the transparency of how journals are selected for inclusion (see above). For those with subscription access to the JCR, a journal’s JCR listing includes a list of the counted items in the denominator. The citation data network and the summarized values used for all metrics are available to subscribers for download.

Website: https://clarivate.com/products/journal-citation-reports/

Timeframe: Recently, the JCR has been released about six months after the year in question. For example, JIFs for 2016 were released in June 2017.