Can apply to: Scholarly outputs such as journal articles, reviews, and preprints.
Metric definition: The number of times that a journal article or preprint has appeared in the reference list of other articles and books.
Metric calculation: Many citation databases use a combination of text mining and manual classification to build their lists of citations, based on the reference lists of the articles and books that they index. However, the scope of these databases varies: Web of Science is the most selective (in the number of journals and disciplines covered), while Google Scholar is the least selective (indexing a great deal of non-peer-reviewed content and many research output types). Outside of Google Scholar and Microsoft Academic, it is difficult to track citations to unpublished articles (preprints).
Data sources: Citations are mined from the references sections of articles published in a manually curated list of journals, or in the case of Google Scholar, from any domain identified as being scholarly in nature.
Appropriate use cases: There are many diverse reasons why scholars cite each other's work, so it is impossible to say that there is a single way citations should be interpreted. The closest one can get is to say that citations are a measure of influence among other scholars, and that influence can sometimes be negative (especially in the humanities). Citations to journal articles are generally better suited to the evaluation of STEM research, given the sparse coverage of humanities, arts, and social sciences research in most citation databases.
Limitations: One needs to consider the context of a citation to understand its true meaning. Many factors can affect citation counts, including database coverage, differences in publishing patterns across disciplines, citation accrual times, self-citation rates, the age of the publication, the observation period, and journal status.
Database coverage. Citation databases like Web of Science and Scopus are known to have limited coverage of humanities, arts, and social sciences research compared to the sciences, as well as limited coverage of local and specialized journals, especially those written in languages other than English.
Discipline-specific publishing norms. Differences in authorship norms between disciplines (some fields regularly have dozens of authors per paper, while others tend toward single-author papers) mean that citations cannot always measure the full extent of an author's contribution to a work. Citations also accrue at different rates across disciplines, depending on publishing volume and other norms. For example, a paper in oncology may accrue 10 citations in the first year after publication, while a paper in philosophy may take several years to accrue as many citations.
Self-citation rates. Author self-citation is a normal part of scholarly communication, but it can affect citation counts. Identifying the number of self-citations provides supplementary information about the citations themselves.
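One common way to identify a self-citation is to check whether a citing paper shares at least one author with the cited paper. The sketch below illustrates that idea on a toy in-memory representation; real databases match on disambiguated author identifiers (such as ORCID iDs) rather than the bare name strings used here, and the function name and example data are hypothetical.

```python
def self_citation_rate(cited_authors, citing_papers):
    """Fraction of citing papers that share at least one author
    with the cited paper (a simple overlap-based definition)."""
    if not citing_papers:
        return 0.0
    cited = set(cited_authors)
    # Count citing papers whose author set intersects the cited paper's.
    self_cites = sum(1 for authors in citing_papers if cited & set(authors))
    return self_cites / len(citing_papers)

# Hypothetical example: 2 of 4 citing papers share an author
# with the cited paper, so the self-citation rate is 0.5.
rate = self_citation_rate(
    ["Garcia", "Chen"],
    [["Garcia", "Lee"], ["Patel"], ["Chen"], ["Okafor", "Singh"]],
)
print(rate)  # 0.5
```

Reporting this rate alongside the raw citation count gives evaluators the supplementary context described above without removing self-citations outright.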
Age of publication. Citations are affected by the age of the paper: more recently published papers have had less time to accrue citations. Most papers “receive a growing number of citations to arrive at a peak somewhere between two and six years after publication before the citation count decreases, while some receive most of the citations within a year or two, others are cited constantly for a long period, and still others remain unmarked before a sudden wave of citations arrives seven or ten years afterwards”.
Observation period. If citations are counted only within a limited window of years after publication, the overall citation count for a publication may be lower than its all-time total.
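The effect of the observation period can be made concrete with a small windowing function. This is a minimal sketch, assuming citation data is available as a list of citing-publication years; the function name and sample years are illustrative only.

```python
def citations_in_window(citation_years, pub_year, window_years):
    """Count citations that fall within a fixed observation window
    starting at the publication year."""
    return sum(
        1
        for year in citation_years
        if pub_year <= year < pub_year + window_years
    )

# Hypothetical paper published in 2015, cited in 2016, 2017, and 2023.
years_cited = [2016, 2017, 2023]
print(citations_in_window(years_cited, 2015, 3))  # 2 (the 2023 citation
                                                  # falls outside a 3-year
                                                  # observation window)
```

A paper with a late "wave" of citations, as described under age of publication above, can look weak under a short window even if its lifetime count is substantial.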
Inappropriate use cases: Citation counts should never be interpreted as a direct measure of research quality, nor used as a measure of positive reputation for individual researchers. Citation counts should not be used to compare papers of different ages (i.e., publication years), types (e.g., articles, reviews), or subject areas. A metric better suited to this type of comparison is the Field Normalized Citation Impact.
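The general idea behind field normalization is to divide a paper's citation count by the average count for comparable papers (same field, publication year, and document type), so that papers from fast-citing and slow-citing disciplines become comparable. The sketch below shows that ratio in its simplest form; it assumes the baseline average is already known, and the exact baselines and aggregation rules differ across databases, so this is an illustration of the concept rather than any database's official formula.

```python
def field_normalized_impact(citations, baseline_average):
    """Ratio of a paper's citations to the average citations of
    comparable papers (same field, year, and document type).
    A value of 1.0 means the paper is cited at the baseline average."""
    if baseline_average <= 0:
        raise ValueError("baseline average must be positive")
    return citations / baseline_average

# Hypothetical comparison: a 30-citation oncology paper against a
# field baseline of 20 scores the same as a 3-citation philosophy
# paper against a field baseline of 2 -- both are 1.5x their field average.
print(field_normalized_impact(30, 20))  # 1.5
print(field_normalized_impact(3, 2))    # 1.5
```

Raw counts would rank the oncology paper far ahead; the normalized scores show both papers performing equally well relative to their own fields.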
Transparency: Citations are only as transparent as the availability of the citing article or book allows them to be. One may not always be able to read citations in context, given the prevalence of subscription journals to which reviewers are not guaranteed access. Most databases that report citations report the full list of citing articles, at the very least, linking through to full-text articles where possible (even if only for subscribing institutions).
Timeframe: In theory, it is possible to track citations to journal articles as far back as the advent of the scientific journal. While some coverage exists prior to 1900, coverage in Scopus and Web of Science is strongest from 1900 to the present.