Review
Overview of journal metrics
Kihong Kim1, Yeonok Chung2
Science Editing 2018;5(1):16-20.
DOI: https://doi.org/10.6087/kcse.112
Published online: February 19, 2018

1Department of Energy Systems Research and Department of Physics, Ajou University, Suwon, Korea

2Department of Social Welfare, Jangan University, Hwaseong, Korea

Correspondence to Kihong Kim khkim@ajou.ac.kr
• Received: January 21, 2018   • Accepted: February 12, 2018

Copyright © 2018 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and g-index are discussed. The limitations and problems of these metrics are pointed out. We should be cautious not to rely too heavily on these quantitative measures when evaluating journals or researchers.
A variety of metrics are used to indicate the standing and influence of scholarly journals. Most of these metrics are obtained by analyzing the citation data of journal articles. Among them, the impact factor is the best-known and most influential index. It is calculated by a very simple method, but it also has several problems. A number of other metrics have been proposed to correct these problems and provide more reliable estimates. In the present review, we introduce the definitions of several journal metrics, describe how they are calculated, and briefly discuss their characteristics and shortcomings.
The idea of the impact factor was proposed by Eugene Garfield in 1955 [1]. The Science Citation Index (SCI) was created on the basis of this idea in 1964, launching the quantitative evaluation of scholarly journals for the first time. The impact factor is announced annually in the Journal Citation Reports (JCR), which is currently managed by Clarivate Analytics, and is widely used by academic communities. Many related indices are also announced in the JCR.
Impact factor, 5-year impact factor, immediacy index, and impact factor without self cites
In a given year, the impact factor of a certain journal is defined as the average number of citations per item received in that year by the items published in the journal in the two previous years. More specifically, its definition is given by
 Impact factor of the journal J in the year X = A/B,
where A is the total number of citations in the year X received by all items published in the journal J in the years (X-1) and (X-2), and B is the total number of citable items published in the journal J in the years (X-1) and (X-2). Citable items include only papers and reviews and exclude errata, editorials, and abstracts. In the counting of A, however, citations to all items published in J are included.
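To make the definition concrete, here is a minimal Python sketch of the calculation; the function name and the citation and item counts are made up for illustration.

```python
def impact_factor(citations_in_year_x: int, citable_items_prev_two_years: int) -> float:
    """Impact factor in year X: citations received in X by items published in
    (X-1) and (X-2), divided by the number of citable items (papers and reviews)
    published in (X-1) and (X-2)."""
    return citations_in_year_x / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2017 to items published in 2015 and 2016,
# during which the journal published 200 citable items.
print(impact_factor(480, 200))  # 2.4
```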
The 5-year impact factor in the year X is similar to the ordinary (2-year) impact factor, except that it is calculated using the citation data of the 5 years from the year (X-5) to the year (X-1). This index is useful in academic disciplines where the number of citations is small or it takes some time for published results to be accepted by many researchers. The immediacy index, on the other hand, is calculated similarly to the impact factor, using the total number of citations received in the year X by all items published in the same year X. A large immediacy index means that the papers published in that journal are cited rather quickly.
Journal self-citation refers to the case where a paper published in the journal J is cited by another paper in the same journal. The JCR also announces the impact factor without self cites, which is obtained after excluding journal self-citations. If the difference between the impact factor and the impact factor without self cites is exceptionally large for a certain journal, that journal is sometimes excluded from the JCR list.
Cited half-life and citing half-life
The cited half-life is calculated using the number of citations received in the year X by all items published in a certain journal in all previous years. For example, let us suppose that the journal J received 1,285 citations in 2017. Table 1 shows the (hypothetical) number of citations and the cumulative percentage classified by the published year of the cited items. We find that the cumulative percentage crosses 50% between 2009 and 2008. If we assume that citations were distributed uniformly within each year and calculate, to one decimal place, the point at which the cumulative percentage reaches 50%, we find that the cited half-life is 9.1 years. This index measures how long the published content continues to be cited. In a similar manner, one can calculate the citing half-life using the papers cited by the journal J.
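The interpolation described above can be reproduced with a short calculation. The Python sketch below uses the hypothetical counts of Table 1 and assumes, as stated, that citations are spread uniformly within the year in which the 50% mark is crossed.

```python
# Citations received in 2017 by the journal's items, grouped by the published
# year of the cited items (2017 first; the last entry collects 2007 and all
# earlier years); the counts are the hypothetical values of Table 1.
citations_by_age = [23, 65, 147, 138, 58, 44, 51, 45, 68, 62, 584]

def cited_half_life(counts):
    total = sum(counts)
    cumulative = 0.0
    for years_back, n in enumerate(counts, start=1):
        previous = cumulative
        cumulative += 100.0 * n / total
        if cumulative >= 50.0:
            # Interpolate within the year in which the 50% mark is crossed.
            return round(years_back - 1 + (50.0 - previous) / (cumulative - previous), 1)

print(cited_half_life(citations_by_age))  # 9.1
```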
Median impact factor and aggregate impact factor
There is a problem with the impact factor in that it shows rather large variations among academic disciplines. For that reason, the JCR classifies journals based on the subject category and provides several metrics representing each category. The median impact factor is that of the journal placed precisely in the middle when the journals in a certain category are arranged in the order of their impact factors. When the total number of journals in the category, N, is an odd number, it is the impact factor of the [1+(N-1)/2]-th journal. When N is even, it is the average of the impact factors of the (N/2)-th and [1+N/2]-th journals.
The aggregate impact factor is obtained by dividing the total number of citations received in the year X by the items published in all journals of a certain category in the years (X-1) and (X-2) by the total number of citable items published in those journals in the same two years. Since the distribution of impact factors is highly skewed, the aggregate impact factor tends to be substantially larger than the median impact factor, as can be seen in Table 2. The aggregate immediacy index, the aggregate cited half-life, and the aggregate citing half-life are also provided in the JCR.
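A brief Python sketch of both category-level calculations may make the definitions clearer; all numbers are made up for illustration.

```python
def median_impact_factor(impact_factors):
    """Median impact factor of a category: the impact factor of the middle journal
    (odd N) or the mean of the two middle journals (even N)."""
    ranked = sorted(impact_factors, reverse=True)
    n = len(ranked)
    if n % 2 == 1:
        return ranked[(n - 1) // 2]
    return (ranked[n // 2 - 1] + ranked[n // 2]) / 2

def aggregate_impact_factor(citations_to_category, citable_items_in_category):
    """Citations in year X to items the category's journals published in (X-1) and
    (X-2), divided by the citable items those journals published in the same years."""
    return citations_to_category / citable_items_in_category

# Hypothetical category of five journals
print(median_impact_factor([4.1, 2.3, 1.7, 0.9, 0.4]))    # 1.7
print(round(aggregate_impact_factor(52140, 14800), 3))     # 3.523
```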
Problems of the impact factor and editorial ethics
As we mentioned already, there is a problem with the impact factor in that it shows large variations among academic disciplines. In Table 2, we show the aggregate impact factor, the median impact factor, the aggregate cited half-life, and the average number of citations per paper for several subject categories listed in the JCR in 2011 and 2013. We notice a trend that the impact factors are usually larger in the disciplines where more papers are cited on average and the cited half-life is shorter.
The impact factor is the arithmetic mean of the number of citations received by the items published in a certain journal. However, it is well known that the distribution of citation counts within a given journal is highly skewed, so the impact factor tends to overestimate the importance of individual papers. In other words, most papers are cited substantially less often than the journal impact factor suggests. Therefore, it is not appropriate to judge the quality of an individual paper or researcher based on the journal impact factor.
As competition among scholarly journals becomes stronger, some journal editors adopt policies that deliberately manipulate the journal impact factor. One ethically troubling practice is to induce authors to add journal self-citations. Publishing more review papers than necessary and deliberately scheduling papers with a higher chance of citation at the beginning of the year are similar practices. Such behavior arises because too much importance is attached to the impact factor, and it distorts the metric unfairly.
The Eigenfactor score and the article influence score were developed by Bergstrom et al. [2] to overcome the defects of the impact factor and have been provided by the JCR since 2007. The concept of the Eigenfactor is based on the theory of complex networks. Its calculation uses a method similar to the PageRank algorithm, which was proposed by Brin and Page [3] and has been used in the Google search engine. In order to calculate the Eigenfactor score, we first define a database consisting of N journals and construct an N×N matrix H, the ij component of which is given by
H_{ij} = \frac{Z_{ij}}{\sum_{k=1}^{N} Z_{kj}},
where Zij represents the number of citations in the journal j in the year X to the items published in the journal i during the five years from the year (X-5) to the year (X-1). Since journal self-citations are excluded in the calculation of the Eigenfactor score, all diagonal elements of the matrix Z are zero. Next, we define a vector a, called the article vector. Its i-th component, ai, is obtained by dividing the total number of papers published in the journal i during the 5 years from the year (X-5) to the year (X-1) by the total number of papers published in the whole database during the same period. In this kind of calculation, one needs to take special care of the dangling nodes and the dangling clusters. An example of a dangling node is a journal j that does not cite any of the journals in the database but whose papers are cited by other journals. Then the matrix elements Zkj are zero for all k. Since the j-th column of the matrix H is undefined, it is necessary to replace this column by a suitable vector. We define a matrix H*, which is obtained by replacing all columns corresponding to dangling nodes by the article vector a, and then introduce an N×N matrix P given by
P = \alpha H^{*} + (1-\alpha) \begin{pmatrix} a_1 & \cdots & a_1 \\ \vdots & & \vdots \\ a_N & \cdots & a_N \end{pmatrix},
where α is an appropriate constant and is usually selected to be 0.85. The journal influence vector, v, is defined to be the eigenvector corresponding to the largest eigenvalue of the matrix P. The i-th component of the vector v has the meaning of the weighting factor representing the relative importance of the journal i in the group of journals in the database. Finally, the Eigenfactor score of the journal i, Fi, is calculated using
F_i = 100 \times \frac{\sum_{j=1}^{N} H_{ij} v_j}{\sum_{i=1}^{N} \sum_{j=1}^{N} H_{ij} v_j}.
According to this definition, the sum of the Eigenfactor scores of all journals in the database is equal to 100. Since this quantity is not normalized by the number of papers published in a given journal, it tends to be larger for journals publishing a larger number of papers, all other conditions being equal. A useful characteristic of the Eigenfactor is that it allows journals belonging to different academic disciplines to be compared directly, because differences among disciplines are adjusted for in this metric. The article influence score Ii measures the influence of the individual papers published in the journal i and is defined by
I_i = \frac{0.01\, F_i}{a_i}.
This quantity can be used as an alternative to the impact factor. The mean article in the entire JCR database has an article influence of 1.
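As an illustration of the whole procedure, the following Python sketch builds H and H* from a small, made-up citation matrix, forms P with α = 0.85, obtains the journal influence vector by power iteration, and then evaluates the Eigenfactor and article influence scores. All numbers are hypothetical, and the code is only a schematic rendering of the method described above.

```python
import numpy as np

# Hypothetical 5-year cross-citation counts among four journals:
# Z[i, j] = citations appearing in journal j to items published in journal i.
# The diagonal (journal self-citations) is set to zero, as the method requires.
Z = np.array([[0., 12.,  4., 0.],
              [8.,  0.,  6., 0.],
              [3.,  9.,  0., 0.],
              [1.,  2.,  5., 0.]])   # journal 4 cites no one: a dangling node

papers = np.array([120., 300., 80., 40.])   # papers published in the 5-year window
a = papers / papers.sum()                   # article vector

col_sums = Z.sum(axis=0)
dangling = col_sums == 0
# H: column-normalized citation matrix (dangling columns left as zeros here).
H = np.divide(Z, col_sums, out=np.zeros_like(Z), where=col_sums > 0)
# H*: dangling columns replaced by the article vector a.
H_star = H.copy()
H_star[:, dangling] = a[:, None]

alpha = 0.85
P = alpha * H_star + (1.0 - alpha) * np.outer(a, np.ones(len(a)))

# Journal influence vector v: leading eigenvector of P, found by power iteration.
v = np.full(len(a), 1.0 / len(a))
for _ in range(200):
    v = P @ v
    v /= v.sum()

flow = H @ v                        # citation flow weighted by journal influence
F = 100.0 * flow / flow.sum()       # Eigenfactor scores, summing to 100
I = 0.01 * F / a                    # article influence scores
print(np.round(F, 2), np.round(I, 2))
```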
In this section, we review three journal metrics provided by the Scopus database, which are the CiteScore, the Source Normalized Impact per Paper (SNIP), and the SCImago Journal Rank (SJR).
CiteScore
The CiteScore is very similar to the impact factor. It is calculated from Scopus data and is defined as the average number of citations per item received by the items published in the journal in the three previous years, rather than the two previous years used for the impact factor. Another difference from the impact factor is that both the numerator and the denominator include all document types.
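A minimal sketch of this calculation, with made-up counts and following the three-year definition described above:

```python
def citescore(citations_in_year_x: int, documents_prev_three_years: int) -> float:
    """Citations received in year X by documents of all types published in
    (X-1), (X-2), and (X-3), divided by the number of those documents."""
    return citations_in_year_x / documents_prev_three_years

print(round(citescore(2650, 900), 2))  # 2.94
```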
SNIP
The SNIP was proposed by Moed [4] as a metric that adjusts for different citation patterns across different academic disciplines. This metric is provided in Scopus and can be used instead of the impact factor. The SNIP is defined as
 SNIP = RIP/RDCP,
where the acronyms RIP and RDCP stand for “raw impact per paper” and “relative database citation potential,” respectively. The RIP is the number of citations in the year X received by the papers published in a certain journal in the three previous years, (X-1), (X-2), and (X-3), divided by the total number of those papers. It is similar to the impact factor, except that a 3-year citation window is used and only citations of papers are counted; citations of errata and editorials are excluded. In order to define the RDCP, one first needs to define the DCP, the database citation potential. Consider the papers that, in the year X, cited the papers published in a certain journal in the three previous years, (X-1), (X-2), and (X-3), and examine their reference lists. Among these references, we consider only those published during the same 3-year period. The DCP is obtained by dividing the total number of those references by the number of citing papers. In this calculation, only citations of journals belonging to the database are counted and other journals are ignored. The RDCP is obtained by normalizing the DCP by the median DCP of the database.
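The following Python sketch strings these quantities together; the journal counts, the reference counts of the citing papers, and the median DCP of the database are all hypothetical.

```python
def snip(citations_to_journal, papers_in_journal, refs_in_window_per_citing_paper,
         median_dcp_of_database):
    """SNIP = RIP / RDCP, following the definitions sketched above.

    citations_to_journal: citations in year X to the journal's papers of (X-1)..(X-3)
    papers_in_journal: papers the journal published in (X-1)..(X-3)
    refs_in_window_per_citing_paper: for every citing paper, the number of its
        references published in (X-1)..(X-3) in journals of the database
    median_dcp_of_database: median DCP over all journals in the database
    """
    rip = citations_to_journal / papers_in_journal
    dcp = sum(refs_in_window_per_citing_paper) / len(refs_in_window_per_citing_paper)
    rdcp = dcp / median_dcp_of_database
    return rip / rdcp

# Hypothetical journal: 600 citations to 240 papers; each of its 150 citing papers
# carries 9 references inside the 3-year window; the database median DCP is 4.5.
citing_refs = [9] * 150
print(snip(600, 240, citing_refs, 4.5))   # RIP 2.5, DCP 9.0, RDCP 2.0 -> SNIP 1.25
```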
SJR
The SJR is provided by Scopus together with the SNIP [5]. It is calculated iteratively in the following manner. First, one introduces a vector S, which represents the relative importance of the journals belonging to the database of N journals; Si is the weighting factor of the journal i. In the first stage of the iteration, the values of Si are assigned arbitrarily; the final result does not depend on the choice of the initial values. In the next step, the updated values of Si are calculated using the formula
S_i \leftarrow \frac{1-d-e}{N} + e\, a_i + d\, \frac{\sum_{j=1}^{N} H^{*}_{ij} S_j}{\sum_{i=1}^{N} \sum_{j=1}^{N} H^{*}_{ij} S_j} \left( 1 - \sum_{k \in \text{dangling nodes}} S_k \right) + d\, a_i \sum_{k \in \text{dangling nodes}} S_k,
where the constants d and e are chosen to be d=0.85 and e=0.1, and the matrix H* and the article vector a are defined as in the Eigenfactor calculation, except that a 3-year citation window is used. The calculation is then repeated with the updated values of Si until all values converge. Finally, the SJR of the journal i is calculated using
\mathrm{SJR}_i = \frac{S_i}{A_i},
where Ai is the total number of papers published in the journal i during the 3-year period.
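A schematic Python implementation of this iteration, using a small made-up citation matrix and the constants d = 0.85 and e = 0.1, is given below; it follows the formula as reconstructed above and is meant only as an illustration.

```python
import numpy as np

# Hypothetical 3-year cross-citation counts: Z[i, j] = citations in journal j
# to papers published in journal i (diagonal zeroed, as for the Eigenfactor).
Z = np.array([[0., 10.,  3., 0.],
              [6.,  0.,  4., 0.],
              [2.,  7.,  0., 0.],
              [1.,  1.,  2., 0.]])   # journal 4 cites no one: a dangling node
papers = np.array([90., 200., 60., 30.])   # papers in the 3-year window
a = papers / papers.sum()                  # article vector

col_sums = Z.sum(axis=0)
dangling = col_sums == 0
H_star = np.divide(Z, col_sums, out=np.zeros_like(Z), where=col_sums > 0)
H_star[:, dangling] = a[:, None]           # dangling columns replaced by a

d, e = 0.85, 0.10
N = len(a)
S = np.full(N, 1.0 / N)                    # arbitrary starting values
for _ in range(1000):
    dangling_prestige = S[dangling].sum()
    cited = H_star @ S
    S_new = ((1.0 - d - e) / N
             + e * a
             + d * cited / cited.sum() * (1.0 - dangling_prestige)
             + d * a * dangling_prestige)
    if np.allclose(S_new, S, atol=1e-12):
        break
    S = S_new

SJR = S / papers                           # prestige per published paper
print(np.round(S, 4), np.round(SJR, 5))
```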
The h-index was proposed by Hirsch in 2005 [6] as a new metric for evaluating the ability of a researcher. This index is calculated using all citations received by the papers published by a specific researcher. If we arrange those papers in decreasing order of citations and h papers are cited at least h times, then the maximum such value of h is the h-index of that researcher. Since it is possible to assign an h-index to the group of papers published in a specific journal in a specific year, it can also be used as a journal metric.
Since the h-index is obtained from the total number of citations of each paper, it can only increase with time. It has the shortcoming that researchers with a small number of very influential papers receive low values. In order to correct this shortcoming, Leo Egghe proposed a modified index named the g-index. This index is defined as the maximum value of g such that the g most-cited papers in a certain group of papers together received at least g² citations. The g-index is never smaller than the h-index. In addition to the h-index, Google Scholar provides a metric named the i10-index, which is the number of papers authored by a certain researcher that have been cited at least 10 times.
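These three author-level indices are easy to compute from a list of citation counts; the Python sketch below uses hypothetical numbers.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers together have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def i10_index(citations):
    """Number of papers cited at least 10 times."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts of one researcher's papers
cites = [42, 18, 11, 9, 6, 5, 3, 1, 0]
print(h_index(cites), g_index(cites), i10_index(cites))  # 5 9 3
```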
In this review, we have surveyed the definitions and characteristics of various metrics used for the quantitative evaluation of scholarly journals. All of these metrics are obtained from the analysis of citation data. In addition to the metrics surveyed here, new kinds of metrics continue to be devised. More recently, interest in alternative metrics, or ‘altmetrics,’ which go beyond conventional citation analysis, has been growing rapidly. We emphasize, however, that no metric is perfect and that all metrics have limitations and problems. Therefore, we should not rely too heavily on quantitative measures when evaluating journals, papers, researchers, and institutions.

No potential conflict of interest relevant to this article was reported.

Table 1.
Number of citations received in 2017 and its cumulative percentage classified in terms of the published year of cited items
Published year of cited items: 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 and earlier
Citations in 2017: 23 65 147 138 58 44 51 45 68 62 584
Cumulative percentage: 1.79 6.85 18.29 29.03 33.54 36.97 40.93 44.44 49.73 54.55 100
Table 2.
The aggregate impact factor, the median impact factor, the aggregate cited half-life, and the average number of citations per paper for several subject categories listed in the Journal Citation Reports in 2011 and 2013
Subject category | Aggregate impact factor (2011, 2013) | Median impact factor (2011, 2013) | Aggregate cited half-life (2011, 2013) | Average number of citations per paper (2011, 2013)
Cell biology 5.760 5.816 3.263 3.333 6.9 7.2 53.4 55.0
Chemistry, multidisciplinary 4.738 5.222 1.316 1.401 5.9 5.6 40.9 44.6
Nanoscience & nanotechnology 4.698 4.902 1.918 1.768 3.8 4.1 35.5 39.1
Astronomy & astrophysics 4.242 4.462 1.683 1.676 6.8 7.0 49.3 53.2
Materials science, multidisciplinary 3.107 3.535 1.132 1.380 5.2 5.4 32.2 34.8
Physics, multidisciplinary 2.680 2.953 0.983 1.300 7.7 8.0 30.4 33.8
Engineering, mechanical 1.232 1.573 0.743 0.889 7.6 8.0 25.2 28.1
Mathematics 0.709 0.729 0.561 0.582 > 10.0 > 10.0 19.8 21.0
