COMMENTARY
Year : 2019  |  Volume : 8  |  Issue : 11  |  Page : 3475-3479  

Impact factor: Mutation, manipulation, and distortion


1 Department of Microbiology, Govt. Doon Medical College, Dehrakhas, Patelnagar, Dehradun, Uttarakhand, India
2 Department of Pharmacology, People's College of Medical Sciences and Research Centre, Bhapur, Bhopal, Madhya Pradesh, India
3 Principal, Govt. Doon Medical College and, Professor and Head, Department of Surgery, Govt. Doon Medical College, Dehrakhas, Patelnagar, Dehradun, Uttarakhand, India

Date of Submission: 01-Jul-2019
Date of Decision: 22-Aug-2019
Date of Acceptance: 09-Oct-2019
Date of Web Publication: 15-Nov-2019

Correspondence Address:
Mr. Deepak Juyal
Department of Microbiology, Govt. Doon Medical College, Dehrakhas, Patelnagar, Dehradun - 248 001, Uttarakhand
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jfmpc.jfmpc_515_19

  Abstract 


Currently, there is an increased dependency on the impact factor in scientific research publications. Overzealous application of the impact factor to individual publications is sometimes detrimental to the growth of scientific authors, especially junior ones, through no fault of theirs. The inept and myopic application of the impact factor defeats the purpose of making a value judgment and has therefore been criticized by many learned authors. Eugene Garfield, the scientist who formulated the impact factor, regretted that it is not being used judiciously. A parallel is Alfred Nobel's invention of dynamite: instead of aiding human effort and endeavor, it was misused for human annihilation, which pained the scientist who gave it to the world. The authors reexamine the application of the impact factor to scientific manuscripts for rightful application of the value judgment.

Keywords: Predatory journals, Research integrity, Science Citation Index, Thomson Reuters


How to cite this article:
Juyal D, Thawani V, Sayana A, Pal S. Impact factor: Mutation, manipulation, and distortion. J Family Med Prim Care 2019;8:3475-9

How to cite this URL:
Juyal D, Thawani V, Sayana A, Pal S. Impact factor: Mutation, manipulation, and distortion. J Family Med Prim Care [serial online] 2019 [cited 2019 Dec 5];8:3475-9. Available from: http://www.jfmpc.com/text.asp?2019/8/11/3475/270922



“In 1955, it did not occur to me that the impact would one day become so controversial. Like nuclear energy, the impact factor is a mixed blessing. I expected it to be used constructively while recognizing that in the wrong hands it might be abused.”

– Eugene Garfield


  Background


The journal impact factor (JIF) has become an important indicator of the quality of research publications; when considering research impact, most treat the JIF as a barometer of research. Although it was intended as a measure of the quality of academic journals and never as a tool to evaluate individual scientists, the JIF has been increasingly misused in this way.[1] Researchers are often ranked on the basis of their publications in journals with a high IF, and in some countries, publication in a journal with an impact factor <5.0 is officially of no value.[2] Science ministries in certain countries offer cash rewards to scientists publishing in high-IF journals such as Nature, Science, and Cell.[3],[4] Thus, it has become imperative for scientists to publish their work in journals with a high IF. However, serious concerns have been raised about the use of the JIF as a surrogate marker for the quality of research, of individual articles, or of researchers themselves.[2],[5]

Impact factor calculation

The JIF was devised by Eugene Garfield in 1955 to help research libraries differentiate between journals when deciding which ones to subscribe to.[6] The term IF was first used in 1961, the Science Citation Index (SCI) was published in 1963, and the first ranking of journals on the basis of IF appeared in 1972. As part of the SCI and the Social Sciences Citation Index, Thomson Reuters began publishing the Journal Citation Reports (JCR) annually in 1975.[6] The JIF is a measure of how frequently the articles published in a journal are cited. The IF of a journal for a specific year is calculated by dividing the total number of citations received in that year by articles published in the journal during the preceding 2 years (numerator) by the total number of articles published in those same 2 years (denominator).[7] For example, an IF of 3.0 in 2015 reflects that, on average, the articles published in 2013 and 2014 were each cited three times during 2015 across all Thomson Reuters indexed journals.
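The 2-year calculation just described can be sketched in a few lines of Python. This is an illustrative sketch only; the citation and article counts below are made-up numbers chosen to reproduce the IF of 3.0 from the example above:

```python
def journal_impact_factor(citations: int, citable_items: int) -> float:
    """IF for year Y = citations received in Y by items the journal
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those same 2 years."""
    return citations / citable_items

# Hypothetical journal: its 2013-14 output (50 citable items) drew
# 150 citations during 2015 across Thomson Reuters indexed journals.
jif_2015 = journal_impact_factor(citations=150, citable_items=50)
print(f"2015 impact factor: {jif_2015:.1f}")  # 3.0
```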

Impact factor distortions

Scientific publications disseminate research findings and knowledge to improve life. These fundamental tenets have probably been forgotten in the IF mania, and misuse of the JIF is extensive. Based on the notion that a journal is representative of its articles, publishing in a high-IF journal is taken as evidence of an author's scientific achievement.[1] This, in turn, has led to IF-based assessment for appointments, allocation of research grants, and academic advancement of researchers.[8],[9] Researchers also tend to submit their manuscripts to journals with a high IF and are more concerned about “where they publish, rather than what they publish.”[10] The contradiction is evident: on the one hand we want our journals to attain international standards, yet when it comes to publishing our exemplary findings, we prefer international journals to Indian ones.[11] This makes it take even longer for our journals to gain recognition and improve their IF. The 2014 edition of the JCR contained 8474 science and technology journals from all over the world, including 98 from India. Of these 98 Indian journals, only two had an IF >2.0.[12]

Since the JIF is calculated over a period of 2 years after a journal is indexed in Thomson Reuters, a recently launched journal, or one not indexed in Thomson Reuters, cannot have an IF. Moreover, many peer-reviewed journals are not indexed in Thomson Reuters and therefore have no IF. Researchers dislike publishing their findings in journals with no or low IF. Taking advantage of this IF craze, many agencies have started allocating fake IFs to journals on a payment basis, which may resemble the original IF.[13],[14] These bogus IF agencies seem to be hand in glove with “predatory journals”[15] that display fake IFs prominently on their websites, and we all have our mailboxes filled with their e-mails soliciting manuscripts. The sole aim of these dubious journals is to earn from publishing fees. Owing to this demand-and-supply culture, budding researchers and even academic institutions fall prey to them. Worse, some researchers knowingly use sham publications and fake scientometrics for their academic advancement on the strength of poor-quality articles, posing a serious threat to academic standards and integrity.[15]

Impact factor manipulation

Over the years, critics have argued that the JIF, per se, may not reflect anything informative about the quality of empirical research.[9] It is not an appropriate metric to measure the scientific content of individual articles or a scientist's credibility, and if applied to individual researchers, publications, or grants, it exerts an increasingly detrimental influence on the scientific enterprise. There is a great degree of mutation and manipulation in the evaluation of IF, and we enumerate some of these in the following text.

Eugene Garfield, the inventor of the IF, never predicted that it would be used in the scientific community as a criterion for judging the quality of a scientist or for determining research grants. Unfortunately, the IF of a journal is not statistically representative of its individual articles, and Garfield himself reported a poor correlation between the IF of a journal and the actual citation rates of its articles.[6] Citations of many articles do not peak until after the second year of publication, beyond the brief window considered for calculating the IF.[16] In fact, Larivière and Sugimoto, in their six-point critique of the JIF, explain that a 2-year citation window can accidentally favor certain disciplines over others.[17] Moreover, the JIF can be skewed by the publication of more reviews (which tend to be cited more frequently) or by self-citation. A recent example is the journal Acta Crystallographica Section A: Foundations of Crystallography (pISSN 0108-7673), whose IF of 2.051 in 2008 jumped to 49.926 in 2009, rose further to 54.333 in 2010, and fell back to 2.3074 by 2014, the reason being a single review article that received a large number of citations.[7]
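Because the JIF is a mean, a single blockbuster article can dominate it, as the Acta Crystallographica episode shows. A small sketch with hypothetical citation counts makes the distortion visible by comparing the mean (which the JIF effectively is) with the median:

```python
from statistics import mean, median

# Hypothetical 2-year output: 100 ordinary articles cited 0-6 times each,
# plus one review cited 5000 times (the "blockbuster").
ordinary = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6] * 10
citations = ordinary + [5000]

print(f"mean (IF-style): {mean(citations):.1f}")  # 52.2
print(f"median article:  {median(citations)}")    # 3
```

Under these made-up numbers, the mean leaps to roughly 52 on the strength of one article while the typical article in the same journal is cited only 3 times, which is why such an IF collapses again once the blockbuster leaves the 2-year window.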

Journals are also under continuous pressure to raise their IF, which can lead to editorial misconduct. Sometimes a journal may request authors to include references from its own previous publications in order to inflate its IF.[18] Such practice was recently brought to light, where three Brazilian journals conspired to cite each other's published papers in a mutual effort to increase their JIF.[19] Moreover, some papers are cited multiple times for negative reasons and yet these negative citations contribute to improving JIF.[18] An article published in Science showed that many studies that have been proven to be fraudulent are not even retracted and continue to be cited.[20]

While calculating the JIF, only original papers and review articles are counted in the denominator, whereas citations to all published material (editorials, letters to the editor, news, book reviews, etc.), including original papers and review articles, are counted in the numerator. This significantly boosts the JIF. Interestingly, even reputed journals such as Nature and Science have been found to exploit this in order to boost their JIF.[21],[22] The continuous pressure to publish in high-IF journals also leads to “performance anxiety” among researchers, who may indulge in unethical publication practices (data falsification and fabrication).[23],[24] Such cases are mostly reported from countries where regulatory bodies demand that academic faculty regularly publish in high-IF journals. The pressure to publish creates a bias that discourages high-risk research and reduces the likelihood of unexpected breakthrough discoveries. Vannevar Bush commented nearly 70 years ago that “Basic research is performed without thought of practical ends…. Many of the most important discoveries have come as a result of experiments undertaken with very different purposes in mind.”[25]
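The numerator/denominator asymmetry described above can be made concrete. In the hypothetical sketch below, citations to front matter (editorials, letters, news) enter the numerator even though front matter never enters the denominator:

```python
def jif(citations: int, citable_items: int) -> float:
    return citations / citable_items

citable_items = 100             # original papers and reviews (the denominator)
citations_to_citable = 250      # citations received by those papers
citations_to_front_matter = 50  # citations to editorials, letters, news, etc.

symmetric = jif(citations_to_citable, citable_items)
as_computed = jif(citations_to_citable + citations_to_front_matter, citable_items)
print(f"symmetric counting: {symmetric:.1f}, JIF as computed: {as_computed:.1f}")
```

With these made-up numbers, counting front-matter citations lifts the figure from 2.5 to 3.0, a 20% boost without a single additional citation to the journal's research articles.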

Suggestions for reforms

Scientists' unhealthy obsession with the JIF has been widely criticized, yet many are trapped in this value system when submitting their own work or judging the work of others. A recent study by Madhan et al. pointed out that cumulative JIFs are still used as a criterion for prestigious awards such as the Tata Innovation Fellowship, the Innovative Young Biotechnologist Award, and the National Bioscience Awards for Career Development.[26] Similarly, the Indian Council of Medical Research routinely uses the average JIF as a measure of the performance of its various laboratories.

To stop JIF misuse, researchers should halt the relentless chase for IF and instead focus on the originality and quality of their research work. In this regard, the American Society for Microbiology (ASM) announced on 11 July 2016 that it would remove the IF from its journals and website, as well as from marketing and advertising, a move appreciated by many.[27] Of note, prestigious journals such as Nature, Science, The New England Journal of Medicine, and The Lancet existed and prospered for a long time, some even for centuries, before the advent of the IF.

The misuse of the JIF as a metric of an individual scientist's or article's importance has been decried in a consensus statement, the San Francisco Declaration on Research Assessment (DORA).[28] The aim of DORA is to end the practice of using the JIF as a valuation metric for individual researchers. The declaration states that “the impact factor must not be used as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion or funding decisions.” DORA makes one general and 17 specific recommendations, which are enumerated in [Table 1].
Table 1: Recommendations made by the San Francisco Declaration on Research Assessment (DORA)



To boost the growth of quality research in our country, the Indian National Science Academy released a policy statement on the Dissemination and Evaluation of Research Output in India.[29] This document elaborately discusses basic policy parameters such as promoting preprint repositories, incorporating quality peer review, minimizing interference from predatory journals and predatory conferences, categorizing and evaluating research effort, and rationalizing payment policies in the Indian scenario. However, such recommendations are yet to be executed in practice.

Over the years, several approaches have evolved to address the limitations of the JIF in the valuation of researchers and research publications.[30] Some are enumerated in [Table 2]. However, owing to their inherent lacunae, there is no one-size-fits-all set of metrics that can assess the credibility of researchers or their publications. Research organizations should be consistent about which valuation metrics they use while upholding the ethos and values of scholarly scientific publishing over the mere accumulation of publications in prestigious journals.
Table 2: Alternative and diverse evaluation metrics to measure the scientific impact




  Conclusion


Despite widespread recognition that the IF is being misused, the misuse continues and is likely to continue because of the diverse confluence of forces within the scientific community that encourage, promote, and perpetuate it. We submit that the JIF remains a relatively crude index for evaluating the quality of a journal, let alone its scientific content or the credibility of a researcher. Misusing it in this way not only affects the research scientists involved but may even discourage ethical research and hamper overall scientific progress. A comprehensive scientific evaluation of an article requires a multidimensional approach and is beyond the scope of a single metric such as the IF. While evaluating the performance of a researcher, academic administrators should focus on contribution and content rather than on publication venue. However, changing the existing culture will be slow, since researchers are so deeply entrenched in JIF mania that weaning them off its influence may take time. The removal of the JIF from websites, as done by the ASM, is a bold step in this direction and should be followed by others in right earnest. To err is human, but to correct is divine.

One must note that the traditional method of evaluation continues to be peer review: there is no substitute for reading the article itself when assessing the research worthiness of authors, rather than relying on the title of the paper, the title of the journal, or its IF.[36]

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Juyal D, Dhawan B, Thawani V, Thaledi S. Falling prey to an impact factor craze. Educ Health 2018;31:189-90.
2. Alberts B. Impact factor distortions. Science 2013;340:787.
3. Al-Awqati Q. Impact factors and prestige. Kidney Int 2007;71:183-5.
4. Franzoni C, Scellato G, Stephan P. Science policy. Changing incentives to publish. Science 2011;333:702-3.
5. Ha TC, Tan SB, Soo KC. The journal impact factor: Too much of an impact? Ann Acad Med Singapore 2006;35:911-6.
6. Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90-3.
7. Kapil A, Jain NC. Impact factor: Is it the ultimate parameter for the quality of publication? Indian J Med Microbiol 2016;34:1-2.
8. Stephan P. Research efficiency: Perverse incentives. Nature 2012;484:29-31.
9. Lahiry S, Sinha R, Thakur S. Impact factor: Does it really have an impact? Indian J Dermatol Venereol Leprol 2019;85:541-5.
10. Oh HC, Lim JF. Is the journal impact factor a valid indicator of scientific value? Singapore Med J 2009;50:749-51.
11. Ghosh S, Sinha JK. The need for rejuvenation of Indian biomedical journals. Indian J Med Res 2010;132:736-7.
12. Journal Citation Reports 2014, Science edition. Philadelphia: Thomson Reuters; 2014.
13. Beall J. Misleading metrics. Scholarly Open Access. Available from: https://www.scholarlyoa.com/other-pages/misleading-metrics/. [Last accessed on 2019 Mar 9].
14. Beall J. Predatory publishers are corrupting open access. Nature 2012;489:179.
15. Jain NC. Predatory journals. Indian J Med Microbiol 2015;33:426.
16. Adler KB. Impact factor and its role in academic promotion. Am J Respir Cell Mol Biol 2009;41:127.
17. Larivière V, Sugimoto CR. The journal impact factor: A brief history, critique, and discussion of adverse effects. In: Springer Handbook of Science and Technology Indicators. Cham (Switzerland): Springer International Publishing; 2018. p. 1-33.
18. Feetham L. Can you measure the impact of your research? Vet Rec 2015;176:542-3.
19. Van Noorden R. Brazilian citation scheme outed. Nature 2013;500:510-1.
20. Smith R. Commentary: The power of the unrelenting impact factor: Is it a force for good or harm? Int J Epidemiol 2006;35:1129-30.
21. Van Noorden R. The science that's never been cited. Nature 2017;552:162-4.
22. Schekman R. How journals like Nature, Cell and Science are damaging science. The Guardian; 2013. Available from: https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science. [Last accessed on 2019 Apr 11].
23. Juyal D, Thawani V, Thaledi S. Rise of academic plagiarism in India: Reasons, solutions and resolution. Lung India 2015;32:542-3.
24. Juyal D, Thawani V, Thaledi S, Prakash A. The fruits of authorship. Educ Health 2014;27:217-20.
25. Bush V. Science: The Endless Frontier. Washington, DC: United States Government Printing Office; 1945.
26. Madhan M, Gunasekaran S, Arunachalam S. Evaluation of research in India: Are we doing it right? Indian J Med Ethics 2018;3:221-9.
27. Casadevall A, Bertuzzi S, Buchmeier MJ, Davis RJ, Drake H, Fang FC, et al. ASM journals eliminate impact factor information from journal websites. J Clin Microbiol 2016;54:2216-7.
28. DORA: San Francisco Declaration on Research Assessment. American Society for Cell Biology; 2013. Available from: http://www.am.ascb.org/dora/. [Last accessed on 2019 Apr 21].
29. Chaddah P, Lakhotia SC. A policy statement on "Dissemination and Evaluation of Research Output in India" by the Indian National Science Academy (New Delhi). Proc Indian Natn Sci Acad 2018;84:319-29.
30. Saxena A, Thawani V, Chakrabarty M, Gharpure K. Scientific evaluation of the scholarly publications. J Pharmacol Pharmacother 2013;4:125-9.
31. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 2005;102:16569-72.
32. Bollen J, Rodriguez MA, Van de Sompel H. Journal status. Available from: http://www.arxiv.org/abs/cs.GL/0601030. [Last accessed on 2019 Apr 23].
33. Dellavalle RP, Schilling LM, Rodriguez MA, Van de Sompel H, Bollen J. Refining dermatology journal impact factors using PageRank. J Am Acad Dermatol 2007;57:116-9.
34. West JD, Bergstrom TC, Bergstrom CT. The Eigenfactor metrics: A network approach to assessing scholarly journals. College and Research Libraries 2010;71:236-44.
35. Costas R, Zahedi Z, Wouters P. Do "altmetrics" correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J Assoc Inf Sci Tech 2015;66:2003-19.
36. Gordon G, Lin J, Cave R, Dandrea R. The question of data integrity in article-level metrics. PLoS Biology 2015;13:e1002161.



 
 