
Scholarly Impact in Chemistry

  • 28-08-2011 11:13am
    #1
    Closed Accounts Posts: 13


    Dear netters,

    The scholarly impact of academic articles in the social sciences is typically assessed using the impact factor, which provides a useful metric of scholarly quality at several levels of analysis. I'm particularly interested in the article level of analysis. So my question is this: beyond the number of citations, how is the scholarly impact of peer-reviewed chemistry articles assessed?

    If you could refer me to discussions of such matters in peer-reviewed articles, that would be most appreciated.

    Thanks in advance
    DrC


Comments

  • Registered Users, Registered Users 2 Posts: 861 ✭✭✭Professor_Fink


    Dr C wrote: »
    The scholarly impact of academic articles in the social sciences is typically assessed using the impact factor - which provides a useful metric of scholarly quality at a number of levels of analysis. I'm particularly interested in article level of analysis. So my question is this - beyond the number of citations, how is the scholarly impact of peer reviewed chemistry articles assessed?

    I'm a physicist, not a chemist, but I believe the experience is similar across most of the hard sciences, as we share funding agencies. Impact factor is widely used, but so are things like citations per unit time, total number of citations, etc. For comparing researchers the h-index is often used (there are also a bunch of variants on it).
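    The h-index mentioned above has a simple definition: a researcher has index h if h of their papers each have at least h citations. A minimal sketch of the computation (the function name and the sample citation counts are my own, purely for illustration):

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    # Sort citation counts from highest to lowest, then walk down the list:
    # at rank r, the paper has r-or-more citations iff its count >= r.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with citation counts [25, 8, 5, 3, 3]
# Three papers have at least 3 citations, but no four papers have at least 4.
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

    As the reply notes, the result depends entirely on which citation counts you feed in, so the same researcher can get noticeably different h-indices from Scopus, WoS, or Google Scholar.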

    That said, none of these are considered particularly good metrics within the community, and my experience is that researchers tend to regard them as an expedience adopted by funding agencies and HR. Even calculating the numbers correctly is near impossible, as different citation-counting services give widely different figures owing to incomplete coverage of the literature. In my own case I have seen counts vary by a factor of 10 or more depending on which site is used (Scopus, WoS, adsabs, Google Scholar, etc.). I can also say from my own experience that what is fairly widely regarded as my most important paper is only my 5th most cited, with nearly five times fewer citations than my most widely cited paper.

    Obviously you may think that using myself as an example is not a good idea, and you have no way to verify this as I use a pseudonym here. So let me point to Albert Einstein as another example. His most widely cited paper is the EPR paper (8394 citations according to Google Scholar), which is far more widely cited than any of the annus mirabilis papers (2473 cites for Brownian motion, 588 for relativity, and 621 for the photoelectric effect). The EPR paper is important, mostly to people doing quantum computing, but that attention is fairly recent; the other papers founded entire fields.

    As a comparison, Witten's paper "Anti de Sitter space and holography" has 10 times as many citations as any of Einstein's papers on relativity, which literally founded the field. Further, there are plenty of textbooks with more than 10k citations.

    Part of the reason for this is that the most important and fundamental results become widely known not just through citation but through other channels. Once an article becomes sufficiently well known (particularly once its results start to appear in textbooks), fewer people cite the original article; they instead cite review articles or textbooks, if they cite anything at all (many won't even bother with a citation, as they fully expect the reader to know exactly what they are referring to).

    Anyway, my point is simply that there is no simple way to assess the importance of an article, no matter how much policy people would like there to be one.

