
Research Impact Assessment and Metrics

  1. Research Assessment
  2. Journal Impact
  3. Article Impact
  4. Author Profiles
  5. Altmetrics
  6. Additional Assessments & Readings 

You can also watch the video from Scholarly Communications Librarian Brianne Selman to learn about some of the key concepts.


1. Research Assessment

Annual Activity Reports, reporting to funders, and Tenure and Promotion processes all typically ask academics to demonstrate their impact. Quantitative bibliometrics (such as citations at the article level or journal level) are often used in these cases as proxies for research quality: they are quick to gather, easy to understand, and may at first glance appear to be objective measures. Additionally, researchers may choose to automate some of these measures by signing up for author profiles, so that they can see data about their publications at a glance.

However, many of the assumptions and ramifications of these measures of “prestige” have been called into question by scholars, and they should not be used uncritically.

“When a measure becomes a target, it ceases to be a good measure” – Goodhart’s Law

Studies show that bibliometric targets, like the Journal Impact Factor, distort the way academics cite and use the work of their colleagues. The massive uptake and embracing of bibliometric targets has also deeply affected tenure and promotion processes. These approaches may, in fact, encourage poor scholarship by changing the incentives in academic publishing. There have been more than a few calls for changing the way we think about academic prestige, and the biases that become ingrained by our assessment practices, as well as the responsible use of metrics.

In addition to bibliometrics, scholars are encouraged to use alternative assessment and narrative approaches when describing their scholarly contributions, describing not just the labour of their research (writing, presenting, methods, etc.) but also its impact (effect on policy or practice, collaborations, invited talks, etc.).

Leiden Manifesto

First published as a comment in Nature in 2015, the Leiden Manifesto sets out ten principles that should shape research evaluation. There are video versions and multiple translations of the Manifesto available.

The Ten Principles are:

1) Quantitative evaluation should support qualitative, expert assessment. 

2) Measure performance against the research missions of the institution, group or researcher. 

3) Protect excellence in locally relevant research. 

4) Keep data collection and analytical processes open, transparent and simple. 

5) Allow those evaluated to verify data and analysis. 

6) Account for variation by field in publication and citation practices. 

7) Base assessment of individual researchers on a qualitative judgement of their portfolio. 

8) Avoid misplaced concreteness and false precision. 

9) Recognize the systemic effects of assessment and indicators.

10) Scrutinize indicators regularly and update them. 

DORA (San Francisco Declaration on Research Assessment)

The Declaration on Research Assessment was developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco. DORA has subsequently been signed by nearly 2,000 institutions and over 16,000 individuals.

The Declaration outlines a number of principles for funding agencies, institutions, publishers, metrics suppliers, and researchers. Running through these principles are three main themes:

  • the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
  • the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
  • the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

San Francisco Declaration on Research Assessment, emphasis ours


2. Journal Impact

If you are a scholar planning to publish a journal article, you will be looking for a reputable journal to provide the best exposure for your work. For the qualitative aspects of this problem see Assessing Publishers. The quantitative aspects are addressed by various forms of journal rankings, which attempt to estimate a journal’s relative importance compared with other journals in the same field.

Systems based on the rate at which a journal’s content is cited are all subject to certain inherent limitations and vulnerabilities:

  • Publication and citation practices vary among disciplines, making comparisons across different subject areas difficult.
  • Rates of citation are not necessarily stable over time, and can be affected by such factors as the rise of multiple authorship of papers.
  • Generally, no distinction is made between favourable and unfavourable citations, so they are not a sure indication of merit.
  • A single highly cited paper can affect the overall ranking.
  • Rankings may be affected by certain editorial practices, such as publishing numerous review articles.
  • Journal rankings can be manipulated by excessive self-citation, or by groups of authors colluding to cite each other’s work.
  • Overall rankings for a journal are unlikely to predict the quality or significance of a single article.

For an excellent paper outlining the limitations of bibliometrics, see Haustein S., Larivière V. (2015) The Use of Bibliometrics for Assessing Research: Possibilities, Limitations and Adverse Effects. In: Welpe I., Wollersheim J., Ringelhan S., Osterloh M. (eds) Incentives and Performance. Springer, Cham.

For all their imperfections, journal rankings are nonetheless still widely used as proxies for research quality.

Journal Impact Factor

The Journal Impact Factor originally started as a tool for librarians: it was an early way to decide which journals would get indexed, and to assist with journal purchasing decisions.

The University of Winnipeg Library presently does not subscribe to any database that includes the Journal Impact Factor (JIF), such as the Journal Citation Reports (JCR) from ISI, so only a brief explanation of it will be given here. The JIF is the most established indicator of journal ranking, and measures the frequency with which the “average article” published in a journal has been cited in a particular period. The default period is two years. For example,

2015 impact factor = (number of citations in 2015 of articles published in 2013-2014) ÷ (number of articles published in 2013-2014)

Five-year 2015 impact factor = (number of citations in 2015 of articles published in 2010-2014) ÷ (number of articles published in 2010-2014)
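
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python; the journal counts below are invented for illustration.

# Minimal sketch of the two-year Journal Impact Factor; all counts are
# hypothetical.
def impact_factor(citations_in_census_year, articles_in_window):
    # Citations received in the census year to articles published in the
    # preceding window, divided by the number of articles in that window.
    return citations_in_census_year / articles_in_window

# Hypothetical journal: 210 citations in 2015 to the 140 articles it
# published in 2013-2014.
print(impact_factor(210, 140))  # 1.5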

The JIF has been criticized for various limitations, such as the inclusion of citations to materials (such as editorials and letters) that are not counted in the denominator of the calculation formula, the inclusion of self-citations, and the fact that the quality of the citing source is not evaluated.

SCImago Journal Rankings

SCImago Journal Rankings (SJR) is a service similar to the Journal Citation Reports mentioned above. The site identifies top-ranking journals by subject field, drawing its data from citations in Elsevier’s Scopus database. The SJR addresses some of the criticisms directed at the Journal Impact Factor, as it includes more journals, limits self-citation, and weights citations according to the importance of the journal in which they were published, using an algorithm similar to Google’s PageRank.

SCOPUS Sources/ SNIP

SCOPUS Sources provide Elsevier’s CiteScore methodology, as well as SNIPs (Source Normalized Impact per Paper). CiteScores are similar to Journal Impact Factors, in that they are a ratio of citations to publications. The SNIP is a journal’s citation count per paper, divided by the citation potential of its subject field, so that a single citation is scored higher in subject areas where citations are less likely, and vice versa.
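
The normalization logic can be illustrated with a toy calculation; the simplified formula and figures below are ours, and the actual SNIP methodology is more involved.

# Rough sketch of field normalization in the spirit of SNIP: raw
# citations per paper are divided by a measure of how heavily the
# surrounding field cites, so one citation counts for more in a field
# where citations are scarce. All figures are invented.
def normalized_impact(citations_per_paper, field_citation_rate):
    return citations_per_paper / field_citation_rate

# Two journals with identical raw citation rates, in fields that cite
# at different rates:
print(normalized_impact(2.0, 1.0))  # low-citation field: 2.0
print(normalized_impact(2.0, 4.0))  # high-citation field: 0.5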

Eigenfactor

The Eigenfactor Project, founded in 2007, presently covers literature published from 1997 to 2015. The Eigenfactor gives a ranking for journals indexed by ISI in Web of Knowledge, based on the number of citations each journal’s articles receive, weighting citations that come from other influential journals more heavily, and excluding self-citations. The Article Influence score gives the average influence of a journal’s articles (the Eigenfactor divided by the number of articles in the journal), and is thus similar to the JIF. The site also provides data on journals’ Cost-Effectiveness.
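
The weighting idea can be sketched with a small power-iteration example. The three-journal citation matrix below is invented, and the real Eigenfactor algorithm adds refinements (such as damping and normalization by article counts) that are omitted here.

# Toy version of influence-weighted citation ranking: each journal
# passes its influence to the journals it cites, in proportion to how
# often it cites them, and the process is iterated to a fixed point.

# cites[i][j] = citations from journal i to journal j
# (diagonal entries, i.e. self-citations, are excluded)
cites = [
    [0, 8, 2],
    [5, 0, 1],
    [3, 4, 0],
]
n = len(cites)
out_totals = [sum(row) for row in cites]

influence = [1.0 / n] * n
for _ in range(100):  # power iteration
    influence = [
        sum(influence[j] * cites[j][i] / out_totals[j] for j in range(n))
        for i in range(n)
    ]

# Journals cited by other influential journals score higher.
print([round(x, 3) for x in influence])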

Google Scholar Metrics

While based on a metric originally designed to measure author impact (the h-index described below), the list of journals indexed in Google Scholar also serves as an indicator of the impact of journals, giving five-year h-index and h-median values.


3. Article Impact

Citation statistics can be as simple as a count of the number of times a single article has been cited, or as comprehensive as a metric that attempts to encapsulate the cumulative influence of an entire body of work.

Citation Counts

The impact of an individual article can be measured by the number of times it has been cited. Many databases, and a number of software applications, tally the number of times a specific article has been cited. Like journal metrics, citation counts are subject to certain inherent limitations:

  • Citation counts will vary from one source to another, depending on the number and type of sources indexed by a database.
  • Journal articles are the predominant form of publication of original research in the Sciences, but not in the Social Sciences and Humanities, and databases which do not index books or book chapters under-represent activity in those disciplines.
  • The literature of some disciplines is more heavily cited than others.
  • Few sources of citation counts make it possible to identify self-citations, or to distinguish unfavourable citations from favourable ones.

Citation counts appear in a number of databases, including:

  • Web of Science: A database of all of the Web of Science Citation indexes, including thousands of international journals in the Sciences, Social Sciences, and Arts & Humanities. Search for “lastname firstinitial” to find a robust suite of citation analyses – total citation count, broken down over time, discipline, location of the citation, and more. See also the Author Profiles section below for more info on claiming your MyResearcherID.
  • Google Scholar: In the list of search results, find the article you want citation counts for and look at the bottom of the entry for “Cited by,” followed by a number; click it to see each individual citing work. See also the Author Profiles section for more info on seeing your overall citations and citations over time.
  • Dimensions.ai: The Dimensions database searches millions of articles and provides free and open results and visualizations. Articles will include the citation count for that article, as well as Dimensions and Altmetric badges for each article. These badges provide useful summaries of citations, the relative weight of these citations according to your field, how recent the citations are, and the sharing of your work in news stories and/or social media.
  • PLOS One: Search results provide not only citation counts, but also the number of views and the number of shares on social media. Learn more about PLOS article-level metrics.

The h-index (Hirsch index)

The Hirsch index, more widely known as the h-index, is a measure of an author’s scholarly impact. The h-index reflects both the number of publications and the number of citations per publication. An author with an h-index of n has at least n papers that have each been cited at least n times; in other words, their work has at least n² citations in total. (A minimal sketch of the calculation follows the list below.) Limitations of the h-index include the following factors:

  • As with the JIF, the h-index is not a fair means of comparing authors across subject areas, as some disciplines naturally publish and cite more than others.
  • Because the h-index is calculated using the count of a researcher’s publications, and therefore reflects an author’s “scholarly age,” persons with shorter career spans are at a disadvantage, regardless of the importance of their discoveries.
  • An author’s h-index will be invalid if another researcher has the same name; for this reason, the only truly reliable means of deriving an author’s h-index is to use a list of publications provided by the author.
  • The h-index does not account for self-citations or negative citations, and may, therefore, misrepresent an author’s importance.
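
As noted above, here is a minimal sketch of the h-index calculation in Python; the citation counts used in the examples are invented.

# Compute an h-index from a list of per-paper citation counts.
def h_index(citation_counts):
    # Largest h such that at least h papers have h or more citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each cited 4+ times
# A single highly cited paper cannot raise the index by itself:
print(h_index([25, 8, 5, 3, 3]))  # 3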

Sources for the h-index include:

  • Google Scholar: note that this source only provides an author’s h-index if he or she has created a user profile. To set up a user profile, click on the link at the top of the web page for “My citations.” For the h-index of journals, see Google Scholar Metrics.
  • Web of Science: a database of all of the Web of Science Citation indexes, including thousands of international journals in the sciences, social sciences, and arts and humanities. To find an h-index, perform an Author Search, then on the search results page, click on “Create Citation Report.”

4. Author Profiles

There are numerous options out there for seeing your overall publications and citations as an author. Many of these profiles allow you to “claim” them, which gives you an opportunity to update, modify, and correct as needed.

Places to claim or view Author Profiles include:

  • ORCID: ORCID (Open Researcher and Contributor ID) is a persistent personal identifier. It allows you to claim and collate publications, grants, and more, regardless of name or affiliation changes. It is increasingly used in the funding process, and increasingly included in publications. Claiming your ORCID will allow you to import publications from various sources, including MyResearcherID (Web of Science, below), SCOPUS, Google Scholar (by importing a BibTeX record), and other publishers, or to add your own manually.
  • Dimensions.ai: If you have located an article of yours, you can click on your name to be given the option to see your Author Profile. No sign up is required, and you can view your publications, citations, and network of collaborators.
  • Google Scholar Profiles: Claim a Google Scholar account to see your citations. You can easily select which publications are yours, and verify with your UWinnipeg email account. After you have registered your profile and are logged in, the My Citations button (upper right-hand side on Google Scholar) will bring up a list, with citation counts, of all your publications discovered by the Google spider. The profile also shows citations over time and an h-index. Note that Google Scholar citations include grey literature such as Word documents and PowerPoints.
  • Web of Science ResearcherID: Publons, which manages the peer review process for many journals, has combined with Web of Science ResearcherID to provide one profile page that includes a publications dashboard for all of your works that appear in Web of Science, as well as your peer review and editorial work in Publons. Claim your ResearcherID in order to have a persistent identifier that stays with you, regardless of institution or name changes. Once your ID is created, you can manage and correct the publications that show on your profile, add publications from Web of Science by selecting “I Wrote This”, and connect your ID with ORCID. Your profile page will show cumulative citations, citations over time, citations by country or discipline, and more.
  • Semantic Scholar: Semantic Scholar uses AI to draw networks and connections between research. Searching for an author's name on Semantic Scholar will usually display one or more author names in a box at the top of the search results. Clicking on the box will bring you to an author page with stats such as the number of publications and h-index, and more detailed information about each work with links to more citation data. You can also claim your profile from your own author page.

5. Altmetrics

As researchers increasingly publish and communicate about their work through web-based environments such as blogs, social networks, and institutional repositories, some scholars have begun to question the primacy of journal-based metrics in the assessment of scholarship. Altmetrics track new indicators of significance and impact outside the academy, using the social web — download counts, page views, mentions in news reports, social media, and blogs — to measure and represent the amount of attention an article receives.

Altmetrics can provide a more immediate measure of impact, and can measure the impact of non-traditional publications, like datasets and code. Moreover, Altmetrics can provide the context and reasons for a citation.

However, Altmetrics only measure how much attention a publication receives, not its quality. Other limitations include the fact that some disciplines or subjects naturally receive more attention than others, and the fact that older publications are under-represented in social media.

Sources for altmetrics include:

  • Publish or Perish scrapes sources such as Google Scholar for citation data, and can provide an h-index and other calculations.
  • Altmetric for Researchers provides a free browser plugin that lets you see the Altmetric data for anything that has a DOI.
  • PLOS One provides the number of views, number of bookmarks, and number of shares in social media. Learn more about PLOS article-level metrics.
  • Social Science Research Network (SSRN) provides metrics such as abstract views, downloads, download rank. Note: You need to create a free SSRN account to view the references and citations.
  • Mendeley is a reference manager and academic social networking tool. Mendeley keeps readership statistics and download counts for articles.
  • Scholarometer is a browser extension for Google Scholar that can overlay a number of social or discipline-specific filters on search results.

6. Additional Assessments & Readings
