Measuring Research Impact

An introduction to some commonly used metrics for determining the influence of published research.

Article Level Metrics

The widespread use of social media has prompted efforts to evaluate research influence by counting or scoring the ways in which a particular report is treated on these forums. Advocates propose that measures of social media activity surrounding a report should be used in addition to accepted methods, such as citation counting.

The terms "article level metrics" and "altmetrics" (for "alternative metrics") have been suggested to describe these approaches. Since many publishers have feature-rich websites that allow considerable activity on the part of page visitors, proponents claim it should be possible in principle, and not difficult in practice, to gather various measures of activity surrounding a research report.

These measures include counting how often an item is:

  • Viewed by visitors
  • Downloaded
  • Printed
  • Bookmarked
  • Mentioned in blog commentaries
  • Mentioned in Twitter messages or "Liked" on Facebook
  • Considered in email forums or discussion groups
  • Saved in reference management systems such as Mendeley or Zotero
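In principle, tallying such events amounts to simple counting per article and per event type. The sketch below is a minimal illustration of that idea; the article identifiers and event records are invented for the example, not drawn from any actual publisher's data.

```python
from collections import Counter

def tally(events):
    """Count each event type per article.

    `events` is an iterable of (article_id, event_type) pairs,
    e.g. views, downloads, tweets.
    """
    counts = {}
    for article, kind in events:
        counts.setdefault(article, Counter())[kind] += 1
    return counts

# Hypothetical event stream for two (invented) articles
events = [
    ("doi:10.1371/example", "view"),
    ("doi:10.1371/example", "view"),
    ("doi:10.1371/example", "download"),
    ("doi:10.1371/example", "tweet"),
    ("doi:10.1234/other", "view"),
]

tallies = tally(events)
print(tallies["doi:10.1371/example"]["view"])  # 2
```

Counting is the easy part, which is exactly the proponents' point; the harder questions, taken up below, concern what such counts mean.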

These measures are said to:

  • Be easier to make
  • Accumulate faster than is possible in the citation cycle
  • Provide quantitative and objective gauges of the activity surrounding the appearance of a report
  • And, by implication, help clarify, and even rank, the 'impact' of an article

PLoS METRICS EXPLAINED

PLoS has been adding alternative metrics reports for all the publications it releases. The publisher has prepared an editorial statement on its approach to interpreting Article Level Metrics data. Consulting it can give the reader a sense of what the publisher is seeking, as well as a good idea of the various limitations of this approach.

Other Implementations of Alternative Metrics

Other organizations have launched implementations of alternative metrics. Some of these are very new and so far (Oct. 2012) have little to offer beyond an attractive website.

  • Some are apparently intended to operate as fee-based subscription services.
  • Several may fail, or be absorbed by competitors. Others yet may still be launched.
  • Until these initiatives have had time to organize and deploy their particular methods, it is impossible to state which, if any, will provide useful service. 
  • While gathering some kinds of data may be technically easy, research will have to reveal what exactly is being collected and what it might mean. What is the value of a Facebook "Like" or a Twitter mention? Can these be "traded", i.e., does one Tweet equal five "Likes"?
  • Information science researchers will need to scrutinize the data collection methods and the strength of the interpretations put upon them.
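The "trading" question above can be made concrete: any composite altmetric implicitly assigns an exchange rate between event types. The sketch below makes such weights explicit; the weight values are arbitrary assumptions chosen for illustration, not an established equivalence between a tweet and a "Like".

```python
# Arbitrary, illustrative weights -- NOT an established exchange rate
# between event types; any real scheme would need empirical justification.
WEIGHTS = {"tweet": 5, "like": 1, "blog_mention": 10, "bookmark": 2}

def composite_score(counts, weights=WEIGHTS):
    """Sum weighted event counts; event types without a weight score zero."""
    return sum(weights.get(kind, 0) * n for kind, n in counts.items())

print(composite_score({"tweet": 2, "like": 5}))  # 15
```

Changing the weights reorders the ranking of articles, which is precisely why the interpretation of such scores, and not just their collection, needs scrutiny.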

Services offering alternative metrics include:

Summary

Attempts to use data derived from 'social media' as measures of research influence are intriguing efforts to refine and improve accepted methods, which are widely seen as unsatisfactory for various reasons.

These attempts may bring real improvement, or may simply generate more numbers and graphs.

Proposed new 'metrics' share a feature with those already in use: all measure 'activity' surrounding a particular research report, and that activity is taken as a surrogate for the report's scientific significance, which is not quite the same thing, and may not be the same thing at all.

It is understandable that funding agencies and academic administrators want some method of evaluating research and researchers. Public money should be spent to some end, and not all investigators are equally effective. It is also clear that the methods chosen should be 'objective', i.e., free from personal bias. However, mere quantification, however cleverly derived, does not of itself answer the question; judgment and insight will still be needed.