The widespread use of "social media" has prompted exploration into ways of evaluating research influence by counting or scoring how a particular report is treated on these forums. Advocates propose that measures of social-media activity surrounding a report be used alongside accepted methods, such as citation counting.
The terms "article-level metrics" and "altmetrics" (for alternative metrics) have been suggested to describe these approaches. Since many publishers run feature-rich websites that allow considerable activity on the part of page visitors, proponents claim it should be possible in principle, and not difficult in practice, to gather various measures of the activity surrounding a research report.
These measures include counting how often an item is viewed, downloaded, saved to a reference manager, cited, or discussed online. These measures are said to accumulate more quickly, and to capture a broader range of activity, than conventional citation counts.
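At bottom, the counting involved is a simple tally of activity events per article. A minimal sketch of that idea, assuming a flat event log with illustrative event names (not any publisher's actual schema):

```python
from collections import Counter

# Hypothetical event log for a single article; each entry records one
# visitor action. The event names here are illustrative only.
events = [
    "viewed", "viewed", "viewed", "downloaded",
    "saved", "cited", "discussed", "viewed",
]

# Article-level metrics, in this spirit, are per-type tallies of events.
metrics = Counter(events)

print(metrics["viewed"])      # 4
print(metrics["downloaded"])  # 1
```

The hard part, of course, is not the tallying but deciding what counts as an event and what, if anything, the totals mean.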
Each PLoS article features a tab labeled METRICS. Clicking the tab reveals a detailed display of activity pertinent to that particular contribution: how often the article was viewed, downloaded, saved, cited, and discussed.
The editors at PLoS urge readers to interpret these data with caution and are careful to describe how the various parameters are defined. These editorial caveats are important for users to read and understand.
PLoS has been adding alternative-metrics reports to all the publications it releases. The publisher has prepared an editorial statement on its approach to interpreting Article-Level Metrics data; consulting it gives the reader a sense of what the publisher is seeking, as well as a good idea of the limitations of this approach.
Other organizations have launched implementations of alternative metrics. Some of these are very new and so far (Oct. 2012) have little to offer beyond an attractive web site.
Services offering alternative metrics include:
Attempts to use data derived from 'social media' as measures of research influence are intriguing efforts to refine and improve accepted methods, which are widely seen as unsatisfactory for various reasons.
These attempts may bring real improvement, or may simply generate more numbers and graphs.
Proposed new 'metrics' share a feature with those already in use: all measure 'activity' surrounding a particular research report, and that activity is taken as a surrogate for the report's scientific significance, which is not the same thing at all.
It is understandable that funding agencies and academic administrators want some method of evaluating research and researchers: public money should be spent to some end, and not all investigators are equally effective. It is also clear that the methods chosen should be 'objective', i.e., free from personal bias. However, mere quantification, however cleverly derived, does not of itself answer the question. Judgment and insight will still be needed.