Measuring research impact
by Rebecca Sierig

How to measure research impact?

It is highly likely that, in order to gain a promotion or a permanent position within a university, a researcher will have to demonstrate the impact of their research, be it within their discipline or within wider society. But how can you measure the impact your research has had? What options are there to help you measure it?

Measuring academic impact

Bibliometrics

The traditional, quantitative way to measure academic impact consists in measuring research outputs, either by counting the number of publications or by counting the number of citations (cf. European Commission Expert Group on Altmetrics 2017: 8). These kinds of metrics are called bibliometrics.

H-index

One example of such a traditional metric is the so-called h-index, “the highest number of publications of a scientist that received h or more citations each while the other publications have not more than h citations each” (Schreiber 2008: 1513). If a researcher has, for instance, 3 publications, of which the first has been cited 50 times, the second 3 times and the third 2 times (50, 3, 2), the h-index is 2: the researcher has at least 2 publications which have each been cited at least 2 times. The h-index has been criticised because it advantages senior researchers and does not reflect “the impact of highly cited publications” (cf. European Commission Expert Group on Altmetrics 2017: 9). The latter criticism also holds true for the example above: although one publication has been cited 50 times, the h-index is only 2.
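
Because the definition is easy to misread, it may help to see the computation spelled out. The following Python snippet is a minimal illustrative sketch (not taken from any cited source): it sorts the citation counts in descending order and finds the largest rank h at which the h-th most cited publication still has at least h citations.

```python
def h_index(citations):
    """Return the h-index for a list of per-publication citation counts."""
    # Sort citation counts in descending order, e.g. [50, 3, 2].
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    # The h-index is the largest rank h at which the h-th most cited
    # publication still has at least h citations.
    for rank, count in enumerate(sorted_counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([50, 3, 2]))  # prints 2, as in the example above
```

Running it on the example above returns 2, which makes the criticism concrete: the 50 citations of the first publication contribute no more to the index than 2 citations would.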

Journal Impact Factor

Another example is the Journal Impact Factor (JIF), which, having been misused, has somewhat fallen into disrepute. Originally intended to measure the impact of a whole journal, it has been applied as an indicator of the impact of a single journal article (cf. Académie des Sciences / Leopoldina / Royal Society 2017), although outstanding research does not necessarily appear in a high-impact journal, nor is the impact of a journal a guarantee of the quality of the articles it contains.
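
For context, the standard two-year JIF of a journal for a year Y is the number of citations received in Y by the items the journal published in the two preceding years, divided by the number of citable items it published in those years. The sketch below uses hypothetical numbers (not from any cited source) and shows why the JIF characterises the journal as a whole rather than any single article:

```python
def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the citable items published in those years."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 400 citations in 2017 to its 2015-2016 items,
# of which there were 150 -> JIF of about 2.67. This average says nothing
# about how often any individual article among the 150 was cited.
print(round(journal_impact_factor(400, 150), 2))  # 2.67
```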

However, the problem of misuse or gaming does not only concern traditional metrics. Every metric has its weak points, even qualitative ones like peer review, and so do the less traditional metrics treated in the next paragraphs.

Usage metrics and alternative metrics

The report on “Next-generation metrics” (European Commission Expert Group on Altmetrics 2017: 9) names usage-based metrics and alternative metrics (so-called altmetrics) as less traditional metrics, while “[u]sage metrics are considered to lie between traditional and alternative metrics”.

Usage metrics

Considering usage, and not only citations, when measuring impact captures more occasions of impact: even when a publication is not cited, viewing, downloading or reading it is an occasion of impact that bibliometrics do not take into account. Furthermore, “[u]sage metrics are highly relevant for open-science, not only in terms of the usage of publications, but also for tracking non-traditional publications (posts, blogs) and for the re-use of open data or open software” (European Commission Expert Group on Altmetrics 2017: 9).

Alternative metrics

Altmetrics go further than usage metrics in the countable signals on the web they consider. According to the European Commission Expert Group on Altmetrics (2017: 10), they also comprise “mentioning, discussing, reading and using of scholarly information”. If you have published an article and share it via social media, altmetrics let you measure, for instance, who views, shares or likes it. Advocates of altmetrics further emphasise their potential as qualitative metrics:

“With altmetrics, we can crowdsource peer-review. Instead of waiting months for two opinions, an article’s impact might be assessed by thousands of conversations and bookmarks in a week. In the short term, this is likely to supplement traditional peer-review […]. In the future, greater participation and better systems for identifying expert contributors may allow peer review to be performed entirely from altmetrics.” (Priem / Taraborelli / Groth / Neylon 2010).

Other advantages of altmetrics over traditional metrics are that they “cover not only journal publications, but also datasets, code, experimental design, nanopublications, blog posts, comments and tweets; and are diverse, i.e. providing a diversity of signals for the same object” (European Commission Expert Group on Altmetrics 2017: 10, referring to Priem / Taraborelli / Groth / Neylon 2010). If you share, for instance, some data you have prepared for your research, a first draft of an article or the slides of a presentation you have given, then thanks to altmetrics you can already create measurable impact without having finalised a whole publication. By the way, a good place to store and share your data or publications is a digital research infrastructure.
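
To make this concrete, the sketch below collects a few such signals for a publication identified by its DOI. It queries the public Altmetric API; the endpoint and the response field names follow Altmetric’s public documentation as far as I can tell, but they should be treated as assumptions that may change, and other providers (e.g. PlumX) expose similar data differently.

```python
import requests

def altmetric_counts(doi):
    """Fetch a few altmetric signal counts for a publication by DOI.

    Endpoint and field names are assumptions based on Altmetric's public,
    rate-limited API and may differ or change.
    """
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return None  # no altmetric signals recorded for this DOI
    resp.raise_for_status()
    data = resp.json()
    return {
        "tweets": data.get("cited_by_tweeters_count", 0),
        "blogs": data.get("cited_by_feeds_count", 0),
        "news": data.get("cited_by_msm_count", 0),
        "readers": data.get("readers_count", 0),
    }

print(altmetric_counts("10.1000/example"))  # hypothetical DOI
```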

One more advantage of altmetrics named by the European Commission Expert Group on Altmetrics (2017: 10) is that they measure not only academic impact but also “broader societal impacts of scientific research”, as they take into account signals on the web that have not necessarily been created by academics but by a larger public or by stakeholders other than academic ones.

As you might have guessed, altmetrics, like every metric, have their downsides. The European Commission Expert Group on Altmetrics (2017: 12) highlights once more “the ease with which metrics-based evaluation systems can be gamed”, which also holds true for altmetrics: likes and downloads, for example, can easily be multiplied. Other problems arise when it comes to comparability, as social media channels are not equally common in different disciplines or countries (cf. ibid.). However, even if social media is used equally across a research community, that does not mean that metrics based on social media data will be taken seriously by this community, especially when “the underlying basis of altmetrics (e.g., sharing and liking behaviour, motivations for sharing, and types of users of social media platforms) is not yet well understood” (ibid.). Lastly, free access to the data needed for applying altmetrics (e.g. metadata about all the people who have liked a research output) is not guaranteed and depends on the social media platform (cf. ibid.).

To conclude, measuring academic impact is not an easy task and every metric has its pros and cons. When it comes to measuring societal impact, things get even more difficult, as will be explained below.

Video about metrics for research impact


Watch Arjan van Hessen (CLARIAH), Thorsten Ries (Ghent University), Toma Tasovac (DARIAH-RS), Esther de Smet (Ghent University), and Steven Krauwer (CLARIN ERIC) discussing the adequacy of certain metrics for measuring impact. You can watch the full interviews with Esther de Smet, Arjan van Hessen, Steven Krauwer and Toma Tasovac on the page Voices from the Community.

Measuring societal impact

A notion of societal impact as a change in society provoked by the outcomes of academic research makes this impact hard, but not impossible, to measure. Simon Tanner (2012), for instance, who shares this notion of impact, has developed a model, the Balanced Value Impact (BVI) Model, for assessing different kinds of impact, including societal impact. The BVI Model is also the basis for the Europeana Impact Playbook, “a step by step approach to help you identify your impact” (Europeana Foundation 2018), directed at researchers and staff working in the cultural heritage sector (for more information, browse the “Further Learning” section below).

Another project building on a notion of impact as change or influence is the Impactomatrix, developed by DARIAH. Although developed “to boost the impact of your digital tools and infrastructure components”, it can also be used as a tool for enhancing a researcher’s impact. If you want to know more about how you can use the Impactomatrix, you can watch the recording of a webinar held by two of its developers, Juliane Stiller and Klaus Thoden; you can find it in the “Further Learning” section below.

Nonetheless, the influences responsible for societal change are usually manifold and cannot be reduced to a single causal influence such as a particular research outcome. If societal impact is instead defined as “a recorded or otherwise auditable occasion of influence from academic research on an actor, organization or social process taking place outside the university sector itself” (LSE Public Policy Group 2011: 123), measuring it becomes more tangible. For individual researchers, the LSE Public Policy Group (2011: 249) recommends tracking their societal impact with an impacts file:

“Such a file would cover meetings, visits, interviews, phone calls and emails with outside organizations, and talks, seminars, lectures, training courses etc. along with details of the audience. Wherever possible, evaluative statements that speak of the influence on the external organizations or personnel could be compiled in a number of ways, essentially by asking them to record their views.”
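
The LSE Public Policy Group does not prescribe any particular format for such a file; one lightweight, machine-readable possibility (purely an illustrative sketch, with hypothetical column names) is a CSV log to which each occasion of influence is appended:

```python
import csv
from datetime import date

# Hypothetical column layout, loosely following the kinds of occasions
# the LSE Public Policy Group suggests recording.
FIELDS = ["date", "type", "external_party", "audience", "evaluative_statement"]

def log_impact(path, **entry):
    """Append one occasion of influence to a CSV impacts file."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # write the header only for a brand-new file
            writer.writeheader()
        writer.writerow(entry)

log_impact(
    "impacts.csv",
    date=date.today().isoformat(),
    type="training course",                      # hypothetical example entry
    external_party="Regional archive network",
    audience="25 archivists",
    evaluative_statement="Participants reported adopting the workflow.",
)
```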

If research is conducted as open research, the chance of receiving such evaluative statements is higher, as is the chance of getting an idea of the audience using the research. Research infrastructures play an important role in opening up research to the wider public and to stakeholders other than academic ones. How they could help you raise your impact will be treated next.


References

Useful Links

  1. IMPKT Tools: These tools have been developed within the RI Europeana. Especially noteworthy is the Impact Playbook, “a step by step approach to help you identify your impact”.
    • Fallon, Julia (October 2017): Impact tools & resources.

    • Verwayen, Harry / Fallon, Julia / Schellenberg, Julia / Kyrou, Panagiotis et al. for Europeana (October 2017): Impact Playbook. For Museums, Libraries, Archives and Galleries. PHASE I: Impact Design.

  2. Altmetrics: Read more about altmetrics in the following document:
    • European Commission Expert Group on Altmetrics (2017): Next-generation metrics: Responsible metrics and evaluation for open science. European Commission (Directorate-General for Research and Innovation). Available at: doi:10.2777/337729.

  3. Metrics Toolkit: “The Metrics Toolkit is a resource for researchers and evaluators that provides guidance for demonstrating and evaluating claims of research impact. With the Toolkit you can quickly understand what a metric means, how it is calculated, and if it’s a good match for your impact question.” (Champieux / Coates / Konkiel 2018)
  4. Software citation: Learn more about software citation and consult the following slides or papers
  5. Bibliometrics: A critical stance on bibliometrics as “a proxy for expert assessment” can be found in the following document:

Webinar

Here you can find the wrap-up and materials of the PARTHENOS webinar “Create Impact with your e-Humanities and e-Heritage Research” (February 2018, held by Juliane Stiller and Klaus Thoden).