Why measure the impact of a research infrastructure?
To get a first idea of why measuring the impact of a research infrastructure could be useful, watch the following video introducing the Impact Playbook:
Verwayen et al. (2017) for Europeana
Along with the recent shift in how individual researchers acknowledge and cite data and software, and with the evolution of the infrastructural landscape, there is a growing interest among stakeholders (e.g. funders) in quantitatively measuring the success or impact of research infrastructures. As DARIAH-DE (2017) states:
“Digital research environments in the arts and humanities have to deal with the question what value they provide for the scientific community and how they should make the best use of their granted money.”
However, although much requested by stakeholders, measuring impact is not an easy task. First of all, the particularities of each infrastructure and its ecosystem have to be taken into account. This means, among other things, that one has to be clear about what kind of impact is to be assessed, why it should be measured, and from whose perspective (cf. Tanner 2012: 35). Aware of this challenge, Simon Tanner has developed an approach that, on the one hand, aims to be applicable to the impact measurement of any research infrastructure in the cultural, academic or creative field and, on the other hand, remains sensitive to the specific conditions, interests and perspectives in which each research infrastructure (and its assessment) is embedded. The approach is called the Balanced Value Impact (BVI) Model because it allows different perspectives on impact to be considered, values to be assigned to these perspectives, and the perspectives eventually to be balanced against each other. Besides elaborating what shall be assessed and why, the model also supports a systematic combination of qualitative and quantitative methods, as well as methods applied before a research infrastructure is built (ex ante) and after it has been created (ex post).
As the list of data-gathering and measuring techniques is quite long, and the choice of technique strongly depends on the research infrastructure, the kind of impact and the available data, the following paragraphs can only provide a quick glimpse of these techniques. For a detailed list of data-gathering techniques, please consult Tanner (2012) or follow the links to existing tools for the impact assessment of humanities research infrastructures in the ‘Further Learning’ section below.
How to measure the impact of a research infrastructure?
To get an idea of the impact of a research infrastructure, a first indication could be the number and behaviour of its users. The infrastructure EHRI, for instance, uses usability tests, which is possible because users can register with the portal. If the focus is on academic impact, however, the number of publications reporting research based on a research infrastructure might also be of interest. One could even go further, as Arjan van Hessen suggests in the video below, and take into account not only the credits given within research (products), but also the references to a research infrastructure made by other research infrastructures, or the appearance of research infrastructures in university curricula:
Besides the rather traditional metric practices of counting publications or citations, the (academic) impact of a research infrastructure can also be measured by applying altmetrics. In short, this means measuring how users of social media engage with a research infrastructure, for instance whether they “like” it, comment and tweet about it, or share information about it via social media channels.
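As a minimal sketch of what such an altmetric tally could look like, the following snippet counts engagement events per channel and per interaction type. The event records and channel names are entirely hypothetical; in practice the data would come from the (often restricted) APIs of the respective social media platforms.

```python
from collections import Counter

# Hypothetical engagement events for one research infrastructure,
# e.g. as exported from social media platform APIs.
events = [
    {"channel": "twitter", "type": "tweet"},
    {"channel": "twitter", "type": "like"},
    {"channel": "facebook", "type": "share"},
    {"channel": "blog", "type": "comment"},
    {"channel": "twitter", "type": "tweet"},
]

def tally_engagement(events):
    """Count engagement events per channel and per interaction type."""
    by_channel = Counter(e["channel"] for e in events)
    by_type = Counter(e["type"] for e in events)
    return by_channel, by_type

by_channel, by_type = tally_engagement(events)
print(by_channel)  # Counter({'twitter': 3, 'facebook': 1, 'blog': 1})
print(by_type)
```

Such raw counts are only a starting point: as the next paragraph notes, their interpretation depends heavily on data access and on whether the counted interactions are meaningful for the discipline in question.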
Both traditional and alternative metrics entail several challenges, first of all a lack of data (cf. European Commission Expert Group on Altmetrics 2017: 12). As has repeatedly been deplored, if the use of a research infrastructure is mentioned at all, it is not mentioned in a consistent way and is therefore not reliably detectable. This lack of data also holds true for social media reactions to a research infrastructure, although here it stems mainly from the lack of (free) access to the relevant data. Other problems arising in connection with altmetrics are that they might not be suitable for every discipline and that they are not as prestigious as traditional metrics, which, for their part, are doubted to display academic impact adequately (cf. ibid.). Watch the video of Toma Tasovac below to learn more about these doubts:
What can be derived from the last paragraph is that the impact assessment of humanities research infrastructures is still open to further research and development, and a number of problems remain to be solved. The inconsistent or non-existent way of giving credit to a research infrastructure could itself be an important impact area for research infrastructures, as they could contribute to changing the citation behaviour of their users.
How to encourage researchers to give credit to a research infrastructure?
Perhaps a good starting point for answering this question is to ask what discourages researchers from giving credit to a research infrastructure. One reason could be that the relevant metadata for citing a research infrastructure are missing. Another could be that the metadata are indeed findable, but one does not know how to arrange them in a reference. To overcome these and other barriers to giving credit (especially to software and data developers), FORCE11 has created the FAIR Data Principles, whose application now needs to become part of researchers’ citation habits. This is where research infrastructures could come into play. PARTHENOS, for instance, contributes to the further development and dissemination of the FAIR Principles by transferring them to research infrastructures and by creating training materials.
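To illustrate how low the second barrier becomes once the metadata exist, the following sketch assembles a reference string from a metadata record. Both the field names and the example record are purely illustrative assumptions, not a standard schema; an infrastructure could expose such fields so that a ready-made citation can be generated for its users.

```python
def format_reference(meta):
    """Assemble a simple reference string from citation metadata fields.
    The field names (creators, year, title, version, url) are illustrative."""
    creators = " / ".join(meta["creators"])
    return (f'{creators} ({meta["year"]}): {meta["title"]}. '
            f'Version {meta["version"]}. Available at: {meta["url"]}')

# Hypothetical metadata record for a research infrastructure
meta = {
    "creators": ["Example Infrastructure Consortium"],
    "year": 2017,
    "title": "Example Research Infrastructure",
    "version": "2.1",
    "url": "https://example.org/ri",
}

print(format_reference(meta))
# Example Infrastructure Consortium (2017): Example Research Infrastructure.
# Version 2.1. Available at: https://example.org/ri
```

The hard part, in other words, is not formatting but making the metadata findable and complete in the first place, which is exactly what the FAIR Principles address.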
If these or other measures turn out to have less impact than anticipated, it is time to think about ways of enhancing the impact of the research infrastructure. This topic will be touched on in the next sub-section.
- DARIAH-DE (2017): Impactomatrix. Available at: https://dariah-de.github.io/Impactomatrix/
- European Commission Expert Group on Altmetrics (2017): Next-generation metrics: Responsible metrics and evaluation for open science. European Commission (Directorate-General for Research and Innovation). Available at: doi:10.2777/337729
- Smith, Arfon M. / Katz, Daniel S. / Niemeyer, Kyle E. / FORCE11 Software Citation Working Group (2016): Software Citation Principles. Available at: doi:10.7717/peerj-cs.86
- Tanner, Simon (2012): Measuring the Impact of Digital Resources: The Balanced Value Impact Model. Arcadia. Available at: https://www.kdl.kcl.ac.uk/fileadmin/documents/pubs/BalancedValueImpactModel_SimonTanner_October2012.pdf
- Verwayen, Harry / Fallon, Julia / Schellenberg, Julia / Kyrou, Panagiotis et al. for Europeana (October 2017): Impact Playbook. For Museums, Libraries, Archives and Galleries. PHASE I: Impact Design. Available at: https://pro.europeana.eu/files/Europeana_Professional/Impact/Impact%20playbook/Europeana%20Impact%20Playbook.pdf
- IMPKT Tools: These tools have been developed within the RI Europeana. Especially noteworthy are the tools and manuals ‘Change Pathway’ (“A method for connecting the activities and outputs of an organization, with outcomes experienced by the stakeholder”), ‘Pathway Builder’ (“Documents the Change Pathway and identifies the associated measurements”) and ‘Strategic Perspectives’ (“A tool which provides a strategic context for decision-making on what impact is to be measured and why that measurement is needed”).
- Impactomatrix: “The Impactomatrix is an interactive website that gives you the possibility to boost the impact of your digital tools and infrastructure components. Based on a selection of 21 areas of impact, corresponding factors can be chosen that will influence the specific area. Additionally, criteria are available which are helpful in measuring changes in the selected impact area.”