June 23, 2017 | Dr John Harrison, Associate Editor, 'Regional Studies'

Impact is ‘the holy grail’

Getting the most out of metrics


Dr John Harrison is Reader in Human Geography at Loughborough University and is Associate Editor of the Taylor & Francis journal, Regional Studies. In his recent talk at the London Book Fair, he spoke about his need for metrics to help him in his roles as researcher, author and editor. We asked Dr Harrison to share his experience with other Editors and advise how they too could use this data to strengthen their journals.


Personally, I disagree with the widely circulated mantra of “publish or perish”. Today, you can still publish and perish, because there is now far more published work than there was 1, 2, 5, 10 or 20 years ago, and it is more accessible than ever before. In this publishing climate, the question for authors – and the one editors increasingly focus on – is: who is going to be interested (audience), and why (contribution)?


Audience

Published work which has an audience (i.e. is on a topic people are interested in) but no contribution will not have an impact. Likewise, work which has a very clear contribution but no audience (i.e. the author is the only person interested in this topic) will not have an impact. For authors to ensure their work has an ‘impact’, it must have an audience and it must make a contribution.

So, my advice is this: it is not enough simply to say “I set out to research this, this is how I did it, this is what I found”. Work that makes an impact does much more than this. It reaches out and engages its intended audience: it says “Here is a significant issue of broad relevance; this is my contribution; this is how it adds new knowledge and deepens understanding of the issue; and this is why it should be of interest to you”. In other words, all work that has an impact addresses the ‘so what’ question.

As editors, we often spot this from the title and the abstract. Both should make it very clear who the audience is (people interested in X) and what the contribution is (what readers are going to learn). So, unless the case is itself the exemplar, naming the example or case study in the title can instantly narrow the potential audience and contribution.


Citation data

Citation data is read in three main ways:

  • Firstly, people look at the raw number of citations that a publication or researcher has. The weakness here is you are often not comparing like-for-like. For example, the raw number does not account for time since publication of an output, or the number of years a researcher has been publishing and therefore accruing citations.
  • Secondly, people look at an author’s H-index. This is seen as a more rounded assessment, reflecting the breadth and depth of an author’s impact. Take two authors who have 1,000 citations each from 30 published outputs. On raw counts, both appear identical. But now imagine that Author A has an H-index of 20 (meaning 20 of their outputs have been cited at least 20 times), while Author B has an H-index of 12 (12 outputs have achieved at least 12 citations). Because the H-index measures both scientific productivity and impact, the former profile identifies an author who is more consistent in the impact of their published work, whereas the latter suggests an author who has the potential to achieve excellence but may be more inconsistent (see the short sketch after this list). Particularly in the early stages of an author’s career, the H-index can be a good indicator of a potential “one hit wonder”.
  • Thirdly, people look at author trajectory. This again can be important in assessing two authors with similar citation numbers or H-index scores, or two pieces of work with a similar number of citations. Clearly, if those citations have (or that H-index has) been achieved in half the time, this is a much more significant result.
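
To make the H-index comparison concrete, here is a minimal Python sketch. The citation profiles are invented purely to reproduce the worked example above (1,000 citations across 30 outputs, giving H-indices of 20 and 12), and the hypothetical career lengths at the end illustrate the trajectory point.

```python
def h_index(citations):
    """Return the largest h such that h outputs each have at least h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical profiles: both authors have 1,000 citations from 30 outputs.
author_a = [50] * 20 + [0] * 10                                         # consistently well-cited
author_b = [400, 300, 100, 50, 30, 20] + [14] * 6 + [2] * 8 + [0] * 10  # a few big hits

assert sum(author_a) == sum(author_b) == 1000

print(h_index(author_a))   # 20 -- twenty outputs cited at least 20 times each
print(h_index(author_b))   # 12 -- twelve outputs cited at least 12 times each

# Trajectory: the same record achieved in half the time is more significant.
years_a, years_b = 20, 10                  # hypothetical career lengths
print(sum(author_a) / years_a)             # 50.0 citations per year
print(sum(author_b) / years_b)             # 100.0 citations per year
```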

Nevertheless, it is also important to think beyond citations because this metric only measures impact among peers within the academic community. It does not, for example, capture the impact of published work on students (here the number of article downloads might provide a useful metric) or among the wider scientific community comprising both academics and non-academics (here metrics such as Altmetric are trying to capture the attention scholarly work receives in the press, on social media, in blogs etc.).


Challenges

The challenge is that citation data captures only the end point of a longer process. What we could refer to as Metrics 1.0 has focused on the link between output and impact. Metrics 1.0 has allowed those involved in publishing (authors, editors, publishers) to track how many times people have looked at (total views), read (total article downloads, book sales), and used (citations, Altmetric) scholarly output. This can be extremely useful for tracing why published work does or does not have the desired impact.


At this point we need to consider the rate of attrition that all authors and published work face. Put simply, the numbers fall markedly from those who look, to those who read, to those who use. What current metrics allow is an understanding of the stage at which authors and works lose people (and therefore the potential for impact). With this knowledge, Metrics 1.0 has meant that we are more likely to pinpoint where potential impact has been lost. The result is two burning questions for authors:

  1. Why do those who look not read?
  2. Why do those who read not use?

Moreover, these two questions have prompted advice and guidance along the following lines:

  • If people look but do not read there is an obstacle which is putting people off (most likely the title or abstract).
  • If people look and read but do not use, the work does not engage the reader (most likely because it fails to go beyond saying “I set out to research this, this is how I did it, this is what I found” – see earlier comment).

From this, we can derive that the way to improve ‘impact’ is to increase the number at the start of the process (looks, i.e. the number of people who are aware of the work) and/or to minimise the rate of attrition from looks, to reads, to uses (by improving titles and abstracts, tackling more important questions in our research, and so on). The former puts a premium on visibility; the latter a premium on quality.
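
To illustrate, here is a small Python sketch of that looks → reads → uses funnel. The counts are hypothetical, chosen only to show how stage-to-stage conversion rates reveal where potential impact is being lost.

```python
# Hypothetical engagement funnel for a single published output.
funnel = [
    ("looks", 5000),   # e.g. abstract or landing-page views
    ("reads", 800),    # e.g. full-text downloads
    ("uses", 40),      # e.g. citations
]

for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_n}/{n} = {next_n / n:.1%} carried through")

# looks -> reads: 800/5000 = 16.0%  (a weak title/abstract shows up here)
# reads -> uses:  40/800   = 5.0%   (failing the 'so what' test shows up here)
```

Reading the funnel this way makes the two levers explicit: visibility raises the number entering at “looks”, while quality improves the conversion rates between the stages.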

But for all that Metrics 1.0 has revealed, arguably there is much that we still don’t know. At this point it is almost impossible not to sound like the former US Secretary of Defence, Donald Rumsfeld, because if Metrics 1.0 has given us “known knowns” then thinking about Metrics 2.0 requires us to think about the “known unknowns”. So what are some of the “known unknowns”? Let’s take one example.


Downloads

Is it simply the case that an article with high download numbers – or a book with high sales figures – but low citation numbers is the result of failing to state the relevance of the research and to engage an audience? No. There are other possible explanations, not least that this could be a research output whose readers are mostly students. Outputs with a high student readership will inevitably show a higher attrition rate from reads to uses, because students rarely publish and so rarely cite. This is the problem when “users” are measured by citations alone.
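
To see why a single ‘uses’ metric can mislead, compare two hypothetical outputs side by side. The numbers below are invented; the point is that identical citation counts can conceal very different readerships.

```python
# Two hypothetical outputs with identical citation counts.
outputs = {
    "specialist research article": {"downloads": 600, "citations": 30},
    "student-favoured review":     {"downloads": 9000, "citations": 30},
}

for name, m in outputs.items():
    rate = m["citations"] / m["downloads"]
    print(f"{name}: {m['citations']}/{m['downloads']} = {rate:.1%} of readers cite")

# Judged on citations alone, the two outputs look identical; the conversion
# rate reveals the second output's largely non-citing (e.g. student) readership.
```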

In this one example, we see the potential for metric data to be misinterpreted, but we also see the potential for developments that go beyond what metrics currently reveal. From this perspective, it will be interesting to see how the development of metrics helps shed light on the two fundamental questions highlighted above: why do those who look not read, and why do those who read not use?


For more information and resources on journal metrics, visit our page: Mastering Metrics: A Taylor & Francis Guide
