I argue that our universities are being led astray by lop-sided and questionable metrics. We have known from day one that a research performance measure built predominantly on academic citations, with no reference to societal impact, is flawed over the long term. The consequence is a cumulative lost opportunity whose value will never be known. We measure what we do well and retreat from measuring our failings. Australian universities continue to languish in their engagement with industry at a time when our economy is in search of a future. It is time for leadership on this issue of impact.
To be fair, there has been significant difficulty in identifying a practical and effective measure of societal impact. But other factors have effectively conspired to prevent adoption of a suitable metric. For some, the sanctity of academic research must not be compromised by a motivation for economic benefit; for most, the existing metrics offer little incentive for delivering societal impact. Underlying this has been self-interest in maintaining and promoting the status quo, with an intense focus on journal publications, citations, and the acclaimed H-index. Invariably, reasons will be given for why we should not adopt a measure of societal impact. My purpose here is to argue for such a measure and to propose an approach.
Next year’s Excellence in Research for Australia (ERA) exercise will again focus our interest on academic impact, using the quality of research outputs, quantified primarily by journal citations or peer review assessment. The Australian Academy of Technological Sciences and Engineering (ATSE) has recently developed an engagement measure utilising data collected for ERA but essentially neglected by it. This is an important initiative and should be explored further; however, I argue that we need to examine impact beyond the academy. It is time to identify a practical and effective measure of societal impact to leaven the existing metrics of our research excellence.
In 2012 a trial, referred to as Excellence in Innovation for Australia (EIA), was conducted to investigate how to measure the impact of research. Led by the Australian Technology Network (ATN) and Group of Eight universities, the trial involved the submission of specific research case studies and a process for rating their impact, the highest being A, then alphabetically down to the lowest, E. A-rated projects produced ‘Outstanding impacts in terms of reach and significance. Adoption of the research has produced an outstanding social, economic, environmental and/or cultural benefit for the wider community, regionally within Australia, nationally or internationally.’ In other words, these projects had ‘extreme’ societal impact, while projects rated B, C, D and E had progressively less impact.
It is the A-rated projects that are at the heart of my proposal. In order to measure societal impact, universities across Australia should be invited to submit case studies of high-impact research that they believe meet the threshold for an A-rating. A panel would then assess each submission against this threshold. It makes sense that a research impact assessment exercise should be focussed on the extreme, rather than the expected. In adopting this focus, we will understand how often the work carried out by researchers genuinely reaches beyond laboratories and campuses to make a difference in the world.
By collecting and analysing high-impact data, the aim would be to increase the rate at which our universities deliver major societal impact. This would provide a powerful statement of the value of universities to society and an incentive to researchers to focus on driving the achievement of impact beyond the academy.
There are several good reasons for focusing only on A-rated impact. Firstly, these projects are the easiest to identify. The EIA final report provides a benchmark for A-rated research impact – one case study claimed, for example, that their “silicone hydrogel was used in 47% of contact lenses worldwide”. Secondly, the number of submissions would be manageable, making the system cost-effective. Based on the EIA final report, twelve universities produced 30 A-rated case studies over a 20-year reference period – that is, 1.5 per year across those twelve institutions, which would equate to approximately five A-rated research projects each year if extrapolated across the whole Australian sector. The intention would be to grow this number by encouraging focus on it. Given that it would be in the interest of each university to ensure that all A-rated case studies were captured in the assessment exercise, the sector could have confidence in the outcome.
Apart from drawing attention to societal impact by publishing the outcomes, it will be critical to encourage high-impact research projects through reward. Commercial returns at the university level are rarely enough. But achievement will deliver academic, economic and other societal benefits, and make the case for public funding of university research more compelling. Achieving major societal impact will not happen immediately – it can take ten or more years to deliver on the possibilities of a project – but the introduction of a measurement system could have more immediate benefits, with existing research findings assessed in novel ways. This element of the system could be administered by a business-oriented body such as the Productivity Commission.
As the operating environment for Australia’s universities remains unclear, institutions continue to focus on building their research reputations as the best possible insurance against an uncertain future. This preoccupation with reputation is admirable. But we need to introduce a new, and disruptive, dimension to the notion of reputation, based on the more challenging question: has our work made a difference in the ‘real’ world?
Written by: Professor Kevin Galvin, University of Newcastle, who was the recipient of an ATSE Clunies Ross award and AusIMM Mineral Industry Technique Award in 2014.