Prime Minister Malcolm Turnbull is rumoured to be planning major changes to the way academic research is funded, to be unveiled in his forthcoming innovation statement this month.
Media reports suggest the government wants academics to spend less time writing for journals and more time working with industry to ensure research has a commercial and community impact. The aim, it is suggested, is to strengthen ties between universities and industry and to ensure that all publicly funded research contributes to society in some way.
Discussions about how to measure university research have been around for a long time.
Back in 2005, the then minister for education, Brendan Nelson, announced the Research Quality Framework (RQF) aimed at improving the assessment, quality and social impact of university research.
In 2007, the Labor government abandoned the RQF – which was based on using case studies – and focused on evaluating research quality under the banner of Excellence in Research for Australia (ERA).
While case studies have valid uses – such as showcasing real-world benefits of research – an independent body found that there are significant issues in using case studies to measure the social and economic benefits across the research system.
Case studies can demonstrate impact, but can’t be used to measure impact for these reasons:
- Timing – research often has impacts long after the research is completed.
- Attribution – innovation is a team sport involving multiple projects, actors and inputs, often operating independently of each other, which makes it hard to credit any single piece of research.
- Appropriability – it can be difficult to know who benefits from research and account for the diverse impacts that can arise.
- Inequality – it is difficult to compare across different types of impact. For example, how do you compare the development of Gardasil with research on the effects of climate change in the Murray-Darling Basin?
Without a broad sample of case studies from all universities it’s also difficult to use these as evidence of who should get funding, especially if the research we are talking about can go back 10 or 20 years.
Last year in the UK, almost 7,000 case studies were used in the latest round of the Research Excellence Framework (REF). REF 2014 included an impact assessment worth 20% of the final rating.
A recent report has found that while case studies can show the benefits of research, based on the UK experience, it is unlikely that we will be able to develop specific metrics of impact. After all, like research itself, impact is a complex beast.
And so here we are, a decade on, still grappling with questions of why and how we should measure publicly funded research.
The government is now urging researchers to work more closely with the users of research – across the public, private and not-for-profit sectors. The idea is to help them innovate and so improve their competitiveness and enhance their products, processes and services.
At the same time, it is hoped that publicly funded research will stimulate innovative industries – and jobs – for the future.
The iPhone is one example of a product built almost entirely on the back of publicly funded research.
A good idea for Australian higher education?
At the moment, researchers are measured and recognised for the number of journal articles they have published and where they have published them. This has led to Australia being among the best research nations in the world.
But it does mean that while our universities are supposed to be doing a range of valuable tasks – like educating and training students, creating and advancing knowledge and applying knowledge to solve the most pressing issues of our times – academic practices and institutional incentives support them for only one activity: getting grants to undertake research that can be published in journals, to get more grants, and so on.
If we want our universities to work with the community, governments and businesses to deliver greater prosperity and wellbeing, then measuring their research by publications alone is unlikely to fit the bill.
Something like the plan by the Academy of Technological Sciences and Engineering (ATSE) would be a good addition. It uses researchers' sources of research income to measure where academics are working with groups outside the university sector.
The main challenges
But if such a plan is put in place, it’s important to be aware of a few things:
We don’t want to focus all our university research on having an impact. We still need “basic research” – that is, research that pushes knowledge forward for its own sake. This generates the insights that are later applied to specific problems.
Research and innovation involve multiple actors, often working in isolation from each other and across long time frames. They involve a lot of chance and trial and error; the process is almost never linear. Some activities improve the likelihood of impact occurring – such as engaging the public and making research widely available. Understanding the context in which a researcher wants to deliver impact (and how it is changing over time) is key to delivering profound and lasting impacts.
There is no silver bullet when it comes to measuring research, only partial measures and proxies. Done well, proxy measures relate directly to the underlying processes they are measuring – for example, on average, researchers who get more of their income from industry sources are doing research that is more relevant to those industries, so income can serve as a proxy for industry relevance. But a range of activities will be missed, such as research training or in-kind support, and so academic judgement is also needed to make sense of the numbers. As long as we understand the limits of proxy measures, we can anticipate and minimise perverse outcomes.
How should we measure research?
Given all of this, what we should be looking for is a multi-dimensional set of measures.
This means a robust measure of research excellence (such as ERA); measures of activities that increase the chances of research impact such as engagement with the public; and more than likely some measure of research training.
Missing any one of these means that we are missing out on the important benefits of university research:
- Research excellence without engaging with the public, industry and government means that we will not apply our world-class research to improving the prosperity of the public that funds it;
- Engagement in the absence of excellence means that we would not be delivering the best outcomes;
- Without research training, we would not build a pool of qualified researchers who can work within the public and private sectors, and who can receive and apply the lessons of research to maximise its benefits.
It is our investment in research as a society that will give us the best chance of solving the pressing issues of our times – inequality, climate change, food security, global conflict – so there is a lot at stake.
A narrow focus on academic publishing has helped to create a world-leading university sector.
The pressing issue now is how to develop a system that can apply this research to improve the lives of the public.
This article is part of our series Why innovation matters. Look out for more articles on the topic in the coming days.
Written by Tim Cahill, Adjunct research fellow, Swinburne University of Technology and Mark Bazzacco, Executive Manager, Planning & Performance, CSIRO. This article was originally published on The Conversation. Read the original article.