Universities Research Assessment Exercise

Volume 454: debated on Wednesday 13 December 2006

I am glad to have this opportunity, however brief, to draw attention to some of the issues surrounding the way in which research in universities is evaluated and rewarded, and to some of the consequences for activities that seem to me to matter. I should confess that I have had a bee in my bonnet about this for some time, and I would like to transfer it to the bonnet of the Minister for Higher Education and Lifelong Learning. I see that he has sensibly decided to be abroad, but it is a great pleasure to see the Under-Secretary of State for Education and Skills, my hon. Friend the Member for Gloucester (Mr. Dhanda), in his place.

I need to make further confessions: I used to be a university teacher, and I am an honorary professor at the university of Birmingham and a visiting professor at the university of Westminster. I should make it clear that nothing that I say today has any connection with those fine institutions.

I am also joint editor of a journal called The Political Quarterly, which was founded in 1930 with money from Bernard Shaw by Leonard Woolf, Kingsley Martin and William Robson. Its past editors have included John Mackintosh, a former distinguished Member of this House, and Professor Bernard Crick, that great public intellectual who has retrieved the idea of citizenship for us and put it, belatedly, into the national curriculum. The journal’s mission, which is still announced inside its back cover, is to publish articles on issues of public policy aimed at the most demanding level of intelligence but without being technical and pedantic. The tradition of the journal has always been to accept articles written in plain English—without jargon—that deal with issues of political importance, or that provide background material or basic speculation directly related to those issues. The terms of that mission statement have implications for some of what I want to say. Let me explain why.

During a previous research assessment exercise—RAE—there was an attempt to find objective indicators of the worth of journal articles. When that was applied to politics journals, The Political Quarterly scored highly when people were asked about its impact, but much less highly when it was evaluated for research content. In other words, there were high rewards for writing for fellow political science professionals but low rewards for writing for a wider public readership. Quality was thus being defined in a way that explicitly excluded the ability to reach a wide audience—and that in a discipline that supposedly has civics at its centre. That is frankly bizarre.

There are many disciplines on which I have no authority to speak—science, engineering and technology. It may be that they have no difficulties with the way in which research is assessed, whether in the present RAE form or in the metrics-based form that is planned to replace it. Indeed, it is widely said that one of the general problems is that research assessment designed for the natural sciences is inappropriately applied to the humanities and the social sciences. That is said particularly of metrics, which, it is said, would compound the deficiencies of the RAE.

I want to concentrate on what I think I know and on what I have been told by the distinguished academics in social science and the humanities whom I have consulted about the impact of research assessment, and whom I shall quote, albeit anonymously. Some point to the benefits that the RAE has brought in establishing the importance of research and in challenging received reputations and funding. However, all also identify its deficiencies and its damaging consequences for activities that matter. One has to spend only five minutes in the company of academics before talk turns to the malign consequences of the RAE.

What are those consequences? Let me start with the main bee in my bonnet. The RAE encourages production for its own sake of stuff that people do not need to write, in a form that is impossible for most people to read. In social science in particular, the RAE drives the profession in upon itself, cutting itself off from a wider public discourse. It is marooned in a private and impenetrable language, and it is consumed by its own publishing preoccupations. No thought is allowed to go unreferenced, and sentences become ugly aggregates of borrowed phrases.

George Orwell put it well in that wonderful essay on “Politics and the English Language”:

“As soon as certain topics are raised, the concrete melts into the abstract and no one seems able to think of turns of speech that are not hackneyed: prose consists less and less of words chosen for the sake of their meaning, and more and more of phrases tacked together like the sections of a prefabricated henhouse.”

One has only to pick up a copy of a political science journal to find oneself immediately in the linguistic world of prefabricated henhouses. The Orwell mission of turning political writing into an art has no chance at all in this kind of academic environment.

Journals are invented just to consume the products of the RAE, and the more obscure and professionally introverted they are, the better. Public relevance is despised and devalued. One young academic, a representative of many, was warned by his head of department to stop doing policy-relevant research and concentrate on getting stuff into “the high-ranking journals”. Of course, as a general rule, the higher the ranking, the less such journals are read and the less sense they make to anyone in the real world.

Another academic writes:

“It encourages academics to publish for its own sake, regardless of quality or readership, just to get the points for the RAE. This spawns all sorts of rubbish new journals, makes wait-times for journals horrendously long and means that there is a vast quantity of stuff published that nobody had the time or inclination to read.”

It is widely remarked that the whole RAE exercise has become a game. One head of department put it to me that it

“rewards the cunning rather than the excellent.”

New players, and sometimes whole new teams, are shamelessly brought in to improve the ratings, and old ones are disposed of despite their general contribution to the life of the institution.

The preoccupation with assessed research has a particular and serious consequence for teaching. Another departmental head observed that

“over time it has encouraged a culture, particularly among younger academics, that research matters far more than teaching to their careers, because research is given such a large financial incentive in comparison with teaching. This is having a serious distorting effect, because it means that a great deal of ingenuity goes into reducing time for teaching, and increasing time for research.”

One of these younger academics points to the gap in salary and regard opening up between the high-earning RAE academics and

“others who may be brilliant teachers and are left to do all the departmental legwork and presumably stay underpaid.”

This neglect and undervaluation of teaching is now being noticed, not least by fee-paying students and their parents. History students at Bristol recently attracted attention when they protested against a drastic cut in their teaching. One said:

“I thought I was paying to be educated by leading academics, not for a library membership and a reading list.”

We can expect to hear more of that sort of sentiment as teaching is consistently devalued in the preoccupation with research ratings. Good teaching goes unrewarded, and research, even if it is of doubtful value, is over-rewarded. Some academic staff now do little or no teaching and are an increasingly remote presence, and students receive less attention.

My plea is that those issues are properly picked up and tackled as the RAE has its last outing before being replaced with something else. The RAE is undoubtedly a costly and bureaucratic burden, but that is not the only reason for wanting to review it. One distinguished professor and departmental head said that

“the proposed change away from peer review to metrics would reduce some of the costs of the RAE, but would be even more perverse in accelerating the switch away from teaching, and would also produce bizarre and less reliable results in many subject areas—particularly, in arts and social sciences. No-one trusts citation indices or input measures”.

One good reason for not trusting citation indices is that articles with certain key words in their title attract more citations. Desperate researchers will soon learn to play that game.

In the disciplines that I know and care about, I want the perversities of the current arrangements to be rectified. I want the work of public intellectuals to be encouraged, the civic purpose of the university to be affirmed, policy-relevant research to be celebrated and valued, and the bias against teaching to be attended to.

I shall end by raising one final perversity. A recent guest on “Desert Island Discs” was the eminent philosopher Mary Midgley, who recalled that it had taken her 30 years to publish her first book. She said that she had spent that time in intellectual maturation and reflection. If she were a young scholar now, she would have to publish or perish, whether or not she had anything ready to say, and in a form that would count for the purposes of the RAE. In fact, she would probably be discouraged from writing a book. That may be the function of the RAE, but it should not be the purpose of a university.

I am ferociously independent here in the Chair, and I shall listen to the new Minister’s response to that speech with great interest. I do not envy him his task.

It is always a pleasure to serve under your chairmanship, Sir Nicholas.

I congratulate my hon. Friend the Member for Cannock Chase (Dr. Wright) on securing the debate and I am grateful for his welcome. If I were a footballer, I dare say that I would be a utility player. As he rightly said, today I am filling in for my hon. Friend the Minister for Higher Education and Lifelong Learning.

I hope that my hon. Friend has a spring in his step, having heard the news of the abolition of the RAE after 2008. I am sure that you have, too, Sir Nicholas. University research is a progressive force that must change constantly and move forward. For the past 20 years, the RAE has been the most important engine of that change. Since 1986, we have moved from a position of relative ignorance about the use of hundreds of millions of pounds of public money to a point where we can confidently say that the research that taxpayers fund is truly world class.

Between the last two RAEs in 1996 and 2001, the number of university departments achieving the top five or five-star ratings rose by 65 per cent. to more than 800. Those excellent departments now employ more than half of all researchers in the higher education system. The RAE has changed and improved after every exercise, but I take on board my hon. Friend’s comments. In the next exercise in 2008, for example, the financial cliff edges—the difference between a four rating and a five-star rating—will be eliminated by the introduction of quality profiles of the sort recommended in Sir Gareth Roberts’ 2003 review of the RAE. There will also be fewer peer review panels in 2008 and greater recognition of previously undervalued activities, such as applied research.

Increasingly in recent years, the RAE has attracted criticism from the higher education sector, partly as a result of its success. It looms large in the lives of universities and researchers and is said sometimes to overshadow other important activities in a university’s life cycle. The criticism is perhaps also a sign of the RAE's maturity—an indication that it has achieved all that it can and that it is time for something new. I shall say a little more about that in a moment.

I shall recap some of the key criticisms of the RAE. First, it is said to be expensive. The Higher Education Funding Council estimates the total cost to be more than £50 million. That is a significant sum, albeit 1 per cent. or less of the total value of the research budget, the allocation of which is informed by the RAE. Most of that cost is borne by universities in staff and administrative time. Secondly, the RAE is said to be bureaucratic. We heard about that during the debate and, again, to an extent, the allegation is true. Eighty-two peer review panels supported by 800 pages of guidance and many times more pages of university submissions will make the quality judgments that underpin the next RAE. Thirdly, the RAE is said to be divisive. Of the £1 billion or so of quality-related research funding that goes to the 131 English higher education institutions each year, more than one third goes to only four of them.

Not all institutions produce excellent research, but we must acknowledge that research excellence is not the only game in town. Some universities have successfully pursued alternative missions and sources of funding through promoting access, through excellence in teaching or, increasingly, through interaction with business and the community. That diversity is greatly welcome.

Although the RAE has served the country well, the Government and the higher education sector alike have continued to seek a simpler system. The challenge is to achieve that while upholding the principles of the dual support system for research. Some funding for specific projects comes through the research councils and the rest comes from the funding councils to encourage and reward research excellence.

Earlier this year the Government announced their intention to move, after the 2008 research assessment exercise, to a metrics-based system for research assessment and funding. Initial proposals were published on 13 June for consultation. They suggested that an assessment model based on research-income metrics might be adopted relatively quickly for science disciplines, as has been pointed out in the debate. For other disciplines, the proposals acknowledged that it would be harder to find a robust replacement for the peer-review-led process of the RAE. The consultation closed on 13 October, with nearly 300 responses received from universities, researchers, subject interest groups and other organisations and individuals with an interest.

Some clear messages emerged from the consultation. There was a desire for reform that would reduce the bureaucratic burden of the current arrangements, increase transparency and recognise all types of research. However, the majority of respondents had some key concerns about basing an assessment system on the 13 June proposals. They noted, first, that assessment based on income metrics alone could not recognise research quality; secondly, that although metrics were more readily applicable to some disciplines than others, separating the assessment arrangements would be undesirable, so subject variations should be recognised within a single assessment framework; and thirdly, that the move to new arrangements should not be sudden and should not destabilise institutions.

The Government have accepted those points. We agree that the sector must be confident that a new assessment process will promote quality, so I hope that that demonstrates how friends in the Department have the same bee in their bonnet as my hon. Friend. We are also grateful for suggestions about the use of quality indicators and the role of expert advisers in assessment. We value respondents’ appraisal of the applicability and fitness for purpose of metrics in different subjects. We recognise, too, the sector’s desire to ensure that its investment in RAE 2008 is reflected in a reasonable lifespan for the results. Taking account of those concerns, we announced on 6 December, as part of the pre-Budget report, revised proposals for a new assessment process. We remain committed to replacing the RAE with a research assessment process that uses metrics as its basis when they are sufficiently robust and fit for purpose. That commitment reflects our belief that metrics offer the best opportunity for developing an objective and transparent assessment system that reduces the burden of the RAE and recognises all forms of research excellence.

The development of robust metric indicators should capture a greater range of excellent research activity and assist institutions in monitoring and managing their own activity. We agree, however, that income metrics cannot stand alone as indicators of quality. We therefore propose a new assessment process based on a series of indicators, containing income and research degree metrics, and a quality indicator, too. We recognise that the robustness of the quality indicator will be pivotal in creating confidence in the new process.

Bibliometrics—statistics relating to publications—are our choice as the source of a quality indicator, and many who responded to our consultation saw their potential. However, the development and use of bibliometrics varies greatly between disciplines. Our initial proposals made a crude distinction between science subjects and others, and as consultation responses noted, they neither reflected the complexity of metrics’ applicability, nor did justice to the majority of disciplines, social sciences in particular. Differences lie within disciplines as well as between them, and even where bibliometrics are well established, further work is necessary to provide a robust indicator for assessment purposes.

Our ultimate goal, none the less, is a metrics-based assessment process with a bibliometric quality indicator that is applicable to all disciplines, but sensitive to differences between them. We have decided to introduce metrics-based assessment swiftly for subjects where the funding council is confident of quickly identifying a bibliometric indicator. Those disciplines fall under the broad headings of science, engineering—my background, although not in research—technology and medicine, collectively known as SET.

For other disciplines, the quality indicator will continue in the interim to involve expert review of research outputs, so some of the issues that my hon. Friend highlighted will remain problematic at least for the interim period. Output review will continue until bibliometrics in those disciplines are sufficiently well established to replace it. We are hopeful that the use of metrics in SET assessment will act as a catalyst for the development of bibliometrics in other disciplines.

For the time being, there will be differences in assessment between disciplines. However, those differences will exist within a single overarching framework of indicators, and we are confident that assessment will operate in such a way as to ensure that all disciplines, and interdisciplinary work, are rigorously and fairly assessed.

Another part of the overarching framework will be the involvement of expert advisers. Although there will be no expert involvement at the level of the current RAE panel, there will be a continuing role for expert advice in deciding the weighting of the indicators for all disciplines, and in reviewing research outputs in disciplines where bibliometrics are not used. Expert advice in that context will include advice from disciplinary experts and expert users of research.

The first assessment for SET subjects under the new system will take place in 2009, and it will begin to inform funding from the 2010-11 academic year. It will completely replace the RAE 2008 element in funding for SET subjects by 2014-15. For other subjects, assessment under the new lighter-touch arrangements will take place in the 2013 academic year, and it will inform funding from 2014-15.

The detailed design of the process requires further work, and my right hon. Friend the Secretary of State has asked the Higher Education Funding Council to lead that work and involve the sector closely. Our aim, as I have said, is for a process that continues to reward research excellence, but that significantly reduces the administrative burden on higher education institutions and their researchers.

Sir Nicholas, I hope that that has satisfied your own concerns. I once again congratulate my hon. Friend on raising his concerns for the House.

Sitting suspended until half-past Two o’clock.