You must have heard about the death of Prof Stefan Grimm, who had apparently been bullied by his departmental line managers at Imperial College London.
In case you have missed it, you can read the whole sad story at DC’s Improbable Science: Publish and perish at Imperial College London: the death of Stefan Grimm.
ICL will of course claim that bullying is not endemic and this was a very sad but isolated case.
Evidence-based decision making in academic research
However, rather helpfully, a Mr John T Green has written a paper about the managerial mindset at ICL: Evidence-based decision making in academic research: The “Snowball” effect.
The paper appeared in 2013 in a journal called The Academic Executive Brief – welcome to the Dark Side!
Who is this John T Green?
John T. Green […] was Chief Coordinating Officer of Imperial College London from 2004 until 2010 where he implemented a range of innovative research management systems. *
“Innovative research management systems”? I am throwing up already.
Peers from leading UK research institutions perceived the need for a freely available open standard to enable any university to calibrate its research inputs (funding), processes (effectiveness and efficiency in spending that money) and outputs (what the university achieves for the money spent), and compare themselves in a like-for-like manner. *
Inputs? Processes? Outputs? This is a terribly technocratic view of what a university does and how research works.
I guess we should be thankful to John T Green that he explained the managerial mindset of the people ruling UK elite universities so explicitly and clearly.
He even helped to codify this system into Snowball Metrics – Global Standards for Institutional Benchmarking.
“Releasing an unproductive overhead”
Green is rather pleased with his achievements at ICL:
In the end, we were able to eliminate 120–130 faculty positions with a fair and consistent approach. As a result, the faculty of medicine released an unproductive overhead, invested in new staff and quickly climbed to be the strongest UK medical school, as measured by any input or output research measure. *
Green continues to be smug:
It is fascinating that within a scientific community, founded on the principles of evidence-based research that, when it comes to management decisions (such as recruitment), faculty can be tempted to rely on personal knowledge or impressions rather than on evidence. *
Well, Johnny, I can tell you why: because I am not a cog in a money-making machine and I don’t want my colleagues to be. I don’t think about myself, my team, my work or my colleagues in terms of inputs, processes and outputs. So the ‘evidence’ you talk about just doesn’t count for me.
Sometimes I even do the unthinkable and read the actual papers my colleagues publish and then, yes, I rely on my personal knowledge of science and my impression of the quality of their work when judging it.
And this is exactly how it should be. After all, that’s what I have been trained to do for almost two decades now.
The fallacy of uniformly measurable performance
To counterbalance Green’s manifesto, read the analysis that Stefan Collini, a Professor of English Literature and Intellectual History at the University of Cambridge, published in the London Review of Books in October 2013:
Underlying so many aspects of [higher education policies] is the fallacy of uniformly measurable performance. The logic of punitive quantification is to reduce all activity to a common managerial metric. The activities of thinking and understanding are inherently resistant to being adequately characterised in this way. (…)
[What academics feel] is the alienation from oneself that is experienced by those who are forced to describe their activities in misleading terms. *
You don’t have to be a complete Marxist to follow this argument. It explains why John T Green feels like a winner and scientists certainly don’t.
The managers, by contrast, do not feel this [alienation], and for good reason. The terms that suit their activities are the terms that have triumphed: scholars now spend a considerable, and increasing, part of their working day accounting for their activities in the managers’ terms. The true use-value of scholarly labour can seem to have been squeezed out; only the exchange-value of the commodities produced, as measured by the metrics, remains. *
Phew … it’s good we don’t have stuff like this in Cambridge! Or do we?
The University of Cambridge is listed in Green’s article as using Snowball Metrics, and Green is even a fellow at Queens’ College in Cambridge.
Makes me shudder.
Winter is coming.