Staff Writer Kaveh Kordestani explores the role of the National Student Survey.
“The student, not the government, is our client” were the words of Terence Kealey when the university he led, Buckingham, came top in the National Student Survey (NSS) fifteen years ago. Yet obsessing over the NSS – for the government, not the student – has increasingly become the vogue in the cut-throat world of higher education.
The NSS is overlooked by most when its results are quietly released in July each year. But behind the scenes, universities neurotically pore over every cell of every spreadsheet of a survey that seems to be of questionable utility in a modern age.
Take King’s (an apt example). Every NSS release comes with a new set of targets to reach – a 2% improvement, this year – and often sweeping changes wherever we perform marginally worse, such as the ‘TASK’ initiative, which mixes some good changes with some questionable ones, all for a few percentage points.
King’s is right to celebrate its high ranking. But what does a 2% improvement actually mean? Is the long-term goal to somehow reach 100% student satisfaction and create the ‘perfect’ university?
Instead, are we perhaps attaching too much weight to what is, in effect, a study of students’ opinions (not necessarily teaching provision), which may ebb and flow from one year to the next?
“We want useful information for students and potential students to be available to allow them to make informed judgments about the quality of education provision.” – Owain James, President of the National Union of Students, 2001
The idea of a “national student satisfaction survey” was first floated by the 2001 Labour government; Tony Blair famously listed his three top priorities coming into office as “education, education and education”. It was billed as “giving students a voice”: a simple “how was it for you?” could be relayed to thousands of incoming students to help inform their choices.
An accompanying report, written by the Higher Education Funding Council for England (since absorbed into the Office for Students), set out the foundations for such a survey. Its first iteration was in 2005, and it has run every year since.
It consists of seven core themes covering everything from the quality of teaching to the effectiveness of an institution’s students’ union. A question on “freedom of expression” was even added a few years ago, to some confusion.
The survey also has one of the highest response rates of any optional survey in the country. Last year, 375,000 students – or 71.5 per cent of those eligible – responded. As any final-year student will know, universities aggressively push for responses with rewards, giveaways and huge campaigns around campus and online.
The NSS is meant to give prospective students an idea of what a university is like, independent from the polished prospectuses that institutions hand out. But it has now become a much more important metric for universities in determining where to improve – and how to increase student satisfaction.
But what’s actually wrong? Surely universities wanting to improve where students have concerns is in everyone’s best interests? Students are happier, after all, and won’t that mean institutions can attract more of them each year?
Firstly, the data collected is often not a great reflection of what students might actually think of an institution. Student opinions will always fluctuate due to factors outside of a university’s control, making year-on-year comparisons of questionable utility.
Secondly, how the data is used is increasingly less attached to the genuine wants and needs of students – and more attached to hypothetical scores that only nominally improve student satisfaction. Many respondents are not even entirely aware of what the NSS is actually for.
The key problem with the NSS is that it is the only true quantitative benchmark of that cursed term “student satisfaction”. It is an attempt to convert students’ deeply personal, individualised opinions into an objective, comparable measure. However carefully that is attempted, it will always be an imperfect proxy.
The NSS has always been highly controversial in higher education. Academics are drawn into “hushed meetings” to discuss results. In 2018, an anonymous academic told The Guardian:
“For university staff, it can feel as if the focus is purely on getting the scores to go up every year, rather than actually improving the student experience”.
Dr Duna Sabri’s 2013 analysis of the NSS found that, for academics, the annual results “elicited feelings of dread and anxiety” and often ultimately “drowned out other sources of feedback”. One described the results as a “potential stick to be beaten with by the institution”. For many academics, she found, the NSS felt inescapable.
Professor Lee Harvey, formerly of Sheffield Hallam University, wrote a searing critique of it in 2008, calling it a “hopelessly inadequate improvement tool”. He was subsequently suspended from his position at the survey’s promoter, the Higher Education Academy.
Speaking 20 years on, Harvey said his views on the NSS had not changed:
“It has always been and remains an utterly hopeless guide to how a university or faculty can actually improve. It asks meaningless generic questions, meaningless because there is a huge variation within institutions!”
“The questions are simply there to create an artificial ranking of institutions, which has next to no value and has always been designed to reinforce an elitist view of higher education.”
“The NSS ranks as one of the worst research tools ever foisted on the sector.”
The NSS has also seemingly outstripped its original purpose. What was floated as a tool to help institutions improve has now become a key part of the ever-controversial university league tables. More concerningly, it now plays a significant role in tuition fee rises – and thus university funding.
How so? That comes down to the Teaching Excellence Framework (TEF), a metric that determines the “quality” of a higher education institution. The NSS is a key part of the “rating” given to an institution – King’s was rated Silver in 2023.
Originally, TEF ratings were to be used to determine whether a higher education institution could raise its tuition fees in line with inflation, a link that was eventually dropped. The NSS’s usage in the matter was so provocative that the National Union of Students threatened to boycott it altogether. KCLSU did: for that year there were no NSS results for King’s.

The Teaching Excellence Framework (TEF), which is determined heavily by the NSS, will allow King’s to raise its tuition fees by 2.7% in the next academic year. Words: King’s College London
If that sounds like an irrelevant relic of the past, think again, because the government has just announced a return to the link between TEF ratings and tuition fees. Already King’s has announced that it is hiking tuition fees by 2.7% in the next academic year, enabled by the TEF.
There comes a point where the gains become increasingly marginal. An institution with low scores across the board has clear areas to improve, but one with consistently high scores is likely to benefit considerably less from minute improvements in certain scores.
There is also a fear that, with NSS scores converging ever more, its utility as a benchmark of student satisfaction is starting to disappear. How can we adequately compare universities if most of their scores are just a few percentage points apart?
As Professor Harvey said in 2008: “What we have is an illusion of a survey of student views. However, it is so superficial and so open to abuse as to be useless.”
Even the Department for Education is beginning to raise concerns about the NSS. It said in 2020:
“There is valid concern from some in the sector that good scores can more easily be achieved through dumbing down and spoon-feeding students, rather than pursuing high standards and embedding the subject knowledge and intellectual skills needed to succeed in the modern workplace.”
This is not a new concern. From the very first moment the NSS was floated in 2001, academics flagged up fears that institutions would attempt to manipulate scores. Yet the idea that a measure like the NSS, designed to ultimately improve student satisfaction and learning outcomes, may actually be making our future students less intelligent is a deeply concerning one.
It’s difficult to overlook King’s TASK initiative here. Student satisfaction is openly stated as the driver of these changes, many of which are good. But do we really need to be slashing word limits across the board? Will this actually make assessments easier for students, or will it, as some students say, “devalue” our degrees?
The Office for Students, which now runs the NSS, seems to be aware of these concerns. But its changes have only faced more and more criticism from the academic community. Institutions are now being punished, and students are charged more, based on the results of a survey designed over 20 years ago, with few meaningful updates since.
“…treating the NSS as immutable revealed truth rather than the creaking, misaligned, incomplete instrument it’s become is the biggest problem.” – Jim Dickinson, Wonkhe, 2025
Education is increasingly becoming a numbers game. But some things cannot be reduced to numbers. For every decision driven by NSS scores, there are multitudes of students whose voices go unheard.
Perhaps it’s time we put down the scores and went back to those actually impacted: the students.
Kaveh Kordestani is a staff writer for Roar.
