Finalists are all too aware of the annual National Student Survey (NSS) at the moment, with not long left to “make their voices heard”. Universities often view the NSS as one of their main measures of success, but is it really fit for purpose?

Since the rise in tuition fees, students have often been referred to as ‘customers’ who need to be kept happy. The NSS acts as a measure of student satisfaction, asking individuals for feedback on what it is like to study their course at their institution. It primarily aims to help prospective students decide where to study, as well as to indicate to institutions where they could improve. But what does it really tell us?

Firstly, the NSS contributes towards league tables. Students obviously want employers to see they have studied at a good university, and if NSS results are published in the public domain (Unistats) and fed into league tables, could it become beneficial to give positive, and potentially dishonest, responses? It could be argued that students are foolish to criticise their course and institution via the NSS, and that such criticism is better raised through programme representatives and module feedback. After all, no student wants to rack up debts of around £40-50k on a poorly rated course.

In addition, the recent proposals for the ‘Teaching Excellence Framework’ (TEF) suggest that universities with good NSS results could be rewarded with the right to increase undergraduate fees in line with inflation. Universities may therefore seek ways to engineer good NSS results from students, motivated by potential revenue gains.

Furthermore, multiple-choice questions massively limit responses and thereby the knowledge gained. How useful is this to prospective students? Take “Feedback on my work has been prompt”, for example. It may be that in first year you would “strongly disagree” but by final year you would “strongly agree”. You therefore pick the middle “neither…nor” option to express your view of the course as a whole. But in doing so, your answer fails to inform prospective students what it is like to study now: they are not told that prompt feedback was an issue in the past but is now excellent, ready for when they start.

Likewise, how useful is a bare number to institutions? For “Staff are good at explaining things”, it may be that some staff are and others are not. A middling score says nothing about which specific lecturers are posing a problem, giving a department little direction on how to improve.

There is also a lot of pressure on students to complete the NSS, coming from lecturers, department leaders and programme representatives, and even phone calls from Ipsos MORI. Does this pressure make students more likely to complete it in a rush, primarily to stop being hassled rather than to share their considered opinions?

Of course gathering student feedback is invaluable for making improvements, and students should certainly be able to “have a voice”. The idea behind the NSS is therefore very creditable, but its methodology, impact and usefulness are open to debate, and the potential benefits of good NSS results bring honesty under scrutiny.

Kelly Watson

-Comment Article


Label is the publication of Loughborough Students’ Union. The opinions in our magazines and online are those of individual contributors, not of Loughborough Students’ Union, the editorial team, or any other officer of the union unless otherwise stated. 

Got an opinion to share? An article you would like to see written or one to write yourself? Email LabelEditor@lsu.co.uk to find out more and share your ideas.
