Today the A-level results are out, and there are familiar scenes:
Lots of delighted and a few despondent teenagers; government reports of another record results year; claims that standards are dropping and the improvement is simply down to easier marking; battles over coursework versus exam-based assessment.
Whether exams have got easier I don’t know, but with 25% of students getting A grades, they’re certainly less discerning. By this I don’t mean that someone getting a top grade now isn’t as bright as 20 years ago – that’s virtually impossible to prove – but that the mere fact of 25% of students getting A grades makes it more difficult to separate the wheat from the not-quite-as-good wheat.
A simple solution that appeals to me as a numbers nerd is to adopt the US system of marking on the curve in national exams. From now on, grades would be awarded relative to your national year group, for example (the numbers are made up to illustrate):
A*… the top 3%
A… the next 7%
B… the next 20%
C… the next 20%
D… the next 20%
E… the next 20%
U… the next 10%
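The banding above is easy to make concrete. Here is a minimal Python sketch of marking on the curve, assuming the made-up band percentages from the list; `grade_on_curve` is a hypothetical helper, not part of any real exam board's system:

```python
# Illustrative bands from the post: (grade, percent of cohort).
BANDS = [("A*", 3), ("A", 7), ("B", 20), ("C", 20),
         ("D", 20), ("E", 20), ("U", 10)]

def grade_on_curve(scores):
    """Assign each candidate a grade based on rank within the cohort."""
    n = len(scores)
    # Rank candidates from highest raw mark to lowest (ties break arbitrarily).
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    grades = [None] * n
    cumulative = 0  # running percentage of the cohort covered so far
    assigned = 0    # number of candidates already given a grade
    for grade, pct in BANDS:
        cumulative += pct
        upto = round(n * cumulative / 100)
        for i in order[assigned:upto]:
            grades[i] = grade
        assigned = upto
    # Any rounding remainder falls into the bottom band.
    for i in order[assigned:]:
        grades[i] = "U"
    return grades
```

With a cohort of 100 candidates, exactly 3 get an A*, 7 an A, and so on, regardless of how easy the paper was; that invariance is the whole point of the scheme.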
For employers and universities alike, this would give a consistent ranking: someone with an A grade would always be in the top ten percent of their year group, which provides a stable yardstick from one year to the next. It also means that if a paper or course happens to be easier one year, marking on the curve smooths out the problem.
Of course it’s not without problems. If everyone does well, people who missed a cutoff by only a few percentage points would end up with vastly lower grades, and that could be demotivating. Yet it seems to me that either we have a competitive ranking system of education – which is what A-levels and GCSEs are – or we scrap the whole thing and rely on teacher assessment of pupil ability.