Exams are bad metrics?

The exam fiasco. What can you say? It has got me thinking about what exams are for. As an employer, what do they tell you? And, what do you *think* they tell you?

Everyone, it seems, uses exam results as a metric. But few, it appears, actually agree on what the metric is measuring. Which makes the discussion and occasional argument over what’s to be done rather circular.

Here are a couple of statements, heard on Radio 4’s Any Questions but echoed in discussions elsewhere, to illustrate the confusion.

Employers want…

> Employers want to be able to differentiate between the best candidates. Therefore it’s important that exam grades provide differentiation within a cohort of students.

In my experience as an employer that’s true, but only partially so.

Employers, sometimes (and admissions tutors for further and higher education, always), are interested in getting the best, so being able to differentiate within a cohort of students is vital.

However, employers often (and, I’d argue, more frequently) want to know that candidates have reached some level of competency. Do they have the numeracy skills to analyse sales data effectively? Does their command of the English language enable them to inform or negotiate? For this you need a metric that is transparent and that remains consistent year-on-year.

Need an analogy? The UK driving test isn’t a good metric if you’re a Formula One team looking for the fastest driver, or a school coach company looking for the safest driver. But the test does tell you that a person has reached a level of competency behind the wheel of a motor vehicle. That may not be complete and sufficient for your needs, which is why there are separate tests for motorcycles, advanced driving and HGVs…

Exams measure…

> Exams measure intelligence.

Do they heck. Exams test the ability of a student to pass a test that covers parts of a syllabus for a course they have studied. That may, or may not, correlate highly with your preferred measure of intelligence. That depends on the course, the syllabus and the test.

It follows that increases in average grades do not mean that students are becoming more intelligent. It may mean that students and teachers are getting better at learning the syllabus that the exam covers. It may also mean that students and teachers are getting better at sitting the exam that attempts to measure that syllabus. Or that exam boards are inflating grades.

Bonus question: If we *really* wanted to, how could we tell which of these was the case?
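
One possible answer, sketched very roughly: use an anchor test, a short reference paper of fixed difficulty sat by a sample of students alongside the real exam each year. Because the anchor never changes, it acts as a fixed yardstick that the graded exam, by design, cannot. The Python below is a minimal illustration of the reasoning; the numbers, thresholds and field names are entirely made up, and the anchor-test idea is offered as one candidate method, not anything proposed in the original discussion.

```python
# Hypothetical per-year summaries: mean exam grade (in points) and mean
# anchor-test score (percent) for the same sample of students.
# All figures are invented for illustration.
years = {
    2010: {"grade": 5.1, "anchor": 61.0},
    2011: {"grade": 5.4, "anchor": 61.2},
    2012: {"grade": 5.8, "anchor": 61.1},
}

first, last = min(years), max(years)
grade_rise = years[last]["grade"] - years[first]["grade"]
anchor_rise = years[last]["anchor"] - years[first]["anchor"]

# Arbitrary illustrative thresholds for "meaningful" movement.
if grade_rise > 0.1 and anchor_rise > 1.0:
    # Both measures rose together: attainment genuinely improved.
    print("Grades and anchor scores both rose: real gains in attainment.")
elif grade_rise > 0.1:
    # Grades rose while the fixed-difficulty anchor stayed flat, so the
    # gain is specific to the graded exam: exam technique, teaching to
    # the syllabus, or grade inflation by the boards. Separating those
    # needs a further check, e.g. comparing the grade awarded at a
    # *fixed* anchor score across years (inflation shows up as higher
    # grades for the same underlying ability).
    print("Grades rose but anchor scores did not: exam-specific gains "
          "or grade inflation, not rising attainment.")
else:
    print("No meaningful change in grades over the period.")
```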

Wider context…

All of which puts the whole ‘number of students with 5 GCSEs at A* to C, including English and Maths (and your choice of additional subjects)’ metric as a measure of school performance in its rightful place. Yes, it’s a measure, but it’s not the only measure of performance, nor is it necessarily a good one. However, it is the one that schools are ~~pressurised~~ incentivised to optimise for and so, unsurprisingly, they do.

