Queen Mary Academy

AI and authentic assessing

AI hasn’t broken assessments: they were already flawed. Academic integrity issues aren’t new. Whether it’s outsourcing essays, copying past work, or using ChatGPT, passing off someone else’s work as your own remains cheating, argues Dr Stephen Buckingham.

 


They say AI has broken our assessments.

I disagree - they were already broken.

Let's start with the issue of academic integrity. In the old days, cheating was cheating. Getting someone else to write your essay was cheating. Copying an essay from someone who did the module last year was cheating. Getting ChatGPT to write your assignment and pretending you wrote it is cheating.

Blurred lines in the sand

But what about having a prolonged, nuanced discussion with GPT to craft an essay, challenging its assumptions, provoking it to find new research, and asking it to respond to your push-back? Even using ChatGPT 'just for understanding' could be cheating: plagiarism is using someone else's ideas as if they were your own, without proper acknowledgement. Even using AI the right way – using it to develop, deepen and elucidate your own thinking and interweaving it with the thoughts of others – poses problems for defining whose ‘voice’ is whose. Unless you ban AI use altogether (good luck with that), you simply can't draw a clear line. I am even tempted to feel nostalgic for the good old days of essay mills – at least conceptually there was clear water between proper and improper conduct. Perhaps that was a warning we didn’t listen to.

Meanwhile, educators are crying out for an AI policy, but a policy isn't enough to fix a broken educational model.

I asked some students if they enjoyed thinking about their subject. They laughed and told me they are too busy meeting assessment deadlines. Our staff feel overwhelmed with grading work whose authorship is unclear, submitted by students whose engagement with the work rarely goes beyond the grade. How did we get to this state of affairs?

So, I have a provocative question. Has our drive to make assessments fair (output-based, criterion-referenced) created the very conditions for the AI crisis? Is the reason we cannot tell the difference between a student essay and the output of ChatGPT that we have been training students to act like ChatGPT? AI has revealed just how dehumanising our assessment model is.

A humane education

Human knowledge is complex, interdependent, and highly situational and embodied. Even irrational. I cannot think of the abstract concept of 'cup' without visualising one (go on, try it). And I cannot think of a cup without immediately linking it to drinking, eating at a table, and so on.

A disengaged, assessment-driven student will try to learn that “digitalin increases cardiac output by inhibiting the sodium-potassium pump”. An engaged student will not only know that fact, but will also think of their uncle who has heart failure. The fact has a meaning for that student which contains and exceeds the bare fact itself. This is the kind of humane, benevolent wisdom we need if we are to act as partners to AI in the emerging future.

Authentic assessing

This leads me to my second provocative question: should we start assessing the student and not their work?

We need more than authentic assessments - we need authentic assessing. Authentic assessing is about forming a view of a person in the round: a holistic judgment of the learner, not just their outputs.

For example, student talks followed by a supportive round of questioning give students the chance to think aloud and show how they react to new challenges. Student portfolios, developed throughout the module in collaboration with teachers, give students the opportunity to express themselves in different ways, enhancing inclusivity. Yes, there are risks, such as unconscious bias. We will have to learn how to overcome them, and we can. The educators are being educated!

People can form accurate, unbiased judgments about another person's intellectual, critical and creative skills with remarkable ease, because we share that person's embodied experience and can empathise. Authentic assessing, using a combination of performative and reflective elements, leverages the naturally rich, nuanced and social skills we practise every day.

AI has taught us that assessment is not a technical problem. Let's stop the teach-test treadmill and start to use AI to create assessments that celebrate human judgment, creativity and experience.

Dr Stephen Buckingham

Reader in Biomedical Sciences Education

https://www.qmul.ac.uk/sbbs/staff/stephen-buckingham.html 
