CRADLE International Symposium 2025 Panel: Changing assessment design practices in an AI world
Dr Jo Elliott
In the closing panel at CRADLE's International Symposium 2025, facilitated by Prof Phill Dawson (Deakin University), panellists A/Prof Jess Luo (Education University of Hong Kong), A/Prof Nicole Pepperell (University of Technology Sydney) and Dr Zachari Swiecki (Monash University) reflected on the two-day symposium on 'Changing assessment design practices in an AI world'.
Two discussions stood out as particular highlights. First, Jess highlighted the importance of considering the wider context and circumstances in which assessment is (or is not) redesigned. She reflected that discussions about the urgent need for assessment redesign often take on a moralising tone while failing to acknowledge the work that redesign requires. Unless institutions ensure that people have the time, space and resources to do this work, and that it is recognised and rewarded, the burden rests on individual staff, and assessment redesign risks becoming a box-ticking exercise. Nicole suggested planning assessment workload at programme level, recognising that some points in a programme justify a greater investment of staff time than others.
The second highlight was the discussion around the role of friction in learning. Nicole highlighted that some level of 'friction', or challenge, is important for learning. There is a risk that GenAI can reduce that friction too much, making an assessment too easy and reducing the opportunity for learning; but it can also help remove other, sometimes external, sources of friction, making it easier to learn. Responding to an audience question about appropriate levels of friction, given the diversity of our students, she suggested that this should be part of a contextualised, ongoing discussion with students. As educators, we should talk to students about which frustrations are a normal part of the learning process, and reassure them that experiencing friction or challenge is not a problem in and of itself. Creating safe learning environments and a sense of trust makes it easier to talk about when that friction goes beyond normal or helpful levels and may impede students' learning.
Enhance student outcomes in an AI-enabled world - Centre for Online and Distance Education
Violet Chan
The Centre for Online and Distance Education (CODE) held a workshop titled 'Staff development to enhance student outcomes in an AI-enabled world' on 14 October. Academic and professional staff from several institutions came together to discuss how universities can best support staff in adapting to an AI-driven learning environment.
Dr Martin Compton from King’s College London opened with reflections on the gap between the rapid pace of innovation and the slower rhythm of institutional change. While new AI tools emerge almost daily, changes to assessment can take up to eighteen months. He urged universities to focus on support and responsible adaptation instead of bans and detection tools and highlighted the need for programme-level initiatives that balance innovation with risk management.
Dr Dominik Lukes from the University of Oxford offered a compelling exploration of how educators can build mental models for using large language models (LLMs) effectively. He described the current AI landscape as a “jagged technological frontier” where AI can effortlessly complete some tasks while failing at others of similar complexity. Dr Lukes emphasised the importance of understanding that LLMs are not synonymous with AI and encouraged staff to treat AI learning as procedural practice—developed through repetition, reflection, and experimentation.
Colleagues from the University of Leicester presented their AI Literacy Framework, aligned with the Digital Education Council’s model. Their approach combines governance through annual policy reviews and an assessment traffic-light system, practical support via training and guidance, and a Community of Practice to share experiences and foster collaboration.
Finally, the University of London team showcased their use of generative AI to enhance feedback in the PG Cert in Teaching and Learning programme. Using the Noodle Factory platform, students received instant, AI-generated feedback from role-play peer reviewers trained on course materials. The feedback was found to be consistent and actionable, though it lacked the depth and expertise of human responses. The project concluded that AI feedback can effectively complement, but not replace, peer and tutor feedback.