Queen Mary Academy

AI for Reflective Practice: Thoughts and Tensions

Dr Manesha Peiris explores how AI tools can support reflective practice, whilst identifying the risk of reinforcing performative, surface‑level reflection. She suggests that educators need to reframe reflection as a dialogic, process‑focused activity rather than a written product.



The rapid development of Large Language Models (LLMs) forces educators to confront a difficult question: Can AI meaningfully support reflective practice, or will it simply accelerate the reduction of reflection to a written performance? 

The increasing maturity and accessibility of platforms such as ChatGPT, Gemini and Microsoft Copilot make them attractive tools for supporting learning. In this blog, I explore both sides of the argument, with the aim of helping us proceed thoughtfully, leveraging potential benefits while remaining mindful of pedagogical risks. 

Reflective practice: more than writing 

Reflection, the ability to critically examine one’s actions, assumptions and decision-making, is a foundational professional skill. In contexts such as organisational decision-making, problem-solving and professional development, reflective capacity underpins growth. 

Well-established theoretical models provide structured approaches to this process. Donald Schön's reflection-in-action and reflection-on-action, Graham Gibbs' structured reflective cycle, and David Kolb's experiential learning cycle emphasise how individuals can examine assumptions, interrogate values, make sense of experience, and develop actionable insights. 

Dialogic reflection and active listening are particularly powerful in this regard. Through dialogue, practitioners gain perspective on their own meaning-making. Skilled questioning provides validation and scaffolding, allowing complex experiences to be unpacked. 

This raises an important question: can LLMs function as structured interlocutors, providing disciplined questioning and scaffolding, or do they risk short-circuiting the very cognitive struggle that makes reflection transformative? 

The positive case: structured scaffolding 

There is a strong case that LLMs can support reflective practice when used intentionally. Structured prompting can scaffold the reflective process by helping practitioners to reframe situations, identify blind spots, surface implicit assumptions and articulate tacit thinking. For individuals who may hesitate to reflect openly with peers, AI can offer a psychologically safe space, free from judgment and reputational risk. The accessibility of AI tools also increases equity of access to structured reflection, particularly for learners without regular mentoring or peer dialogue opportunities. The key, however, lies in how the tool is positioned. 

A poorly framed prompt might ask: 

“Write a reflective piece based on this content.” 

A more pedagogically aligned prompt would instruct: 

“Act as a reflective practice coach. Do not write a reflective statement for me. Instead, ask me probing questions using the stages of Gibbs’ reflective cycle. Challenge my assumptions and help me explore alternative interpretations. Ask one question at a time and wait for my response.” 

In the latter example, the LLM functions as a questioning partner rather than a ghostwriter. The cognitive work remains with the learner. However, this approach requires the user to remain in control. Poorly framed prompts can quickly shift the emphasis from reflection to writing, and from process to product. 
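For readers who work with LLMs programmatically rather than through a chat window, the same "coach, not ghostwriter" framing can be fixed in a system message so it persists across the whole dialogue. The sketch below is illustrative only: the payload shape mirrors common chat-style APIs, but the model name and any client call are placeholder assumptions, not a specific vendor's interface.

```python
# Illustrative sketch: fixing the coach framing in a system message so the
# model asks questions while the learner does the reflective work. The
# payload shape mirrors common chat-style LLM APIs; "example-chat-model"
# is a placeholder, not a real model name.

COACH_INSTRUCTIONS = (
    "Act as a reflective practice coach. Do not write a reflective "
    "statement for me. Instead, ask me probing questions using the stages "
    "of Gibbs' reflective cycle. Challenge my assumptions and help me "
    "explore alternative interpretations. Ask one question at a time and "
    "wait for my response."
)

def build_coach_payload(dialogue_turns):
    """Assemble a chat payload that keeps the coach framing fixed.

    dialogue_turns: list of (speaker, text) pairs, where speaker is
    "learner" or "coach". The learner supplies the reflection; the
    model's role is confined to questioning.
    """
    role_map = {"learner": "user", "coach": "assistant"}
    messages = [{"role": "system", "content": COACH_INSTRUCTIONS}]
    for speaker, text in dialogue_turns:
        messages.append({"role": role_map[speaker], "content": text})
    return {"model": "example-chat-model", "messages": messages}

payload = build_coach_payload([
    ("learner", "Here is what happened in today's seminar..."),
    ("coach", "What were you feeling at that moment, and why?"),
    ("learner", "I felt the discussion slipped away from me."),
])
```

Because the instructions sit in the system message rather than in each user turn, the learner cannot casually drift back into asking the model to write the reflection for them without the framing pushing back, which keeps the cognitive work where it belongs.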

The tension: reflection or performance? 

Educators are already familiar with a persistent challenge: learners often struggle to differentiate reflective practice from reflective writing. Over time, reflective practice can become treated as an assessed genre. Learners learn what “sounds reflective” before they learn how to engage in disciplined self-inquiry. They master phrases such as “I realised that…” and “On reflection…” while the deeper cognitive transformation remains uncertain. Frameworks such as Gibbs’ reflective cycle, originally intended as scaffolds for thought, can be interpreted as prescriptive formulas. The result is often compliance rather than curiosity, performance over personal growth. 

LLMs are exceptionally good at producing the language of reflection. They can generate prose that reads as emotionally literate, critically aware and insightful. The risk is not that AI creates performativity, but that it intensifies an existing pedagogical distortion. A learner may submit work that appears deep and self-aware without having undergone the cognitive struggle, discomfort, or meaning-making that genuine reflection demands. Reflective practice is inherently messy, iterative and sometimes unresolved. LLMs, by contrast, tend to tidy that mess. In doing so, they may erase the very friction that produces growth. 

When reflection is assessed primarily through polished written outputs, the incentive structure may unintentionally encourage this distortion of intent, producing artefacts that satisfy formal requirements without necessarily evidencing transformative thinking. 

A way forward: process over product 

Rather than rejecting AI outright, we may need to reconsider how reflection is framed and assessed. If written reflection has become the dominant proxy for reflective practice, perhaps the deeper question is whether our assessment strategies have narrowed what counts as reflection. Alternative approaches may include: iterative reflection logs demonstrating evolving thinking; audio or video reflections capturing the process rather than the product or outcome; visual maps of problem-solving and decision-making; and assessed dialogic reflection rather than solitary written artefacts. 

At the same time, LLMs could be explicitly positioned as process tools. Used as structured interlocutors, they can scaffold questioning before the production of assessed work. Prompts such as "What assumptions am I making?", "What alternative interpretations might there be?" and "How might someone else perceive this situation differently?" can help shift the emphasis back to inquiry rather than performance. 

However, new challenges arise. Reflection is often personal and vulnerable, and audio or video submissions may increase exposure and discomfort for some learners. There is no single format that guarantees authenticity. Simply shifting from written reflection to alternative media does not necessarily address the underlying issue: if learners understand reflection primarily as performance, they will adapt their performance to any format we privilege. Moreover, AI-guided reflection itself risks becoming formulaic if not carefully framed. To genuinely reframe assessment, we must therefore also address the learner misconceptions about reflection that our current pedagogic practices may have inadvertently reinforced. 

Concluding thoughts 

As educators, many of us continue to grapple with persistent myths: that reflection must be emotional to be valid, or that reflective writing is synonymous with reflective practice. Our transition toward reflection as a form of authentic assessment may, inadvertently, have narrowed its meaning. 

This landscape creates susceptibility to what might be called "LLM colonisation": the production of aesthetically convincing reflective texts that conform to expected patterns. If reflection becomes a genre, AI will master it. But if we actively reframe reflection as dialogic, embodied, situated and social, as a disciplined practice of professional inquiry, then AI may yet serve as a powerful tool rather than an oracle. 

The question, perhaps, is not whether we can use LLMs in reflective practice, but whether we are prepared to clarify what we mean by reflection in the first place. 

Dr Manesha Peiris

Senior Lecturer in Reflective Practice and Project Management

https://www.qmul.ac.uk/spcs/staff/academics/profiles/mpeiris.html 

