Digital Education Studio

Queen Mary’s AI Policy: what it changes for teaching, learning, and assessment

Interview with Cathie Jayakumar-Hazra, Cybersecurity Awareness Training and Policy Manager

As Queen Mary prepares to publish its new Artificial Intelligence (AI) policy, we caught up with Cathie Jayakumar-Hazra, Cybersecurity Awareness Training and Policy Manager, to discuss what this means in practice for teaching, learning, and assessment. Building on earlier work to develop Queen Mary’s AI Guardrails, the policy brings together principles of fairness, transparency, data protection, and robustness into a single institutional framework to guide the ethical and responsible use of AI across academic and professional activity.

Generative AI tools are now moving rapidly from experimentation into everyday academic practice. While Queen Mary already provides extensive guidance and resources for students and educators on using these tools responsibly, the new policy sits alongside—and complements—this support by providing a shared governance benchmark.

In this follow-up conversation, Cathie reflects on why a policy was needed in addition to guidance, what will feel different for educators and students in day-to-day practice, and how the policy is designed to support confident, responsible use of AI while leaving space for pedagogic innovation to continue.


For an educator designing a module, assessment, or learning activity, what will feel different in day-to-day practice now that the AI policy is in place?

The policy provides general advice on using AI ethically, transparently, and responsibly, particularly in relation to academic integrity, fairness, and authenticity. It reinforces that all use of AI in teaching and assessment must align with Queen Mary’s academic standards.

That said, the policy deliberately does not prescribe detailed pedagogic practice. There is a relatively small section on teaching and assessment because we wanted to leave space for specialist academic bodies—such as the Centre for Excellence in AI in Education and the Joint Research Management Office—to develop and update their own guidance, toolkits, and capacity-building resources.

In earlier drafts, that section was much longer, but the decision was taken to keep the policy high-level and durable. The idea is that colleagues can turn to the policy first for the overarching framework, and then follow the signposted guidance for practical, discipline-specific decisions.

What this changes day-to-day is confidence. Educators can design learning and assessment activities knowing there is a clear institutional position on academic integrity, data use, and human oversight. The policy also makes explicit that AI must never be the sole determinant of assessment outcomes—human judgement always remains central.


Queen Mary already has strong guidance and resources for students and educators on using generative AI responsibly. What gap was this policy designed to address?

There were several gaps we wanted to address, but a major one relates to data protection and security—particularly around the sharing of personal, clinical, or research data with AI tools.

The policy provides a core framework to help staff and students understand that oversharing data, or misusing AI tools with personally identifiable or sensitive information, can be detrimental not only to individuals but to the University as a whole. That includes reputational risk, legal exposure, and potential financial loss.

This is why the policy is closely connected to existing legal and regulatory frameworks such as the Data Protection Act, the GDPR, and the EU AI Act, as well as to external principles such as the Russell Group Principles on the use of generative AI tools in education. It also explicitly aligns with Queen Mary policies on academic misconduct, research integrity, information security, and dignity at work.

In practice, the policy acts as a central benchmark. Without it, we would have lots of guidance and good practice, but no single governance framework that shows internal and external stakeholders that Queen Mary has robust oversight of how AI is used. The policy supports innovation, but it also ensures we are protecting data, assets, and critical services, particularly when engaging with third-party AI providers or developing AI systems ourselves.


Last but not least, students. From a student perspective, what does the policy newly clarify about what is acceptable, expected, or off-limits when using generative AI?

At this stage, the policy provides general guidance rather than highly detailed rules for students. It reinforces expectations around academic standards and academic misconduct, and links directly to existing procedures and guidance in this area.

Importantly, it makes clear that students’ use of AI must comply with their school or institute’s expectations and must not amount to academic misconduct. It also clarifies boundaries around data: students must not enter personal, sensitive, or research data into unapproved AI tools.

This section will likely evolve over time. The intention was not to over-prescribe at policy level, but to allow further development through consultation with academics, researchers, and students. The policy sets the foundation, while more specific guidance—particularly around learning and assessment—can adapt as practice and technology continue to change.

Overall, the policy is meant to support dialogue rather than shut it down. It creates a shared framework that enables responsible experimentation, while making clear where accountability lies and what values must be upheld.

The AI policy is currently undergoing final review and is expected to be published following institutional approval processes. Once released, it will sit alongside existing student and educator guidance, providing a stable governance framework for the ethical and effective use of AI at Queen Mary.
