The shadow AI university – who gets an AI-enabled education?
Students are quietly organising around AI use in an emergent peer-led innovation network. But, Professor Chie Adachi (Interim Academic Director of the Centre for Excellence in AI in Education) warns, inequitable access could entrench a two-tier learning experience.

Imagine two students at a university.
Student A: powerful laptop, private room, paid GPT subscription, Discord study servers, “prompt engineering packs” shared with friends.
Student B: shared device, patchy wifi, caring and work responsibilities, anxious about “getting in trouble” with AI, relies heavily on their university’s official guidance.
AI isn’t just disrupting assessment; it is quietly building a shadow AI university – a parallel layer within institutions where digitally confident, well-resourced students experiment with and leverage “co-intelligence” with AI systems in their learning and becoming. Unless universities recognise those students and their practices as the starting point for AI-in-education strategy, we risk entrenching a two-tier student experience and divided universities.
Student-led AI-enhanced/constrained learning
Students are already building what can be thought of as a “shadow AI university”: a parallel ecosystem of learning practices that operates almost entirely outside institutional view. Across WhatsApp and Discord groups, AI-literate students actively co‑design learning support for their studies, swapping prompts and trading “best AI tools for X course.” These informal channels function as peer‑led innovation labs, where students experiment with learning workflows and refine them collectively.
Students already use AI for a wide spectrum of learning tasks – briefing themselves before lectures using the resources they can see on the LMS (learning management system), digesting long readings into accessible summaries, generating practice questions for revision, or drafting their assignments. Beyond academic work, AI tools are enlisted to help draft emails or to provide emotional support during stressful periods, illustrating how deeply embedded these tools have become in everyday student life.
What is striking is the creativity and sophistication of these practices. This is not simply “shortcutting” or “cheating,” but an emergent, student‑led pedagogy. Yet universities rarely recognise this richness, because institutional attention remains focused on misconduct and compliance, particularly around AI use in assessment. By shifting our lens from policing to partnership, we might begin to understand – and learn from – the vibrant AI learning cultures students are already creating.
Two-tier system
The rise of informal, student‑led AI practices creates a quietly expanding two‑tier experience across universities. Access to generative AI may appear universal, and there are many free general-purpose AI tools on the market. However, most universities still lack a consolidated repertoire of enterprise AI tools for learning, teaching and assessment. University-wide licences for AI tools are costly, and this absence of institutionally supported tools drives students to access AI tools of varying quality outside the university, at their own cost and their own judgement. Further, students’ ability to harness AI tools is deeply shaped by their digital capital and confidence. Those without reliable devices, fast broadband, prior exposure to educational technologies, or strong peer networks cannot turn to AI as a powerful learning amplifier.
In short, without explicit scaffolding AI does not support students’ learning equally: it disproportionately benefits those already positioned to thrive. This dynamic intersects sharply with longstanding equity concerns in universities – commuter students with long travel times, those juggling work and study, widening‑participation cohorts, and international students navigating unfamiliar linguistic and academic norms all face barriers to developing AI literacy.
Issues of “AI permitted” environments
Current institutional responses to generative AI often miss the point because they begin from a risk‑first posture. The dominant focus remains on misconduct, detection of AI use in assessment, and generic guidance that frames AI primarily as a threat to academic integrity and to the value of university degrees. What these narratives fail to recognise is the extensive informal AI ecosystem and learning repertoires that students have already built – messy, creative, networked, and largely invisible to universities.
By narrowing the conversation to compliance, institutions overlook the actual, embodied practices being shaped on the ground by our students. Just as importantly, they fail to confront the unequal access to devices, tools, and AI literacies that determines who benefits most. University policy can say “AI is permitted,” but this does little for students who lack the digital access, capital or confidence to experiment safely and learn to apply AI effectively.
Paradoxically, risk‑heavy approaches can drive AI use further underground, making the “shadow AI university” even harder to see and understand. Students become more cautious about disclosing their practices, and the gap between institutional narrative and lived reality widens. By treating AI primarily as a compliance problem, universities are missing the opportunity to engage with emerging pedagogical and equity issues.
AI strategy through co-creation and equity-first
An equity‑first, co‑created AI strategy requires moving beyond compliance into genuine partnership with students. It starts by illuminating – not suppressing – the shadow layer of AI-enabled learning practices. Further, universities ought to seriously consider guaranteeing a baseline of access to trustworthy, institutionally supported AI tools. This ensures students are not constrained by their ability to pay or forced onto unsafe platforms.
But access alone is insufficient – AI literacies need to be woven into the curriculum, where educators scaffold students’ development over the duration of a programme, rather than delivered as optional bolt‑ons. Students ought to learn to critically evaluate AI outputs, understand disciplinary norms around acceptable use, and also reflect on when not to use AI.
Reflect back on the two students we imagined at the start: if we are not careful to take considered steps forward, their trajectories will only diverge during their time at university – one becoming increasingly fluent in an AI‑mediated world, the other constrained by uncertainty and limited access. This is the quiet but real risk we face.
The real question for university leaders, then, is not simply, “What is our AI policy?” It is: “Who is at risk of being left out of our AI future – and how will we invite them to help design it?”
Acknowledgment: This blog was first published in Wonkhe on Tuesday 17 March 2026
Professor Chie Adachi
Interim Academic Director for the Centre for Excellence in AI in Education
https://www.qmul.ac.uk/queenmaryacademy/about/meet-the-team/profiles/chie-adachi.html