Having extensive experience as a learning technologist in the UK higher education sector, I have witnessed many technological shifts, from the rise of virtual learning environments to the rise (and fall) of MOOCs. However, the rapid emergence of generative AI tools represents a transformation of a wholly different magnitude. Unlike previous digital tools, generative AI does not just support existing pedagogical practices; it will fundamentally and rapidly reshape them. Its influence will permeate course design, assessment, student engagement, staff development, and student attitudes to learning, creating both remarkable opportunities and serious challenges for universities.

Generative AI tools such as ChatGPT, Gemini, and Microsoft Copilot are already redefining how students access, process, and produce knowledge. Learning theories such as constructivism have traditionally emphasised that learners build knowledge through active engagement and social interaction. Generative AI can amplify this by acting as an interactive ‘cognitive partner’, enabling students to explore ideas, test hypotheses, and – most importantly – receive instant feedback. For example, a student studying economics can ask AI to simulate market scenarios and then critically evaluate the outcomes, or they could ask it to build a product marketing plan and examine the consequences of changing some of the promotional variables. This type of usage aligns strongly with Vygotsky’s zone of proximal development, where AI becomes an interactive and responsive scaffold, supporting learners to achieve tasks they could not easily accomplish independently.

However, such AI tool usage also raises questions when we consider cognitive load theory, which holds that working memory has limited capacity, so learning is most effective when instructional design minimises unnecessary mental effort and focuses attention on the processing essential for understanding. While AI can reduce extraneous load by summarising complex texts or generating worked examples, it risks encouraging surface learning if students use the tools to bypass deep and meaningful engagement with their subjects. It is therefore important that universities design learning activities that require students to analyse, critique, validate, and apply AI-generated content rather than accept the outputs passively and uncritically.

Turning to the impact of AI tool usage on academic integrity – an area in which I have extensive experience and which forms a large part of my work – assessment practices will need to undergo quite radical change. Asking students to produce standard essays and reports, previously reliable indicators of student understanding, will become less viable: these outputs are easily produced by AI, and we do not yet have reliable tools for identifying AI-generated content in the way that Turnitin detects plagiarism. This challenges the validity of such assessments, and educators will need to rethink what constitutes authentic and honest learning. One response is the shift toward assessment for learning, which stresses process over product and formative over summative assessment. For instance, students might be asked to document their interaction with AI tools, explaining how they refined prompts and evaluated outputs. This approach also resonates with metacognitive theory, encouraging learners to reflect actively on their thinking and decision-making during assessment.

Moreover, spoken assessments, project-based activities, and collaborative problem-solving will gain prominence in future assessment design, with a greater emphasis on students evidencing the processes they engaged in while producing their outputs. These methods not only mitigate the risk of AI tool misuse but also help students develop higher-order skills such as critical thinking and creativity – skills that AI tools cannot easily replicate (yet!).

For academic staff at universities, generative AI tools offer both freedom and disruption. Routine tasks such as drafting feedback, creating learning materials, or summarising research can be quickly and easily automated, freeing valuable time for more meaningful engagement with students. Such an approach aligns with humanistic learning theory, which prioritises the building of relationships and opportunities for personal growth. By reducing administrative burdens – something academics have long railed against – AI should enable educators to focus on more personalised mentoring and on fostering intellectual curiosity in their students.

That said, university staff will need new literacies: AI prompt development, ethical AI tool usage, and awareness of intellectual property and data privacy. Professional development for academics will need to evolve to meet the growing demand for these competencies. Universities would do well to adopt a community of practice model, where staff actively and collaboratively explore AI tools, share emerging best practices, and co-create usage guidelines with learning technology professionals. This collective approach will help ensure that AI tool integration is pedagogically sound rather than driven by novelty and availability.

Generative AI also raises fundamental ethical questions. Bias in AI outputs, data privacy concerns, intellectual property rights, and a widening digital divide could exacerbate existing inequalities while creating new disparities. Students from disadvantaged backgrounds – an area of long-standing interest for me – may lack access to premium AI tools, creating a two-tier learning experience, especially if the big tech companies seek to monetise tools they currently provide for free. Universities must therefore embed principles of universal design for learning – an educational framework that aims to make learning accessible and inclusive for all students – ensuring that AI-enhanced education remains available and fair. Priorities will include providing institutional access to AI platforms and tools, and training students in responsible and critical AI literacy.

Looking forward, generative AI tools will likely become embedded in virtual learning environments, functioning as intelligent assistants within these already pervasive platforms. Adaptive learning systems powered by AI will shape and personalise content in real time, responding to the needs of individual learners. This reflects connectivism, a theory that views learning as a networked process in which knowledge resides in distributed and connected systems. AI will not necessarily replace academics, but it will (hopefully) augment and empower their already pivotal role, shifting the emphasis from material creation and content delivery to facilitation, mentoring, and guidance.

Given the above, it is clear that generative AI is not a passing trend (like MOOCs); rather, it is a paradigm shift that challenges universities to rethink pedagogy, assessment, learning, academic integrity, and professional practice. The potential of AI tools to enhance learning is enormous, but only if it is harnessed thoughtfully and ethically. By grounding AI integration in established learning theories and prioritising fairness and inclusivity, universities can ensure that this technological revolution serves the needs of both students and staff, preparing everyone for a future where human and artificial intelligence coexist in the pursuit of knowledge and learning.

Learning theories comparison table for educational technology and AI integration
| Theory | Learning Focus | AI Teaching Implications | Practical Example |
| --- | --- | --- | --- |
| Cognitivism | Information processing, schema development | AI personalises content based on cognitive load and learner profile | Adaptive platforms adjust difficulty based on performance |
| Connectivism | Networked learning, digital literacy | AI recommends resources, connects learners, and visualises learning paths | AI-curated forums connecting learners with experts |
| Constructivism | Active learning, prior knowledge | AI simulates real-world scenarios and supports project-based learning | Virtual labs or simulations for science experiments |
| Humanism | Motivation, self-directed learning | AI supports learner choice, goal setting, and emotional engagement | AI tutors allowing learners to choose topics and pace |
| Metacognitivism | Self-monitoring, reflection, and strategy use | AI prompts reflection, tracks habits, and suggests improvements | AI dashboards showing progress and prompting goal setting |