Bringing calm to the chaos: Using educational theory to reframe AI in higher education


The furore around Artificial Intelligence (AI) and its significance for education and academic integrity can be overwhelming. Opinions abound on the merits and provocations of using (or not using) these tools (see for example Bryant, 2023). UNESCO has recently produced this comprehensive report on getting started with ChatGPT and AI, which outlines other major challenges including lack of regulation, privacy concerns, cognitive bias, gender and diversity, accessibility and commercialisation.

Reframing how we view AI using educational theory may help clarify when and how to use it. In a previous post I explored AI’s impact on education. Two useful frameworks are Anderson et al.’s Revised Bloom’s Taxonomy (2001) and Constructive Alignment (Biggs & Tang, 2011).

Constructive Alignment

Let’s think about AI from the viewpoint of Constructive Alignment, in which Biggs & Tang (2011) align learning outcomes, teaching activities and assessment so as to take advantage of students’ tendency to learn what they think will be assessed.

Image source: Radboud University

An analogy

Let’s consider learning mathematics. In primary school, maths is taught without calculators so that students understand the underlying logic of how to do maths. In high school, calculators are introduced so that more complex equations and computations can be taught.

We can use this analogy when thinking about AI’s use in higher education.

In foundation units where fundamental skills are taught, learning outcomes (LOs) may exclude the use of AI, as students need to understand and apply core knowledge. For example, we need analysts, doctors and engineers to be able to apply basic concepts without the assistance of AI. There may also be situations where AI is not available to assist, such as in remote locations, during internet outages, or in crises such as natural disasters that wipe out connectivity.

Foundation units

The use of AI in assessment in foundation units may be excluded depending on the discipline; there will be differences between a marketing unit teaching social media and a medical unit teaching anatomy. As AI is good at compiling knowledge, educators should rethink assessment modes in foundation units, opting for assessments that offer less opportunity to engage AI and where the student is verifiable to the marker (e.g., in-person assessment). Whilst this may present challenges to delivering at scale, it does help to ensure students can demonstrate an understanding of basic concepts while mitigating concerns about academic integrity.

Specialist units

In specialist and capstone units, the use of AI may be encouraged and embedded in the assessment design to ensure students are industry-ready on graduation. Ultimately, we will need to include industry-relevant AI (and other digital) tools to ensure students have the competency with, and understanding of, AI demanded by the industry sector they intend to work in. Other approaches that test critical thinking and understanding of concepts, such as oral assessments, may be more appropriate at this stage of the degree (Van Ginkel et al., 2017).

Ideas for Assessment Design

  • Consider overhauling assessment structures to reduce the reliance on formats easily assisted by AI, such as essays and take-home exams. There are many examples now being shared.
  • Move away from simply assembling and compiling content, as AI does this so well. Move towards assessing students’ ability to critically select and use the content to solve problems. This has the added benefit of making the assessment more authentic (Villarroel et al., 2018).
  • Consider the use of multiple layers or filters to enhance academic integrity. In this excellent article, Dawson (2022) explores this new approach in detail.

Teaching and learning activities are then designed to teach students how to meet the assessment criteria.

Thus, the learning outcomes take advantage of students’ tendency to learn what they think will be assessed (Biggs & Tang, 2011). As teachers, we know that the first thing students often do is look at what the assessments are and what they need to do to pass the unit. The learning activities are then designed and scaffolded to utilise, or not utilise, AI depending on the stage of the degree and the learning outcomes.

Bloom’s Taxonomy

When thinking about assessments, Bloom’s Taxonomy (Anderson, Krathwohl & Bloom, 2001) is useful for clarifying exactly what you are trying to assess, which in turn helps identify the assessment format likely to be most effective.

 Image source: Vanderbilt University Center for Teaching.

Pulling it all together: AI and Educational Theory

Combining Bloom’s Taxonomy with Constructive Alignment, a new framework takes shape that shows how we can analyse where AI fits into education. It may provide guidance on where, when and how to use AI proactively to enhance learning.

Depending on the discipline and what is being taught, the same rationale could be used within a single unit (one subject in a degree program): AI is excluded during the first weeks of the semester, when basic concepts are taught and tested using in-class tests, for example, and then its use is encouraged later in the unit, where spot orals on written reports or full oral assessments are utilised. This is not a common way of assessing in business education (Huber et al., 2023), particularly at scale, but it is growing in prominence (Van Ginkel et al., 2017).

Final thoughts

In this new field of AI and educational theory, where research is yet to be published, these frameworks can help align and crystallise our approach. Using educational theory to analyse if and where AI can be effectively used to help improve student learning outcomes may help bring calm to the chaos. Given the speed of change and the degree of impact AI will have, any tool that helps frame our thinking is an important step towards achieving some sense of control.

About the author

Joanne Nash