04/07/2026 | News release | Distributed by Public on 04/07/2026 05:54
The second edition of the "Assessment in the Age of AI: Do's, Don'ts and Don't Knows for Current Practices" symposium brought together lecturers, researchers, students and staff from the University of Cape Town (UCT) and Stellenbosch University (SU) to address the challenges and opportunities of generative artificial intelligence (AI) in higher education assessment.
The event, which took place at UCT on 1 April, focused on practical strategies for assessment redesign, moving beyond reactive, detection-focused approaches toward pedagogically principled responses that foreground learning outcomes and student thinking.
Presentations explored critical uncertainties around AI-supported grading, academic integrity, writing skills development, and the balance between protecting educational standards and preparing students for AI-enabled professional contexts.
"Assessment in universities is a hot topic due to generative AI and how staff and students are using AI. It's very important because the stakes are high. We are thinking about what the future of the university is as an accreditation body," explained co-organiser Sukaina Walji, director of the Centre for Innovation in Learning and Teaching at UCT.
"So, we've brought together experts and academics and practitioners to discuss what the ways forward are. We've framed this symposium around, number one, the do's: what we're doing now and what we should be doing in our assessment practices. Then there are also the don'ts: what we should not be doing, based on the two to three years of experience we have in engaging with AI.
"Finally, we have the don't knows. This category is important because it acknowledges that we know we don't have all the answers around how to deal with generative AI and assessment. By naming those, it actually gives us questions that can lead to research directions and opportunities for innovation."
A framework for responsive assessment redesign
In his seminar, event co-organiser Professor Francois Cilliers from UCT's Faculty of Health Sciences kicked off the first round of presentations by outlining the development of the "Start with Outcomes Framework for Purposive Assessment Redesign in an Age of AI: A Practice Guide".
The framework, which is grounded in instructional alignment theory, contends that focusing on assessment characteristics risks allowing AI to drive and distort educational design, privileging technological capability over educational intent.
Here, Professor Cilliers argued that institutions are solving the wrong problem. The fundamental issue is not assessment disruption, but the disruption of learning outcomes themselves.
"We are currently being reactive, reacting to the capabilities of AI and redefining our assessment tools in light of the developments as they come. However, we run the risk of letting these tools drive our assessment practice," he said.
"What we really have to ask ourselves is: how do we systematically move from a piecemeal approach where you consistently have to redesign your assessments as new tools come out and instead develop a process where the assessment is aligned for maximum benefit in the learning process?"
The framework itself proposes a five-category typology for reviewing learning outcomes: Fully Human-Centric (AI plays no role); AI-Augmented (AI supports human cognition); AI-Enabled (AI necessary to achieve outcomes); AI-Dominant (AI automates with minimal oversight); and Obsolete (AI has fully replaced the need).
"Categorising learning outcomes like this helps us to figure out how AI influences outcomes in the first place - not just at the point of assessment. The assessment redesign can then follow from the redesign of the outcomes," Cilliers explained.
"What we hope is that this will keep us focused on education rather than on the abilities or the importance of whatever tool just happens to be available."
Outcomes under observation
Two major focuses were the impact of AI on writing and literacy development, and the move beyond surveillance towards trust-building, transparency, outcomes-based approaches and pedagogical reasoning.
In his presentation - "Writing Under Watch: How Do Writing Process Surveillance Tools Affect the Writing Process?" - learning technology services manager at UCT, Zaaid Orrie, highlighted the pitfalls of AI detection tools both in terms of their ability to actually root out AI use and how they affect students' performance.
"We have AI detection tools, but the research shows that these tools are generally quite unreliable and they often generate false positives," he said. Writing process surveillance tools go further: "They don't analyse the outcome of the essay, but they analyse the entire process of writing the essay."
While this might seem like a foolproof way of ensuring that students do indeed do their own work, Orrie noted that the mere act of surveillance in this way can negatively affect student outcomes.
"In Jeremy Bentham's Panopticon argument, the potential of being watched changed the behaviour of those being observed. This panoptic gaze constructs the student as a suspect - recording keystrokes, highlighting any copying and pasting, and logging how long it takes to construct an essay - before any misconduct occurs," he said.
"Research from social psychologist Robert Zajonc highlights this, noting that being watched changes how we perform. This is particularly true when it comes to complex tasks like writing. The anxiety created by the feeling of being watched makes students make safer choices.
"What's really important here is that we ask ourselves what pressures are we putting on students when they are writing essays under this technological observation, and how does that affect the outcomes that we are going to see?"
This article was written using three AI tools: Otter.ai for transcribing recordings taken during the seminar; QuillBot for summarising presentation abstracts and finding overlapping themes; and Claude to ensure the UCT style guide was appropriately applied to the final text.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.