Driving tests are, for many, a traumatic experience, but we all appreciate the need for them. As far as possible, we want these tests to validly and reliably assess a candidate’s competence to get behind the wheel without endangering other road users.
Does that entail that we would never grant a full driving licence to someone taking the test in a self-driving car? This is a question that governments will have to face soon. Self-driving cars are a technological development that we cannot simply ignore. They appear set to replace human-driven cars just as human-driven cars once replaced horse-drawn carriages. The genie is out of the bottle, so it is no use banning future drivers from taking their tests in self-driving cars.
Yet my initial reaction – and I doubt that I am alone – is that we wouldn’t even consider granting a full driving licence in such a situation. The licence is evidence that the individual has actually demonstrated particular competences, which are not evidenced when a driver uses a self-driving car. This is true regardless of how enthusiastically our society adopts new technologies or whether we consider such developments to be inevitable. Granting licences to test takers in self-driving cars would undermine the meaning and value of a driving licence.
The parallels with using generative AI in university assessment are obvious. So under what circumstances would we award a degree or credit to someone who completes their assessment using the likes of ChatGPT? And is our reasoning consistent?
One response may be that we are explicitly assessing a student’s ability to use particular technologies and that these will increasingly involve AI tools. But many, if not most, of the competencies that we assess do not refer to the use of technology (in the programme in which I teach, the use of technology is addressed in only one of 10 learning outcomes). Ought we, then, to conclude that, as in the driving licence scenario, granting credit when AI is used would in most cases be entirely inappropriate?
Perhaps we could suggest that strictness is justified in driving tests because the potential consequences of incompetence (fatal car accidents) are far more significant than what might happen if an accountant is awarded a degree based mostly on AI use. However, there can be grave consequences for disciplinary incompetence, too. In relation to accounting, for example, we need only recall Enron, WorldCom, the Global Financial Crisis, Bernie Madoff and more recent scandals.
Yet we have no problem with drivers taking their tests using cars that have power steering, anti-lock brakes and any number of other technological aids to driving. Why single out AI? The answer is that technological enhancements differ in ways that matter. For instance, licences are restricted when people take their tests in cars with automatic transmission: those licence holders are not permitted to drive cars with manual gears. While some technologies, such as anti-lock brakes, are aids that do not significantly affect the demonstration of core competencies, others, such as automatic transmission, do affect it.
Returning to higher education, this suggests a need to draw a distinction between technological aids that assist students and those that perform tasks on their behalf (a distinction that may indeed be quite blurry). Yet even when tools such as a grammar checker are merely assisting students, their use is surely only appropriate in a summative assessment if we are not assessing the competence to which the tool contributes. That is, just as we would not allow an automatic car to be used if a driver is being assessed on their ability to change gears, the use of grammar-checking is not appropriate if the student is being assessed on their ability to formulate grammatically correct sentences.
If this sounds harsh, it is perhaps because I am talking specifically about summative assessment. Technological tools (including AI) can clearly be appropriate and extremely beneficial when used formatively, just as a novice driver might benefit from watching how a self-driving car “behaves” or from observing when an automatic transmission system shifts gear.
The challenge posed by generative AI is that, unlike many other tools, it can increasingly perform entire tasks. It is more than just a technological aid. It is more than anti-lock braking or automatic transmission; it is the self-driving car. In this context, we could do with some guiding principles. Is the following too ambitious?
To the extent that AI can perform or simulate the competency being assessed, it is not appropriate for students to use it to complete an assessment task.
is associate professor in the School of Accountancy at Queensland University of Technology.