Times Higher Education

AI is here to stay, so we may as well learn to use it? That’s a cop-out

We can’t just concede that usage is ethical if within universities’ narrowly conceived rules. What about the pursuit of truth, asks Benjamin Mitchell-Yellin

December 18, 2024
Montage of a circuit board engulfed in smoke and flames
Source: Getty images montage

I receive many invitations to attend webinars and download explainers about the ethical and responsible use of artificial intelligence in higher education. I’ve a keen interest in the topic and want to take part in the important discussion about how these new technologies should shape our world.

But I’ve become increasingly disillusioned. In addition to being the director of my university’s Teaching and Learning Center, I’m also a tenured philosopher with a specialisation in ethics. What passes for ethical discussion in these spaces strikes me as unimaginative, at best.

My colleagues seem to think that only a few considerations are relevant to questions about what we should do with AI tools, and they frame the conversation in such a way that a limited range of conclusions are on the table. They’re working with an unduly narrow conception of ethics.

To be clear, I’m not upset because I think this work should be left to the ethicists. I do, however, think the evident lack of imagination amounts to an abdication of 中国A片’s responsibility – to ourselves, our students and society at large.

Let’s start with the narrow range of what’s considered relevant. There are a number of reasons to be concerned about generative AI. Among them is that it is incredibly energy-intensive and threatens to worsen the climate crisis. The press frames this issue in terms of whether high energy demands now will be offset by efficiency gains later. Meanwhile, those who’ve historically received the short end of the stick are bearing the brunt in terms of damage to their local environments. At least the fourth estate is considering the matter; universities are largely skirting it.

Take, for example, the generative AI webpages of the and the , two excellent universities and among the in the world to study ethics. Both give scant mention of ethics in relation to generative AI, and neither mentions environmental impact. The site at least includes environmental impact among relevant ethical considerations, but it quickly moves on to focus on privacy, accuracy and equity. And though the puts “environmental well-being” among the headings of its questions for educators, the actual questions underneath make no mention of energy consumption.

You might think this makes sense because these universities are focused on educating students, not saving the planet. You’d be wrong. Both and declare climate change among humanity’s greatest challenges and tout programmes to tackle it. Yet, when it comes to generative AI, they’re mainly concerned with academic honesty and data security. (No matter that these tools were trained on large datasets of other people’s words and images, with significant .)

It's not that privacy, security and bias don’t matter. It’s that they’re not all that matters.

This brings me to my second complaint. The typical webinar or explainer promises to help me learn “how to” use AI. This framing is inviting, promising concrete techniques I can put into practice. It’s also restrictive, limiting discussion of reasons not to use AI to the domain of the technical. If it won’t undermine student learning objectives or violate university policy, go for it!

Ethics is the study of what should be done. This includes questions about consistency with policy and law, as well as questions about well-being and morality. It also includes possible conclusions, not just about how to do something, but also about whether to do it.

Let me head off one obvious rejoinder. It won’t do to say that generative AI is here to stay, so we may as well learn how to use it. That people already do something or that the possibility of doing it will never go away aren’t reasons to assume we should be doing it.

An analogy may help. Even supposing it will always be possible to eat meat and that many people will continue to do so, it would be strange if every discussion of the ethics of eating meat were framed in terms of how to eat it responsibly. There’s room to ask how to more ethically raise animals to be eaten and to ask whether it’s ethical to eat them in the first place. What’s different about AI?

This gets at the crux of the matter. When those of us tasked with producing and disseminating knowledge acquiesce to the narrative that it’s ethical and responsible to use AI tools – as long as they don’t undermine the narrow policies and objectives of institutions of higher learning – we abandon our responsibility to lead society in pursuit of the truth.

But you might object: Simply asking questions is futile and won’t change anything. To which I say: If you think enquiry is ineffectual, then you shouldn’t be working in higher education. By failing to ask the hard questions ourselves, we’re failing to model for our students how to do so. And what is it we’re doing with our lives if we think teaching and learning make no real impact?

It’s important we remember that there’s room for nuance. Some uses of AI may be justified, even with emissions and other costs taken into account. I’m not calling for every university to ban AI. I’m calling for us to engage in good-faith ethical enquiry. This is the responsible way to chart a path forward.

Benjamin Mitchell-Yellin is an associate professor of philosophy at Sam Houston State University, Texas.


Reader's comments (2)

Does the ethicist know anything about AI, its multiple forms, and many uses, good, bad, and in between? No! AI is neither new nor singular and unitary. It is complicated with a range of uses. This screed shows no awareness of AI's contradictory forms, uses, and abuses.
graff.40: With respect, did you read and reflect, or simply scan and respond? A short article can’t touch on every aspect in detail, but it’s clear the author is aware of more than can be explored here, and accepts the potential benefit of some applications of AI, despite the energy-resource implications. The Industrial Revolution massively increased the consumption of what we now call finite resources and simultaneously increased the rate of environmental degradation. Perhaps in 1750 it was not possible to imagine the future scale of industrial activity, or the scale of pollution, or even the addiction to consumption that still drives developed economies some 275 years later. I’m not confident that the next 275 years will witness what will feel like welcome progress. It has to be right to ask fundamental questions about new technologies, but lately it seems that only profit matters. The full cost of AI is more important than arguments over definitions and distinctions.