
To advance AI, move slowly and fix things

The contemplative and risk-averse nature of academia contrasts with the fast-moving strategies of Silicon Valley, but that is our greatest strength, says Kate Devlin 

November 8, 2019

For the past two years, ethics has been a pressing issue in the world of artificial intelligence research. By now, the concerns around bias in machine learning have made headline news.

While ethics in AI initially centred on how machines might make moral decisions, the focus now is on the similarly arduous task of ensuring that the software and the datasets used in machine learning are representative, scrutable and fair. A slew of guidelines, principles and recommendations has emerged, all with the same central message on the need to erase bias, but, because of the difficulty of the work, few offer any concrete, implementable actions.

But amid the push to fight algorithmic injustice, a new branch of ethical consideration has come to light: the ethics of the development environment. Earlier this year, it emerged that the Massachusetts Institute of Technology’s Media Lab had accepted about $800,000 (£620,000) from American financier Jeffrey Epstein, a sex offender convicted of sexual acts with underage girls, and later charged with trafficking.

In August, the director of the MIT Media Lab, Joi Ito, published an apology, admitting that he had permitted the funding despite being aware of Epstein’s criminal record and reputation. Researchers at the lab spoke angrily of their discovery that their work was linked to Epstein. In the weeks that followed, more beneficiaries of Epstein’s funding in the field of AI and cognitive science came to light.

Academia is no stranger to issues of research integrity. The vast majority of UK HE institutions have policies in place to ensure that they avoid inappropriate sources of funding, whether this is out of social concern or as a matter of reputational risk, or both. But for AI, a discipline that attracts hype and investment in similarly large amounts, the shift from what is being funded to who is doing the funding has hit hard and hit publicly.

Universities are well placed to safeguard their research. The University of Cambridge, for example, was quick to distance itself from the Cambridge Analytica scandal, and leaked emails showed concern among staff about the association. For the university researcher, defending research integrity is, one hopes, as ingrained as academic citation.

For quite some time, the Silicon Valley mantra of “move fast and break things”, adopted by CEO Mark Zuckerberg as Facebook’s informal motto, has been viewed as the agile and responsive goal of tech development. By contrast, the slow-moving, cautious world of academia has been seen as the poor outsider, constrained by institutional bureaucracy and review boards. Researchers from postdocs to professors have moved from academia to industry, where wages are considerably higher and the admin load far lower. The consequences are a recruitment crisis in universities and an emphasis on producing research driven by marketable products.

But while industry wants academia to tell it how, it is less quick to engage with the why. Regulation of AI is far from standardised. Global principles have been suggested and adopted but are non-binding and generic in nature. Governments are aware of the need for action – the UK’s approach is widely lauded – but policy moves slower than practice.

Corporations such as Facebook, Google and Amazon are currently a law unto themselves, reactive to post-development outrage rather than proactive about the possible outcomes: it took a scandal of international importance to impel Zuckerberg to testify before the US Congress. In an attempt to show consideration, Google announced early this year that it was forming an ethical advisory board – its Advanced Technology External Advisory Council. It lasted barely a week before being dissolved, ironically because of tensions around the ethics of the participants themselves.

For universities, subject to the strictures of fine-grained ethical clearance and risk assessments, it seems unfathomable that research can be carried out without due diligence. No wonder the most cutting-edge research in AI has moved well outside the ivory towers, where money is plentiful and form-filling is a bad dream.

But that, perhaps, is academia’s power. The contemplative, thorough and peer-reviewed environment of the university is a place to draw together the strands that feed into AI: not just the computer science but philosophy, law, and design as well as the science and technology studies and the media theory that can all contribute to a more inclusive, thoughtful and ethical AI.

Universities are not about moving fast and breaking things; universities are about the critical analysis, the gathering of evidence and the sound methodology, the bigger picture. There is strength in moving slowly and fixing things.

“AI research is multidisciplinary at heart,” remarks Virginia Dignum, professor of responsible artificial intelligence at Umeå University and fellow of the European Artificial Intelligence Association. “Education of future AI researchers and practitioners requires embracing this multidisciplinarity. It also needs to ensure inclusion and diversity in a safe environment for all. Ultimately the future of AI and the impact it has on our society and on humanity depends on the choices we make now.”

Kate Devlin is senior lecturer in social and cultural artificial intelligence in the Department of Digital Humanities at King’s College London.

Kate will be speaking about the ethics of AI at THE Live on 27-28 November. The event will bring together some of UK higher education’s brightest minds to reimagine the role of universities.
