Let’s not become overly reliant on generative AI in graduate training

Technology can offer a very useful helping hand, but an array of pitfalls needs careful handling, says John Miles

December 31, 2024
[Image: a robot teacher. Source: PhonlamaiPhoto/iStock]

Easily accessible and with formidable potential, generative artificial intelligence (AI) tools are already having a profound impact on higher education. By one recent estimate, more than half of UK students have used a platform like ChatGPT to help them summarise texts.

Universities have found themselves playing catch-up. Their challenge is to define policies that ensure learning and assessment stay meaningful and fair, but also to see to it that they and their students are prepared for a future that will undoubtedly be shaped by these technologies.

Administrators and training leaders are already seeing the benefits of AI in their day-to-day work. The applications are multifarious. Putting together next term’s events? Get ChatGPT or Claude to write your course descriptions for you based on last year’s iterations and some new prompts. Overwhelmed with a stack of electronic feedback forms? Ask a chatbot to summarise them for you and pull out actionable insights. Staring at a digital dashboard full of attendance, evaluation and other data? An AI agent will soon be able not only to answer your queries but also to suggest pertinent questions to ask of the data.
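To make the feedback-summarising example concrete, here is a minimal sketch, assuming OpenAI’s Python client and an API key in the environment; the model name and prompts are illustrative placeholders, not recommendations.

```python
# A minimal sketch, assuming the official OpenAI Python client (openai>=1.0)
# and an API key in the OPENAI_API_KEY environment variable.
# Note: anything pasted in here leaves your institution -- the very
# data-protection risk discussed later in this piece.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

feedback_forms = [
    "The workshop was useful but ran far too long.",
    "More hands-on exercises would help.",
    "Couldn't hear the speaker from the back of the room.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Summarise this course feedback and list actionable insights.",
        },
        {"role": "user", "content": "\n".join(feedback_forms)},
    ],
)

print(response.choices[0].message.content)
```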

Time savings will help small teams to support larger numbers of researchers, moving more quickly and efficiently than ever to adapt their offering and keep researcher training relevant in a rapidly changing world.

Meanwhile, AI’s summarising skills can help academics with CVs and with high-effort, low-chance-of-success exercises such as grant applications. And cleverly trained chatbots can point out relevant training resources and opportunities in a more personalised manner than a traditional internet search engine can achieve.

This rich seam of professional development benefits might seem to be too good to be true. And in some ways, it is. Although the technology itself is revolutionary, the ways in which we use it for researcher training could quickly prove to be problematic.

That course description you titivated with a chatbot was your (and your university’s) intellectual property, but it is now ingested and ready to be reformed and regurgitated into someone else’s work. Such is the nature of the open large language model (LLM). Likewise, entering feedback data (especially when not anonymised) into an AI tool will most likely constitute a data breach, with potential legal consequences, since universities do not have data-sharing agreements with open platforms like ChatGPT.

Concerns around the interaction of confidential data with LLMs have led big players like Amazon to restrict what their employees can share with chatbots. In practice, if we are going to use AI with large confidential datasets, then that AI will need to be penned in: that is, not connected to the wider web. But that defeats part of the object of LLMs, with their depth of possible responses, and it will probably be far more expensive to develop or purchase and run than even an organisation-level subscription to an LLM-based AI agent.
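For illustration, a penned-in deployment can look almost identical in code: the same client pointed at a model hosted inside the institution, so prompts never cross the network boundary. This sketch assumes a locally run, OpenAI-compatible server such as Ollama or vLLM; the URL and model name are placeholders.

```python
# A sketch of the penned-in alternative: an OpenAI-compatible server hosted
# on the institution's own network (e.g. Ollama or vLLM), so confidential
# data never leaves the institution. base_url and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local/institutional endpoint
    api_key="unused",  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # whichever model the local server hosts
    messages=[
        {"role": "user", "content": "Summarise this term's attendance data: ..."},
    ],
)
print(response.choices[0].message.content)
```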

Becoming too reliant on AI’s helping hand will have further consequences for researcher training. If fewer resources and people are required to support larger numbers of researchers, we risk losing the tacit knowledge and expertise built and transferred between teams of professional support staff. And as well as the well-publicised risk of misleading “hallucinations” in AI responses, the suggestions researchers receive from open AI chatbots will rarely be sensitive to their immediate social and institutional context. Recommendations for next steps will be painted with a broad brush only – unless an LLM is trained on local resources, which returns us to the issues of cost, confidentiality and intellectual property.

Moreover, if researchers turn only to AI agents for professional development support, they will miss out on the social side of training, through which they meet other researchers from other departments. And perhaps more importantly for universities, that will preclude the interdisciplinary intellectual fireworks that can occur in these settings.

Also, if researchers turn to AI exclusively for their CVs and grant applications, will these important vessels of individual achievement and potential become boringly similar and linguistically turgid? After all, AI doesn’t stand for artificial inspiration.

Alive to the possibilities of generative AI, some universities are now deploying their own AI platforms, developed in-house. These home-built versions have been provisioned precisely to address the ethical and security concerns raised above. For better and for worse, they are limited in scope. These tools may yet fail, but they speak to the need to take a responsible approach to AI, articulating a balanced perspective focused on ethical implementation.

From this standpoint, AI should become neither the sole future nor a minefield for researcher training. Instead, it should constitute an important tool that, used carefully and with full understanding of its abilities and limitations, can take researcher training to new heights.

John Miles, founder and CEO.
