Researchers embracing ChatGPT are like turkeys voting for Christmas

The technology threatens to impoverish research and destroy humans’ ability to understand the social world, says Dirk Lindebaum

May 2, 2023

Since the launch of ChatGPT at the end of November last year, articles on the possibilities and perils of the sophisticated chatbot for teaching and pedagogy have become a dime a dozen. But its effect on research is likely to be no less profound.

In some social science disciplines, such as information management, ChatGPT already acts in effect as a co-author. Researchers are excitedly jumping on the bandwagon, using ChatGPT to render the research process more “efficient”. In my opinion, however, the use of ChatGPT in research can create at least three adverse outcomes.

The first is that using the technology to compile literature reviews will impoverish our own analytical skills and theoretical imagination. When we write our own literature reviews, we read for understanding: we seek to know more than we did before through the power of our own minds. This involves a willingness to overcome the initial inequality of understanding that can exist between reader and author (such as when a PhD student reads a monograph in preparation for a first-year report). And the effort enables us to see and make new theoretical connections in our work.


But ChatGPT can’t understand the literature: it can only predict the statistical likelihood that the next word will be “a” rather than “b”. Hence, the literature reviews it produces merely offer up food for thought that is past its best-before date, given that its training data are not necessarily current. This is why some have described ChatGPT’s knowledge production as occurring “within the box” rather than outside it.

Being able to understand the current literature and to harness the imagination is crucial for linking observed phenomena with theoretical explanations, or for understanding how to improve future practice. The risk is that an over-reliance on ChatGPT will deskill us mentally, leaving us poorly equipped when we need solutions to novel, difficult problems.

The second problem with the use of ChatGPT in social science research is that it changes the mode of theorising. The technology processes data through computation and formal rationality rather than through judgement and substantive rationality. Thus, when it is applied to theorising, it embodies an assumption that the world is based on abstract and formal procedures, rules and laws that are universally applicable. This is an outlook that Max Weber argued is detrimental to social life.

Such a detriment might arise, for instance, when human- or socially developed norms and practices for regulating conflicting interests undergo fundamental changes as judgement is substituted by reckoning in decision-making. Morality thus becomes rather mechanical, prompting situations in which “decisions are made without regard for people”, to quote Weber.

Thus, in the computational approach, morality is considered a universally applicable phenomenon that can be expressed through computation. By contrast, a mode of theorising based on judgement that is sensitive to the local, social and historical context of phenomena tends to appreciate that values are negotiated, renegotiated or even contested over time.

This concern is exacerbated by the fact that ChatGPT has been shown to reproduce discriminatory associations concerning gender, race, ethnicity and disability because of biased training data. As Brian Cantwell Smith argued in his 2019 book The Promise of Artificial Intelligence: Reckoning and Judgment, if we are “unduly impressed by reckoning prowess”, there is a risk that “we will shift our expectations on human mental activity in a reckoning direction”. My argument is that this observation also applies to theorising as a human mental activity.

The third problem with using ChatGPT in research is that it distorts the conditions for a fair and truly competitive marketplace for the best ideas. Publications are scientifically valuable, but they also matter for career progression and status. And the difficulty of obtaining permanent posts generates a strong temptation to skip the hard thinking and writing that normally go into well-crafted papers in pursuit of a longer list of publications to put on a CV.

I am entirely unimpressed by attempts to assuage concerns by arguing that ChatGPT will only ever be a research “tool”, with human authors remaining in charge. At the end of the day, even if the use of ChatGPT is transparently declared, it is difficult to tease apart the human’s and the machine’s relative contributions. Some years ago, through perseverance and dedication to reading for understanding, my co-authors and I managed to turn a 15-page rejection letter for an essay into an acceptance letter. That experience is ours, and it is a reminder that academic success is more rewarding when we can appreciate the effort that went into it.

I realise that my concerns are probably shared only by a minority of researchers. The rapid adoption of ChatGPT in research makes it abundantly clear that the technology is here to stay. Yet it is important to understand that this development is likely to impoverish rather than enrich our theoretical and analytical skills in future: just look at the fact that intelligence levels in the general population are decreasing as the use of technology increases.

The risk is that we, as researchers, ultimately lose the ability to explain and understand the social world. Who wants to be a turkey voting for this bleak Christmas? Count me out.

Dirk Lindebaum is a senior professor in management and organisation at the Grenoble École de Management. This article is based on a forthcoming paper in the Journal of Management Inquiry.

Reader's comments (2)

Do you run your regressions by hand? Use tools where it makes sense to do so and where you get the maximum productivity gains. You don't need a PhD to get that; it is common sense. Where these models don't work, they often fail miserably. Much of the hype around this in academia is from people who have no idea what they are talking about.
"Do you run your regressions by hand?" You have missed the point of this commentary completely; probably intentionally so. It is not about the informed use of tools (digital or others). It is about the threat to our intellectual imagination. Lindebaum is rightly talking about the threats to the “intellectual craft” that is part of the process of thinking and understanding (including writing, which is not simply an activity of reporting results, but an integral part of thinking, creativity, and imagination so important for intellectual pursuits). An apt analogy is that of an artisan cabinetmaker honing her/his skills through practice while creating something new (and often novel and unique). Now, you can automate the production of cabinets, but the artisan skills are lost along the way (and the entire makeup of a cabinet is adjusted to the fit requirements of the machines and processes set up for producing them). The artisan craftsperson is turned into a worker simply operating them. This transformation involves substantial deskilling (of the craft) and reskilling (to operate machines); nonetheless, a substantial change. Workers are basically reduced to being mere operators (and if you further automate, entirely replaced by autonomous robots sooner or later). In academia, we will see more “unthinking” reporting of “structured reviews” of this or that, shallow research results of spurious causal relations and variables tested with the help of statistical tools; statistically sound and methodologically sophisticated but conceptually uninspiring and often suspect. Most of the stuff published in academic journals these days is intellectually underwhelming, lacking real novelty and true imagination (I can only talk about my own discipline of business and management, though). The use of AI and tools such as ChatGPT will further accelerate this process (and it will come, I have no doubt about that). It will greatly advance the careers of the “career minded” academic, who are already focusing on pleasing metrics, hitting productivity targets, the quantity of output and the purported “sophistication” of the tools used, pumping out one SEM-based nondescript study after another, for example. They are already akin to being academic “factory workers” rather than “intellectual craftspeople”. If the above get eventually automated away by AI tools, so be it. Maybe you are right, and it is not a bad thing in the end. The deluge of outputs (due to the publish or perish game) hardly gets read anymore already. If we eventually automate and outsource the entire process of academic writing, reading, and teaching, we could be so much more efficient, pumping out even more stuff that nobody reads. The AI can do the lecturing and conference talk too. We will be able to send our AI avatars to such events. Much more efficient this way, who has time for such ancient nonsense. Eventually, AI will talk to AI. Finally, we will be free to do things that really count. Unfortunately, we will have lost the essential skills required to do something intellectually meaningful by then, along with the understanding of why such intellectual skills are relevant in the first place. Be sure, the rich and powerful will continue to have access to intellectual craftwork (and skills), but it will be intellectual fast-food for the masses most likely.
