By Varun S. Bhatta
Science fiction writer Ted Chiang, in a page-long story, depicts a future where humans and their languages have become obsolete for carrying out science. “Metahumans” do science in a language that humans can barely comprehend. The science journals of humans, once the foretellers of truth, have nothing novel to publish. The few papers that do get written are desperate attempts to translate metahuman science.
Although it is unclear whether we will ever reach this doom (or pinnacle) of the human sciences, a series of developments over the last few months combines into an interesting origin story for Chiang’s imagined future. The birth of ChatGPT, and its precocious abilities, excited people – until science journals started receiving papers listing the bot as an author.
This was an awkward situation for scientists. They could not deny the benefit of such assistance for specific activities, but granting AI authorship did seem to take things too far. The journals’ editorial boards decided to maintain this delicate balance. They appealed to scientists to use AI with an awakened “inner skeptic”, so that their “attention to detail” does not falter. Such use, however, still does not qualify ChatGPT as an author, “because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility”.
A half-measure policy
Even though the invocation of ethics – the off-the-shelf means available for differentiating us from them – is pertinent, the above stance is worrisome. To understand why, we need to trace a few steps back and ask why we are talking about ChatGPT in the first place. A similar submission with a microscope as a co-author – albeit intriguing – would not have caused such a furore. Generally, there are two criteria for authorship: (1) substantial contribution to the research conducted and (2) ethical accountability for it. An appeal for a microscope’s co-authorship would have been rejected on the grounds of (1) without ever reaching (2). In contrast, there seems to be an implicit presumption that ChatGPT, unlike other scientific tools, satisfies (1), and that only (2) stands between it and authorship.
If this is even remotely the case, should we not also worry about the ethical unaccountability of ChatGPT’s substantial contribution? Without that, invoking ethics only while evaluating its authorship looks like a last resort for protecting the realm of authorship.
The ethical question is hard to answer. But the present scenario should nudge us to re-examine more deeply what it is to do research, and to redefine the authorship criteria accordingly. The concept of a researcher/author in (1) is construed broadly by stringing together the activities that make up research, like the conception/design of the work, the acquisition/analysis/interpretation of data, and the writing of the paper. This definition will not suffice when these activities can be done by non-humans. It is time to shift the question from “Can ChatGPT be an author?” – which can be answered by ticking boxes – to a more fundamental one: “Can ChatGPT do research?”
Researching as opining
Research, in the sciences as well as the humanities, is an activity of knowledge production: the generation of novel knowledge or the revision of existing knowledge. An important stage in this process is the publication of a paper that documents a specific claim along with a novel argument or evidence for it.
What is the status of a published claim? To illustrate with a well-known example, Einstein’s 1905 light-quantum hypothesis was not well received and remained a weak proposal until Compton’s 1923 experiment. The gradual acceptance of photons, in turn, gave rise to the conflict between quantum and semi-classical field theories in optics. Only after the 1970s did the majority of physicists disregard semi-classical field theories.
When research papers are published, their claims are not immediately accepted as true. The claims have their own dynamics within a research community. They compete with one another, and during this phase, most of them are just expert opinions. Over time, a consensus develops that one claim is reliable and closer to the truth than the others.
Thus, research is an activity of producing opinions. For an open question in a discipline, there are competing alternative answers. Researchers opine by picking one of these and providing tentative reasons for their claim. Unlike commonly held “opinions”, which are uninformed, these are the plausibly true claims of well-informed experts. It is only by opining that we can move from the known to the unknown and thereby nudge a discipline’s growth.
Identifying opining as the foundational epistemic activity of research clarifies how to understand authorship in this context. Authors of research papers opine by conceptualising ideas and analysing data. In the case of group authorship, the co-authors collectively opine about a claim. The authored opinion’s novelty and significance are then evaluated to decide whether it merits publication.
Can ChatGPT be an author?
If researching is opining, can ChatGPT do that? It certainly seems to know about the current debates.
But when asked for its opinion on a debate, or to pick a side, it confesses “not to have personal opinions or beliefs”.
The “do not” instead of “cannot” in the above statement indicates ChatGPT’s deliberate refusal to opine. It could, but it does not. This becomes evident when we pick one of the sides and ask ChatGPT to support it. (This behaviour – ChatGPT providing an argument once a specific stance is explicitly given – can also be seen in the research paper that lists ChatGPT as a co-author.)
ChatGPT seems to be knowledgeable about debates and the possible stances in them. However, it does not pick a side by itself. Instead, it maintains the status quo and does not nudge the debate further. Since it does not opine, it does not do research. And in this fundamental sense, ChatGPT cannot be the author of a research paper, irrespective of the novelty of its results. This conclusion extends to the group scenario as well: since co-authors collectively opine about a claim, ChatGPT cannot be a co-author either.
Some might point out that this behaviour is just a limitation of ChatGPT’s current version. That might be so, but I think ChatGPT would cease to be useful the moment it starts opining. Moreover, the above argument clarifies in what sense ChatGPT fails ethical accountability: even if it could opine, it could not be held responsible for its opinions.
Bringing back the human
With the advent of ChatGPT, it is no longer sufficient to define research authorship through cognitive activities like designing, analysing and writing. These need to be supplemented with basic human epistemic attitudes, like opining, that play a fundamental role in knowledge creation.
But this policy change will be as short-lived as the current version of ChatGPT unless the science community reflects on the crucial role of writing in research and the place of humans in it. Science discourages subjective references by writers (like the personal pronoun “I”) in its texts and denies the role of personal values in its research. It is not surprising that the research of a discipline that has meticulously erased the presence of the human subject can, after all, be authored by a non-human.
Varun S. Bhatta is Assistant Professor of Philosophy at the Indian Institute of Science Education and Research Bhopal. He is a co-moderator of the Indian Philosophy Network and a member of Barefoot Philosophers.
Courtesy: Science.Wire