Role of Generative Artificial Intelligence in Publishing. What is Acceptable, What is Not

J Extra Corpor Technol 2023, 55, 103–104

Without a doubt, Generative Artificial Intelligence (AI) has been a hot topic in the media in recent times. This was spurred initially by the popular, widespread use of ChatGPT and other platforms to generate written material, imagery, and even audio/visual productions from user-supplied prompts. Generative AI is defined by AI itself as: “a type of AI that uses machine learning algorithms to create new and original content like images, video, text, and audio”.

How do these technological advancements impact us in the scientific publishing world? Specifically, when is it appropriate, and perhaps more importantly, when is it NOT appropriate, to use such tools in producing published scientific articles?

Strictly speaking, every time a word processor suggests a better way to phrase a sentence, basic AI is being applied to one’s writing. Taken to a much more sophisticated level, a writer can submit a roughly written draft to a generative AI platform built on large language models (LLMs), and a more polished written output can be produced and ultimately submitted. If a student in an English class, one meant to teach students how to write well, submitted such a piece for an assignment, this use of AI might constitute cheating. However, when authors use AI to help polish their work for publication, this should be entirely appropriate, since such an application enhances the work and helps readers comprehend and appreciate it better. Our journal has recently started providing our authors the option of using a “comprehensive writing and publication assistant” to improve their submissions. Submitting authors should see a link to the service we are partnering with: the Paperpal Preflight Screening Tool. For a very reasonable fee, the tool offers translation, paraphrasing, consistency, and journal submission readiness checks on uploaded manuscript drafts. This service is particularly helpful for some international authors who may have a challenging time meeting our language requirement standards.

In another scenario applicable to publishing, say a peer reviewer wishes to use AI to evaluate a submission. You might be asking: “wait, can AI do that?” Most certainly! Would it be acceptable, though? There are indeed platforms trained on publicly available biomedical publications, such that the AI can look up references to help a peer reviewer assess a manuscript. Perhaps the peer reviewer just needs help getting started with the first draft of their review, or they may feel that the author’s language skills need considerable help, as in the earlier scenario. A major difference here, however, is that when peer reviewers upload a manuscript to one of these platforms, they breach confidentiality, which is not acceptable. The NIH does not allow AI to be used in the peer review of grant applications, and neither should such technology be used for publication peer review, because the same breach of confidentiality occurs when an author’s manuscript is uploaded to a third party’s platform. Hosseini and Horbach (2023) identified further concerns, faulting “the fundamental opacity of LLM’s training data, inner workings, data handling, and development processes”, which raise questions about “potential biases and the reproducibility of review reports”. JECT peer reviewers will therefore be instructed not to rely on such systems in conducting their evaluations. Moreover, no editorial decisions on the final outcome of any manuscript will be made using AI tools alone.

To help authors navigate this new terrain, JECT will endeavor to provide new guidance in our Instructions for Authors, as other journals are currently doing. Some principles that other journals are recommending, and that we will likely adopt, include:

  1. Authors should provide a declaration of any AI-generated content in the submission and include the details of the tool or system used, such as the Paperpal example above. These declarations should be made whether generative AI technology is used in the writing process or to generate tables, figures, or even videos. Basic tools for checking grammar, spelling, and references are exempted.
  2. Authors confirm that they have reviewed the product generated by the AI systems and are ultimately responsible and accountable for the integrity of such content.
  3. AI systems cannot be credited as authors. Only humans can.

As many other journals do, JECT uses other AI-like tools, such as similarity checkers, but will always apply human review and oversight. In the future, other tools might become available for journals to use in ways that improve quality and efficiency. In these early stages of a new era, generative AI technology and its uses will continue evolving, raising new issues and concerns that have yet to be encountered or addressed. The Committee on Publication Ethics (COPE) has provided guidelines for the use of AI tools by authors, peer reviewers, editors, and publishers in scholarly publications, and will likely continue to provide such guidance in the future. Ultimately, full transparency, responsibility, and accountability are the primary principles to adhere to as we continue our mission to disseminate trusted information within our sphere of perfusion sciences.

Acknowledgments

My thanks to fellow editors Donald Likosky and Luc Puis for their assistance with resources for this editorial.

Bryan Lich’s Comments: Just for fun, this is how AI would rewrite this publication (see below). We also used the DALL-E AI to generate the cover image for this article. Do you think that is air you’re breathing now? Hmm?

AI Content Enhancement: “Generative Artificial Intelligence (AI) is becoming an increasingly discussed subject in today’s media landscape, spurred on by the popularity and widespread adoption of platforms like ChatGPT, which enable the creation of text, visuals, and multimedia content in response to user inputs. Generative AI is characterized by its use of machine learning models to innovate and produce novel content, including text, images, videos, and audio.

The emergence of such technology raises pertinent questions within the realm of scientific publishing. When is the utilization of generative AI tools suitable, and, perhaps more crucially, when should it be avoided in the creation of scientific literature?

At its most elementary level, AI comes into play when a word processor recommends sentence refinements, which is a basic form of AI assistance in writing. On a far more advanced scale, writers could enhance their initial drafts using generative AI platforms powered by large language models (LLMs), potentially leading to more polished final submissions. If a student were to use this method for an assignment in an English class—where the objective is to learn writing skills—it could be deemed dishonest. Conversely, for authors refining their manuscripts for publication, employing AI can be seen as entirely suitable since it serves to improve reader understanding and engagement with the material. Our journal has introduced a “comprehensive writing and publication assistant” for our authors, offering services such as translation, paraphrasing, and submission preparation through a partnership with the Paperpal Preflight Screening Tool, which can be especially beneficial for international authors facing linguistic barriers.

Considering another publishing context, imagine a peer reviewer contemplating the use of AI to assess a manuscript. While AI technology is indeed capable of assisting in the evaluation process by referencing a vast database of biomedical publications, it introduces a significant ethical dilemma. The use of such platforms by peer reviewers could result in a breach of confidentiality, an act that is considered unacceptable in both grant application reviews, as per NIH guidelines, and manuscript evaluations for publication. Concerns highlighted by Hosseini and Horbach (2023) regarding the inherent lack of transparency in LLMs’ training data and processes underline the potential for bias and issues with the reproducibility of reviews. Therefore, JECT will advise its peer reviewers against depending on these systems for evaluations and will not base editorial decisions solely on AI-generated inputs.

To guide authors through these new developments, JECT plans to update our Instructions for Authors, aligning with practices adopted by other journals. These may include:

  • Declarations of AI-generated content within submissions, along with details of the tools used, except for basic grammar, spelling, and reference checks.
  • Assurance from authors that they have reviewed and are responsible for the integrity of AI-produced content.
  • A stipulation that only humans can be recognized as authors.

While JECT already implements AI-like tools for tasks such as similarity checking, human oversight remains paramount. As we navigate the nascent stages of this generative AI era, it is expected that the technology will evolve, introducing new considerations and challenges. COPE provides guidelines on the use of AI tools in scholarly publishing and is likely to continue to offer insights. In our mission to disseminate trusted information within perfusion science, JECT will uphold the principles of transparency, responsibility, and accountability.”
