Artificial intelligence (AI) is changing the landscape of higher education in both teaching and research. For research activities, policies and guidelines on the use of AI, specifically for writing and reviewing manuscripts, papers, and grant proposals, are evolving across federal agencies, journals, and academic institutions. Investigators, project staff, and students are responsible for knowing the applicable policies and guidelines surrounding the use of AI programs and tools, and for questioning how reliable those tools are for use in the research environment (NIH, 2023).
Can AI be listed as an author?
No. There is a consensus among journals and research communities that AI models “cannot meet the requirements for authorship as they cannot take responsibility for submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements” (Committee on Publication Ethics [COPE], 2023; Zielinski et al., 2023; Flanagin et al., 2023).
The concept of ‘responsibility’ is more than ownership; it also encompasses accountability. Generative AI cannot be an author “because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility” (Nature, 2023; Hosseini, Rasmussen & Resnik, 2023). Accountability is an essential element of authorship because it communicates liability and answerability for the work.
Can AI be used in writing and/or developing manuscripts?
Specific journals and research disciplines have different requirements concerning the use of AI in the writing process. In general, “authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used” (COPE, 2023; Zielinski et al., 2023; Flanagin et al., 2023). Authors are responsible for ensuring that AI-generated outputs are appropriate and accurate. “Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased” (International Committee of Medical Journal Editors [ICMJE], 2023; Hosseini, Rasmussen & Resnik, 2023).
According to ICMJE standards (2023), authors should take steps to avoid plagiarism in AI-generated text and images. Any quoted material should be appropriately cited and attributed (ICMJE, 2023). In general, the AI model itself should not be cited as the author of the quoted text. For example, when quoting text produced with ChatGPT, the cited author should be the developer of the model, OpenAI, not ChatGPT itself. Information on how to cite AI when using the American Psychological Association (APA) style is available in APA’s published style guidance.
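As an illustration (this entry is hypothetical and follows APA’s published format for citing a generative AI tool; the version date and URL shown are placeholders that should be adapted to the version actually used), an APA-style reference entry might look like:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

An in-text citation would then name the developer, for example (OpenAI, 2023).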
When considering the use of generative AI in scientific writing, users must accept responsibility and accountability for the content produced by such tools. As indicated above, generative AI tools cannot be responsible or accountable. Because generative AI has been found to plagiarize and fabricate material, authors who rely on AI-generated material without confirming its accuracy open themselves up to findings of academic and research misconduct should fabrication, falsification, or plagiarism be contained in those AI-produced materials. Accuracy and integrity in scientific work remain the researcher’s responsibility, for which they are accountable.
Can AI be used in writing grant applications?
Many of the concerns that arise when using AI to write or develop manuscripts (see above) also apply to writing grant applications. Grant applications are assumed to represent the original and accurate ideas of the applicant institution and its researchers. However, because AI tools have the potential to introduce plagiarized, falsified, and fabricated content, grant applicants should be cautious about any AI-produced content and are warned that funding agencies will hold applicants accountable for any plagiarized, falsified, or fabricated material (i.e., research misconduct) (Lauer, Constant, & Wernimont, 2023).
Can AI be used in the peer review process?
The National Institutes of Health (NIH) has prohibited “scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals” (NIH, 2023). Utilizing AI in the peer review process is a breach of confidentiality because these tools “have no guarantee of where data are being sent, saved, viewed, or used in the future” (NIH, 2023). Both using AI tools to help draft a critique and using them to improve the grammar and syntax of a critique draft are considered breaches of confidentiality.
How should AI use be reported in my research?
Rigor and reproducibility standards are often established by specific journals and research disciplines. Transparent and complete reporting of the methodology and materials used is crucial for promoting reproducibility and replicability. The Association for the Advancement of Artificial Intelligence publishes a helpful reproducibility checklist.
References
Committee on Publication Ethics [COPE]. (2023, February 13). Authorship and AI Tools. Retrieved June 14, 2023.
Flanagin, A., et al. (2023, February 28). Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637–639.
Hosseini, M., et al. (2023). Using AI to write scholarly publications. Accountability in Research.
International Committee of Medical Journal Editors [ICMJE]. (2023, May). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Retrieved June 14, 2023.
Lauer, M., Constant, S., & Wernimont, A. (2023, June 23). Using AI in peer review is a breach of confidentiality. Retrieved July 12, 2023.
National Institutes of Health [NIH]. (2023, June 23). NOT-OD-23-149: The use of generative artificial intelligence technologies is prohibited for the NIH peer review process. Retrieved June 25, 2023.
Nature. (2023, January 24). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Retrieved July 10, 2023.
Zielinski, C., et al. (2023, May 31). Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Retrieved July 10, 2023.