Artificial Intelligence in Research

The following resources have been collected to provide UCI researchers with guidance on the use of artificial intelligence (AI) in various scholarly and institutional contexts. As AI technologies continue to evolve, it is crucial to stay informed about the policies and best practices that govern their use in research, including the peer review process. Below, you'll find summaries and links to detailed information from the University of California, the National Institutes of Health (NIH), and the National Science Foundation (NSF), each addressing the responsible and secure use of AI in the academic research environment.

UCOP AI

Read about the University of California's initiatives in advancing responsible artificial intelligence. Learn how thought leaders are integrating ethical AI practices into education, research, and public service, and explore resources, events, and the governance frameworks guiding these efforts.

UCI Information Security

Read about the potential risks and considerations associated with AI. Learn about the secure use of AI chatbots like ChatGPT and Google's Bard, the dangers of AI-driven phone scams, and the impact of deepfakes and disinformation. Stay informed about the evolving field of AI and the importance of maintaining safety and security as technology advances.

NIH

Read NIH's notice prohibiting the use of AI technologies, such as natural language processors and large language models, in the peer review process for grant applications and R&D contract proposals. This prohibition is part of NIH's effort to maintain security and confidentiality in peer reviews.

NIH Extramural Nexus

Read this post on the Extramural Nexus explaining how using AI in peer review breaches confidentiality: reviewers must not use AI tools to analyze or critique grant applications. The NIH also warns against using AI in writing applications because of the risks of plagiarism and fabrication.

NSF

Read NSF's guidelines regarding the use of AI in its merit review process. NSF prohibits reviewers from uploading proposal content or review information to non-approved generative AI tools to maintain confidentiality and integrity.