Research Integrity and Safety
Guidance on AI Use in Research
Recommendations for the UGA Research Community
Our principles of research integrity and transparency serve as the touchstone for recognizing and understanding the proper use of Generative Artificial Intelligence (GenAI), both now and as its capabilities continue to change and grow. The UGA Office of Research Integrity and Safety (ORIS) supports the research community in upholding these principles.
As GenAI and AI tools continue to grow rapidly in capability, access, and use, we encourage faculty to help gather, develop, and disseminate best practices for using GenAI within their own research communities as appropriate (lab/group, department, center/institute, college/school, discipline) and to foster responsible and ethical use of GenAI tools.
This guidance is not intended as an exhaustive set of best practices. Because GenAI is rapidly evolving, this guidance will be updated as its capabilities, applications, and the implications of its use continue to develop. Please consult this site regularly, in addition to sponsor, publisher, professional association/society, and other disciplinary sources. We encourage faculty and their research teams to stay broadly informed about GenAI use, policies, practices, and requirements. If you have specific questions about using AI in your scholarly work, contact the Office of Research Integrity and Safety.
Generative AI (GenAI) Guidance from Federal Sponsors
For sponsored research projects, researchers are encouraged to review the policies or guidelines on GenAI use from both sponsors and UGA and to be mindful of any rules set by anticipated publishers. If you need clarification or have questions, please consult Sponsored Projects Administration (SPA). If sponsors have not established relevant policies or guidelines, please follow UGA's guidelines and policies, best practices in your discipline, and the requirements of target publications.
Key federal agency resources regarding AI are below.
White House
National Institutes of Health
- https://datascience.nih.gov/artificial-intelligence
- https://www.niaid.nih.gov/grants-contracts/nih-case-study-copy-paste
National Science Foundation
- https://www.nsf.gov/focus-areas/artificial-intelligence
- https://www.nsf.gov/news/notice-to-the-research-community-on-ai
U.S. Department of Agriculture
- https://www.usda.gov/about-usda/reports-and-data/data/usda-open-data-catalog/inventory-usda-artificial-intelligence-use-cases
- https://www.nifa.usda.gov/nifa-peer-review-process-competitive-grant-applications
Department of Energy
- https://www.energy.gov/cio/department-energy-generative-artificial-intelligence-reference-guide (Reference Guide)
Department of Defense
- https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF (Adoption Strategy)
National Aeronautics and Space Administration
- https://www.nasa.gov/nasa-artificial-intelligence-ethics/ (NASA and AI Ethics)
National Oceanic and Atmospheric Administration
GenAI Guidance from the University System of Georgia (USG)
See the USG Artificial Intelligence Guidelines: A USG IT Handbook Companion Guide. The initial release, version 1.1 (June 2024), will be reviewed and updated regularly as GenAI capabilities and applications continue to evolve.