Presidential Interdisciplinary Seed Grants

Socio-Technical Solutions for Countering AI-Generated Deep Fakes

Cluster Engagement Track

Project Summary
The term Deep Fakes refers to realistic AI-generated images, videos, or audio used to create “fake but believable” content that can mislead humans. While Deep Fakes have legitimate uses, such as in the entertainment industry, they can also be used maliciously to manipulate people’s perceptions of real-world facts, as in cyber-warfare, election manipulation, and misinformation/disinformation campaigns on social media. Although some technical approaches to countering Deep Fakes have been proposed, they are far from perfect, and new research at the intersection of AI and Cybersecurity is needed to build more reliable Deep Fake detection solutions. Additionally, the mere existence of Deep Fakes has caused a global erosion of trust in digital media. New solutions that enable humans to regain trust in digital content are therefore urgently needed. In this project, we propose to address the Deep Fakes problem with a highly interdisciplinary approach involving UGA faculty who conduct research in Cybersecurity, Artificial Intelligence, Computer Vision, Journalism, Public Policy, and Law.

First, the PIs propose to investigate two technical research directions: (1) improving the detection of AI-generated images through novel AI-based computer vision approaches that also enable explainability of the detection system’s results; and (2) reinstating trust in digital images by developing a Trusted Camera mobile application that enables hardware-assisted image provenance attestation. Next, the PIs will study how consumers of digital content can be aided in determining which images can be trusted, by examining how users perceive the results of Deep Fake detection tools and by developing usable image provenance systems that allow users to verify the origin of an image (e.g., the time at which a photo was taken, its location coordinates, etc.). Finally, we will investigate the public policy and legal aspects of Deep Fakes and research how these intersect with the new technical solutions developed in this project. Besides being included in academic publications and grant applications for extramural funding, the results of this research will also be disseminated through UGA’s CyberArch cybersecurity clinic program, where they will be developed into teaching material for the cybersecurity risk assessment training received by participating CyberArch students.
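As background for direction (2), image provenance attestation generally binds an image and its capture metadata to a key held by the capturing device, so that any later modification of pixels or metadata is detectable at verification time. The sketch below is purely illustrative and is not the proposal’s actual design: it uses a symmetric HMAC from Python’s standard library as a stand-in for the hardware-backed signing key a Trusted Camera would use, and the key, function names, and metadata fields are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a key sealed inside the camera's secure hardware.
DEVICE_KEY = b"example-device-key"


def attest(image_bytes: bytes, metadata: dict) -> dict:
    """Bind an image and its capture metadata (time, GPS, etc.) to the device key."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "tag": tag}


def verify(image_bytes: bytes, attestation: dict) -> bool:
    """Recompute the tag; any change to the pixels or the metadata makes it fail."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(
        attestation["metadata"], sort_keys=True
    )
    expected = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])


photo = b"...raw image bytes..."
att = attest(photo, {"time": "2024-05-01T12:00:00Z", "gps": [33.95, -83.37]})
assert verify(photo, att)              # the untouched image verifies
assert not verify(photo + b"x", att)   # a tampered image fails
```

A real system would use asymmetric signatures rooted in the device’s secure element, so that anyone can verify an attestation without holding the signing key.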

Team Lead

Roberto Perdisci
Institute for Cybersecurity and Privacy
perdisci@uga.edu

Team Members

Jin Sun
School of Computing

Le Guan
School of Computing

Justin Conrad
Center for International Trade and Security

Sonja West
School of Law

Bartosz Wojdynski
Grady College of Journalism and Mass Communication

Mark Lupo
Carl Vinson Institute of Government

Thomas Kadri
School of Law