Researchers tackling the challenge of visual misinformation — think the TikTok video of Tom Cruise supposedly golfing in Italy during the pandemic — must continuously advance their tools to identify AI-generated images.
NVIDIA is furthering this effort by collaborating with researchers to support the development and testing of detector algorithms on our state-of-the-art image-generation models.
By crafting a dataset of highly realistic images with StyleGAN3 — our latest media-generation algorithm — NVIDIA provided crucial data to researchers evaluating how well their detector algorithms perform on AI-generated images created by previously unseen techniques. These detectors help experts identify and analyze synthetic images to combat visual misinformation.
“This has been a unique situation in that people doing image generation detection have worked closely with the people at NVIDIA doing image generation,” said Edward Delp, a professor at Purdue University and principal investigator of one of the research teams. “This collaboration with NVIDIA has allowed us to build even better and more robust detectors. The ‘early access’ approach used by NVIDIA is an excellent way to further forensics research.”
Advancing Media Forensics With StyleGAN3 Images
When researchers know the underlying code or neural network of an image-generation technique, developing a detector that can identify images created by that AI model is a comparatively straightforward task.
It’s more challenging — and useful — to build a detector that can spot images generated by brand-new AI models.
StyleGAN3, a model developed by NVIDIA Research that will be presented at the NeurIPS 2021 AI conference in December, advances the state of the art in generative adversarial networks used to synthesize images. The breakthrough brings principles from signal processing and image processing to GANs to avoid aliasing: a kind of image corruption often visible when images are rotated, scaled or translated.
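The full anti-aliasing design is described in the NeurIPS paper, but the underlying sampling-theory idea can be sketched in one dimension: a signal containing frequencies above half the sampling rate (the Nyquist limit) becomes indistinguishable from a lower-frequency signal once sampled. This is an illustrative sketch using NumPy, not NVIDIA's implementation:

```python
import numpy as np

# Aliasing in one dimension: sampling above the Nyquist limit makes a
# high frequency masquerade as a low one.
fs = 8          # sampling rate: 8 samples per second
f_true = 7      # a 7 Hz sine, above the Nyquist limit of fs / 2 = 4 Hz
t = np.arange(0, 1, 1 / fs)

aliased = np.sin(2 * np.pi * f_true * t)
# Sampled at 8 Hz, the 7 Hz tone is indistinguishable (up to sign) from
# a 1 Hz tone, since the alias frequency is |f_true - fs| = 1 Hz.
low = np.sin(2 * np.pi * (fs - f_true) * t)

assert np.allclose(aliased, -low)
```

In a 2D image the same effect corrupts fine detail when the image is rotated, scaled or translated, which is why StyleGAN3 treats feature maps as properly band-limited signals.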
NVIDIA researchers developed StyleGAN3 using a publicly released dataset of 70,000 images. Another 27,000 unreleased images from that collection, alongside AI-generated images from StyleGAN3, were shared with forensic research collaborators as a test dataset.
The collaboration enabled the research community to assess how a diverse set of detector approaches performs in identifying images synthesized by StyleGAN3 — before the generator’s code was publicly released.
These detectors work in many different ways: Some look for telltale correlations among groups of pixels produced by the neural network, while others look for inconsistencies or asymmetries that give away synthetic images. Still others attempt to reverse engineer the synthesis approach to estimate whether a particular neural network could have created the image.
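The first idea — telltale pixel correlations — can be made concrete with a toy example. This is not any team's actual detector: it assumes a deliberately crude generator (naive nearest-neighbor upsampling, which duplicates pixels in 2x2 blocks) and scores how strongly neighboring pixels are correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_correlation_score(img):
    # Mean absolute difference between the two pixels of each horizontal
    # pair; a score near zero flags duplicated (perfectly correlated) pixels.
    return np.mean(np.abs(img[:, 0::2] - img[:, 1::2]))

real = rng.normal(size=(64, 64))                    # stand-in for a camera image
low_res = rng.normal(size=(32, 32))
fake = low_res.repeat(2, axis=0).repeat(2, axis=1)  # naive 2x upsampling

print(pair_correlation_score(real) > pair_correlation_score(fake))  # True
```

Real GAN fingerprints are far subtler than exact duplication, so production detectors learn these correlation patterns with neural networks rather than a hand-written statistic, but the principle is the same: the synthesis pipeline leaves a measurable statistical trace.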
One of these detectors, GAN-Scanner, reaches up to 95 percent accuracy in identifying synthetic images generated with StyleGAN3, despite never having seen an image created by that model during training. Another detector, created by Politecnico di Milano, achieves an area under the curve (AUC) of 0.999, where a perfect classifier would score 1.0.
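The AUC figure summarizes ranking quality rather than a single accuracy threshold: it is the probability that a randomly chosen synthetic image receives a higher detector score than a randomly chosen real one. A minimal sketch of the computation, using made-up scores rather than either team's data:

```python
import numpy as np

def auc(scores_neg, scores_pos):
    # AUC = probability that a random positive (synthetic) outscores a
    # random negative (real), counting ties as one half.
    neg = np.asarray(scores_neg)[:, None]
    pos = np.asarray(scores_pos)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

real_scores = [0.05, 0.10, 0.20, 0.35]  # hypothetical scores on real images
fake_scores = [0.30, 0.80, 0.90, 0.95]  # hypothetical scores on synthetic images

print(auc(real_scores, fake_scores))  # 0.9375: one fake outranked by one real
```

An AUC of 0.999 therefore means the detector ranks a synthetic image above a real one in 99.9 percent of such pairings.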
Our work with researchers on StyleGAN3 showcases and supports the important, cutting-edge research done by media forensics groups. We hope it inspires others in the image-synthesis research community to participate in forensics research as well.
The GAN detector collaboration is part of Semantic Forensics (SemaFor), a program focused on forensic analysis of media, organized by DARPA, the U.S. federal agency for technology research and development.
To learn more about the latest in AI research, watch NVIDIA CEO Jensen Huang’s keynote presentation at GTC below.
The post How Researchers Use NVIDIA AI to Help Mitigate Misinformation appeared first on The Official NVIDIA Blog.