Google has released a database of 3,000 deepfakes – videos that use artificial intelligence to alter faces or to make people say things they never did.
The videos feature actors whose faces were altered using a variety of publicly available tools.
The search giant hopes the dataset will help researchers build the tools needed to take down “harmful” fake videos.
Contributing Data to Deepfake Detection Research
Deep learning has given rise to technologies that would have been thought impossible only a handful of years ago. Modern generative models are one example of these, capable of synthesizing hyperrealistic images, speech, music, and even video. These models have found use in a wide variety of applications, including making the world more accessible through text-to-speech, and helping generate training data for medical imaging.
Like any transformative technology, however, these advances have created new challenges. So-called “deepfakes”—videos and audio clips manipulated by deep generative models—are one of them. Since their first appearance in late 2017, many open-source deepfake generation methods have emerged, leading to a growing number of synthesized media clips. While many are likely intended to be humorous, others could be harmful to individuals and society.