
Facebook is developing a new method to reverse-engineer deepfakes and track the source

Deepfakes are not a big problem on Facebook right now, but the company continues to fund research into the technology to guard against future threats. Its latest work is a collaboration with academics from Michigan State University (MSU), with the combined team creating a method for reverse-engineering deepfakes: analyzing AI-generated imagery to reveal identifying characteristics of the machine learning model that created it.

The work is useful because it could help Facebook track down bad actors spreading deepfakes across its various social networks. This content might include misinformation, but also non-consensual pornography, a depressingly common application of deepfake technology. Right now, the work is still at the research stage and is not ready to be deployed.

Previous studies in this area have been able to determine which known AI model generated a deepfake, but this work, led by MSU’s Vishal Asnani, goes a step further by identifying the architectural traits of unknown models. These traits, known as hyperparameters, have to be tuned in each machine learning model, like parts in an engine. Together, they leave a unique fingerprint on the finished image that can then be used to identify its source.
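To make the fingerprinting idea concrete, here is a minimal, hypothetical sketch in Python. This is not the MSU/Facebook method, which relies on a trained estimation network; it only illustrates the underlying intuition that a generator leaves consistent low-level traces in its output, approximated here by a normalized high-pass residual compared across images. The function names, the box-blur residual, and the 0.1 threshold are all illustrative assumptions.

```python
import numpy as np

def extract_fingerprint(image: np.ndarray) -> np.ndarray:
    """Crude 'fingerprint': the image minus a local average (high-pass residual).
    Generative models tend to leave subtle, consistent patterns in this band."""
    h, w = image.shape
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    # 3x3 box blur built from shifted sums of the padded image.
    blurred = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    residual = image - blurred
    # Unit-normalize so fingerprints from different images are comparable.
    return residual / (np.linalg.norm(residual) + 1e-8)

def same_source(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.1) -> bool:
    """Cosine similarity between residuals; a high score hints at a shared generator."""
    fa, fb = extract_fingerprint(img_a), extract_fingerprint(img_b)
    return float(np.sum(fa * fb)) > threshold
```

In this toy version, two grayscale images passed to `same_source` would be flagged as likely coming from the same generator when their residual patterns correlate strongly; the real system learns far richer features, but the matching logic is conceptually similar.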

Identifying the traits of unknown models is important, Facebook research lead Tal Hassner tells The Verge, because deepfake software is extremely easy to customize. That could allow bad actors to cover their tracks if investigators were trying to trace their activity.

Examples of deepfakes include these fake faces, generated by a well-known AI model called StyleGAN. (Image: The Verge)

“Let’s assume that a bad actor generates lots of different deepfakes and uploads them on different platforms to different users,” says Hassner. “If this is a new AI model that nobody has seen before, then there is very little we could have said about it in the past. Now, we are able to say: ‘Look, the picture that was uploaded here, the picture that was uploaded there, all of them came from the same model.’ And if we were able to seize the laptop [used to generate the content], we will be able to say, ‘This is the culprit.’”

Hassner compares the work to forensic techniques used to identify which model of camera took a picture by looking for patterns in the resulting image. “Not everybody can make their own camera, though,” he says. “Whereas anyone with reasonable experience and a standard computer can cook up their own model that generates deepfakes.”

The resulting algorithm can not only fingerprint the traits of a generative model, but also identify which known model created an image, as well as whether an image is a deepfake at all. “On standard benchmarks, we get state-of-the-art results,” says Hassner.
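Extending the earlier sketch, attribution to a known model could look like the following hypothetical snippet: compare a suspect image’s fingerprint against a stored, averaged fingerprint per known generator, and fall back to “unknown model” when nothing matches well. The `known_fingerprints` dictionary and the `min_similarity` threshold are assumptions for illustration, not part of the published method.

```python
import numpy as np

def attribute(image: np.ndarray, known_fingerprints: dict, min_similarity: float = 0.1) -> str:
    """Match an image's residual fingerprint against a catalog of known generators.

    known_fingerprints maps a model name (e.g. "StyleGAN") to an averaged,
    unit-norm fingerprint computed from images that model is known to have produced.
    """
    f = extract_fingerprint(image)  # from the earlier sketch
    scores = {name: float(np.sum(f * fp)) for name, fp in known_fingerprints.items()}
    best = max(scores, key=scores.get)
    # A weak best match suggests a generator that has not been catalogued yet.
    return best if scores[best] >= min_similarity else "unknown model"
```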

But it is important to note that even these state-of-the-art results are far from reliable. When Facebook hosted a deepfake detection competition last year, the winning algorithm was only able to detect AI-manipulated videos 65.18 percent of the time. The researchers involved said that spotting deepfakes with algorithms remains largely an “unsolved problem.”

Part of the reason for this is that the field of generative AI is extremely active. New techniques are published every day, and it is almost impossible for any filter to keep up.

Those in the field are well aware of this dynamic, and when asked whether publishing this new fingerprinting algorithm will lead to research into techniques that can evade these methods, Hassner agrees. “I expect that,” he says. “It’s a cat-and-mouse game, and it continues to be a cat-and-mouse game.”

