A $10m fund has been set up to find better ways to detect so-called deepfake videos. Facebook, Microsoft and several UK and US universities are putting up cash for the wide-ranging research project.

Google has released a database of 3,000 deepfakes – videos that use artificial intelligence to alter faces or to make people say things they never did. The videos feature actors and were made using a variety of publicly available tools to alter their faces. The search giant hopes the database will help researchers build the tools needed to identify and take down “harmful” fake videos.

What Are Deepfakes?
Deepfake clips use AI software to make people – often politicians or celebrities – appear to say or do things they never said or did.

Many fear such videos will be used to sow distrust or manipulate opinion. Last week, California Governor Gavin Newsom signed into law AB 730, which makes it illegal to distribute audio or video that gives a false, damaging impression of a politician’s words or actions. The law applies to deceptive material about any candidate distributed within 60 days of an election, but it includes some exceptions. News media are exempt, as are videos made for satire or parody. Potentially deceptive video or audio will also be allowed if it includes a disclaimer noting that it is fake. The law will sunset in 2023.

How To Spend $10 Million
The cash from Facebook will help create detection systems, as well as a “data set” of fakes against which those tools can be tested.

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” wrote Mike Schroepfer, chief technology officer at Facebook. One of the key elements will be to create the data set used to calibrate and rate the different fake-spotting systems.

Mr Schroepfer said data sets for other AI-based systems, including those that identify images or spoken language, had fuelled a “renaissance” in those technologies and prompted wide innovation.

The data set will be generated using paid actors to perform scenes, which will then be used to create deepfake videos that the different detection systems will attempt to spot.
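To make the idea of “calibrating and rating” detectors against such a data set concrete, here is a minimal, hypothetical sketch in Python. The clip file names, the `Clip` structure and the detector’s call signature are all assumptions for illustration; this is not Facebook’s actual challenge interface.

```python
# Hypothetical sketch: scoring a deepfake detector against a labelled data set
# of real and fake clips. All names and interfaces here are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Clip:
    path: str       # location of the video file
    is_fake: bool   # ground-truth label supplied with the data set


def score_detector(detect: Callable[[str], bool], clips: List[Clip]) -> dict:
    """Rate one detector: share of fakes caught and of real clips wrongly flagged."""
    true_pos = false_pos = fakes = reals = 0
    for clip in clips:
        flagged = detect(clip.path)  # the detector's verdict for this clip
        if clip.is_fake:
            fakes += 1
            true_pos += flagged
        else:
            reals += 1
            false_pos += flagged
    return {
        "detection_rate": true_pos / max(fakes, 1),     # fraction of fakes spotted
        "false_alarm_rate": false_pos / max(reals, 1),  # fraction of real clips mis-flagged
    }


if __name__ == "__main__":
    # Placeholder detector that flags nothing, evaluated on two labelled clips.
    test_set = [Clip("actor_scene_01.mp4", is_fake=False),
                Clip("actor_scene_01_swapped.mp4", is_fake=True)]
    print(score_detector(lambda path: False, test_set))
```

A shared, labelled benchmark like this lets every competing system be rated on the same footing, which is why assembling the data set is described as one of the key elements of the challenge.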

No Facebook data will be used to generate this freely available bank of images, audio clips and videos, said Mr Schroepfer. He added that the database of deepfakes will be made available in December.

The technology to create convincing fakes has spread widely since it first appeared in early 2018. Creating the videos has since become much easier and quicker, and many worry that such clips will feature in propaganda campaigns.

One commentator said the tools to spot the fakes may not solve all the problems the technology presents.

“It’s a cat-and-mouse game,” Siddharth Garg, an assistant professor of computer engineering at New York University, told Reuters. “If I design a detector for deepfakes, I’m giving the attacker a new discriminator to test against.”

Remember – you can’t always believe your eyes or ears…