'Godfather of AI' and other experts call for 'deepfake supply chain disruption', citing potential risks

Experts in the field of artificial intelligence (AI) and executives of companies in the industry, including one of the pioneers of the technology, Yoshua Bengio, have signed an open letter calling for stricter regulation of “deepfake” content, citing potential risks to human society, Reuters reports, cited by Agerpres.

Deepfake content is increasingly difficult to identify. PHOTO Shutterstock

“Currently, ‘deepfake’ content often involves sexual images, fraud and political disinformation. As AI advances rapidly and makes deepfakes much easier to create, safeguards are needed,” the group stated in the open letter, written by Andrew Critch, an artificial intelligence researcher at the University of California, Berkeley.

What is “deepfake” content

“Deepfake” content consists of realistic but fabricated images, audio and video recordings created by artificial intelligence algorithms, and recent advances in the technology have made them increasingly difficult to distinguish from human-created content.

Recommendations on how to regulate “deepfake” content

The open letter, entitled “Disrupting the Deepfake Supply Chain”, makes a series of recommendations on how to regulate “deepfake” content, including the full criminalization of “deepfake” child pornography and criminal penalties for anyone who knowingly creates or facilitates the spread of harmful “deepfakes”, as well as requiring AI companies to prevent their products from creating harmful “deepfake” content.

More than 400 people from various industries, including academia, entertainment and politics, had signed the letter by Wednesday morning.

Harvard University psychology professor Steven Pinker, Joy Buolamwini, founder of the Algorithmic Justice League, two former Estonian presidents, Google DeepMind researchers and an OpenAI researcher are among the signatories.

Priorities of regulatory authorities

Ensuring AI systems do not harm society has been a priority for regulators since Microsoft-backed OpenAI launched ChatGPT in late 2022, which wowed users with its ability to hold human-like conversations and the ease with which it performs various other tasks.
Several warnings about the risks posed by AI have come from prominent figures, most notably a letter signed by Elon Musk last year calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4 model.