Prepare, Don’t Panic: Synthetic Media and Deepfakes
This project focuses on the emerging and potentially malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media,” and on how we can push back to defend evidence, truth, and freedom of expression. This work is embedded in a broader initiative on proactive approaches to protecting and upholding marginalized voices and human rights as emerging technologies such as AI intersect with the pressures of disinformation, media manipulation, and rising authoritarianism. Read more about our emerging threats work here.
Check out the report from the “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening.”
Eleven things we can do now to prepare for deepfakes
1. De-escalate rhetoric: recognize that deepfakes are an evolution of existing problems, not a rupture, and that our own words can create many of the harms we fear.
2. Global, inclusive responses: demand that responses reflect, and are shaped by, global and inclusive voices and approaches.
3. Global threat models: identify threat models and desired solutions from a global perspective.
4. Solutions building on existing expertise: promote cross-disciplinary, multi-pronged approaches that build on existing expertise in misinformation, fact-checking, and OSINT.
5. Understanding and connective tissue: empower key frontline actors, such as media and civil liberties groups, to better understand the threat and to connect with other stakeholders and experts.
6. Platform and tool-maker responsibility: determine what we do and don’t want from platforms and from companies that commercialize tools or act as distribution channels, including what we do and don’t want in terms of authentication tools, manipulation-detection tools, and content moderation based on what platforms find.
7. Public debate on technical infrastructure choices: understand the pros and cons of who, globally, will be included, excluded, censored, silenced, or empowered by the choices we make on authenticity measures and content moderation.
8. Shared detection capacity: prioritize shared detection systems, and advocate that investment in detection match investment in synthetic media creation.
9. Coordination mechanisms: identify appropriate mechanisms for coordination between civil society, media, and platforms around uses of synthetic media.
10. Research on communicating manipulation: support research into how to communicate ‘invisible-to-the-eye’ video manipulation and simulation to the public.
11. Ethical standards: promote ethical standards on the use of synthetic media in political and civil society campaigning.