Our updated site can be found here: gen-ai.witness.org

Prepare, Don’t Panic:
Synthetic Media and Deepfakes

This project focuses on the emerging and potential malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media,” and on how we push back to defend evidence, truth, and freedom of expression from a global, human rights-led perspective. This work is embedded in a broader initiative focused on proactive approaches to protecting and upholding marginalized voices and human rights as emerging technologies such as AI intersect with the pressures of disinformation, media manipulation, and rising authoritarianism.
Our work launched in 2018 with the first multi-disciplinary convening on deepfakes preparedness: check out the report from the “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening”. For further reports from our series of global meetings, please see below.
Twelve things we can do now to prepare for deepfakes
1. De-escalate rhetoric, and recognize that this is an evolution of existing problems, not a rupture – and that our words create many of the harms we fear.
2. Name and address existing harms, from gender-based violence to cyberbullying.
3. Inclusion and human rights: demand that responses reflect, and are shaped by, a global and inclusive approach, as well as a shared human rights vision.
4. Global threat models: identify threat models and desired solutions from a global perspective.
5. Build on existing expertise: promote cross-disciplinary and multiple-solution approaches, building on existing expertise in misinformation, fact-checking, and OSINT.
6. Connective tissue: empower key frontline actors, such as media and civil liberties groups, to better understand the threat and connect with other stakeholders and experts.
7. Coordination: identify appropriate coordination mechanisms between civil society, media, and technology platforms around the use of synthetic media.
8. Research: support research into how to communicate ‘invisible-to-the-eye’ video manipulation and simulation to the public.
9. Platform and tool-maker responsibility: determine what we want, and don’t want, from platforms and from companies commercializing tools or acting as distribution channels, including authentication tools, manipulation-detection tools, and content moderation based on what platforms find.
10. Equity in detection access: prioritize global equity in access to detection systems, and advocate that investment in detection match investment in synthetic media creation.
11. Shape the debate on infrastructure choices, and understand the pros and cons of who globally will be included, excluded, censored, silenced, and empowered by the choices we make on authenticity and content moderation, and by the infrastructure we build for this.
12. Promote ethical standards on usage in political and civil society campaigning.
Resources & analysis
WIRED: HOW DEEPFAKE FEARS UNDERMINE TRUE VIDEO

OP-ED
PREPARING FOR DEEPFAKES AGAINST JOURNALISM

CASE STUDY
The European Broadcasting Union published its annual report on news trends, which includes our case study on deepfakes and journalism. The case study also includes a short video (requires login for viewing).
WHY WE MUST BUILD AUTHENTICITY INFRASTRUCTURE THAT WORKS FOR ALL

BLOG POST
A HORA E A VEZ DAS DEEPFAKES NO BRASIL E NO MUNDO (in Portuguese: “The time has come for deepfakes in Brazil and worldwide”)

OP-ED
MAJOR BRAZILIAN PRESS COVERS WITNESS RECOMMENDATIONS ON HOW TO PREPARE BETTER BASED ON RECENT EXPERT MEETINGS

PRESS
Articles in Estadão and Folha de S.Paulo. View an image of the Folha de S.Paulo piece. For more information on our press coverage in Brazil, click here.

TRIGGER WARNING

This report contains depictions of war and abuse, and includes examples of, or links to, content that features hate speech.
