Prepare, Don’t Panic:
Synthetic Media and Deepfakes
This project focuses on the emerging and potential malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media,” and on how we push back to defend evidence, truth, and freedom of expression from a global, human rights-led perspective. This work is embedded in a broader initiative on proactive approaches to protecting and upholding marginalized voices and human rights as emerging technologies such as AI intersect with the pressures of disinformation, media manipulation, and rising authoritarianism. Read more about our successes to date and our coming work on emerging threats here.
For an overview of what deepfakes are, the threats and potential solutions, you can read our Backgrounder on Deepfakes.
Our work launched in 2018 with the first multi-disciplinary convening on deepfakes preparedness: check out the report from the “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening”. For further reports from our series of global meetings, please see below.
Twelve things we can do now to prepare for deepfakes
Name and address existing harms
from gender-based violence and cyberbullying.
Inclusion and human rights
Demand that responses reflect, and are shaped by, a global and inclusive approach and a shared human rights vision.
Global threat models
Identify threat models and desired solutions from a global perspective.
Building on existing expertise
Promote cross-disciplinary and multiple solution approaches, building on existing expertise in misinformation, fact-checking, and OSINT.
Empower key frontline actors like media and civil liberties groups to better understand the threat and connect to other stakeholders/experts.
Identify appropriate coordination mechanisms between civil society, media, and technology platforms around the use of synthetic media.
Support research into how to communicate ‘invisible-to-the-eye’ video manipulation and simulation to the public.
Platform and tool-maker responsibility
Determine what we do and don’t want from platforms and from companies commercializing tools or acting as distribution channels, including authentication tools, manipulation-detection tools, and content moderation based on what platforms find.
Equity in detection access
Prioritize global equity in access to detection systems, and advocate that investment in detection matches investment in synthetic media creation.
Shape debate on infrastructure choices
and understand the pros and cons of who globally will be included, excluded, censored, silenced, and empowered by the choices we make on authenticity or content moderation, and the infrastructure we build for this.
Promote ethical standards
on usage in political and civil society campaigning.
Featured reports and blogs
Reports from the only Global South-focused and global meetings on threats and solution prioritization
Intervening with research and advocacy to ensure emerging authenticity infrastructure reflects key global dilemmas
What’s needed for equitable detection access?
Op-eds on key issues
WIRED: HOW DEEPFAKE FEARS UNDERMINE TRUE VIDEO
DEEPFAKES PREPARE NOW: REPORT FROM 1st SOUTHEAST ASIA EXPERT MEETING
MANIPULATED MEDIA DETECTION: PRIORITIES
#TRACINGTRUST VIDEO SERIES
BACKGROUNDER: DEEPFAKES IN 2021
SOUTH AFRICA DEEPFAKES WORKSHOP: FULL REPORT
HOW EDUCATIONAL INITIATIVES CAN FIGHT DISINFO
TWITTER RELEASED A DRAFT POLICY ON SYNTHETIC MEDIA. HERE'S WHAT STOOD OUT TO ACTIVISTS.
TO FIGHT DEEPFAKES BUILD MEDIA LITERACY, SAY AFRICAN ACTIVISTS
PREPARING FOR DEEPFAKES AGAINST JOURNALISM
DEEPFAKES: PREPARE NOW (PERSPECTIVES FROM BRAZIL)
DEEPFAKES AND SYNTHETIC MEDIA: UPDATED SURVEY OF SOLUTIONS AGAINST MALICIOUS USAGES
PREPARE, DON'T PANIC: DEALING WITH DEEPFAKES AND OTHER SYNTHETIC MEDIA
TRUST AND TRUTH: PREPARING FOR DEEPFAKES
HEARD ABOUT DEEPFAKES? DON'T PANIC. PREPARE
DEEPFAKERY: SATIRE, HUMAN RIGHTS, ART AND JOURNALISM IN A TIME OF INFODEMICS
VIDEO TALK SERIES
ASSESSING THE ADOBE CONTENT AUTHENTICITY INITIATIVE
DATA JOURNALISM HANDBOOK: THINKING ABOUT DEEPFAKES
ENSURING AUTHENTICITY INFRASTRUCTURE HELPS, NOT HURTS
WHY WE MUST BUILD AUTHENTICITY INFRASTRUCTURE THAT WORKS FOR ALL
TALK: A.I. MIS/DISINFORMATION – DON'T PANIC, PREPARE
CORONAVIRUS AND HUMAN RIGHTS: PREPARING WITNESS'S RESPONSE
IN AFRICA, FEAR OF STATE VIOLENCE INFORMS DEEPFAKE THREAT
RESEARCHER EXPLAINS DEEPFAKE VIDEOS
HOW DO WE WORK TOGETHER TO DETECT AI-MANIPULATED MEDIA?
DEEPFAKES: IT'S NOT WHAT IT LOOKS LIKE!
A HORA E A VEZ DAS DEEPFAKES NO BRASIL E NO MUNDO (NOW IS THE TIME FOR DEEPFAKES IN BRAZIL AND AROUND THE WORLD)
AI TALK: DETECTING DEEPFAKES
SXSW 2019 - DEEPFAKES: WHAT WE SHOULD FEAR, WHAT CAN WE DO
DEEPFAKES AND SYNTHETIC MEDIA: WHAT SHOULD WE FEAR? WHAT CAN WE DO?
“MAL-USES OF AI-GENERATED SYNTHETIC MEDIA + DEEPFAKES: PRAGMATIC SOLUTIONS DISCOVERY CONVENING”
WITNESS LEADS CONVENING ON PROACTIVE SOLUTIONS TO MAL-USES OF DEEPFAKES AND OTHER AI-GENERATED SYNTHETIC MEDIA
HOW CAN US ACTIVISTS CONFRONT DEEPFAKES AND VISUAL DISINFORMATION?
CONTENT AUTHENTICITY INITIATIVE WHITE PAPER: WITNESS CO-AUTHORS
WHITE PAPER LAUNCH
PREPARING FOR DEEPFAKES: RUSSIAN TRANSLATED WEBINAR
CONVERSATIONS WITH DATA: DETECTING DEEPFAKES
WHAT'S NEEDED IN DEEPFAKES DETECTION
LIVESTREAM Q&A: CONTENT AUTHENTICITY EXPLAINED
THE PROS AND CONS OF FACEBOOK'S NEW DEEPFAKES POLICY
TICKS OR IT DIDN'T HAPPEN
MAJOR BRAZILIAN PRESS COVERS WITNESS RECOMMENDATIONS ON HOW TO PREPARE BETTER BASED ON RECENT EXPERT MEETINGS
GOVERNING DEEPFAKES DETECTION TO ENSURE IT SUPPORTS GLOBAL NEEDS
PROTECTING PUBLIC DISCOURSE FROM AI-GENERATED MIS/DISINFORMATION
“DEEPFAKES” ARE HERE, NOW WHAT?
DEEPFAKES WILL CHALLENGE PUBLIC TRUST IN WHAT’S REAL. HERE’S HOW TO DEFUSE THEM.
DEEPFAKES AND SYNTHETIC MEDIA: SURVEY OF SOLUTIONS AGAINST MALICIOUS USAGES