GOVERNING DEEPFAKES DETECTION TO ENSURE IT SUPPORTS GLOBAL NEEDS
Prepare, Don’t Panic: Synthetic Media and Deepfakes
This project focuses on the emerging and potentially malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media,” and on how we can push back to defend evidence, truth, and freedom of expression. The work is embedded in a broader initiative on proactive approaches to protecting and upholding marginalized voices and human rights as emerging technologies such as AI intersect with the pressures of disinformation, media manipulation, and rising authoritarianism. Read more about our emerging threats work here.
Twelve things we can do now to prepare for deepfakes
Recognize that this is an evolution, not a rupture, of existing problems – and that our own rhetoric can create many of the harms we fear.
Recognize existing harms
that already manifest in gender-based violence and cyberbullying.
Inclusion and human rights
Demand that responses reflect, and are shaped by, a global and inclusive approach and a shared human rights vision.
Global threat models
Identify threat models and desired solutions from a global perspective.
Building on existing expertise
Promote cross-disciplinary and multiple solution approaches, building on existing expertise in misinformation, fact-checking, and OSINT.
Empower key frontline actors, such as media and civil liberties groups, to better understand the threat and to connect with other stakeholders and experts.
Identify appropriate coordination mechanisms between civil society, media, and technology platforms around the use of synthetic media.
Support research into how to communicate ‘invisible-to-the-eye’ video manipulation and simulation to the public.
Platform and tool-maker responsibility
Determine what we do and don't want from platforms and from companies that commercialize tools or act as distribution channels, including authentication tools, manipulation detection tools, and content moderation based on what platforms find.
Shared detection capacity
Prioritize shared detection systems and advocate that investment in detection match investment in synthetic media creation techniques.
Shape debate on infrastructure choices
and weigh who, globally, will be included, excluded, censored, silenced, and empowered by the choices we make on authenticity and content moderation.
Promote ethical standards
on usage in political and civil society campaigning.
Resources & analysis
TICKS OR IT DIDN'T HAPPEN
WITNESS supports critical research into the pros and cons of approaches to deepfakes and mis/disinformation mitigation that focus on tracking the authenticity and provenance of audiovisual media. (Report Upcoming)
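Authenticity-and-provenance approaches of the kind this report examines typically attach a verifiable fingerprint to media at or near capture time, so later edits can be detected. As a loose illustration only – the device key and file contents below are hypothetical, and real provenance systems are far more involved – a minimal Python sketch of hashing and signing a media file:

```python
import hashlib
import hmac

# Hypothetical capture-time secret held by a camera app (illustrative only;
# real systems use public-key signatures and signed metadata, not a shared key).
DEVICE_KEY = b"example-device-key"

def fingerprint(media_bytes: bytes) -> str:
    """Content hash: changes if even one byte of the media is altered."""
    return hashlib.sha256(media_bytes).hexdigest()

def sign(media_bytes: bytes) -> str:
    """HMAC ties the content hash to the capturing device's key (a provenance claim)."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, signature: str) -> bool:
    """Anyone holding the key can later check the media is unmodified."""
    return hmac.compare_digest(sign(media_bytes), signature)

video = b"\x00\x01raw video bytes..."  # stand-in for a captured file
sig = sign(video)
print(verify(video, sig))          # untouched media verifies
print(verify(video + b"x", sig))   # any edit breaks verification
```

The trade-offs the report weighs sit precisely in the gap this sketch glosses over: who holds keys, who is excluded by devices that cannot sign, and what unsigned media comes to mean.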
PREPARE, DON'T PANIC: DEALING WITH DEEPFAKES AND OTHER SYNTHETIC MEDIA
HEARD ABOUT DEEPFAKES? DON'T PANIC. PREPARE
“MAL-USES OF AI-GENERATED SYNTHETIC MEDIA + DEEPFAKES: PRAGMATIC SOLUTIONS DISCOVERY CONVENING”
DEEPFAKES AND SYNTHETIC MEDIA: UPDATED SURVEY OF SOLUTIONS AGAINST MALICIOUS USAGES
HOW DO WE WORK TOGETHER TO DETECT AI-MANIPULATED MEDIA?
SXSW 2019 - DEEPFAKES: WHAT WE SHOULD FEAR, WHAT CAN WE DO
DEEPFAKES AND SYNTHETIC MEDIA: WHAT SHOULD WE FEAR? WHAT CAN WE DO?
BRAZIL: ENSURING DEEPFAKES SOLUTIONS ARE GUIDED BY GLOBAL PRIORITIES
WITNESS facilitates community-level discussions and a national-level convening to support the inclusion of perspectives from Brazil in the global discussion on deepfakes threats and solutions. (Report Upcoming)
PROTECTING PUBLIC DISCOURSE FROM AI-GENERATED MIS/DISINFORMATION
DEEPFAKES WILL CHALLENGE PUBLIC TRUST IN WHAT’S REAL. HERE’S HOW TO DEFUSE THEM.
DEEPFAKES AND SYNTHETIC MEDIA: SURVEY OF SOLUTIONS AGAINST MALICIOUS USAGES
In the news…
- WIRED, “Forget Politics. For Now, Deepfakes Are for Bullies”
- RTÉ Radio 1, “Drivetime”
- MIT Technology Review, “The world’s top deepfake artist is wrestling with the monster he created”
- NewsBusters.org, “Axios: Social Media Firms Might Police Speech to Protect ‘Truth’”
- Axios Future, “1 big thing: Social media and the truth”
- Fortune, “Fighting Deepfakes Gets Real”
- Axios, “A digital breadcrumb trail for deepfakes”
- CNET, “VidCon kicks off with deepfake dilemma as its opening act”
- BBC, “VidCon: Liza Koshy, Joey Graceffa and LD Shadowlady join huge YouTube convention”
- MIT Technology Review
- Journalist’s Resource, “Deepfake technology is changing fast — use these 5 resources to keep up”
- CNN, “Baby Elon Musk, rapping Kim Kardashian: Welcome to the world of silly deepfakes”
- The Washington Post, “Deepfakes are dangerous — and they target a huge weakness”
- OpenAI, “The National Security Challenges of Artificial Intelligence, Manipulated Media, and ‘Deep Fakes’”
- CATO Institute, “Artificial Intelligence and Counterterrorism: Possibilities and Limitations”
- Universo Online, “A proliferação de deepfakes é apenas uma questão de tempo” (“The proliferation of deepfakes is just a matter of time”)
- The Washington Post, “Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned’”
- CNN, “The fight to stay ahead of deepfake videos before the 2020 US election”
- MIT Technology Review, “Deepfakes have got Congress panicking. This is what it needs to do.”
- Fortune, “Deepfake Video of Mark Zuckerberg Goes Viral on Eve of House A.I. Hearing”
- Axios, “1 big thing: Big Tech’s untenable deepfake defense”
- VICE, “There’s No ‘Correct’ Way to Moderate the Nancy Pelosi Video”
- Mozilla Internet Health Report 2019, “‘Deepfakes’ are here, now what?”
- MIT Technology Review, “Deepfakes are solvable—but don’t forget that ‘shallowfakes’ are already pervasive“
- Al Jazeera, The Stream, “Would you be fooled by a deepfake?“
- ABA Journal, “As deepfakes make it harder to discern truth, lawyers can be gatekeepers”
- World Economic Forum, “Heard about deepfakes? Don’t panic. Prepare”
- Gizmodo, “How Archivists Could Stop Deepfakes From Rewriting History”
- National Endowment for Democracy, “The Big Question: How will ‘Deepfakes’ and Emerging Technology Transform Disinformation?”
- Harvard Business Review, “Business in The Age of Computational Propaganda and Deep Fakes”