‘Deepfakes’ and synthetic media are new forms of audiovisual manipulation that allow people to create realistic simulations of someone’s face, voice or actions. They make it possible to show someone appearing to say or do something they never said or did. They are getting easier to make, require fewer source images to build, and are increasingly being commercialized.
Currently deepfakes overwhelmingly impact women, who are targeted with non-consensual sexual images and videos, but there are fears they will have a broader impact across society. Solutions are being proposed for handling malicious uses of these new tools, and it is critical that this discussion be informed by a global perspective, rather than a strongly US- or European-centric point of view.
The first national-level meeting on deepfakes preparedness
On July 25th 2019, WITNESS held a convening on “Deepfakes and synthetic media: Prepare yourself now” in São Paulo, Brazil. To our knowledge it was the first national-level multi-disciplinary discussion on how to pragmatically understand and prepare for this potential threat.
The meeting aimed to explore and prioritize pragmatic solutions for the prevention of and defense against a dark future of video and audio made with artificial intelligence (AI) techniques, with a particular focus on the threats identified in Brazil and the solutions desired by a range of stakeholders. The workshop participants included journalists, fact-checkers, technologists, civic activists and others. It was part of a WITNESS initiative focused on how to better protect and uphold marginal voices, civic journalism, and human rights as emerging technologies such as AI intersect with disinformation, media manipulation, and rising authoritarianism. The workshop was also supported by the team at WITNESS Brasil. More information on WITNESS’ deepfakes work, including previous convening and workshop reports, is available in English at: wit.to/Synthetic-Media-Deepfakes
The workshop was structured to cover the technologies of deepfake and synthetic media creation and detection, and their current uses: in attacks on women and gender-based violence, and in cultural critique and satire. Participants then placed this in the context of existing challenges of misinformation and disinformation in Brazil and focused on prioritizing perceived threats and solutions.
What are the threats?
After learning about the technological possibilities of deepfakes and discussing the current situation in Brazil, participants prioritized the following key threats: areas where new forms of manipulation might expand, alter, or reinforce existing threats, or introduce new ones.
- Journalists and civic activists will have their reputation and credibility attacked. This echoes global concerns.
- Public figures will face non-consensual sexual imagery and gender-based violence
- Social movements will face attacks on the credibility and safety of their leaders as well as on their public narratives
- There will be attacks on judicial processes and on the probative value of video for both news and evidence, as video is discredited and claimed to be inauthentic even when it is genuine, or as processes are overwhelmed by the burden of distinguishing true from false
- Deepfakes will be yet another weapon contributing to conspiracy campaigns
- As deepfakes become more common and easier to make at volume, they will contribute to a firehose of falsehood that floods media verification and fact-checking agencies with content they have to verify
- Such volumes of falsehood will contribute to the cumulative creation of distrust in institutions and a ‘zero trust’ society in which truth is replaced by opinion
- Micro-targeting will use a person’s or a group’s psychological profile to deliver increasingly customized, AI-generated falsified content that very effectively reinforces positions or opinions they already hold.
What are the solutions we need?
Participants discussed a range of the solutions being proposed at a global level, efforts that are often led out of Silicon Valley and by legislative action in Washington DC and Brussels. These included the following areas:
- Can we teach people to spot deepfakes?
- How do we build on existing journalistic capacity and coordination?
- Are there tools for detection? (and who has access?)
- Are there tools for authentication? (and who is excluded?)
- Are there tools for hiding our images from being used as training data?
- What do we want from commercial companies producing synthesis tools?
- What should platforms and lawmakers do?
After a discussion of the status of detection efforts, the inadequacy of training people to ‘spot’ deepfakes, and the current platform efforts by Facebook and others, participants focused on which solutions felt most relevant to pursue in Brazil.
We need media literacy contextualized within the bigger misinformation and disinformation problem, especially for grassroots communities
Rather than focusing on the algorithmic “Achilles heel” of any current deepfake creation process – usually a technical glitch that will disappear as techniques improve – we should work to build critical thinking that leads people to doubt material, check sources, provenance and corroboration, distinguish opinion from propaganda, and look for veracity before believing and sharing. This needs to address the broader problem of disinformation, misinformation and ‘fake news’, as well as unpacking how narratives are constructed and shared, and it must prioritize grassroots communities and the influencers who work with them.
There is a lack of public understanding of what is possible with new forms of video and audio manipulation. We should prioritize listening first to what people already know or presume about deepfakes, then building on this understanding without scaremongering – for example, by using the power of influencers like deepfake satirists to explain how the technology works. Working together with other existing projects, initiatives, coalitions and fact-checking agencies is very important, not only to share tools and skills but also to exchange experiences and new technologies.
Detection tools need to be cheap, accessible and explainable for citizens and journalists
Participants, particularly from the journalism and fact-checking world, were concerned about how the nature of detection would always put journalists at a disadvantage. They already grapple with the difficulties of finding and debunking false claims especially within closed networks, let alone new forms of manipulation like deepfakes, for which they don’t have the detection tools.
More and more investment is going into the development of tools for detecting deepfakes using new forms of media forensics and adaptations of the same algorithms used to create the synthetic media. But there are questions about who these tools will be available to, and how the existing problem of ‘shallowfakes’ will also be dealt with. Journalists also reiterate that platforms like YouTube and WhatsApp haven’t solved existing problems – you still can’t easily check whether a video is a ‘shallowfake’: a video that is simply slightly edited, or just renamed and shared with a claim that it is something else. In the absence of tools to detect the existing massive volume of shallowfakes – for example, a reverse video search out of WhatsApp – deepfake detection is a luxury.
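One common building block behind reverse image and video search of the kind journalists ask for is perceptual hashing: fingerprinting a frame so that re-encoded or lightly recompressed copies still match, while genuinely different footage does not. The sketch below is illustrative only (the function names and the tiny 8x8 "frames" are ours, not any platform's API) and shows the simplest variant, an average hash, in plain Python:

```python
# A minimal sketch of perceptual ("average") hashing, one building block
# behind reverse image/video search. All names here are illustrative,
# not a real platform or library API.

def average_hash(pixels):
    """Hash an 8x8 grid of grayscale values (0-255) into a 64-bit fingerprint.

    Each bit records whether a pixel is brighter than the frame's mean,
    so re-encoding or a mild uniform brightness shift barely changes it.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same frame."""
    return bin(h1 ^ h2).count("1")

# Two toy "frames": the second is the first with a slight uniform
# brightness shift, standing in for a re-encoded copy.
frame = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
shifted = [[min(255, v + 3) for v in row] for r, row in enumerate(frame)]

# Near-duplicates land within a few bits of each other.
assert hamming_distance(average_hash(frame), average_hash(shifted)) <= 4
```

A real system would extract keyframes from a video, hash each one (typically with stronger variants such as pHash or dHash), and index the hashes so an incoming WhatsApp clip can be matched against previously verified footage.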
As big companies and platforms like Facebook invest in detection tools they need to build detection tools that are clear, transparent and trustworthy, as well as accessible to many levels of journalists and citizens. The earlier that deepfakes and other falsifications can be spotted the better.
A big part of accessibility is better media forensics tools that are cheap and available to all – and challenging the economic incentives that favor synthesizing falsehood over detecting it – but this needs to be combined with journalistic capacity in new forms of verification and media forensics.
Platforms like Facebook, YouTube, Google and WhatsApp need to be part of the solution, with transparency and support to separate truth from falsehood
Platforms, closed messaging apps, search engines, social media networks and video-sharing sites will also be the places where these manipulations are shared. Some topics and questions we should discuss are: What role should social networks and other platforms play in fighting deepfakes? What should be the limits? How should they provide access to detection capabilities? How should they signal to users that content is true, fake, or manipulated in ways they cannot see? Should they remove certain kinds of manipulated content, and under what criteria?
As a starting point, participants noted that platforms need to be more transparent about what they learn about how fake news is distributed on their services. They need to rethink how far closed messaging can reach and how to control the spread of mis- and disinformation.
For more information on WITNESS’ recommendations for preparation for deepfakes see: wit.to/Synthetic-Media-Deepfakes
For more information on WITNESS’ recommendations for what journalists need to prepare (globally) see: https://lab.witness.org/projects/osint-digital-forensics/