Workshop report from the first regional meeting: Key threats and preferred solutions

‘Deepfakes’ and synthetic media are new forms of audiovisual manipulation that allow people to create realistic simulations of someone’s face, voice or actions. They make it possible to show someone saying or doing something they never said or did. They are getting easier to make, requiring fewer source images to build them from, and they are increasingly being commercialized.

Currently deepfakes overwhelmingly impact women, who are targeted with non-consensual sexual images and videos, but there are fears their impact will spread more broadly across society. Solutions are being proposed for handling malicious uses of these new tools, and it is critical that this discussion be informed by a global perspective, rather than a strongly US- or European-centric point of view.

TLDR: It’s critical that any discussion on emerging forms of media manipulation include a range of affected communities and experts globally, and not be US/Euro/Silicon Valley-centric. Participants in WITNESS’s expert convening for South and Southeast Asia indicated key threats they were concerned about and key solutions they wanted for shallowfakes and deepfakes. These concerns must inform global decision-making on technologies for detection and mitigation as well as government and platform actions.

As part of WITNESS’s Emerging Threats and Opportunities work, WITNESS and WITNESS Asia-Pacific hosted a one-day experts workshop in Malaysia in March 2020 to increase understanding of the problem of deepfakes and synthetic media in Southeast Asia (SEA) and the broader Asia region. It focused on prioritizing threats and solutions from a regional context, feeding into global efforts. This inaugural SEA workshop was the third such session by WITNESS, following those held in Brazil (July 2019) and South Africa (November 2019). This blog draws on the Executive Summary of our detailed report (pdf) from the meeting.

The workshop was attended by 35 stakeholders and experts involved in journalism, freedom of expression, human rights advocacy, fact-checking, digital rights, digital verification, filmmaking, movement leadership, international justice, platforms, research, academia, international law and technology. As with other WITNESS workshops, we focused on experts across a wide range of relevant disciplines, as well as people and communities who have experienced and countered similar harms from mis/disinformation and gender-based violence. Participants were drawn from different countries in SEA: Cambodia, Indonesia, Burma (Myanmar), Singapore, Thailand, and the host country Malaysia. There were also participants from India, Sri Lanka and Taiwan.

The detailed report describes how we established a common understanding of the threats presented by deepfakes and other forms of synthetic media, then solicited from participants a prioritization of possible regional and global interventions from a SEA perspective. Starting with the history of deepfakes, including the strong link to gender-based violence (GBV), and an introduction to their technical characteristics, the workshop then looked at the context of existing visual misinformation and disinformation in the region, in particular Indonesia (the workshop sessions are outlined in detail in Section 3).

Next, participants discussed threat models and vulnerabilities in groups, and identified the most plausible and harmful threats in their contexts. An eye-opening technical briefing on detection methods, delivered via Skype by Francesco Marra of the GRIP team at University Federico II, helped inject more thoughtful input into the discussion. Participants then prioritized, from a SEA perspective, the solutions being proposed and rapidly driven forward at a global level, and gave feedback on policies and approaches.

Threat prioritization

The biggest priority threat in participants’ view was the ‘zero trust’ world that deepfakes would engender, in particular the threat to the democratic processes of elections and journalism, with public figures as either a target or perpetrator. More specifically, gender-based attacks on human rights activists and journalists were their greatest concern, alongside the related issues of cyberbullying and non-consensual sexual imagery without source material. They feared the threat of violence posed by credible doppelgangers of real people inciting rights abuses or conflict, as well as floods of falsehood. 

Spelling out the threats further, they were concerned about the ‘truthpocalypse’ that would result from the weaponization of information, and the impact on media at various levels. Deepfakes that hijack media brands would erode the trust on which media functions, and the media’s capacity to function as purveyors of truth is also hampered by the lack of detection tools and capacities to counter deepfakes. This relates to their other fear: the lack of preparation to face the threat of deepfakes.

Participants were also wary of the state, not only with regard to policy and legislation concerning the issue, which are driven by vested interests and blighted by disproportionality, but also in its capacity to spread disinformation to spark mob violence and as subterfuge for state violence, particularly in conflict areas such as Burma (Myanmar), West Papua, Sri Lanka, and Southern Thailand. 

They were also mindful that deepfakes would further threaten already vulnerable groups, and that social division would deepen from deepfake-generated echo chambers and confirmation bias. They thought the alarm should also be sounded about the transparency and accountability surrounding data obtained by deepfake apps. (Full results of the threat prioritization exercise are presented in the full report.)

Solutions prioritization

With regard to possible solutions and mitigations against the emerging threat, participants identified several educational and technical actions. 

Noticing a growing apathy about truth, the media literacy group mooted an awareness-raising campaign on its importance. 

Generally, participants agreed that media literacy is a preventive measure to keep the public from falling for deepfakes. It was suggested that context mattered, so focusing, for example, on young people and housewives, or on children in school, could be appropriate choices. They cautioned against making unnecessary distinctions for some audiences around how a fake was made, i.e. deepfake versus other forms of manipulation.

Media professionals stressed the need for more interdisciplinary collaboration and resource sharing in order to respond to the threat effectively and make efficient use of limited funds. In particular, it would be helpful to have a database of experts as a reference for fact-checkers globally. Given gaps in technical capacity, collaborative training on media monitoring and harm reduction is needed for stakeholders of diverse backgrounds. Platforms could provide them with tools and metadata for deepfake detection.

A group that chose to focus on platforms’ roles agreed that content moderation was necessary (without indicating a clear preference for takedown or labelling), but noted the need for supporting smaller platforms as well as focusing attention on functions like a reporting button in WhatsApp. They also noted a need for a mechanism to stop the spread of non-consensual images and better ways to address existing mis-contextualized ‘shallowfake’ videos and images. (Further discussion of solutions is presented in Section 4.3.)

The final exercise was a feedback session on authenticity issues. Participants grappled with the tension between competing outcomes of the same action when it comes to the extent and limits of tracking the authenticity of media, such as balancing privacy and accountability needs.

Next step recommendations

The workshop ended with suggestions on specific steps to take moving forward. These included: 

  • Simplifying the vocabulary for public education purposes, followed by an awareness campaign that includes simple, brief, multi-lingual videos.
  • Providing access to detection systems.
  • Building capacity for shared media forensics.
  • Creating a database of experts who can help journalists identify synthetic media.
  • Sharing updates on what is being done to counter deepfakes, and best practices from around the world, which will require reporting and translation work.

A backgrounder developed for participants in the workshop is available here.

WITNESS notes that there has been significant consistency across the convenings that WITNESS has coordinated in the Global South in terms of threats identified and in terms of desired solutions. We summarize some key insights into needs for shallowfake and deepfake detection as well as approaches to authentication such as the Adobe/Twitter/New York Times Content Authenticity Initiative in these blogs:

Look out for future blogposts further emphasizing commonalities and how they should inform global decision-making and investment in responses to shallowfakes and deepfakes. And check out our ongoing Deepfakery video discussion series that includes episodes on deepfakes as satire, deepfakes and human rights frameworks, deepfakes and journalism and much more!


For more information on WITNESS’ recommendations for preparation for deepfakes see:

More information on threats and solutions prioritized in Brazil and Sub-Saharan Africa!


Trigger Warning: This report contains depictions of war, abuse, and examples of, or links to, content that features hate speech.

Help WITNESS create more human rights change

Join us by subscribing to our newsletter.