Advancing Media Literacy in Developing Countries

Funded by: USAID
Country: Indonesia
Date Range: 2018–2025
Project Lead: Dept. of Computer Science & Engineering
Notre Dame Collaborators: Pulte Institute
Contact: Tom Purekal

At a time when communities around the world increasingly turn to digital sources for information, online and social media systems play a critical role in shaping attitudes and behavior. The problem is that social media channels in low- and middle-income countries (LMICs) are being manipulated by malicious groups to spread misinformation, exacerbate social divides, and influence citizen involvement in democratic processes. The threat posed by disinformation spread through social media networks is accelerating at a global scale, with tech companies, regulators, and the general public all struggling to understand how social media is reshaping democratic processes. This phenomenon is especially problematic because ordinary citizens may both consume and propagate (mis)information through their everyday online activities, exposing large segments of a population to falsehoods that are then implicitly endorsed by a seemingly qualified source. It is especially worrisome for new digital arrivals who have limited experience with information flows in social media networks, and particularly concerning in a young democracy such as Indonesia, where democratic institutions are still taking root. Given our findings in 2019, we believe there is significant risk of expansive disinformation campaigns surrounding the 2024 elections.

The goal of this USAID-funded program – Advancing Media Literacy for New Digital Arrivals in Developing Countries – was to improve media literacy in LMICs through a targeted digital media literacy campaign. In its pilot phase, the project focused on Indonesia, specifically tracking the volume of social media disinformation around Indonesia’s 2019 national elections and testing ways to combat disinformation efforts. During Quarter 2 of the project, researchers at the University of Notre Dame began tracking and collecting images tied to hashtags and phrases associated with disinformation campaigns and the Indonesian national elections. During that period, Notre Dame captured about 250,000 images (130,352 distinct images) from Twitter and Facebook alone that referenced hashtags associated with disinformation campaigns. By Quarter 3, the number of images had increased to over two million, including manipulated images. The project trialed a number of different approaches via static media cards to determine effective corrective media content, and it developed a course of explainer videos to combat media disinformation following the elections that could be applied going forward. It also established a process that can be transferred to other countries and contexts once the Indonesia pilot concludes.

To date, our collaborative initiative has demonstrated that: (1) it can reach tens of millions of social media users in Indonesia and direct them to media literacy content; (2) our media literacy content is an effective mechanism for changing attitudes and behavior (i.e., the treatment group is significantly more likely to identify misinformation than the control group); (3) short, live-action videos drive more engagement than longer animated videos; and (4) there is significant opportunity to expand the impact of our campaign. Under the existing SOW, which is valid through September 2022, we will accomplish the following: (1) deployment of a rebranded, fully redesigned website for hosting our new media literacy curriculum; (2) deployment of a new and expanded media literacy curriculum based on our previous successes and learning; (3) deployment of a new WhatsApp-style game that brings our media literacy curriculum into a format designed to maximize user engagement; and (4) deployment of refined polling and data collection tools for tracking user engagement and the efficacy of our interventions.

The AML team developed a state-of-the-art AI-based system to identify and track manipulated and fabricated digital images and videos on Indonesian social media. This system, called MEWS (Misinformation Early Warning System), was developed in tandem with social science methodologies in order to:

  1. identify manipulated visual media at scale;
  2. develop theoretical models of disinformation that are informed by political, social, and cultural contexts;
  3. track the origin, diffusion, and evolution of digital image manipulations, especially images originating from and pertaining to Chinese interests; and
  4. identify and analyze the targets of Chinese social influence operations.

These technological accomplishments, which help MEWS identify disinformation as it spreads online, are only one piece of the disinformation puzzle. By the end of 2021, the team had scraped over two million images from Indonesian social media platforms (Twitter, Instagram, Facebook) using topics and keywords generated by project partners. The team also worked to develop and train a network of journalists, civil-society actors, and human rights monitors in Indonesia who will use the MEWS system to counter misinformation campaigns as they occur in real time. This group will ultimately be entrusted with the MEWS system at the conclusion of the project, so that its use and development can continue.
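The project materials do not describe MEWS’s internals, but one routine step in pipelines like this is collapsing millions of scraped images down to a set of distinct images (as in the 250,000 captured versus 130,352 distinct images reported above). The sketch below illustrates one common approach, perceptual average hashing, in plain Python. It is a minimal illustration under stated assumptions, not the actual MEWS method: the function names are hypothetical, and it operates on pre-computed 8×8 grayscale thumbnails (64 pixel values) to stay self-contained.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale thumbnail.

    pixels: list of 64 integers in 0-255. Each output bit is 1 if the
    corresponding pixel is brighter than the thumbnail's mean.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Count the bits that differ between two hashes."""
    return bin(h1 ^ h2).count("1")

def near_duplicates(h1, h2, threshold=5):
    """Treat images whose hashes differ by <= threshold bits as the same.

    Small recompressions or brightness shifts barely change the hash,
    so re-shared copies of one image cluster together, while genuinely
    different images land far apart.
    """
    return hamming(h1, h2) <= threshold
```

For example, a brightness-shifted copy of an image hashes within a few bits of the original and is counted as the same distinct image, while an unrelated image is not. Production systems typically use more robust perceptual hashes (e.g., pHash or Facebook’s PDQ) computed from real decoded images rather than hand-supplied thumbnails.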

To complement this work, the team also created an interactive media literacy platform consisting of eight learning modules. Each module includes educational videos, short articles, and interactive exercises that help users apply the learning concepts. The videos feature local media experts who unpack concepts such as verifying sources, regulating emotional responses to content, and recognizing the many forms of disinformation that might appear on social media.

In addition to the website, the team also designed an interactive game called Gali Fakta, in which players are rewarded for improving their media literacy skills. The game follows the format of a family WhatsApp group chat. The media literacy content featured in the game was provided by IREX and adapted by the Moonshot team.


Key Learnings to Date:

In the last year of the project, the AML team determined that participants who engaged with the media literacy content and/or the Gali Fakta game were 77% more likely to decrease their online sharing behavior in general and 66% more likely to decrease their sharing of misinformation headlines. This suggests that the media literacy campaign and curriculum developed have a significant positive impact on reducing the spread of misinformation in the Indonesian context.

Further information can be found in the project’s 2020–2021 Annual Report.

This five-year extended project was led by Tim Weninger, Associate Professor in Notre Dame’s Department of Computer Science and Engineering, alongside the Pulte Institute. The project consortium also includes IREX, Moonshot CVE, and Geopoll.
