eXplainable AI for disinformation and conspiracy detection during infodemics


The COVID-19 pandemic has increased the amount of time that people spend online and their exposure to digital content and communications. One of the effects of the rapid digitalisation of everyday activities is an increase in disinformation spreading online. The 2018 Eurobarometer no. 464 revealed that most European citizens, including 88% of Spanish respondents, consider disinformation a problem, and 66% report coming across false information at least once a week (Eurobarometer 503, 2020). Spain currently has an Infodemics Risk Index (https://covid19obs.fbk.eu/) of 0.036, higher than Italy (0.021) but lower than Portugal (0.454) or France (0.13). However, if we consider that Spanish is the second most spoken language in the world in terms of native speakers, with 483 million speakers in more than 30 countries, and that Latin American countries have an average Infodemics Risk Index of 0.333, the importance of developing methods and technologies for tackling disinformation in Spanish across different platforms becomes clear.

Recent studies identify a relationship between the proliferation of disinformation and the spreading of conspiracy theories (IPSOS, December 2020), and international organisations such as UNESCO warn that “conspiracy theories cause real harm to people, to their health, and also to their physical safety” (UNESCO, 2020). Conspiracy theories are distinct from other forms of disinformation: they offer easily understandable explanations of important events by claiming that secret plots orchestrated by powerful people or malevolent groups were responsible for those events. It is important to distinguish divergent or critical thinking from conspiracy theories: the former enriches public debate, while the latter seek to jeopardise it. What is new is that conspiracy theories have spilled over the social boundaries of the minorities that have traditionally adopted this worldview.

In addition to the climate of threat and fear that a pandemic generates, we identify at least three factors that explain this spillover effect of conspiracy theories. The first is that disinformation is anchored in health issues, an informative agenda that captures the interest of a greater portion of the population. A second factor lies in the transformations that the information ecosystem has undergone with the emergence of social networks: a high degree of individual agency in the dissemination of information, anonymity, and the frequent concealment of the source of information. The third factor is the use of new narrative elements that are well suited to this new information ecosystem: memes and fake news convey emotions in a particularly powerful way and are the building blocks of the conspiracy theory edifice.

In this project proposal we aim to build a holistic socio-technical strategy to fight infodemics. We adopt a human-in-the-loop approach to increase the accuracy of false information detection while also improving users’ digital literacy. Addressing the challenges of disinformation requires interdisciplinary collaboration and the development of tools that private and public entities can use. EXplainable Artificial Intelligence (XAI) can provide such tools, addressing disinformation detection from a multimodal perspective that goes beyond the analysis of textual information. We aim to counter disinformation and conspiracy theories through the fact-checking of scientific information. Moreover, we aim to explain not only the AI models’ decision-making but also the persuasion and psychographic techniques employed to trigger emotions in readers and to make disinformation and conspiracy theories believable and likely to propagate among social network users. The final AI tool should also help users spot the parts of a document that aim to grab readers’ attention through emotional appeals and that signal poor information quality. The tool will provide a complete picture of a piece of information, allowing users to know what kind of content they are consuming. It is intended for the general public, and its use will allow media and information platforms to be rated on the quality of their health information, providing criteria for search engines that specifically prioritise information meeting these quality standards.
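As a purely illustrative sketch of the kind of output such a tool could surface to users, the following Python snippet flags emotionally charged words in a text against a toy lexicon. The lexicon entries, the scores, and the function name are hypothetical placeholders: the project would instead rely on learned emotion and persuasion models rather than a hand-written word list.

```python
import re

# Hypothetical toy lexicon of emotive words with made-up intensity scores.
# A real system would learn these signals from annotated data.
EMOTIVE_LEXICON = {
    "shocking": 0.9, "secret": 0.7, "terrifying": 0.9,
    "exposed": 0.6, "miracle": 0.8, "deadly": 0.8,
}

def flag_emotional_appeals(text, threshold=0.5):
    """Return (token, score) pairs for words likely used as emotional appeals."""
    tokens = re.findall(r"[a-zà-ÿ']+", text.lower())
    return [(t, EMOTIVE_LEXICON[t]) for t in tokens
            if EMOTIVE_LEXICON.get(t, 0.0) >= threshold]

flags = flag_emotional_appeals("Shocking secret cure EXPOSED by deadly elites")
print(flags)  # [('shocking', 0.9), ('secret', 0.7), ('exposed', 0.6), ('deadly', 0.8)]
```

Highlighting the flagged spans directly in the document, alongside an overall quality signal, is one simple way the explainability requirement described above could be presented to a non-expert reader.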

Consortium partners: Symanto Spain (emotional analysis and psychographic profiling); Universitat Politècnica de València (multimodal disinformation and conspiracy theory detection); Spanish National Research Council (leveraging fact-checkers and cyber intelligence to curate scientific claims); Universidad Politécnica de Madrid (disinformation spread, community finding algorithms on social networks); Universidad de Granada (semantic representation, knowledge graphs, explainability); Universitat de Barcelona (datasets annotation).




Yuwon Song