ANomalous Diffusion of Harmful Information


Digital communities and societies benefit from the potential of communication networks to spread knowledge and information, which can foster development and reduce inequalities among their members by giving access to information sources. But these same facilities can also contribute to the spreading of harmful information, such as misinformation, disinformation, and hate speech. One of the challenges our societies face is to avoid the damage that harmful information can cause, and the first item on the to-do list is to detect it.
Great advances have been made in natural language processing, particularly in sentiment analysis of content. But these should be pushed beyond detecting positive or negative reactions to detect other characteristics, such as misinformation or disinformation. Beyond NLP techniques, there are other characteristics inherent to the diffusion of harmful information that should be determined, such as linking the content with the diffusive behaviour it presents while spreading.

Diffusion processes began to be studied on different scales, from particles to the movements of individuals, and in areas of a very diverse nature, from biological processes to stock markets. In the first instance, these processes are categorized as Brownian motion or random walks, characterized by the erratic change of an observable over time (e.g., position, temperature, stock price, …). Nevertheless, a more detailed analysis reveals significant deviations from this behaviour. Processes that deviate from it are known as anomalous diffusion processes. Such processes have recently become popular because they have been detected in very diverse contexts, such as information, energy, or transport flows. Diffusion processes on Twitter can be understood as the spreading of a message through the users' networks.
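The baseline against which anomalous diffusion is defined can be illustrated with a small sketch: for an ordinary random walk, the ensemble Mean Square Displacement (MSD) grows linearly in time, so its log-log slope is close to 1. The code below is only an illustrative simulation, not the project's pipeline; in the Twitter setting the "displacement" would be some spreading observable (e.g., an audience-reach count), which is our assumption here, not something defined in the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_msd(steps: int, walkers: int) -> np.ndarray:
    """Ensemble Mean Square Displacement of 1-D unit-step random walks."""
    increments = rng.choice([-1.0, 1.0], size=(walkers, steps))
    positions = np.cumsum(increments, axis=1)  # trajectory of each walker
    return np.mean(positions ** 2, axis=0)     # average over the ensemble

msd = ensemble_msd(steps=1000, walkers=5000)
t = np.arange(1, 1001)

# For Brownian motion MSD(t) ~ 2*D*t, so the log-log slope (the
# diffusion exponent alpha) should be close to 1.
alpha = np.polyfit(np.log(t), np.log(msd), 1)[0]
print(f"estimated diffusion exponent alpha = {alpha:.2f}")
```

A process whose empirical exponent deviates clearly from 1 on such a fit is the kind of candidate the text calls an anomalous diffusion process.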

This project has three objectives: first, to improve the existing methods for identifying harmful information; second, to determine the psychographic profile of harmful-information authors; and, last but not least, to characterize the diffusion of harmful information to improve its detection and analysis. In particular, we will focus on the spreading of harmful information, either false or aggressive, to detect it in the first stages of its spreading, linking it with anomalous diffusion processes.

Identifying, categorizing, and classifying these phenomena is the first step towards understanding the underlying dynamics of harmful information spreading. This classification would allow us to establish metrics, such as the diffusion coefficient or the exponent of the Mean Square Displacement (MSD), and understand their meaning for information dissemination on Twitter. In addition, we want to determine how these metrics could complement the sentiment analysis carried out on the contents of these messages.
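The two metrics named above can be extracted from an empirical MSD curve with a log-log regression: assuming a power-law form MSD(t) = K·t^α, the slope gives the exponent α (α ≈ 1 normal diffusion, α < 1 subdiffusion, α > 1 superdiffusion) and the intercept gives the generalized diffusion coefficient K. The sketch below uses a synthetic subdiffusive curve with hypothetical values K = 0.5, α = 0.6; it is a minimal illustration of the fitting step, not the project's actual estimator.

```python
import numpy as np

def fit_msd(t: np.ndarray, msd: np.ndarray) -> tuple[float, float]:
    """Fit MSD(t) = K * t**alpha by linear regression in log-log space.

    Returns (alpha, K): the diffusion exponent and the generalized
    diffusion coefficient.
    """
    slope, intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return slope, np.exp(intercept)

# Synthetic subdiffusive MSD curve (hypothetical K = 0.5, alpha = 0.6).
t = np.arange(1, 200, dtype=float)
msd = 0.5 * t ** 0.6

alpha, K = fit_msd(t, msd)
print(f"alpha = {alpha:.2f}, K = {K:.2f}")  # → alpha = 0.60, K = 0.50
```

On real spreading data the same fit would be applied to the measured MSD of message trajectories, and the recovered (α, K) pair is what could then be compared against the sentiment-analysis features of the same messages.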



Get in touch today and understand your data like never before.

Yuwon Song