Social media is one of the main sources of news for people in the United States and around the world. Inevitably, users getting their news this way are exposed to false information of many kinds, including conspiracy theories, clickbait, pseudoscience, and even fabricated “fake news.”




It is not surprising that so much false information appears. First, spam and online fraud are profitable for criminals; second, government and political propaganda serve party and economic interests. However, the fact that low-credibility content spreads so easily and quickly suggests that the people and the algorithms behind social media platforms are vulnerable to manipulation.

Why can false information spread quickly on social media?

Giovanni Luca Ciampaglia is an Assistant Research Scientist at the Indiana University Network Science Institute; Filippo Menczer is a Professor of Computer Science and Informatics at Indiana University and Director of its Center for Complex Networks and Systems Research. Their research has identified three types of bias that make the social media ecosystem vulnerable to misinformation, whether it is spread intentionally or unintentionally. Meanwhile, Indiana University’s Observatory on Social Media is building Internet tools to help people recognize these vulnerabilities and protect themselves from malicious outside manipulation.

Bias in the brain

Cognitive biases stem from the way the brain processes the information it encounters every day. The brain can only handle a limited amount of information, and too many incoming stimuli cause information overload, which seriously impairs its ability to judge the quality of what it sees on social media. With fierce competition for users’ limited attention, some low-quality information slips through and spreads quickly, even though people would prefer to share high-quality content.

To avoid this overload, the brain relies on coping strategies. These shortcuts are usually effective, but they can also produce errors in the wrong context. When a person decides whether to share a story on social media, the brain takes a cognitive shortcut: although a headline is not a good indicator of an article’s accuracy, people are heavily influenced by its emotional connotations, and who wrote the article carries even more weight.

To counter this cognitive bias and help people pay more attention to the source of information before sharing it, Giovanni and Filippo’s team developed Fakey, an app that simulates a typical social media news feed. The game shows users a mix of articles from mainstream news outlets and from low-credibility sources. Users earn points by sharing news from reliable sources and by flagging suspicious content for fact-checking, which improves their news literacy. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.
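As a rough illustration of the game mechanic described above (this is not Fakey’s actual implementation; the function names, fields, and point values are assumptions made up for this sketch), the scoring rule might look something like this:

```python
# Hypothetical sketch of a Fakey-style scoring rule: reward sharing
# reliable items and flagging unreliable ones, penalize the opposite.
# Field names and point values are illustrative assumptions.

def score_action(article: dict, action: str) -> int:
    """Return the points earned for taking `action` on `article`.

    article: {"title": str, "source_reliable": bool}
    action:  "share" or "flag"
    """
    reliable = article["source_reliable"]
    if action == "share":
        return 10 if reliable else -10   # sharing low-credibility content costs points
    if action == "flag":
        return 10 if not reliable else -5  # flagging reliable news is a smaller mistake
    return 0

# Example: a player shares a mainstream article and flags a clickbait one.
feed = [
    {"title": "City council passes budget", "source_reliable": True},
    {"title": "You won't BELIEVE this miracle cure", "source_reliable": False},
]
total = score_action(feed[0], "share") + score_action(feed[1], "flag")
print(total)  # 20
```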

Bias in society

Society is another source of bias. When people connect directly with their peers, the social biases that guide how they choose friends end up shaping the information they see.

Research by Giovanni and Filippo’s team shows that a Twitter user’s political orientation can be inferred simply by looking at the partisan preferences of their friends. By analyzing the structure of these partisan communication networks, they found that social networks spread information quickly, regardless of its accuracy, when they are tightly knit and disconnected from the rest of society.
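A toy illustration of this kind of inference (not the team’s actual method): given a friendship graph in which some accounts have known leanings, a user’s leaning can be guessed from the majority label among their friends. The graph, labels, and accounts below are invented for illustration.

```python
# Toy sketch: guess a user's political leaning from the labels of
# their friends by majority vote. The graph and labels are invented;
# the actual research models are more sophisticated.
import networkx as nx
from collections import Counter

g = nx.Graph()
g.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("dave", "erin"),
])

known_leaning = {"bob": "left", "carol": "left", "dave": "right"}

def infer_leaning(graph, labels, user):
    """Return the most common leaning among the user's labelled friends."""
    votes = [labels[f] for f in graph.neighbors(user) if f in labels]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

print(infer_leaning(g, known_leaning, "alice"))  # "left" (2 of 3 labelled friends)
```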

Intentionally or not, the way people evaluate information can be manipulated when it comes from within their own social circle. For example, if a friend keeps promoting the merits of a particular party during an election, that promotion is bound to have an effect. This also explains why so many online conversations eventually devolve into confrontations between opposing groups.

To study how the structure of online social networks makes users vulnerable to false information, Giovanni and Filippo’s team built Hoaxy, a system that tracks the spread of low-credibility information and visualizes the process. Analysis of data collected with Hoaxy during the 2016 U.S. presidential election showed that the Twitter accounts sharing misinformation were almost completely cut off from the corrections published by fact-checkers.
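The kind of diffusion network Hoaxy visualizes can be sketched minimally as follows, assuming a simple list of retweet records; the data format and accounts here are made up and this is not Hoaxy’s code. The sketch builds a directed graph of who retweeted whom and checks how separate the claim-spreading and fact-check-spreading groups are.

```python
# Minimal sketch (not Hoaxy's code): build a retweet graph from toy
# records and see whether accounts spreading a low-credibility claim
# connect to accounts sharing the fact-check. Data is invented.
import networkx as nx

# (retweeter, original_poster, kind) -- kind tags what was shared
retweets = [
    ("u1", "u2", "claim"), ("u3", "u2", "claim"), ("u4", "u1", "claim"),
    ("u5", "u6", "factcheck"), ("u7", "u6", "factcheck"),
]

g = nx.DiGraph()
for src, dst, kind in retweets:
    g.add_edge(src, dst, kind=kind)

claim_nodes = {n for s, d, k in retweets if k == "claim" for n in (s, d)}
check_nodes = {n for s, d, k in retweets if k == "factcheck" for n in (s, d)}

# In this toy data the two groups share no accounts and no edges,
# mirroring the near-total disconnection observed in the study.
print(claim_nodes & check_nodes)                 # set()
print(nx.number_weakly_connected_components(g))  # 2
```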

Looking more closely at the accounts spreading misinformation, the study found that they belong to the same dense core group, retweet one another heavily, and in some cases are operated by computers. These accounts cite or mention fact-checking organizations only to question their legitimacy or to claim the opposite of what the fact-checkers say.

Bias from algorithms

The third group of biases comes directly from the algorithms that social media platforms and search engines use to decide what people see. These personalization technologies are designed to select the most engaging and relevant content for each user, but in doing so they may end up reinforcing users’ cognitive and social biases, making them easier to manipulate.

For example, many social media platforms have built-in tools for finely targeted advertising. Purveyors of false information can use these tools to tailor their messages and push them to users who are already inclined to believe them.

In addition, if a user often clicks on links from a particular news source on Facebook, Facebook tends to show that user more content from the same site. This so-called “filter bubble” effect can isolate people from diverse perspectives and thereby reinforce confirmation bias.
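To make that feedback loop concrete, here is a toy simulation rather than any platform’s actual ranking code: each click on a source boosts the chance that the same source is recommended again, so the feed narrows over time. All numbers and the clicking behavior are invented.

```python
# Toy simulation of a click-driven filter bubble: each click on a
# source increases its weight, so it is recommended more often.
# Weights and user behavior are invented assumptions.
import random

random.seed(1)
sources = ["A", "B", "C", "D"]
weights = {s: 1.0 for s in sources}
preferred = "A"  # this user clicks source A more readily than the others

for step in range(500):
    # Recommend one source in proportion to its current weight.
    shown = random.choices(sources, weights=[weights[s] for s in sources])[0]
    clicked = (shown == preferred) or (random.random() < 0.1)
    if clicked:
        weights[shown] += 0.5  # feedback: clicks make a source more likely to reappear

share_of_a = weights["A"] / sum(weights.values())
print(f"Final weight share of source A: {share_of_a:.0%}")
```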

Research by Giovanni and Filippo shows that social media platforms expose users to a less diverse set of sources than non-social-media sites like Wikipedia do. Because this occurs at the level of a whole platform rather than of a single user, it can be called “homogeneity bias.”
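One simple way to quantify “a less diverse set of sources” is the Shannon entropy of the sources a user is exposed to; the sketch below is illustrative only and is not the metric used in the cited research, and the example data is invented.

```python
# Sketch: measure source diversity with Shannon entropy (bits).
# Illustrative only; not the cited study's metric. Data is invented.
import math
from collections import Counter

def source_entropy(sources_seen):
    """Shannon entropy of the distribution of sources a user saw."""
    counts = Counter(sources_seen)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A user on a personalized feed vs. one browsing a general reference site.
feed_user = ["siteA"] * 8 + ["siteB"] * 2
wiki_user = ["siteA", "siteB", "siteC", "siteD", "siteE"] * 2

print(source_entropy(feed_user))  # ~0.72 bits: low diversity
print(source_entropy(wiki_user))  # ~2.32 bits: high diversity
```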

Another important ingredient of social media is the information that trends on a platform, determined by what gets the most clicks. The study found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform; this can be called “popularity bias.” It feeds into existing cognitive biases, reinforcing whatever appears to be popular regardless of its quality.
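Here is a toy simulation (not the model from the cited research) of why ranking purely by popularity can depress quality: items are shown in order of past clicks, users mostly click what they see first, and early random popularity locks in regardless of quality. All parameters are invented.

```python
# Toy simulation (not the study's model): a feed ranks items by past
# clicks only, users look at the top slots, so early popularity can
# lock in low-quality items. All numbers are invented assumptions.
import random

random.seed(0)
items = [{"id": i, "quality": random.random(), "clicks": 0} for i in range(20)]

for _ in range(2000):
    ranked = sorted(items, key=lambda it: it["clicks"], reverse=True)
    # The user looks only at the top 3 slots and clicks one of them,
    # with only a weak preference for higher quality.
    top = ranked[:3]
    chosen = random.choices(top, weights=[0.2 + it["quality"] for it in top])[0]
    chosen["clicks"] += 1

most_clicked = max(items, key=lambda it: it["clicks"])
best_quality = max(items, key=lambda it: it["quality"])
print(f"Most-clicked item quality: {most_clicked['quality']:.2f}")
print(f"Best available quality:    {best_quality['quality']:.2f}")
```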

All of these algorithmic biases can be exploited by social bots, computer programs that interact with humans through social media accounts. Most social bots, such as Twitter’s Big Ben bot, are harmless. However, some conceal their true nature and are used for malicious purposes, such as boosting the spread of false information by retweeting one another.

To study these manipulation strategies, Giovanni and Filippo’s team developed Botometer, a tool for detecting social bots. It uses machine learning to classify accounts by inspecting many different characteristics of a Twitter account, such as when it posts, how often it tweets, and which accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15% of Twitter accounts show signs of being bots.
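The general idea of feature-based bot detection can be sketched as follows. This is not Botometer’s code; the features, training data, and model choice are assumptions made for illustration.

```python
# Sketch of feature-based bot detection (not Botometer's actual code).
# Features, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [tweets_per_day, mean_seconds_between_posts, followers_to_friends_ratio]
X_train = np.array([
    [300.0,   12.0, 0.05],   # very high volume, bursty, few followers -> bot-like
    [250.0,   20.0, 0.10],
    [  5.0, 9000.0, 1.50],   # low volume, long gaps, balanced network -> human-like
    [ 12.0, 4000.0, 2.00],
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account; predict_proba gives a bot-likeness score in [0, 1].
unknown_account = np.array([[180.0, 30.0, 0.08]])
print(clf.predict_proba(unknown_account)[0][1])
```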

By combining Hoaxy and Botometer, the team analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. The bots in this network push false claims and misleading information to vulnerable users. First, they attract the attention of a candidate’s supporters by retweeting posts that use the candidate’s hashtags or mention the candidate by name. The bots can then amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords.

The Internet tools built by Giovanni and Filippo’s team give users several ways to spot false information and offer some protection against its harms. Still, many studies have shown that individuals, institutions, and even whole societies can be manipulated on social media, and many problems remain to be solved. The key is to discover how these different biases interact, since together they can create more complex vulnerabilities. Solutions will not be purely technological; they must also address the cognitive and social dimensions of the problem.
