Managing online forums in the age of misinformation
The World Wide Web (WWW) has grown and evolved in parallel with technology. The role of netizens has changed significantly: they are no longer just consumers of information but also its contributors, curators and distributors.
In the past, the skills of an information technology (IT) expert may have been required to resolve technical glitches on our personal computers. Today, a simple Google search or a visit to Stack Overflow returns a myriad of results, where other netizens who have experienced similar problems may already hold the solution.
However, social media and online discussion forums have given a voice to everyone without necessarily holding them accountable for what they say. These platforms have also become fertile grounds for unscrupulous individuals spreading misinformation and fake news while hiding behind fake profiles.
A study by the Annenberg Public Policy Center of the University of Pennsylvania recently revealed that "people who rely on social media for information were more likely to be misinformed about vaccines than those who rely on traditional media".
The dissemination of fake news will remain an existential threat, especially in matters involving politics, owing to the topic's highly partisan nature. At the same time, there is an urgent need for some form of moderation and verification of the solutions offered on online forums and discussion boards.
"Platforms like Reddit and Quora are examples where you can ask a question, and other users will provide you with an answer or engage in a discussion. But you're going to have a lot of answers, and you will ask yourself who you can trust and believe," says Dr Ian Lim Wern Han from the School of Information Technology, Monash University Malaysia.
Lim is currently working on a project that aims to extract valuable information from these platforms, particularly tacit knowledge that is hard to come by from traditional sources of information.
"There are too many social media platforms, with hundreds of thousands of threads and even comments. It is difficult and extremely costly to process them one by one, especially given their unstructured nature. So, I looked at it from a user's point of view, where I identified trustworthy users. In other words, what I am doing is a measure of trust and reliability of other people online based on my profiling methods."
Lim's work is premised on assigning numbers to users of online discussion forums to indicate the reliability of what they post. His idea rewards people who are credible and trustworthy and punishes people who peddle lies and misinformation. The reward or punishment mainly revolves around visibility or engagement. If users are credible, their posts can be placed higher up on the page for more visibility. Their votes could also be worth more when they vote on other threads or comments. But if a user is not credible, their posts can be placed lower on the page or even hidden from the public, and their votes could be worth less or nothing at all.
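The visibility-and-vote-weight scheme described above can be sketched in a few lines. This is a hypothetical illustration, not Lim's actual system: the credibility scores, the 0.2 hiding threshold, and the field names are all assumptions made for the example.

```python
def vote_weight(credibility: float) -> float:
    """Credible users' votes count more; non-credible users' votes count less."""
    if credibility < 0.2:        # effectively shadowban the vote
        return 0.0
    return credibility           # scale the vote by the user's credibility

def rank_posts(posts):
    """Order posts so that credible authors gain visibility.

    Each post is a dict with a 'score' (net weighted votes) and its
    author's 'credibility' in [0, 1]. Posts by low-credibility authors
    are hidden from the public listing entirely.
    """
    visible = [p for p in posts if p["credibility"] >= 0.2]
    return sorted(visible,
                  key=lambda p: p["score"] * p["credibility"],
                  reverse=True)

posts = [
    {"id": "a", "score": 10, "credibility": 0.9},
    {"id": "b", "score": 50, "credibility": 0.1},   # low-credibility author
    {"id": "c", "score": 12, "credibility": 0.6},
]
print([p["id"] for p in rank_posts(posts)])  # → ['a', 'c']; post "b" is hidden
```

Note that post "b" has the highest raw score but is hidden anyway, mirroring the idea that a shadowbanned user's activity simply stops counting.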
"On Reddit, users can be 'shadowbanned'. Their posts will not be shown to the public unless approved manually by moderators. Likewise, their votes wouldn't count. But the users themselves would not know this because it is still visible to them," explains Lim.
The accuracy of the approach is enhanced using measures of confidence and volatility computed over a complex network of interactions. This ensures that the most credible sources of information appear at the very top of a thread on online forums. Conversely, the very same rating can be used to match a questioner with suitable and reliable experts.
For this research, Lim collected more than 700,000 Reddit threads spanning one and a half years and involving almost two million users. His approach assigned each user a rating on the first day. Those ratings were then used to predict each user's contributions on the following day. The ratings were updated daily, and the process was repeated over the ensuing days.
"This was a continuous process. We test whether or not I am accurate as I learn more about the user. On a real-world dataset of one-and-half-years, involving four very different communities, we found out that it is, in fact, possible to predict how a user performs in the future."
Lim looks forward to using the methodology he has created to profile influencers on social media. He cites the ongoing debate in the United States over whether face masks should be compulsory as a case in point, where choosing the right influencers to disseminate public service announcements (PSAs) is critical.
"How can I look at the strength of a person who is influential in dealing with the threat of misinformation? For example, the National Basketball Association (NBA) players have made headlines for their beliefs in conspiracy theories. Some of them believe that the COVID-19 pandemic is being overblown and there is a hidden agenda to it. So each time a player tweets, they tend to influence a lot of people. So how do you prevent the spread of misinformation?" Lim asks.