In the last few months, false information about the coronavirus has spread as rapidly as the disease itself. Social media platforms such as Facebook, YouTube, and Twitter have had to add a number of features to check the spread of this misinformation.

In a bid to ensure that users get credible information on its platform, Twitter has introduced labels and warning messages that provide additional context on some tweets containing disputed or misleading information related to COVID-19.

The social media platform broadened its policy guidance a couple of months ago to address content that goes directly against COVID-19 guidance from authoritative sources of global and local public health information. The new labels and warnings take that move a step further.

How will these labels work?

The new labels will appear on Tweets containing misleading or potentially harmful information related to the coronavirus.

“These labels will link to a Twitter-curated page or external trusted source containing additional information on the claims made within the Tweet,” the social media giant said in a release. 

Twitter also said that the action it takes against such false content would be based on three broad categories (a brief illustrative sketch of this classification follows the list):

1) Misleading information — statements or assertions that have been confirmed to be false or misleading by subject-matter experts, such as public health authorities.

2) Disputed claims — statements or assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown.

3) Unverified claims — information (which could be true or false) that is unconfirmed at the time it is shared.
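Twitter has not published any implementation details, but as a purely illustrative sketch, the three-category scheme and the linked context described above could be modelled along these lines in Python. Every name and the context URL below are hypothetical, not part of Twitter's actual systems:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ClaimCategory(Enum):
    """The three broad categories Twitter says it will act on."""
    MISLEADING = auto()   # confirmed false/misleading by subject-matter experts
    DISPUTED = auto()     # accuracy or credibility contested or unknown
    UNVERIFIED = auto()   # unconfirmed at the time it is shared


@dataclass
class LabelDecision:
    """Whether a tweet gets a label, and where the label links."""
    show_label: bool
    context_url: Optional[str]


# Hypothetical Twitter-curated context page (placeholder URL).
CURATED_CONTEXT_URL = "https://twitter.com/i/events/covid19-context"


def decide_label(category: ClaimCategory) -> LabelDecision:
    """Sketch of the policy described in the article, not Twitter's actual logic."""
    if category in (ClaimCategory.MISLEADING, ClaimCategory.DISPUTED):
        # Misleading and disputed claims get a label linking to additional context.
        return LabelDecision(show_label=True, context_url=CURATED_CONTEXT_URL)
    # Unverified claims may be true or false; this sketch leaves them unlabelled.
    return LabelDecision(show_label=False, context_url=None)


if __name__ == "__main__":
    for category in ClaimCategory:
        print(category.name, decide_label(category))
```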

Other major social media sites such as Facebook, YouTube, and TikTok have also introduced features to ensure that people using their platforms get credible information about the coronavirus.