Social media giants Facebook and Instagram took action against over 30 million and about two million pieces of content, respectively, from May 15 to June 15, their compliance reports revealed. Facebook actioned content across 10 violation categories, whereas Instagram covered nine.


The new IT rules mandate monthly compliance reports from digital platforms with over 5 million users. The report has to detail the complaints received and the action taken on them. An intermediary is also expected to include the number of specific communication links or pieces of information it removed, or to which it disabled access, as a result of proactive monitoring conducted using automated tools.

Facebook’s spokesperson said the platform consistently invests in technology, people and processes to keep users safe and secure online and to enable them to express themselves freely.


“We use a combination of artificial intelligence, reports from our community and review by our teams to identify and review content against our policies. We’ll continue to add more information and build on these efforts towards transparency as we evolve this report,” the spokesperson said in a statement to PTI.

Facebook said its next report will be published on July 15, containing details of user complaints received and action taken.

“We expect to publish subsequent editions of the report with a lag of 30-45 days after the reporting period to allow sufficient time for data collection and validation. We will continue to bring more transparency to our work and include more information about our efforts in future reports,” it added.


The July 15 report will also contain data related to WhatsApp, which is part of Facebook’s family of apps.

Earlier this week, Facebook had said it would publish an interim report on July 2 with information on the number of pieces of content it removed proactively during May 15-June 15, followed by the final report on July 15.

Other major platforms that have made their reports public include Google and homegrown platform Koo.

In its report, Facebook said it had actioned over 30 million pieces of content across 10 categories during May 15-June 15. This includes content related to spam (25 million), violent and graphic content (2.5 million), adult nudity and sexual activity (1.8 million), and hate speech (311,000). Other categories under which content was actioned include bullying and harassment (118,000), suicide and self-injury (589,000), dangerous organisations and individuals: terrorist propaganda (106,000), and dangerous organisations and individuals: organised hate (75,000).

‘Actioned’ content refers to the number of pieces of content (such as posts, photos, videos or comments) where action has been taken for violation of standards. Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may be disturbing to some audiences with a warning.

The proactive rate, which indicates the percentage of all actioned content or accounts that Facebook found and flagged using technology before users reported them, ranged between 96.4 and 99.9 per cent in most of these categories.

The proactive rate for removal of content related to bullying and harassment was 36.7 per cent as this content is contextual and highly personal by nature. In many instances, people need to report this behaviour to Facebook before it can identify or remove such content.

For Instagram, 2 million pieces of content were actioned across nine categories during May 15-June 15. This includes content related to suicide and self-injury (699,000), violent and graphic content (668,000), adult nudity and sexual activity (490,000), and bullying and harassment (108,000).

Other categories under which content was actioned include hate speech (53,000), dangerous organisations and individuals: terrorist propaganda (5,800), and dangerous organisations and individuals: organised hate (6,200).

Google had stated that it and YouTube received 27,762 complaints in April this year from individual users in India over alleged violations of local laws or personal rights, which resulted in the removal of 59,350 pieces of content.

Koo, in its report, said it proactively moderated 54,235 pieces of content during June, while 5,502 posts were reported by its users.

According to the IT rules, significant social media intermediaries are also required to appoint a chief compliance officer, a nodal officer and a grievance officer and these officials are required to be resident in India.

Non-compliance with the IT rules would result in these platforms losing their intermediary status that provides them immunity from liabilities over any third-party data hosted by them. In other words, they could be liable for criminal action in case of complaints.

Facebook recently named Spoorthi Priya as its grievance officer in India.

India is a major market for global digital platforms. As per data cited by the government recently, India has 53 crore WhatsApp users, 41 crore Facebook users and 21 crore Instagram users, while 1.75 crore account holders are on microblogging platform Twitter.