Facebook defends policies toward teens on Instagram at Senate hearing
- Facebook has removed more than 600,000 accounts on Instagram
- These accounts didn’t meet the minimum age requirement of 13
- Facebook executives have consistently played down Instagram’s negative side
Facing outrage over its handling of internal research on harm to teens from Instagram, a Facebook executive is telling Congress that the company is working to protect young people on its platforms. And she disputes the way a recent newspaper story describes what the research shows.
“We have put in place multiple protections to create safe and age-appropriate experiences for people between the ages of 13 and 17,” Antigone Davis, Facebook’s head of global safety, said in written testimony Thursday for a Senate Commerce subcommittee.
Facebook has removed more than 600,000 accounts on Instagram from June to August this year that didn’t meet the minimum age requirement of 13, Davis said.
Davis was summoned by the panel amid scrutiny of how Facebook handles internal information indicating potential harm to some of its users, especially girls, while publicly downplaying the negative effects.
The revelations in a report by The Wall Street Journal, based on internal research leaked by a whistleblower at Facebook, have set off a wave of anger from lawmakers, critics of Big Tech, child-development experts and parents. The outcry prompted Facebook to put on hold its work on a kids’ version of Instagram, which the company says is meant mainly for tweens aged 10 to 12. But it’s just a pause.
For some Instagram-devoted teens, the peer pressure generated by the visually focused app led to mental-health and body-image problems, and in some cases, eating disorders and suicidal thoughts. It was Facebook’s own researchers who alerted the social network giant’s executives to Instagram’s destructive potential.
Davis says in her testimony that Facebook has a history of using its internal research as well as outside experts and groups to inform changes to its apps, with the goal of keeping young people safe on the platforms and ensuring that those who aren’t old enough to use them do not.
“This hearing will examine the toxic effects of Facebook and Instagram on young people and others, and is one of several that will ask tough questions about whether Big Tech companies are knowingly harming people and concealing that knowledge,” Sen. Richard Blumenthal, D-Conn., chairman of the consumer protection subcommittee, said in a statement. “Revelations about Facebook and others have raised profound questions about what can and should be done to protect people.”
Blumenthal and Sen. Marsha Blackburn of Tennessee, the panel’s senior Republican, also plan to take testimony next week from a Facebook whistleblower, believed to be the person who leaked the Instagram research documents to the Journal.
Despite the well-documented harms, Facebook executives have consistently played down Instagram’s negative side and have forged ahead with work on Instagram for Kids, until now. On Monday, Instagram head Adam Mosseri said in a blog post that the company will use its time out “to work with parents, experts and policymakers to demonstrate the value and need for this product.”
As early as July, Facebook said it was working with parents, experts and policymakers when it introduced safety measures for teens on its main Instagram platform. The company has also worked with experts and other advisers on another product aimed at children, its Messenger Kids app, which launched in late 2017.
The focused outrage transcending party and ideology contrasts with lawmakers’ posture toward social media generally, which splits Republicans and Democrats. Republicans accuse Facebook, Google and Twitter, without evidence, of deliberately suppressing conservative, religious and anti-abortion views.
Democrats train their criticism mainly on hate speech, misinformation and other content on the platforms that can incite violence, keep people from voting or spread falsehoods about the coronavirus.
The bipartisan pile-on against Facebook proceeds as the tech giant awaits a federal judge’s ruling on a revised complaint from the Federal Trade Commission in an epic antitrust case and as it tussles with the Biden administration over its handling of coronavirus vaccine misinformation.
Meanwhile, groundbreaking legislation has advanced in Congress that would curb the market power of Facebook and other tech giants Google, Amazon and Apple — and could force them to untie their dominant platforms from their other lines of business. For Facebook, that could target Instagram, the social media juggernaut valued at around $100 billion that it has owned since 2012, as well as messaging service WhatsApp.