Facebook and Instagram’s parent company, Meta, has unveiled new features designed to shield adolescent users from unsolicited direct messages on its platforms. The changes come amid increased scrutiny of Meta and other social media companies over how well they safeguard young users and curb online harassment.
Teenagers between the ages of 13 and 17 will now have their Instagram accounts set to private by default when they join. This means only people they approve can follow them or see their posts and stories. Previously, new teen accounts were public by default. Teens will still have the option to switch to a public account if they choose.
On Facebook, messaging restrictions will apply to adults who have recently been blocked or reported by young users. If a teen blocks or reports an adult, that adult will temporarily be prevented from sending messages to anyone under 18 on Facebook. The length of the restriction will depend on the severity and frequency of prior reports about the adult.
These changes aim to make it harder for unwanted contacts, such as strangers or online predators, to interact directly with teenage users on Meta’s platforms. With teen Instagram accounts private by default, teens’ personal information and photos will no longer be visible to the general public unless they approve new followers. The messaging restrictions on Facebook target adults who have exhibited concerning behavior and aim to prevent further contact with other young users.
Some experts contend that these measures do not go far enough and that stronger safeguards are still needed. Privacy and child advocacy groups have long pressed Meta and other tech giants to do more to verify ages, curb bullying, and prevent sexual exploitation on their platforms. Making teen Instagram accounts private by default and restricting messaging from reported adults are positive moves, critics say, but more accountability is still required.
For example, a private account’s content can still be seen if an underage user approves a friend or follow request from someone they do not know in real life, and online predators are adept at deceiving young people into accepting such requests. The new messaging restrictions on Facebook also only temporarily block further contact, and they do not prevent reported adults from reaching other underage users through public posts or other messaging workarounds.
Supporters argue that stronger identity verification is needed to enforce age restrictions and expose fake accounts posing as other teenagers. They also say stronger AI and human moderation are still required to identify and remove predatory behavior in public spaces and direct messages.
Additionally, some are advocating for greater transparency about how platforms enforce their policies, through data sharing and public audits of how frequently online harms occur.
Critics argue that, despite these advances, Meta still has much to do to convince outsiders that it is committed to user protection and self-regulation. The company gathers vast amounts of user data but discloses little about how it systematically screens for harmful activity. Incidents like last year’s leak of personal details for millions of Facebook users, including phone numbers of underage members, have not helped build trust.
For now, Meta’s latest moves to make teen Instagram accounts private by default and restrict messaging from reported adults are incremental steps that could curb some unwanted direct contact. But without stronger identity and age verification policies, as well as greater transparency, many experts argue the risks for young users will remain too high.
More significant changes may still be needed to satisfy those calling for strong protection of teenagers on social media.