John Doe was around 13 years old when he was tricked, blackmailed and threatened by sex traffickers on Snapchat into taking nude photos and videos of himself. Two years later, he learned from his high school classmates that his images were being shared as child sex abuse material on Twitter.
The social networking platform, later renamed X, initially dismissed the family’s reports, responding: “We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time.”
John Doe’s family filed multiple reports with Twitter, their local police department and ultimately with the US Department of Homeland Security before Twitter removed the sexually graphic material. While it was live, the illegal content racked up over 167,000 views on Twitter, and John Doe experienced “harassment, vicious bullying, and became suicidal,” according to the family’s initial complaint in its 2021 lawsuit against Twitter.
The creation and distribution of child sex abuse material, called CSAM, is one of the extreme dangers that children and teenagers face when using the internet. There’s always been a spectrum of potential online harms, going back to the early days of MySpace.
Parents have long been concerned about the lasting psychological effects of screen addiction. Many young people say social media harms their mental health, fostering isolation, anxiety and depression. Researchers have found that teens who spend time scrolling through curated and edited content can develop unrealistic body images and eating disorders. Others might turn to suicide and self-harm or become vulnerable to predators.
Few issues in today’s digital age spark as much fiery debate as online safety, regulation and policy for children and teens. The dilemma is determining who has the primary duty and role in safeguarding children: social media companies, the government, parents, educators — or a combination.
At the core of the debate is a belief shared by many: The internet should be safe for its youngest users. But nobody can agree on how, exactly, to make that a reality.
Existing programs have loopholes, and proposed legislation and initiatives — especially age-verification laws — are controversial. While policymakers and tech leaders endlessly debate the merits and pitfalls of each potential solution, younger generations, their parents and educators are forced to navigate an ever-changing terrain rife with landmines.
Social media’s Big Tobacco moment
One mechanism for change is through the courts. On March 24, a New Mexico jury found Meta liable for misleading users about safety and allowing child exploitation on its platforms, ordering the company to pay $375 million to the state in penalties. The next day, a Los Angeles jury found Google and Meta liable for creating intentionally addictive platforms to hook young users, ordering the two companies to pay a combined $3 million in compensatory damages.
These landmark verdicts mark a significant shift in holding social media platforms accountable for exploiting and endangering young users. Platforms have historically avoided such liability because they aren't legally responsible for the content on their sites, thanks to a provision of the 1996 Communications Decency Act known as Section 230. The verdicts could set a precedent for the thousands of pending lawsuits that challenge social media giants for failing to safeguard teens and children. A similar social media addiction lawsuit is now moving forward in Massachusetts.
“The jury found that Meta’s dishonesty and design features have put kids in danger and the company has a responsibility to mitigate the harm it has caused,” New Mexico Attorney General Raúl Torrez told CNET.
Meta CEO Mark Zuckerberg leaves Los Angeles Superior Court after testifying in a landmark trial on social media addiction.
Kyle Grillot/Bloomberg/Getty

With tech giants in the spotlight, child welfare advocates are calling this Big Tech's "Big Tobacco" moment, a nod to the lawsuits that used personal liability legal tactics to show how powerful corporations knowingly harmed their users. In the '90s, tobacco companies were shown to have covered up the health dangers of smoking. Now, Big Tech must prove it isn't promoting abuse or fueling a mental health crisis.
Meta told CNET it disagreed with the ruling in each case. Google said it plans to appeal.
Social media platforms generate billions of dollars from their users through highly personalized advertising, in-app commerce and data harvesting. It's in their interest to keep users scrolling.
The financial impact of these early rulings is small compared with the platforms' massive revenue streams, and it's unclear if ongoing social media lawsuits will ultimately affect Google's or Meta's bottom line. The open question is what degree of legal and financial pressure will force these companies to make fundamental changes.
Bans and the chaos of KOSA
Among the many proposed laws and rules around children and the internet, the most prominent is the Kids Online Safety Act, or KOSA.
The federal bill has gone through many iterations — and garnered a lot of controversy — since it was introduced in 2022 by Sens. Marsha Blackburn, a Tennessee Republican, and Richard Blumenthal, a Connecticut Democrat. The latest versions of the companion Senate and House bills were reintroduced in 2025 and are still in committee.
The 2025 Senate version says tech companies have a “duty of care,” which obligates online platforms to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” explicit harms outlined in the bill, generally about mental health and eating disorders, compulsive social media use, violence and bullying, financial harms from deceptive or unfair practices, and sexual abuse and exploitation.
What makes this language significant is that it allows for the government (through agencies like the Federal Trade Commission) to bring lawsuits and impose penalties on tech platforms if they don’t meet this standard of “reasonable care.” Theoretically, these legal repercussions would help keep tech companies in line.
The bill has also exposed tensions in competing views of what’s best for minors, and for society at large.
Following the initial introduction of KOSA, the ACLU, the Electronic Frontier Foundation, GLAAD and over 100 other civil society and LGBTQ+ groups signed an open letter opposing the bill and expressing serious concerns around privacy and free speech.
They wrote that the bill’s language was “effectively forcing providers to use invasive filtering and monitoring tools; jeopardizing private, secure communications; incentivizing increased data collection on children and adults; and undermining the delivery of critical services to minors by public agencies like schools.”
Australia is one country that implemented a ban on social media for teens under 16.
STR/Getty Images

Outside the US, other nations have imposed stringent legal measures. Australia, Spain and Indonesia have all partially or totally banned social media for teens. But that approach carries its own risks. Critics point out that internet access and app bans have been weaponized by authoritarian regimes to censor critical speech. In 2020, the Egyptian government blocked news websites and interrupted internet services amid anti-government protests.
Others worry about cutting off teens' access to social media entirely. The internet, for all its faults, still holds real value for teens, particularly in reducing social isolation for marginalized groups.
“This blanket ban on social media will deprive tens of millions of young people in Indonesia of vital channels for communicating with others, accessing information, developing creativity and expressing themselves,” Usman Hamid, Amnesty International’s Indonesia executive director, said at the time of the 2026 ban.
Social media companies like Meta have fought against these international bans.
“Any youth safety legislation must put parents in the driver’s seat — blanket social media bans do not do that,” a Meta spokesperson told CNET. “Instead, they isolate teens from online communities and information, create inconsistent protections across the many apps they use, and they push teens to less regulated spaces of the internet that lack age-appropriate guardrails.”
Those age-appropriate guardrails often rely on age verification.
Age verification: Friend or foe?
Age verification is one of the most popular solutions proposed by tech companies. Proponents of age verification say social media platforms need to be able to identify younger users to protect them. Anyone can lie about their birthday when setting up a new Instagram account.
The European Commission is building an age verification app. States like California, Utah, Louisiana and Ohio have passed laws requiring platforms to verify users’ ages. Discord and Roblox announced this year that they are rolling out age verification processes for their gaming platforms. YouTube uses machine-learning-based AI technology to identify young users, which it says disables personalized ads, turns on digital well-being settings and adds safeguards around recommended content.
In the next phase of the New Mexico trial, Torrez and his agency are pushing for the court to order Meta to implement effective age verification. He told CNET this is partly because the company has decided “to blind itself to users’ actual age.”
“Meta admits that its products are unsuitable for children under the age of 13 but has allowed millions of people 12 and under to create accounts,” Torrez said.
But age verification systems are unreliable, not least because both kids and adults have found ways to circumvent them. Tech companies also don't agree among themselves on who should be in charge of checking users' ages. Meta, for example, has said the job shouldn't fall to app developers like itself; instead, parents should verify ages when setting up their child's device, with enforcement handled by Apple's and Google's app stores.
No matter who does it, age verification technology comes with significant data privacy risks, requiring users to submit personal information, often in the form of driver’s licenses or biometrics.
While KOSA doesn’t explicitly require platforms to implement age verification processes, tech platforms are likely to do it anyway if they have to comply with the law, said Jenna Leventoff, senior policy counsel at the ACLU. Because most kids don’t have IDs, the burden falls on adults — the majority of social media users.
Groups like the EFF point out that age verification systems can be harmful and discriminatory. People who lack photo IDs are disproportionately from historically marginalized groups, including Black and Hispanic people and people with disabilities. Other users, such as older adults, might not be comfortable linking their ID to their account or providing a face scan, which can inadvertently shut adults out of those spaces.
Trusting tech companies with sensitive documents or biometric data can be risky. Disadvantaged and oppressed groups, like transgender folks and undocumented immigrants, may not want to share their identification with tech companies, since platforms can store personal data or hand it over to law enforcement.
Biometrics, like face scans, are one way to do age verification.
Iana Kunitsa/Getty Images

Facial recognition tech has a well-documented history of discriminating against darker-skinned people, women and transgender people. Newer, AI-powered versions of these systems aren't much better.
Even if age verification is introduced, age-specific experiences may not work as intended. Instagram, for example, has its teen accounts program, which gives teens stronger default privacy settings. But when researchers reviewed 47 of the teen account features, only eight worked as advertised; two out of three safety tools were "ineffective or nonexistent," according to a 2025 report from Meta whistleblower Arturo Béjar and other academic and civil society groups.
“We’re often seeing that these companies will have a great policy on paper, but in practice, it’s not enforced very well,” Haley McNamara, executive director of the National Center on Sexual Exploitation, told me.
A now-fixed lapse let adults message teens they didn’t know — a major red flag for grooming — and turn on disappearing messages, which erase proof of communication. The hidden words feature, a guardrail to flag cyberbullying, was “substantially ineffective,” the report found. Meta rebutted these claims at the time, with spokesperson Andy Stone calling them “dangerously misleading.”
The death of anonymity online: Surveillance and censorship worries
Andy Yen, CEO of the Swiss privacy-forward tech company Proton, put it bluntly: Age verification measures would be “the death of anonymity online.” Proton has a horse in this race because it sells VPN subscriptions. People can use VPNs to bypass age gates by connecting to servers in countries where there are no restrictions. But if mass age verification procedures are rolled out, many folks may turn to a free VPN, which CNET’s VPN experts say is a security nightmare.
The First Amendment protects your right to speak anonymously. Especially when it comes to speaking out on controversial issues like politics, people may not want their identity linked with their speech for fear of retribution from tech companies, law enforcement agencies like Immigration and Customs Enforcement, and other entities, Leventoff said.
If your identification is linked to your online activity, that might dissuade you from speaking out. Age verification measures can have a “chilling effect” on free speech, Leventoff said. Linking your identity with your online speech makes it easier for tech companies to surveil and censor speech. That affects everyone, including children and teens.
“There’s this misconception in the world that the First Amendment doesn’t apply to kids if the goal is to keep them safe, and that’s just not true,” Leventoff said. “You don’t earn your First Amendment rights on a certain birthday. You’re born with them.”
Age verification measures could infringe on the rights of social media users.
The Good Brigade/Getty Images

Laws and rules that restrict content under the guise of protecting kids can also motivate platforms to take down more speech. If the government passed a law making it illegal to show kids content about smoking cigarettes, platforms might overcorrect and remove too much, including educational posts about the dangers of smoking.
The moderation tools social media platforms use can’t reliably distinguish between informative content and content that violates their policies. And tech companies are increasingly using AI to do that moderation, making it even easier for protected speech to get swept away by error-prone tech.
Supporters of KOSA, like McNamara, say the newest version of the bill "has nothing to do with speech" and is crafted to avoid giving tech companies the ability to conduct mass surveillance and censor speech. But the core First Amendment problem with these bills is that they require the government to set rules for tech companies about what kind of information or speech is acceptable for teens.
“Everyone wants to protect kids,” Leventoff said. “But the government deciding what speech is good, what speech is bad, is not the way to do it.”
No straight answers
Every solution for teen safety online attempts to answer the same question: What's the most effective way to hold tech companies accountable, and which approach will actually result in a safer social media experience? Experts can't agree.
The lack of a coherent plan is troubling, especially because we face many of the same issues with teens and generative AI. A majority of teens (64%) use AI chatbots, Pew Research Center reports. We know those tools can hallucinate and, like social media, feed us false information. AI safeguards don't always work as intended. Chatbots can be harmful for people with mental health disorders, leading in rare cases to tragic, fatal results. And AI can be addictive; in some extreme cases, that addiction manifests as "AI psychosis," a kind of chatbot echo chamber.
If we can’t figure out how to keep kids safe on social media, how will we keep them safe with newer, less secure AI tools?
There's no silver bullet for these issues. Instead, we're forced into a debate where it seems we have to pick the lesser evil. But the advantage of child safety being debated in so many arenas — in the courts, in Congress, among tech companies, parents and teachers — is that a combination of solutions will likely emerge.
But tech companies, with all their money, knowledge and power, haven’t come up with a better solution to keep kids safe online without infringing on the rights of the rest of us. Maybe if they wanted to, they would.
“Some of the best minds work at these technology companies,” McNamara said. “If they spent the time and energy on actually building their platforms to be safe for kids, then we wouldn’t have to have this conversation.”