
'I'm Alarmed': Senator Opens Inquiry Into the Ways Tech Companies Report Suspected Child Abuse


Reports filed by tech companies lack essential information for law enforcement and prosecution, advocates say.


Katelyn Chedraoui, Writer I

Katelyn is a writer with CNET covering artificial intelligence, including chatbots, image and video generators. Her work explores how new AI technology is infiltrating our lives, shaping the content we consume on social media and affecting the people behind the screens. She graduated from the University of North Carolina at Chapel Hill with a degree in media and journalism. You can reach her at [email protected].


In 2025, Amazon’s AI services division filed 1.1 million reports of suspected online child exploitation with an advocacy group. But because those reports lacked essential information, law enforcement was unable to act on a single one. A new Senate inquiry aims to ensure that never happens again.

Sen. Chuck Grassley, an Iowa Republican who chairs the Senate Judiciary Committee, this week opened an inquiry into eight big tech companies over their handling of mandatory reporting of online child exploitation. It’s the latest step in a growing movement questioning whether tech companies can be trusted to keep their youngest users safe online.

Electronic service providers are required by law to report incidents of child sexual exploitation to the CyberTipline, which is run by the National Center for Missing and Exploited Children. In 2025, more than 17 million reports of suspected online child sexual exploitation were filed. But many of these reports lack the information needed to prompt action in the real world.

“I’m alarmed by what I’ve read,” Grassley said. “Based on information provided to my office, I am concerned that some companies have not provided NCMEC and law enforcement with sufficient data needed to protect kids and prosecute suspected predators.”


Grassley sent requests for more information to several major tech companies: Meta, TikTok, Roblox, Snap, Amazon AI Services, xAI, Grindr and Discord. Together, these eight companies account for 81% of all child exploitation reports submitted to NCMEC. Notably absent from the inquiry was Google, which owns YouTube.

A Meta spokesperson told CNET the company “works tirelessly” to protect kids from this “horrific crime,” stating: “We’re committed to constant improvement and appreciate feedback, which has already led us to make some improvements, as NCMEC has acknowledged. We will continue making refinements to improve our reporting process.” 

Grindr, Discord and Roblox made similar comments, saying they plan to work with the Senate and NCMEC on these issues. Grindr added that its dating service is only for adults 18 and older. The other tech companies did not immediately respond to requests for comment.

The Iowa Republican’s inquiry follows reports from NCMEC in 2025 that tech companies were failing to provide essential location data in their reports and failing to disclose the use of child sexual abuse material in AI training data. This is especially concerning given previous incidents of AI being used to create nonconsensual intimate imagery, including child sexual abuse material.

Online child exploitation is a growing problem. In 2025, Meta alone filed nearly 11 million reports, 1.2 million of which dealt with suspected child trafficking. Meta owns the popular platforms Facebook, Instagram and WhatsApp. NCMEC said in 2025 that Meta and xAI had improved their reporting, though it still fell short.

“Many ESPs regularly tout the number of reports they submit to the CyberTipline, but fail to disclose that millions of reports lack basic information,” NCMEC wrote to Grassley in 2025. “This leaves children unprotected online, subjects survivors to revictimization, enables sexual offenders to remain freely online and wastes valuable and limited law enforcement resources.”

There has been movement in other branches of government to hold tech companies accountable for child safety. Meta was recently found liable by a New Mexico jury for misleading users about the safety of its platforms and failing to prevent child exploitation. The company was ordered to pay $375 million in damages. One day later, Meta and Google were found liable by a California jury for creating social media platforms that are addictive to children.

On Tuesday, the first person was convicted under the Take It Down Act, the new US law targeting AI deepfakes, for creating AI-generated child sexual abuse material.
