
ZDNET’s key takeaways
- Perplexity for Public Safety Organizations launched in January.
- Police can use it, for example, to analyze crime scene photos.
- Other AI developers could soon follow suit.
Artificial intelligence startup Perplexity has launched a new initiative aimed at getting its technology into the hands of public safety professionals, including police officers. Unveiled last week, Perplexity for Public Safety Organizations offers one free year of the company’s Enterprise Pro tier for up to 200 seats, with discount options for larger agencies.
As anyone who routinely uses LLM-based tools like Perplexity and ChatGPT knows, these tools are fallible, to say the least: they are prone to hallucination and inaccuracy, they regurgitate cultural biases that have seeped into their training data, and, as a general rule, they are designed to optimize for engagement rather than human well-being. Protocols around how to use them safely are very much a work in progress.
The upshot is that in a sensitive field like law enforcement, even small errors can do real damage.
Mundane use cases, big consequences?
In its announcement, the company said the program is intended to help officers make more informed decisions in real time, and to automate routine tasks like generating descriptions of crime scene photos, analyzing news stories and body camera transcripts, and turning collections of investigators’ notes into polished, structured reports.
Seems innocuous enough. But to Katie Kinsey, chief of staff and AI policy counsel at the Policing Project, that apparent innocuousness is precisely what makes these use cases a red flag.
Also: What the nation’s strongest AI regulations change in 2026, according to legal experts
“What can be pernicious about these kinds of use cases is they can be presented as administrative or menial,” she said, adding that these everyday tasks can have big downstream effects on people’s lives. “There’s a lot of important decision-making, leading to charges and indictments, that emanates from the kinds of use cases they’re talking about here.”
Using a chatbot to synthesize your school notes into a personalized study guide is unlikely to result in catastrophe. But in law enforcement, where the stakes are much higher, little errors can have big consequences.
An AI tool might hallucinate in an obvious way, spinning a yarn about an officer shape-shifting into a frog, in which case human users can simply disregard its output. The more dangerous scenario is one in which an AI system subtly alters the truth in a way that is difficult to detect, like, hypothetically, hallucinating minor details in a police report that later contribute to a wrongful conviction. There have already been multiple cases of lawyers whose AI tools fabricated case precedents and other details in drafted court filings; again, minor details, potentially disastrous consequences.
Also: Perplexity’s new AI tool lets you search patents with natural language – and it’s free
In an interview with ZDNET, a Perplexity spokesperson said the company is well-positioned to equip public safety personnel with AI tools, since it’s made accuracy a key part of its product and business model; rather than training its own AI models from scratch, Perplexity takes those of other developers, like OpenAI and Anthropic, and post-trains them to minimize hallucination.
Still, it has its shortcomings. A recent study conducted by the European Broadcasting Union and the BBC found that when asked about recent news stories, Perplexity, along with three other leading chatbots, frequently generated responses that “had at least one significant issue” related to accuracy, sourcing, or other criteria.
Assigning responsibility
All of this raises the question: Who should ultimately be responsible for ensuring AI is used responsibly within law enforcement?
Until the arrival of hallucination-free chatbots (and it's unclear whether such a thing is even possible), these tools should be used at one's own risk. That applies just as much to police officers who opt to use Perplexity or other AI tools, even for seemingly mundane tasks, according to Andrew Ferguson, a professor at George Washington University Law School.
“When you are playing with liberty and constitutional rights, you need to make sure that safeguards are in place for accuracy … without laws or rules to protect against mistakes, it is incumbent on the police to make sure they use the technology wisely,” he told ZDNET.
Kinsey, on the other hand, believes the onus of responsibility should fall on policymakers. “The problem,” she said, “is there’s no hard law that’s setting out what these requirements should be.”
Looking ahead
While Perplexity said this is the first program of its kind, it almost certainly won’t be the last.
AI developers are facing huge pressure to expand their user bases as widely as possible, and police departments have a long history of being early adopters of new technologies. So-called "predictive policing" algorithms have been in use since the early 2000s, and have been criticized for allegedly perpetuating historical biases against marginalized groups and for a lack of transparency. More recently, some law enforcement agencies have begun using AI for facial recognition and lie detection.
“Law enforcement is a good client to have, because they’re not going anywhere,” said Kinsey. “We see that relationship between private industry and law enforcement all the time.”
The intense competition of the AI race could result in other companies following Perplexity’s lead by launching initiatives aimed at police officers and other public safety officials.