counter easy hit

Your Copilot data can be hijacked with a single click – here's how

Your Copilot data can be hijacked with a single click – here's how
0
Reprompt: This one-click Copilot attack steals your data
Thomas Trutschel / Contributor via Photothek / Getty Images

Follow ZDNET: Add us as a preferred source on Google.


ZDNET’s key takeaways

  • Dubbed “Reprompt,” the attack used a URL parameter to steal user data.
  • A single click was enough to trigger the entire attack chain.
  • Attackers could pull sensitive Copilot data, even after the window closed.

Researchers have revealed a new attack that requires only one click to execute, bypassing Microsoft Copilot security controls and enabling the theft of user data.

Also: How to remove Copilot AI from Windows 11 today

Meet Reprompt

On Wednesday, Varonis Threat Labs published new research documenting Reprompt, a new attack method that impacts Microsoft’s Copilot AI assistant.

Reprompt impacts Microsoft Copilot Personal and, according to the team, “gives threat actors an invisible entry point to perform a data exfiltration chain that bypasses enterprise security controls entirely and accesses sensitive data without detection — all from one click.”

Also: AI PCs aren’t selling, and Microsoft’s PC partners are scrambling

No user interaction with Copilot or plugins is required for this attack to trigger. Instead, victims must click a link. After this single click, Reprompt can circumvent security controls by abusing the ‘q’ URL parameter to feed a prompt and malicious actions through to Copilot, potentially allowing an attacker to ask for data previously submitted by the user — including personally identifiable information (PII).

“The attacker maintains control even when the Copilot chat is closed, allowing the victim’s session to be silently exfiltrated with no interaction beyond that first click,” the researchers say.

How does Reprompt work?

Reprompt chains three techniques together:

  • Parameter 2 Prompt (P2P injection): By exploiting the ‘q’ URL parameter, an attacker can fill a prompt from a URL and inject crafted, malicious instructions that force Copilot to perform actions, including data exfiltration.
  • Double-request: While Copilot has safeguards that prevent direct data exfiltration or leaks, the team found that repeating a request for an action twice will force it to be performed.
  • Chain-request: Once the initial prompt (repeated twice) is executed, the Reprompt attack chain server issues follow-up instructions and requests, such as demands for additional information.

According to Varonis, this method is difficult to detect because user- and client-side monitoring tools can’t see it, and it bypasses built-in security mechanisms while disguising the data being exfiltrated.

“Copilot leaks the data little by little, allowing the threat to use each answer to generate the next malicious instruction,” the team added.

A proof-of-concept (PoC) video demonstration is available.

Microsoft’s response

Reprompt was quietly disclosed to Microsoft on Aug. 31, 2025. Microsoft patched the vulnerability prior to public disclosure and has confirmed that enterprise users of Microsoft 365 Copilot are not affected.

Also: Want Microsoft 365? Just don’t choose Premium – here’s why

ZDNET has reached out to Microsoft for comment, and it will update if it hears back.

How to stay safe

AI assistants — and browsers — are relatively new technologies, so hardly a week goes by without a security issue, design flaw, or vulnerability being discovered.

Phishing is one of the most common vectors for cyberattacks, and this particular attack required a user to click a malicious link. So, your first line of defense is to be cautious when it comes to links, especially if you do not trust the source.

Also: Gemini vs. Copilot: I compared the AI tools on 7 everyday tasks, and there’s a clear winner

As with any digital service, you should be careful about sharing sensitive or personal information. For AI assistants like Copilot, you should also check for any unusual behavior, such as suspicious data requests or strange prompts that may appear.

Varonis recommends that AI vendors and users remember that trust in new technologies can be exploited and says that “Reprompt represents a broader class of critical AI assistant vulnerabilities driven by external input.”

As such, the team suggests that URL and external inputs should be treated as untrusted, and so validation and safety controls should be implemented throughout the full process chain. In addition, safeguards should be imposed that reduce the risk of prompt chaining and repeated actions, and this shouldn’t stop at just the initial prompt.

Featured

Comments are closed, but trackbacks and pingbacks are open.