A critical security vulnerability discovered in Microsoft 365 Copilot highlights that there is a risk associated with AI-powered business tools that we must continue to learn about. The flaw, designated CVE-2025-32711 and dubbed “EchoLeak” by security researchers, represents the first known zero-click attack targeting an AI agent.
What is EchoLeak?
EchoLeak is a sophisticated vulnerability that allows attackers to extract sensitive data from Microsoft 365 Copilot without requiring any user interaction. It works by exploiting how the AI assistant processes and responds to information, enabling unauthorised access to organisational data through a specially crafted email.
The vulnerability was discovered in January 2025 by security researchers at Aim Security and has since been classified as critical by Microsoft. Microsoft has confirmed that the vulnerability has not been exploited in the wild, but has been resolved through server-side fixes.
The Attack Method
The EchoLeak attack begins with a deceptive email containing hidden prompt injection commands. These commands are crafted to appear as normal human language, allowing them to bypass Microsoft’s existing security protections (XPIA). When a user later interacts with Copilot, the AI system’s retrieval-augmented generation (RAG) engine may surface the malicious email within its processing context.
This is yet another example of email being the primary attack vector, highlighting the continued importance of vigilance against phishing attempts. If successful, attackers could extract information from connected Microsoft 365 services, including Outlook email, OneDrive storage, Office files, SharePoint sites, and Microsoft Teams chat history.
Organisational Considerations
A successful attack could lead to a feared outcome: large language models inadvertently leaking internal data through our use of AI. This obviously heightens our concerns about the implications of integrating AI into our business, with key concerns being data exposure, challenges in detecting the exfiltration and the dangers of being compromised without any awareness or direction on the part of the user.
Microsoft has responded by assigning CVE-2025-32711 a critical CVSS score of 9.3 and has implemented server-side fixes to address the vulnerability. Users of Microsoft 365 Copilot do not need to take any additional action as the patches have been applied automatically, however this should encourage continued discussion around the use of AI in our lives and workplace.
Protecting Against Similar Threats
While Microsoft has resolved this specific vulnerability, organisations should consider several protective measures:
- Implement robust email filtering and security protocols.
- Ensure clear policies to govern the use of AI tools within the organisation.
- Conduct regular security assessments of AI systems.
- Maximise staff awareness through training, leadership and an overall positive culture in the organisation, encouraging people to talk about issues they see.
- Prioritise automated patching to keep systems up to date.
References