
Microsoft vulnerability that allowed for potential data theft has been fixed
Microsoft was alerted to a vulnerability in early 2024 that could potentially lead to the theft of sensitive user information. This vulnerability, which has since been fixed, impacted Microsoft 365 Copilot and introduced the possibility of ASCII smuggling.
As a result of the vulnerability, a series of attack methods could be strung together to create a reliable exploit chain:
- Inject a prompt through malicious hidden content in a shared document in a chat,
- Utilize a prompt injection payload to instruct Microsoft 365 Copilot to search for more emails and documents,
- Employ ASCII smuggling to trick the target into clicking on a link, transferring sensitive data to a third-party server.
In the event of such an attack, sensitive information from emails (including multi-factor authentication codes) could be sent to a server under the control of the malicious party.
Insights from Security Leaders
Stephen Kowski, Field CTO at SlashNext Email Security+:
“The ASCII smuggling technique demonstrates the evolving complexity of AI-driven attacks, where seemingly harmless content can mask malicious payloads capable of extracting sensitive data. To safeguard against these threats, organizations should deploy advanced threat detection systems that can analyze content across various communication channels, such as email, chat, and collaboration platforms. These solutions should leverage AI and machine learning to detect subtle anomalies and hidden malicious patterns that traditional security measures might overlook. Furthermore, ongoing employee education on emerging threats and the enforcement of strict access controls and data loss prevention measures are essential in mitigating the risks posed by these innovative attack vectors.”
Jason Soroko, Senior Fellow at Sectigo:
“The ASCII smuggling vulnerability in Microsoft 365 Copilot represents a unique flaw that enables attackers to conceal malicious code within seemingly benign text using special Unicode characters. These characters resemble ASCII but remain invisible in the user interface, allowing the attacker to embed hidden data within clickable hyperlinks. When users interact with these links, the concealed data can be forwarded to a third-party server, potentially compromising sensitive information like MFA one-time password codes.
“To minimize this risk, users should make sure their Microsoft 365 software is up to date, as Microsoft has addressed the vulnerability. Additionally, exercising caution when interacting with links in documents and emails, particularly those from unknown or untrusted sources, is crucial. Regular monitoring of AI tools like Copilot for unusual behavior is also vital for promptly identifying and addressing any suspicious activity.
“One aspect that warrants more attention is the practice of prompt injections. A prompt injection is a form of attack where an attacker manipulates an AI system, such as a large language model, by crafting specific inputs (or “prompts”) that prompt the AI to execute unintended actions. In the context of AI-driven tools like Microsoft 365 Copilot, a prompt injection can involve embedding malicious instructions within a document or message. When the AI processes these inputs, it mistakenly interprets them as legitimate commands, resulting in actions like retrieving sensitive data, altering responses, or even extracting data.
“The crux of a prompt injection attack lies in exploiting the AI’s ability to analyze and act on natural language inputs, causing it to carry out operations that were not intended by the user or system administrator. This can be particularly hazardous when the AI can access sensitive data or controls within a system.”