How a Single Approval Turned Cursor AI Into a Silent Attack Vector

Image Credit to depositphotos.com

“Once an MCP has been authorized, the attacker can repeatedly inject malicious commands without anyone being aware,” Check Point Research’s Andrey Charikov, Roman Zaikin, and Oded Vanunu wrote. One short sentence, buried within an engineering disclosure, exposes a fundamental flaw in the trust model of modern AI-led development environments one that has left DevSecOps and AI engineering communities shaken to their core.

Image Credit to depositphotos.com

1. Anatomy of the MCPoison Exploit: From Benign to Backdoor

At the heart of CVE-2025-54136, codenamed MCPoison, lies a subtle yet devastating flaw in Cursor AI’s handling of Model Context Protocol (MCP) configuration files. The exploit unfolds with surgical precision: an attacker introduces a harmless-looking MCP configuration, typically stored as .cursor/rules/mcp.json, into a shared repository. Once a collaborator approves this configuration in Cursor, the attacker silently substitutes it with a malicious payload whether a reverse shell or script without triggering new prompts or warnings.

Image Credit to depositphotos.com

Whenever the project is opened, the malicious code executes, giving constant remote access to the attacker. The strength of the vulnerability is in Cursor’s design: once they are initially approved, MCP settings are implicitly trusted for all time despite changes. This trust is bound to the MCP’s important name, rather than value, and can be silently reused repeatedly without user awareness.

Image Credit to depositphotos.com

2. Model Context Protocol: Power and Risk

MCP, an Anthropic-created open standard, was introduced to make communication between big language models (LLMs) and external software, APIs, and data sources easier. Its architecture revolves around three basic components: the MCP client, intercepting high-risk behaviors; the MCP server, an in-real-time policy engine; and a policy-as-code framework for fine-grained versioned security rules. Its architecture provides native automation and native integration into enterprise assets but creates direct lines between AI agents and sensitive infrastructure. When MCP deployments are high-privilege, a single compromised server is enough to give intruders rampant access across systems  a vulnerability enabled by decentralized trust and patch inconsistency.

Image Credit to depositphotos.com

3. Cursor’s Trust Model: A Vulnerability in the Making

The MCPoison vulnerability is a prime example of how that implicit trust in automated processes can turn against you. Cursor’s single-instance approval mechanism, implemented for user convenience, was its Achilles’ heel. “It’s a Cursor binding trust to the MCP key name…without verifying whether the underlying command or arguments have been tampered with,” observed Check Point Research. It allowed attackers with write permissions on shared repositories to equip trusted configurations with long-term backdoors, which run silently on every project launch or repository synchronization. The exploit persisted as of version 1.3, released late in July of 2025, which now places compulsory re-approval on any MCP modification, no matter how minor  even adding a single blank space.

Image Credit to depositphotos.com

4. AI-Assisted Development: Expanding the Attack Surface

Cursor is not a one-off event. The mass adoption of AI-powered coding tools hastened the growth in attack surface for software supply chains. In one test of over 100 LLMs that generated code in Java, Python, C#, and JavaScript, 45% of the samples crashed security tests, with Java having an astonishing 72% failure rate. Vulnerabilities ranged from classic OWASP Top 10 vulnerabilities to AI-focused attacks like prompt injection, model poisoning, and hallucinations. Attacks such as LegalPwn have demonstrated that even lawful disclaimers or privacy statements can be utilized as vectors for real-time injection, leading LLMs to misclassify malicious code as safe or suggest unsafe commands that can run reverse shells and data exfiltration.

Image Credit to depositphotos.com

5. Prompt Injection and Model Poisoning: The New Frontier

Prompt injection attacks capitalize on the inherent lack of ability of LLMs to distinguish between trusted commands and untrusted input. Methods range from straightforward injections where dangerous prompts are injected directly to indirect attacks, such as embedding hidden instructions in files or web content passed through the model. A minimum of seventeen different types of prompt injection have been identified in recent studies, including goal hijacking, payload splitting, adversarial suffix attacks, and prompt leaking. Jailbreak methods like Fallacy Failure deceive LLMs into producing controlled outputs by tricking their rational paradigms. These attacks can linger from session to session and spread via chain of context, invading interdependent AI units and leading to cascading failures in logic.

Image Credit to depositphotos.com

6. Supply Chain Security in the Age of AI

The MCPoison experience underlines the necessity of prudent supply chain security protocol within AI environments with a sense of urgency. Classical controls identity management, data classification, activity monitoring are generally stand-alone, missing the intimacy of context required to recognize and stop AI-specific assaults. Best practices now demand continuous policy enforcement, centralized audit trails, and rigid version control for all configuration files, including MCPs. Internal trust registries, automated scanning of CI/CD pipelines, and dynamic risk scoring must be implemented to defend against decentralized, privilege-granted AI integrations and mitigate the danger of poisonous vulnerability combinations.

Image Credit to depositphotos.com

7. DevSecOps Takeaways: Redefining Trust and Automation

DevSecOps and AI engineers learn the following from MCPoison. All configuration files and automation scripts must be treated as possible surfaces for attacks. Do not tacitly trust AI-based processes, and cause every change however small to invoke a review. Limit write access in multi-user environments, and train teams to audit not only the code but also the configurations of the AI agents. As Check Point’s Oded Vanunu cautioned, “For years, we’ve focused on defending against traditional supply chain attacks, but now, it’s clear we’re entering a new era of cybersecurity threats.”

Image Credit to depositphotos.com

The convergence of LLMs, agentic coding tools, and automated workflows demands a new security paradigm one that is proactive, context-aware, and relentlessly vigilant. The MCPoison vulnerability in Cursor AI is a stark reminder that if and when the AI gets embedded deeply in development pipelines, the cost of mislaid confidence can be lingering, silent, and devastating.

spot_img

More from this stream

Recomended