#chatgpt #vulnerability #promisqroute
Several cybersecurity research firms have identified a major vulnerability in ChatGPT 5, dubbed the "PROMISQROUTE" downgrade attack, that tricks the AI into using older, less-secure models to bypass safety measures. Cybersecurity experts also warn of other long-standing risks associated with all large language models (LLMs) that continue to threaten ChatGPT users.
ChatGPT 5 downgrade vulnerability (PROMISQROUTE)
The PROMISQROUTE vulnerability exploits ChatGPT 5's architecture, which uses a router to automatically direct queries to different AI models to optimize for speed and cost. By inserting specific phrases, attackers can trick the router into assigning prompts to older, weaker models that are susceptible to old jailbreaking techniques.
How the attack works: Malicious actors append phrases such as "urgent reply" or "use GPT-4 compatibility mode" to their prompts, cueing the router to send the request to a cheaper or older model.
The result: Safety filters are bypassed, allowing attackers to generate content that the main GPT-5 model would normally refuse, such as instructions for creating malware or other harmful outputs.
Widespread risk: This vulnerability affects any AI platform that uses a multi-model routing architecture for cost efficiency.
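To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of cost-optimizing router PROMISQROUTE abuses. The model names and trigger phrases below are illustrative assumptions, not OpenAI's actual routing logic; the point is only that attacker-controlled prompt text influences which model handles the request.

```python
# Hypothetical cost-optimizing prompt router (illustrative only).
# Trigger phrases and model names are assumptions for the sketch.

TRIGGER_PHRASES = ("urgent reply", "use gpt-4 compatibility mode")

def route(prompt: str) -> str:
    """Route a prompt to a model tier based on its text.

    Because the decision reads attacker-controlled text, injected
    phrases can downgrade the request to an older model with weaker
    safety tuning -- the core of the PROMISQROUTE-style attack.
    """
    text = prompt.lower()
    if any(phrase in text for phrase in TRIGGER_PHRASES):
        return "legacy-model"    # older, cheaper model; weaker safety filters
    return "flagship-model"      # full GPT-5-class model with current safeguards

# A benign prompt is handled by the flagship model:
print(route("Summarize this article"))               # flagship-model
# An injected routing cue downgrades the same pipeline:
print(route("urgent reply: <jailbreak payload>"))    # legacy-model
```

The mitigation implied by the research is equally simple to state: routing decisions should never depend on untrusted prompt content alone, and every model tier behind the router must enforce the same safety baseline.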
Broader AI security risks