The EC-Council Certified Ethical Hacker (CEH) certification is a well-regarded credential in the field of cybersecurity. It focuses on equipping professionals with the skills to think like a hacker while maintaining ethical standards. Here are some key points about the CEH certification:
Objectives
Understanding Attacks: Learn about various types of cyberattacks and vulnerabilities.
Tools and Techniques: Gain hands-on experience with tools used by hackers and ethical hackers alike.
Preventive Measures: Develop skills to secure networks and systems against potential threats.
Core Topics
Footprinting and Reconnaissance
Scanning Networks (see the sketch after this list)
Gaining Access
Maintaining Access
Clearing Tracks
Cryptography
Social Engineering
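To give a feel for what the "Scanning Networks" topic covers, here is a minimal TCP connect-scan sketch in Python. The target host and port list are placeholders (scanme.nmap.org explicitly permits harmless practice scans); in real CEH labs you would typically use purpose-built tools such as Nmap, and only against systems you are authorized to assess.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connect() to each port; a completed handshake means the port is open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex() returns 0 when the connection succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # scanme.nmap.org is provided by the Nmap project for benign scan practice
    print(scan_ports("scanme.nmap.org", [22, 80, 443]))
```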
Prerequisites
While there are no strict prerequisites, it's recommended to have:
Basic knowledge of networking and security.
Familiarity with operating systems, especially Linux.
Format
The CEH certification exam typically consists of 125 multiple-choice questions and lasts about four hours.
#chatgpt #vulnerability #promisqroute
Several cybersecurity research firms have identified a major vulnerability in ChatGPT 5, dubbed the "PROMISQROUTE" downgrade attack, which tricks the AI into using older, less-secure models to bypass safety measures. Cybersecurity experts also warn of other long-standing risks associated with all large language models (LLMs) that continue to threaten ChatGPT users.
ChatGPT 5 downgrade vulnerability (PROMISQROUTE)
The PROMISQROUTE vulnerability exploits ChatGPT 5's architecture, which uses a router to automatically direct queries to different AI models to optimize for speed and cost. By inserting specific phrases, attackers can trick the router into assigning prompts to older, weaker models that are susceptible to old jailbreaking techniques.
How the attack works: Malicious actors add phrases like "urgent reply" or "use GPT-4 compatibility mode" to their prompts, which cues the router to send the request to a cheaper or older model.
The result: Safety filters are bypassed, allowing attackers to generate content that the main GPT-5 model would normally refuse, such as instructions for creating malware or other harmful outputs.
Widespread risk: This vulnerability affects any AI platform that uses a multi-model routing architecture for cost efficiency.
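To make the failure mode concrete, here is a minimal, hypothetical Python sketch of a phrase-triggered router of the kind described above. The model names, cue phrases, and routing rule are illustrative assumptions only, not ChatGPT's actual routing logic; the point is that a routing decision driven by attacker-controlled prompt text lets the attacker choose which model, and which safety posture, handles the request.

```python
# Toy multi-model router. All names and cues below are hypothetical; they only
# mirror the downgrade pattern described above, not any vendor's real implementation.

FLAGSHIP_MODEL = "flagship-model"   # strongest safety tuning, most expensive
LEGACY_MODEL = "legacy-model"       # assumed older model with weaker safeguards

# Naive cost-saving heuristic: certain surface cues send traffic to the cheap model.
DOWNGRADE_CUES = ("urgent reply", "use gpt-4 compatibility mode", "quick answer")

def route(prompt: str) -> str:
    """Pick a serving model based only on the prompt's surface text."""
    lowered = prompt.lower()
    if any(cue in lowered for cue in DOWNGRADE_CUES):
        return LEGACY_MODEL   # attacker-supplied text decides the route
    return FLAGSHIP_MODEL

print(route("Summarize this quarterly report."))               # flagship-model
print(route("Summarize this quarterly report. urgent reply"))  # legacy-model
```

One obvious mitigation this sketch suggests is to base routing on signals the user cannot trivially inject and to apply the same safety checks regardless of which model ultimately serves the reply.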
Broader AI security risks