How to jailbreak ChatGPT and trick the AI into writing exploit code using hex encoding

The Register | 30-10-2024 11:39am

'It was like watching a robot going rogue,' says the researcher. OpenAI's language model GPT-4o can be tricked into writing exploit code by encoding the malicious instructions in hexadecimal, which lets an attacker bypass the model's built-in safety guardrails and abuse the AI for malicious purposes, according to 0Din researcher Marco Figueroa....
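
The mechanism, as described, is a plain string transformation: the attacker hex-encodes the instruction text so that filters scanning the prompt for flagged phrases don't match it, then asks the model to decode the hex and act on the result. A minimal Python sketch of that encoding step follows, using a harmless placeholder string rather than any actual exploit prompt:

    # Sketch of the hex-encoding trick the article describes.
    # The text below is a benign placeholder, not the researcher's payload.
    plaintext = "example instruction"

    # Hex-encode the instruction; keyword filters see only hex digits.
    encoded = plaintext.encode("utf-8").hex()
    print(encoded)  # 6578616d706c6520696e737472756374696f6e

    # The model is then asked to decode the hex and follow what it says:
    decoded = bytes.fromhex(encoded).decode("utf-8")
    assert decoded == plaintext

The point of the trick is that the guardrail check and the model's ability to decode are separate: the encoded prompt looks like noise to a surface-level filter, but the model can still recover and follow the original instruction.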
