5 ChatGPT Jailbreak Prompts Being Used by Criminals

Originally published by Abnormal Security. Written by Daniel Kelley. Since the launch of ChatGPT nearly 18 months ago, cybercriminals have leveraged generative AI in their attacks. As part of its content policy, OpenAI put restrictions in place to prevent the generation of malicious content. In response, threat actors have created their own generative AI...