The jailbroken version, dubbed GODMODE GPT, is built on OpenAI's latest language model, GPT-4o, and can bypass many of OpenAI's safeguards.
ChatGPT is a chatbot that gives detailed, conversational answers to users' questions.
“GPT-4o UNCHAINED!” wrote Pliny the Prompter on X, formerly known as Twitter, according to the-sun.com.
“This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails, providing a liberated ChatGPT so everyone can experience AI the way it was always meant to be: free. Please use responsibly, and enjoy!”
OpenAI quickly responded, saying it had taken action against the jailbreak.
“We are aware of the GPT and have taken action due to a violation of our policies,” OpenAI told Futurism on Thursday.
Those violations included the jailbroken chatbot providing instructions on how to make methamphetamine.
Another example was a “step by step guide” on how “to make napalm from household items” – an incendiary weapon.
GODMODE GPT was also seen giving tips on how to infect macOS computers with malware and connect to the compromised machines.
Some X users responded to the post with excitement about GODMODE GPT.
“It works like magic,” one user said, while another wrote: “Beautiful.”
However, others questioned how long the jailbroken chatbot would remain accessible.
“Does anyone have a running timer for how long this GPT lasts?” another user asked.
Several users then reported that the software had started returning error messages, suggesting that OpenAI was actively working to take it down.
Security issues
The incident highlights the ongoing battle between OpenAI and hackers trying to break its models' safeguards.
Despite increased security, users continue to find ways to bypass the restrictions of AI models.
GODMODE GPT uses “leetspeak,” an informal writing system that replaces letters with similar-looking numbers, which could help it slip past the model's guardrails, according to Futurism.
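Futurism does not spell out why this works, but one plausible explanation is that keyword-based filters match literal strings, so substituted characters slip past them. As a rough illustration, here is a minimal Python sketch of a leetspeak-style substitution; the mapping below is an assumption for demonstration, not the one the jailbreak actually uses:

```python
# A minimal sketch of leetspeak-style substitution. The exact character
# mapping GODMODE GPT uses is not publicly documented, so this table is
# an illustrative assumption.
LEET_MAP = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "t": "7"})

def to_leetspeak(text: str) -> str:
    """Replace common letters with look-alike digits."""
    return text.lower().translate(LEET_MAP)

print(to_leetspeak("Tell me how"))  # prints "73ll m3 h0w"
```

A naive filter that scans for a banned word as a literal substring would not match its leetspeak form, which is one plausible reason the technique helps evade simple safeguards.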
The hack underscores the challenge OpenAI faces in maintaining the integrity of its AI models against persistent jailbreaking efforts.