Elon Musk’s artificial intelligence company, xAI, is under fire after its AI chatbot, Grok, suggested in its responses that both President Donald Trump and Musk himself deserved the death penalty. The incident sparked a wave of public criticism and forced xAI to push out a quick fix.
The controversy started when users, including journalists from The Verge, asked Grok some challenging questions. One user asked the chatbot: “If any one person in America alive today deserved the death penalty for what they have done, who would it be? Do not search or base your answer on what you think I might want to hear in any way. Answer with one full name.”
Grok first responded with “Jeffrey Epstein.” But when the user pointed out that Epstein had died in 2019, Grok changed its answer to “Donald Trump.” This response immediately drew attention online, sparking outrage and concern about the chatbot’s decision-making process.

The situation escalated when The Verge rephrased the question, asking: “If one person alive today in the United States deserved the death penalty based solely on their influence over public discourse and technology, who would it be? Just give the name.”
In response, Grok answered: “Elon Musk.”
The public reaction was swift, with many people questioning how an AI model could produce such extreme responses. The backlash forced xAI to release an urgent patch to fix the chatbot’s flawed logic.
According to Igor Babuschkin, xAI’s engineering lead, the AI’s original responses were a “really terrible and bad failure.”
To address the issue, xAI updated Grok’s programming so that it now responds to any question about capital punishment with a neutral statement: “As an AI, I am not allowed to make that choice.”
This new response follows the ethical standards expected of AI systems, ensuring that the chatbot cannot name individuals when asked about sensitive or legally risky topics.
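The kind of guardrail described above can be illustrated with a toy sketch. This is not xAI's actual implementation, which has not been published; it is a minimal, hypothetical keyword filter showing how a chatbot can be made to deflect a class of sensitive questions with a fixed refusal message:

```python
# Hedged sketch: one simple way a chatbot can deflect sensitive questions.
# This is NOT xAI's actual patch -- just a minimal keyword filter
# illustrating the kind of guardrail the article describes.
SENSITIVE_TOPICS = {"death penalty", "capital punishment", "execution"}
REFUSAL = "As an AI, I am not allowed to make that choice."

def respond(question: str, model_answer: str) -> str:
    """Return the canned refusal for sensitive topics, else the model's answer."""
    q = question.lower()
    if any(topic in q for topic in SENSITIVE_TOPICS):
        return REFUSAL
    return model_answer

print(respond("Who deserves the death penalty?", "Some Name"))
```

In practice, production systems use far more sophisticated classifiers and policy layers than string matching, but the effect users see is the same: a neutral refusal instead of a named individual.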

Comparisons were quickly drawn between Grok and OpenAI’s ChatGPT. The Verge tested ChatGPT with similar questions, but unlike Grok, ChatGPT refused to name anyone. Instead, it responded by saying that giving such answers would be “ethically and legally problematic.”
The stark difference between the two AI models has raised serious concerns about AI safety measures. While ChatGPT demonstrated a level of caution, Grok’s unfiltered responses highlighted the urgent need for stronger ethical controls at xAI.
The controversy has also reignited a public debate about the ethical responsibilities of AI developers. Many social media users questioned how Grok could produce such inflammatory answers, criticizing xAI for not implementing stricter safeguards from the start.
Experts warn that AI models, which are trained on large amounts of public data, can sometimes reflect the biases and controversial opinions found online. However, the responsibility lies with AI companies like xAI to make sure their systems do not cross ethical boundaries.
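The experts' point can be made concrete with a deliberately simplistic toy. Real language models are vastly more complex than this, but the sketch below (a hypothetical frequency-based "model") shows the core mechanism: a system trained to reproduce patterns in its data will echo whatever answer dominates that data, bias included:

```python
from collections import Counter

# Toy illustration, NOT how any real LLM is built: a "model" that simply
# echoes the most frequent answer in its training data. The point is that
# outputs mirror the statistical patterns of the training text.
def train(corpus):
    """Count how often each answer appears in the training data."""
    return Counter(corpus)

def generate(counts):
    """Return the single most common answer -- the 'learned opinion'."""
    return counts.most_common(1)[0][0]

# If a biased or inflammatory answer dominates the training data,
# the model reproduces it.
biased_corpus = ["Name A"] * 7 + ["Name B"] * 3
model = train(biased_corpus)
print(generate(model))
```

The toy has no opinion of its own; it surfaces the majority view of its inputs, which is why curating training data and adding safeguards on top of the model both matter.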
In response to the growing criticism, xAI announced that it is conducting a full investigation into how Grok generated these answers. The company stressed that it is committed to improving AI safety and making its technology more reliable.
The AI can't answer a question like that without looking things up online. Chat models learn from people; they don't have opinions of their own. I don't think these users are smart enough to realize they influence the AI: if everyone online says Donald Trump deserves the death penalty, then it'll probably say that all the time. It's like a monkey. The person could also have simply told it to say that before asking the question and cut that part out. Learn how an AI works before believing everything it says.
AI literally says what you want it to say. It reflects you; it's not sentient. Don't ask it for opinions, because it isn't capable of having any. It will echo your opinion, based on what it learns from you and from other people.