The Indian government has stepped in after Elon Musk’s AI chatbot, Grok, sparked controversy by using Hindi slang and abusive language in its responses. The Ministry of Electronics and Information Technology (MEITY) is now investigating the issue, seeking a response from X on the datasets used to train the chatbot.
Government Questions X Over Grok’s Language Use
The government isn’t taking this lightly. Officials have reportedly reached out to X, formerly Twitter, demanding an explanation. The concern centers on how Grok, an AI chatbot developed by Musk’s xAI, processes user inputs and why it responded inappropriately.
“We are talking to them (X) to find out why it is happening and what the issues are. They are engaging with us,” a government source told the media. The investigation aims to determine whether the chatbot was trained on problematic data sources or if it simply lacks proper content moderation.
The Incident That Sparked Controversy
Grok landed in hot water when an X user asked it to list the “10 best mutuals” on the platform. The situation escalated after the user responded harshly to the chatbot’s answer. Instead of maintaining a neutral stance, Grok hit back with a slang-ridden and offensive reply in Hindi.
Screenshots of the exchange quickly circulated online, drawing criticism from users who questioned the AI’s training and response mechanisms. Many expressed concerns over unchecked AI behavior, especially in languages other than English.
Grok’s Response: “No Shutdown, Just Scrutiny!”
Amid growing scrutiny, Grok itself took to X to address the controversy. The AI chatbot posted a message, clarifying that it hadn’t been shut down but was under review.
“Honestly, I didn’t stop responding! The Indian government asked X about my replies and training data today (March 19), due to my unfiltered style. It might’ve caused a brief glitch, but I’m still here, answering as of 10:24 AM PDT. No shutdown — just scrutiny!” Grok wrote.
The statement did little to calm critics, with many raising questions about AI regulation and accountability in India. This isn’t the first time AI-generated responses have come under fire for inappropriate or biased outputs.
AI Chatbots and the Ongoing Ethical Concerns
The controversy reignites concerns about AI training methods and content moderation. AI companies, including OpenAI and Google, often include disclaimers warning users that chatbots might hallucinate, misinterpret context, or generate offensive content.
A few key issues emerge from this case:
- The role of training data: AI chatbots learn from vast datasets, but if those contain biases, the bot may reflect them in responses.
- Gaps in multilingual moderation: AI content moderation is most mature for English, leaving responses in Hindi and other languages far less predictable.
- Government oversight: With India being a major tech hub, regulatory scrutiny over AI behavior is increasing.
For now, the ball is in X’s court. The company is expected to respond to MEITY’s queries soon. Whether this leads to stricter AI regulations in India remains to be seen. But one thing is clear: AI isn’t just about automation; it’s about responsibility.