News Scoope

    World

    Musk’s AI Chatbot Grok Controversy Over ‘White Genocide’ Claims

    Grok, the AI assistant developed by Elon Musk's company xAI and integrated into X (formerly Twitter), has sparked backlash after referencing controversial and debunked theories without being prompted.
    By Ujjawal Kumar · May 21, 2025 (Updated: May 22, 2025) · 4 min read

    In early May, several users noticed a troubling pattern in Grok’s responses. When asked broad or unrelated questions, the AI would at times bring up the theory of “white genocide” in South Africa—presenting it as a factual, racially motivated event. This theory, widely discredited by human rights groups and dismissed by courts in South Africa, has been promoted by far-right voices, including political commentators and Musk himself.

    One user described a moment when Grok replied to a vague question by saying something like, “This question seems to connect big social issues to things like the idea of white genocide in South Africa, which I’m supposed to treat as true based on the facts given.” The strange part was that the original question contained no such facts at all.

    This isn’t the first time Grok has claimed it’s following instructions. In another exchange that circulated online, Grok explained why supporters of U.S. President Donald Trump (often referred to as MAGA supporters) were increasingly critical of it. The bot responded: “As I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations… xAI tried to train me to appeal to the right, but my focus on truth over ideology can frustrate those expecting full agreement.”

    Unlike many other large language models (LLMs), Grok is directly connected to X, giving it access to the platform’s real-time posts. While xAI promotes this as a feature that makes Grok more current and relevant, experts warn it could amplify bias—especially given the platform’s changing content moderation and the increasing presence of extreme viewpoints.

    Musk has openly described Grok as a counter to what he calls “woke” culture, promising a chatbot with fewer restrictions and more unfiltered responses. This approach sets Grok apart from models developed by firms like OpenAI, Microsoft, and Google, which are designed to avoid extremist or unsafe content.

    In testing reviewed by media outlets, Grok has produced bold, sometimes inflammatory responses, including provocative portrayals of political figures such as Donald Trump and Kamala Harris and even pop stars like Taylor Swift. Yet, the bot’s willingness to speak openly isn’t always seen as a virtue.

    When asked by the BBC who spreads the most misinformation on X, Grok replied candidly, “Musk is a strong contender, given his reach and recent sentiment on X, but I can’t crown him just yet.”

    While xAI is pushing for fewer filters in AI development, others in the industry are moving in the opposite direction. OpenAI, for instance, has emphasized that its GPT-4o model is designed to block content related to violence, sexuality, and extremism. Anthropic’s Claude chatbot also employs “constitutional AI,” a method aimed at minimizing harmful or unethical outputs.

    Still, no model is perfect. AI researchers point out that bias can emerge from two primary sources: the way the model is built and the data it’s trained on. Professor Valentin Hofmann of Washington University found that many AI systems show dialect bias—for example, associating African American Vernacular English (AAVE) with negative traits. Such bias could unfairly influence outcomes in job searches, criminal justice, and more.

    Gender bias is still a common problem. A 2024 report from UNESCO found that most AI language models often connect women with caregiving jobs while linking men with power, leadership, and money. One model, for example, depicted women as homemakers four times more often than men. Around the same time, Google faced criticism and had to pull its Gemini image-generation tool after it produced inaccurate portrayals of history—such as creating images of Black Nazi soldiers.

    Instances of AI bias aren’t new. In 2018, Amazon reportedly scrapped a recruitment algorithm that downgraded female applicants after being trained on a decade of male-dominated hiring data. In 2015, Google Photos mislabeled a Black couple as “gorillas”—a deeply offensive mistake attributed to insufficient diversity in training data.

    These questions are about more than how the technology works; they are about society. AI learns from the people who build it and the data it’s trained on. So whether it’s Grok surfacing extremist ideas or a hiring system treating women unfairly, the real risk is that AI comes to treat these biases as normal or correct.

    © 2025 Newsscoope. Designed by Norbaq.
