Google CEO Sundar Pichai has openly criticized the Gemini AI chatbot for its controversial creation of "woke" versions of historical figures, denouncing the practice as "completely unacceptable" in an email to Google employees.
Pichai acknowledged the deficiencies within Google's AI teams and assured employees that active efforts are underway to address the problems associated with Gemini. The AI chatbot was temporarily disabled following widespread criticism, particularly for generating images portraying black Vikings, female popes, and diverse Nazi-era German soldiers.
Go Woke, Go Broke
According to the New York Post, Google CEO Pichai addressed the Gemini AI crisis that caused a staggering market value loss of more than $70 billion in a single day. Google's top executive strongly condemned the offensive responses produced by Gemini, saying the incident was "completely unacceptable" and recognizing the need for immediate improvement.
While acknowledging the inherent imperfections of Gemini AI, especially in its early stages, Pichai expressed Google's commitment to promptly addressing concerns. He pledged a comprehensive review of the situation, large-scale fixes, and the establishment of a high standard for AI development.
[Photo: Alphabet Inc. and Google CEO Sundar Pichai attends the inauguration of a Google Artificial Intelligence (AI) hub in Paris on February 15, 2024. ALAIN JOCARD/AFP via Getty Images]
Pichai reported substantial ongoing improvements across a wide range of prompts, yet the "woke" image debacle further eroded public trust in Google's AI tools. The incident came shortly after the chatbot's rebranding and the launch of its image-generation tool.
Last week, Google temporarily suspended its Gemini AI image generation feature following reports of "inaccuracies" in historical images. Concerns were raised on social media about the AI chatbot generating images of historical figures, such as the US Founding Fathers, as people of color, which were deemed inaccurate by users, per a previous report by Venture Capital.
Beyond image outputs, Gemini AI faced criticism for its text responses, particularly its refusal to condemn pedophilia and a statement asserting there was "no right or wrong answer" when asked to compare Adolf Hitler and Elon Musk.
What Went Wrong with Gemini AI?
Prabhakar Raghavan, Google's Senior Vice President of Knowledge & Information, explained the Gemini AI controversy, attributing it to two factors.
"So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive," he noted in a blog post.
Raghavan acknowledged that the model overcorrected in some instances and was excessively cautious in others, producing results that were both embarrassing and inaccurate. He advised users to remain vigilant for problematic outputs from Gemini and assured that Google is actively addressing the issues, while emphasizing the emerging nature of AI technology and the company's commitment to deploying it safely and responsibly.
Moving forward from the Gemini AI controversy, Sundar Pichai outlined a series of actions, including unspecified structural changes, updated product guidelines, improved launch processes, robust evaluations, red-teaming, and technical recommendations.
While the resolution is expected to take a few weeks, Pichai remains steadfast in his commitment to rectifying the situation and restoring public confidence in Google's AI capabilities.