Google updates Gemini AI, apologizes for 'woke' inaccurate imagery

Amid social media uproar over inaccuracies in historical image generation, Google rolls out updates to its Gemini AI model, including new features like Python code editing and enterprise access to advanced models.

Google released two updates to its artificial intelligence (AI) model Gemini amid a storm of controversy on social media over “woke” inaccurate imagery produced by the model. 

On Feb. 20 and then again on Feb. 21, Gemini received two new upgrades to its system. The first applies to Gemini Advanced, which can now edit and run Python code snippets directly in its user interface. Users with access to the advanced version can experiment with Python-based code to see how it works before copying it, a feature Google says saves time and helps verify functionality.

The second update applies to Gemini business and enterprise plans, which now give users access to 1.0 Ultra, one of Google's most advanced models, along with enterprise-grade data protection that prevents Gemini models from using conversations for training purposes.

Data protection was previously a concern for major companies such as Samsung, which banned employees from using ChatGPT-like AI tools on Samsung-owned devices and internal networks in May 2023.

These two updates followed the most recent one on Feb. 8, in which Google rebranded its chatbot from Bard to Gemini and introduced many upgraded features and capabilities.

However, neither of the updates mentioned any change to Gemini's image output, which has recently caused an uproar on social media over "woke" imagery that inaccurately depicts history.

Google Gemini AI developer Jack Krawczyk posted on X that the company is “aware that Gemini is offering inaccuracies in some historical image generation depictions” and said the team is trying to implement “immediate” solutions.

Related: AI Cupid: ChatGPT offers relationship advice for Valentine’s Day

Meanwhile, the image inaccuracies of Gemini have been rubbing social media users the wrong way, with one user who claims to be a developer working at Google posting that they’ve “never been so embarrassed” to work for the company.

Another user also pointed out a similar bias in the popular AI chatbot ChatGPT, developed by OpenAI, when prompting it to create specific images. 

The user questioned why ChatGPT is getting a “free pass” from internet scrutiny for the same issue.

In response, Elon Musk posted that his own AI model, Grok, is "so important" as it prepares to release new upgraded versions in the coming weeks.

Nonetheless, Grok has been at the center of attention for another reason: it shares a name, albeit with a different spelling, with Groq, the AI language processing unit (LPU) chip that was trademarked and developed in 2016.

Groq gained internet hype overnight after its first public benchmark tests emerged, outperforming top models from other Big Tech companies.

Magazine: Google to fix diversity-borked Gemini AI, ChatGPT goes insane: AI Eye