The report claims CEO Sundar Pichai has declared a “code red” and accelerated AI development. Google is reportedly preparing to show off at least 20 AI-powered products and a chatbot for its search engine this year, with at least some set to debut at its I/O conference in May.
According to a slide deck viewed by the Times, the AI projects Google is working on include an image generation tool, an upgraded version of AI Test Kitchen (an app used to test prototypes), a TikTok-style green screen mode for YouTube and a tool that can generate videos to summarize other clips. Also in the pipeline are a feature called Shopping Try-on (perhaps akin to one Amazon has been developing), a wallpaper creator for Pixel phones and AI-driven tools that could make it easier for developers to create Android apps.
Pichai reportedly brought in Google founders Larry Page and Sergey Brin last month to meet with current leaders, review AI plans and offer input. The duo hasn’t had much day-to-day involvement with the company since 2019, as they’re focusing on other projects.
Google has reportedly attempted to speed up its product approval processes, including the checks meant to ensure that AI-driven tech is fair and ethical. In addition, the company is said to be adjusting the risk levels it's prepared to take on as it rolls out such tech. Priorities for a demo of the search chatbot seemingly include safety, accuracy and blocking out misinformation. However, for the other products and tools Google is working on, it has "a lower bar and will try to curb issues relating to hate and toxicity, danger and misinformation rather than preventing them," the Times reported.
Of late, Google has exercised some caution when it comes to unveiling new products. The slide deck reportedly mentioned “copyright, privacy and antitrust” as the main risks of AI tech. It’s said to have noted that solutions were needed to keep out copyrighted material and prevent personally identifiable information from being shared.
Over the last few years, there has been a backlash against Google's handling of AI ethics. Timnit Gebru and Margaret Mitchell, two leading AI ethics researchers who say Google fired them, accused the company of censoring research that criticizes AI language models, including concerns that they encode biases found in training data. That can result "in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status," the researchers wrote in a paper. Training datasets can include false information as well. Two other prominent ethics researchers left Google early last year, after Gebru and Mitchell's departures.
It's not difficult to understand why Google is said to be in panic mode over ChatGPT. For one thing, earlier this month, reports suggested that Microsoft (an OpenAI investor) plans to incorporate some of the tech powering ChatGPT into Bing. Microsoft also said this week that it will soon integrate ChatGPT into the Azure OpenAI Service.
The latest report on Google’s response to ChatGPT comes just after the company announced it’s laying off 12,000 people. “I am confident about the huge opportunity in front of us thanks to the strength of our mission, the value of our products and services, and our early investments in AI,” Pichai wrote in a memo to staff. “To fully capture it, we’ll need to make tough choices.”
The CEO added that the company is preparing to unveil “some entirely new experiences for users, developers and businesses. We have a substantial opportunity in front of us with AI across our products and are prepared to approach it boldly and responsibly.”