Google's CEO Sundar Pichai says AI is 'too important not to regulate well' amid growing safety concerns

Google CEO Sundar Pichai says the company "cares deeply" about developing AI responsibly. JOSH EDELSON/Getty
  • Google's CEO penned an article for the Financial Times about the firm's commitment to AI safety.
  • Sundar Pichai emphasized that as AI develops it's becoming "too important not to regulate well."

Alphabet CEO Sundar Pichai says the company remains committed to developing AI responsibly as safety concerns mount over the rapid growth of the tech.

In a Financial Times article published Tuesday, Pichai — who became the tech giant's CEO in 2015 — said he is optimistic about the future of AI, calling it the "most profound technology humanity is working on today." But he noted that some companies are locked in a race to become first movers in the space, which means safety is being overlooked.

"While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that," Pichai wrote in the piece.


"At Google, we've been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters, even more, is the race to build AI responsibly and make sure that as a society we get it right."

Pichai pointed out how implementing AI into Google products like Gmail, Google Maps, and Google Search has benefited users, citing as one example how drivers across Europe can find more fuel-efficient routes using its tech.


He also referred to Google's AI principles, published in 2018, saying the company believes in developing the technology "responsibly" while "avoiding harmful application."

He added that "AI is too important not to regulate, and too important not regulate well."

Pichai recently headed to the White House to discuss AI safety with Vice President Kamala Harris, alongside leading AI firms including Microsoft, OpenAI, and Anthropic. Google released its AI chatbot, Bard, in March.

Worries about the possible negative effects of AI were amplified when billionaire CEO Elon Musk signed an open letter, alongside powerful tech leaders and AI experts, calling for a pause on the development of AI more powerful than OpenAI's GPT-4 because of the potential dangers it poses to humanity.

Google has come under fire for its rapid AI development. Timnit Gebru, a former computer scientist at the company who specialized in AI, claimed she was fired in 2020 after pointing out biases in AI tools.


Gebru told the Guardian in an interview that firms like Google are caught up in the AI "gold rush" which means they won't "self-regulate" unless pressured.

Google did not immediately respond to a request for further comment from Insider.
