Google's CEO Sundar Pichai warns against 'rushing' into regulating AI, which happens to be vital to Google's future growth

Google CEO Sundar Pichai. Carsten Koall/Getty Images

  • Google CEO Sundar Pichai told The Financial Times that governments should be wary of "rushing" into broad AI regulation.
  • The Google chief instead argued existing laws could be repurposed to regulate AI sector by sector and cautioned that hastily drawn-up laws could hinder "innovation and research."
  • There's a good reason Pichai probably wants regulators to steer clear of wide-reaching limits on AI. The technology is a key plank of his company's future growth, from its core advertising business to its fast-growing cloud division and its research arm DeepMind.

Google CEO Sundar Pichai struck a cautionary tone in an interview with The Financial Times about the future regulation of AI.

Speaking in Helsinki, Pichai warned against broad regulation of AI, arguing instead that existing laws could be repurposed sector by sector "rather than assuming that everything you have to do is new."

"It is such a broad cross-cutting technology, so it's important to look at [regulation] more in certain vertical situations," said Pichai.

Read more: A former Google engineer warned that robot weapons could cause accidental mass killings

"Rather than rushing into it in a way that prevents innovation and research, you actually need to solve some of the difficult problems," he added, citing known issues with the technology such as algorithmic bias and accountability.

Pichai's comments are set against the backdrop of Google's own rocky relationship with artificial intelligence technology, and backlash over some of its projects.

The company abandoned a Department of Defense contract - codenamed Project Maven, which used AI to analyse drone footage - after fierce employee backlash that saw thousands sign a petition and multiple engineers resign. Employees felt the project was an unethical use of AI, and Google allowed the contract to lapse in March of this year.

More generally, artificial intelligence remains a core plank of Google's future growth.

Pichai said on a first-quarter earnings call this year that almost three-quarters of Google's advertising clients use automated ad technologies that rely on machine learning and AI. Advertising generates the bulk of the firm's revenue, accounting for some $33 billion in the second quarter. Google's fast-growing cloud business also uses AI. And its new health division is part of the firm's efforts to commercialise research produced by its AI lab, DeepMind.

Laura Nolan, a former Google engineer who was recruited to Project Maven and subsequently resigned over ethical concerns, told Business Insider via email that while Pichai may be speaking out of self-interest, she agrees that blanket AI regulation may be too broad an approach.

"Laws shouldn't be about specific technological implementations, and AI is a really fuzzy term anyway," she said.

Professor Sandra Wachter of the Oxford Internet Institute also told Business Insider that the contexts in which AI is used change how it should be governed.

"I think it makes a lot of sense to look at sectorial regulation rather than just an overarching AI framework. Risks and benefits of AI will differ depending on the sector, the application and the context. For example, AI used in criminal justice will pose different challenges than AI used in the health sector. It is crucial to assess if all the laws are fit for purpose," said Wachter. She said that in some cases existing laws could be tweaked, such as data protection and non-discrimination laws.

However, both Nolan and Wachter cited specific areas that may require new laws to be written - such as facial recognition.

"Algorithmic decision-making is definitely an area that I believe the US needs to regulate," Nolan said. "Another major area seems to be automated biometric-based identification (i.e. facial recognition but also other biometrics like gait recognition and so on). These are areas where we see clear problems emerging due to new technological capabilities, and not to regulate would be to knowingly allow harms to occur."

"The use of AI in warfare absolutely needs to be regulated - it is brittle and unpredictable, and it would be very dangerous to build weapons that use AI to perform the critical function of target selection," she added.
