Microsoft's CEO on one of the biggest philosophical questions about AI and whether it's manipulating us

Satya Nadella received the 2023 Axel Springer Award in honor of his achievements in tech, business, and life. (Image: Axel Springer)
  • Satya Nadella, CEO of Microsoft, weighed in on AI's capacity for empathy and manipulation during an interview on Tuesday.
  • Major AI figures have recently spoken out about the dangers of technology that is constantly learning.

In a conversation with Mathias Döpfner, the CEO of Insider's parent company Axel Springer, Microsoft CEO Satya Nadella said that artificial intelligence has the potential to be dangerous if humans lose control of it.

Nadella and Döpfner discussed the "ethical issues" of AI developing human traits like empathy or a sense of humor on Tuesday. While tech leaders such as Elon Musk and the so-called godfathers of AI have called out the dangers of machine learning, Nadella gave a more layered response to the existential question.

"A good thing for people to worry about is if there is a very powerful new technology that we lose control of, then that's a problem," Nadella said.


Among those ethical issues is AI's empathy and how it can be used for or against humans. Nadella said its potential to use social skills to manipulate humanity is an issue of philosophy just as much as it is technology.

"We need some moral philosophers to guide us on how to think about this technology and deploying this technology," the Microsoft CEO said.


There are "a lot of steps along the way" before humans have to seriously worry about what AI will decide to do with all of its intelligence, according to Nadella. The task falls on AI developers to consider the "unintended consequences" of the technology before they arise.

He compared the future of AI to that of cars and airplanes, which operate under rules and laws every day.

"We've figured out as human beings, how to use very powerful technology with lots of rules, lots of regulations, and a lot of safety standards," Nadella said.

Microsoft is currently rolling out its own AI assistant, known as Copilot, which is designed to help with Microsoft 365. Its corporate partners have already begun using the program for tasks such as writing emails.

In May, Nadella denied Musk's claims that Microsoft was controlling ChatGPT creator OpenAI.


Disclosure: Axel Springer is Business Insider's parent company.
