Apple cofounder Steve Wozniak dismisses AI concerns raised by the likes of Stephen Hawking and Nick Bostrom


Apple cofounder Steve Wozniak at the Festival of Marketing.

PayPal billionaire Elon Musk, Microsoft cofounder Bill Gates, and renowned scientist Stephen Hawking have called out artificial intelligence (AI) as one of the biggest threats to humanity's very existence.


But Apple cofounder Steve Wozniak told Business Insider in an interview this week that he's not concerned about AI. At least, not anymore. He said he reversed his thinking on AI for several reasons.

"One being that Moore's Law isn't going to make those machines smart enough to think really the way a human does," said Wozniak. "Another is when machines can out think humans they can't be as intuitive and say what will I do next and what is an approach that might get me there. They can't figure out those sorts of things.


"We aren't talking about artificial intelligence actually getting to that point. [At the moment] It's sort of like it magically might arise on its own. These machines might become independent thinkers. But if they do, they're going to be partners of humans over all other species just forever."

University of Oxford philosopher Nick Bostrom.

Wozniak's comments contrast with what Swedish philosopher Nick Bostrom said at the IP Expo tech conference in London on the same day.


The academic believes that machines will achieve human-level artificial intelligence in the coming decades, then quickly go on to acquire what he calls "superintelligence," also the title of his book.

Bostrom, who heads the Future of Humanity Institute at the University of Oxford, thinks that humans could one day become slaves to a superior race of artificially intelligent machines. This doomsday scenario can be avoided, he says, if self-thinking machines are designed from the outset to act in the interests of humans.

Commenting on how this can be achieved, Bostrom said this doesn't mean we have to "tie its hands behind its back and hold a big stick over it in the hope we can force it to our way." Instead, he thinks developers and tech companies must "build it [AI] in such a way that it's on our side and wants the same things as we do."
