Researchers At MIT Think We Need To Let Computers Make More Mistakes

Researchers at MIT think that we should make computers of the future faster by letting them make more mistakes. The reason? Quantum physics.

Historically, shrinking the transistors that make up our computers' processors has been one of the most reliable ways to make devices faster and more power-efficient.

Unfortunately, it seems that we're quickly approaching a point where that will no longer be true - as those transistors reach the size of individual molecules, some weird effects of quantum mechanics lead to unreliable behavior.

That could mean computers have to stop getting faster at the rates they have been for the last few decades. But Martin Rinard, a professor at MIT's Department of Electrical Engineering and Computer Science, thinks a better option would be to figure out which aspects of our programs wouldn't be hurt by a little unreliability.

That's why he built Rely, a system that lets programmers tag which errors are tolerable in their code and see the probability that the resulting program will still function properly.
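The MIT release describes the idea at a high level rather than giving Rely's actual syntax, so here is a minimal sketch in Python of what tagging tolerable errors might look like. The `tolerates_unreliability` decorator, the simulated hardware fault rate, and the empirical check are all hypothetical illustrations of the concept, not Rely's API.

```python
import random

# Rough illustration of the idea behind Rely (not its actual syntax):
# the programmer tags each function with the error rate it can tolerate,
# and a checker estimates whether the program still meets that target.

def tolerates_unreliability(max_error_rate):
    """Mark a function as able to tolerate up to `max_error_rate` bad results."""
    def wrap(fn):
        fn.max_error_rate = max_error_rate
        return fn
    return wrap

@tolerates_unreliability(max_error_rate=0.03)   # a few wrong pixels are fine
def decode_pixel(value):
    # Pretend this runs on "unreliable" hardware that occasionally flips a result.
    if random.random() < 0.001:                 # simulated 0.1% hardware fault rate
        return value ^ 0xFF                     # corrupted pixel
    return value

@tolerates_unreliability(max_error_rate=0.0)    # bookkeeping must be exact
def frame_count(frames):
    return len(frames)

def estimate_reliability(fn, trials=100_000):
    """Empirically estimate how often `fn` returns a wrong pixel value."""
    errors = sum(1 for _ in range(trials) if fn(128) != 128)
    return errors / trials

if __name__ == "__main__":
    observed = estimate_reliability(decode_pixel)
    print(f"observed error rate: {observed:.4f}, "
          f"tolerated: {decode_pixel.max_error_rate}")
    assert observed <= decode_pixel.max_error_rate
```

The contrast between the two tags mirrors the release's point: pixel data can absorb occasional mistakes, while bookkeeping such as frame counts cannot.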

In a release from the school's news office, Rinard states, "Rather than making it a problem, we'd like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program."

In the future, the framework will let programs waste less time on making sure output is "just right."

Take 4K video playback, for instance. With roughly 8.3 million pixels per frame, even a few thousand inaccurately decoded pixels will go unnoticed by the average viewer.

Rinard's framework would let a programmer set the threshold for "acceptable" failure rates - for example, telling a video app that 97% pixel accuracy is good enough.

The Rely framework would then look at the code and modify it to maintain that quality of output while running faster or more efficiently.
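To put rough numbers on the 4K example (these figures are just the article's illustration, not output from Rely): a 3840x2160 frame holds about 8.3 million pixels, so a 97 percent accuracy target tolerates roughly 249,000 bad pixels per frame, and the "few thousand" mentioned above corresponds to better than 99.9 percent accuracy.

```python
# Back-of-the-envelope check of the 4K example above (nothing Rely-specific here).

WIDTH, HEIGHT = 3840, 2160                  # 4K UHD resolution
pixels_per_frame = WIDTH * HEIGHT           # ~8.3 million pixels

accuracy_target = 0.97                      # "97% pixel accuracy is good enough"
allowed_bad_pixels = pixels_per_frame * (1 - accuracy_target)

bad_pixels_observed = 5_000                 # "a few thousand" bad pixels per frame
observed_accuracy = 1 - bad_pixels_observed / pixels_per_frame

print(f"pixels per frame:   {pixels_per_frame:,}")          # 8,294,400
print(f"allowed bad pixels: {allowed_bad_pixels:,.0f}")      # ~248,832
print(f"observed accuracy:  {observed_accuracy:.4%}")        # ~99.94%
```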

The way researchers talk about dealing with unreliability sounds a lot like the language used to describe the move to multi-threaded programming in the middle of the last decade. Back then, chip makers struggling to come up with better architectures were basically putting multiple processor cores on a single chip and calling it dual-core.

For most applications, the extra cores didn't deliver much of a speed boost because no one was writing software that took advantage of more than one core. Eventually, though, research and new software frameworks made it much easier for coders to exploit the multi-core architectures found in most gadgets today.

That's pretty similar to the coming reliability problem. Chip makers, while figuring out ways to engineer around their weird quantum issues, are going to release some unreliable parts. The MIT news release quotes Dan Grossman, an associate professor at the University of Washington: "The increased efficiency in the hardware is very, very tempting. We need software work like this work in order to make that hardware usable for software developers."