This one quote sums up why pre-crime terrorist policies are futile

The Communist Party of China has announced that it will begin developing software to comb through huge swaths of data and determine which citizens pose serious terrorist threats, Bloomberg reports.

The move recalls the 2002 sci-fi thriller "Minority Report," in which images of the future let the police stop crimes before they happen.

It's an attractive idea for law enforcement, if only because it replaces legwork with educated guesswork.

But as Jim Harper, a senior fellow at the Cato Institute, a libertarian think tank, explained to Bloomberg's Shai Oster, pre-crime policies are still largely untenable.

"There are not enough examples of terrorist activity to model what it looks like in data, and that's true no matter how much data you have," Harper says. "You need yeast to make bread. You can't make up for a lack of yeast by adding more flour."

Harper's analogy refers, essentially, to the problem of small sample sizes.

In scientific research, a sample can only be deemed representative of the general population if it includes a reasonably large and diverse number of people.

That's how you end up with reliable empirical evidence, which is what China's government, like other law enforcement agencies pursuing pre-crime, wants to gather. The idea is to take a big set of information about a person of interest and compare it against both normal and abnormal patterns of behavior, to see which one it more closely resembles.

The issue is that terrorism isn't all that common. In 2010, for example, 13,000 people died in terrorist attacks worldwide. As Harper explains, we don't have enough of those abnormal patterns to know what threatening behavior looks like. There's simply no way for a country like China, with its billion-plus population, to extrapolate from such a small cluster of localized activity.

But that may not dissuade countries from giving such programs a try, especially if they believe they face a credible threat of terrorism. The result could be dangerous false positives: people charged, and potentially convicted, for crimes they never planned to commit.
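To see why rarity makes false positives nearly inevitable, consider a minimal back-of-the-envelope sketch in Python. Every number here is a made-up illustration, not a figure from the article or from Harper: it assumes a one-in-a-million base rate of genuine threats and a hypothetical screening model that is 99% accurate in both directions.

```python
# Back-of-the-envelope: screening a huge population for a very rare event.
# All numbers below are illustrative assumptions, not reported figures.

population = 1_400_000_000   # roughly China's population
base_rate = 1 / 1_000_000    # assumed: 1 in a million people is a genuine threat
sensitivity = 0.99           # assumed: the model flags 99% of real threats
specificity = 0.99           # assumed: the model clears 99% of innocent people

actual_threats = population * base_rate
innocents = population - actual_threats

true_positives = actual_threats * sensitivity       # threats correctly flagged
false_positives = innocents * (1 - specificity)     # innocents wrongly flagged

flagged = true_positives + false_positives
precision = true_positives / flagged  # chance a flagged person is a real threat

print(f"People flagged:       {flagged:,.0f}")
print(f"Real threats flagged: {true_positives:,.0f}")
print(f"Innocents flagged:    {false_positives:,.0f}")
print(f"Odds a flag is real:  {precision:.4%}")
# Even at 99% accuracy, ~14 million innocent people get flagged alongside
# ~1,400 real threats: roughly 1 in 10,000 flags points at an actual threat.
```

In other words, even a model far more accurate than anything Harper suggests is achievable would bury investigators in false leads, simply because the base rate of real threats is so low.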

China's program may be the most open and sophisticated of its kind, but it isn't the first.

Earlier this January, a similar program launched in Fresno, California. Culling data from the internet, the deep Web, arrest records, vehicle registrations, address databases, property records, tweets, and Facebook posts, the Fresno Police Department can assign green, yellow, or red threat levels to individual citizens and homes.

Less than two months into the program's rollout, some residents have already criticized the measure. At a recent city council meeting, Councilman Clinton J. Olivier demanded that the department run his name through the software. His threat level came back green, although his house came back yellow.

"Even though it's not me that's the yellow guy, your officers are going to treat whoever comes out of that house in his boxer shorts as the yellow guy," Olivier said, according to The Washington Post. "That may not be fair to me."

A mistake like that could be innocuous, or it could be dire. Under China's new program, which appears far more pervasive than a small operation like Fresno's, people risk losing much more than their sense of fairness.
