"The Secretive Company That Might End Privacy as We Know It" was the title of the piece, and it revealed the stunning details of what Clearview's tech could do: "You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants."
It was the first look the public got at a company that, until then, was operating in secrecy.
What the Times piece revealed, beyond the functionality of Clearview's tools, was stunning: The company had scraped billions of publicly available images from major social media platforms like Twitter, Facebook, and YouTube. Moreover, it put those images into a searchable database, then sold access to that tool to American law enforcement.
"More than 600 law enforcement agencies have started using Clearview in the past year," the piece pointed out. Contracts to use the service cost as much as $50,000 for a two-year deal.
Soon after the piece ran, social media giants began sending cease-and-desist letters to Clearview AI.
The initial report on Clearview named a slew of major tech platforms as targets of the company's scraping: Facebook, Twitter, YouTube, and Venmo.
Clearview mined each of those platforms for user photos and added them to its database, access to which it sells. And that process — lifting user photos from social platforms, then selling them — breaches the terms of service of every platform from which the photos were taken.
"YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person," YouTube spokesperson Alex Joseph told Business Insider in an email in early February. "Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter."
Similar sentiments were shared by all the major social platforms, from Facebook to Twitter to LinkedIn and Venmo.
Clearview AI CEO Hoan Ton-That defended the company in an interview on "CBS This Morning."
As major tech companies openly pushed back against Clearview's method of building its image database, Ton-That went on the defensive, arguing that his company's software does nothing illegal and that it doesn't need to delete any of the images it has stored, because its activity is protected under US law.
As for his response to the cease-and-desist letters? "Our legal counsel has reached out to them, and are handling it accordingly."
Clearview AI's lawyer, Tor Ekeland, told Business Insider in an emailed statement, "Clearview is a photo search engine that only uses publicly available data on the Internet. It operates in much the same way as Google's search engine. We are in receipt of Google and YouTube's letter and will respond accordingly."
Then, in late February, Clearview disclosed a stunning security lapse: The company's entire client list was stolen in a data breach, which led to the revelation that Clearview was selling its services to many clients outside of law enforcement.
As if that weren't problematic enough, the list of clients included some particularly notable institutions that fall pretty far outside the realm of law enforcement: Macy's, Kohl's, Walmart, and the NBA, among others, BuzzFeed News first reported.
The full client list spells out just how many people have access to Clearview's tech — people in more than 2,200 law enforcement departments, government agencies, and companies across 27 countries.
This directly contradicts how Clearview describes itself: "Clearview is a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes." On Clearview's website, applying for the software means clicking a "Request Access" button directly below a sign that reads, "Available now for Law Enforcement."
Then, in March, a second New York Times piece on the company revealed yet another troubling detail: The company's founders had casually given access to the software to potential investors and friends, who immediately abused it.
When John Catsimatidis was finishing dinner in October 2018 at Cipriani in downtown Manhattan, he spotted something amiss: His daughter was also eating dinner there, on a date with an unknown man.
"I wanted to make sure he wasn't a charlatan," Catsimatidis, the billionaire owner of the Gristedes chain of supermarkets, told The New York Times.
He asked the waiter to snap a photo of the man without the couple's knowledge, then used his smartphone to instantly identify him using a secretive facial-recognition app. He then texted the man's biography to his daughter.
"My date was very surprised," his daughter, Andrea, said.
And indeed he should have been: John Catsimatidis was using Clearview AI's software — supposedly intended for law enforcement — to vet his daughter's date.
According to The Times, Catsimatidis was one of several prospective investors who were given access to the app; he said he had access through a friend who cofounded the company. Peter Thiel, David Scalzo, Hal Lambert, and the actor turned investor Ashton Kutcher were also listed in the report as either having access or being suspected of having access to the app.