More and more people are using deepfakes to apply for remote tech jobs, FBI says
- People are using deepfake technology to pose as someone else in job interviews, the FBI said.
- They seem to focus on IT roles that would grant them access to sensitive data, the agency said.
More and more people are using deepfake technology to pose as someone else in interviews for remote jobs, the FBI said on Tuesday.
In its public announcement, the FBI said it has received an uptick in complaints about people superimposing videos, images, or audio recordings of another person onto themselves during live job interviews. The complaints were tied to remote tech roles that would have granted successful candidates access to sensitive data, including "customer PII (Personally Identifiable Information), financial data, corporate IT databases and/or proprietary information," the agency said.
Equally concerning is the harm that private individuals could face from being targeted by deepfakes, as in the cases highlighted by the FBI on Tuesday. "The use of the technology to harass or harm private individuals who do not command public attention and cannot command resources necessary to refute falsehoods should be concerning," the Department of Homeland Security warned in a 2019 report.
Fraudulent applicants for tech jobs are nothing new. In a November 2020 LinkedIn post, one recruiter wrote that some candidates hire external help to assist them during the interviews in real time, and that the trend seems to have gotten worse during the pandemic. In May, recruiters found that North Korean scammers were posing as American job interviewees for crypto and Web3 startups.
What's new in the FBI's Tuesday announcement is the use of AI-powered deepfake technology to help people get hired. The FBI did not say how many incidents it has recorded.
Anti-deepfake technologies are far from perfect
The number of known online deepfake videos reached 145,227 in 2020, nine times more than a year earlier, according to a report that year by Sentinel, an Estonian threat-intelligence company.
Technologies and processes that weed out deepfake videos are far from foolproof. A report from Sensity, a threat-intelligence company based in Amsterdam, found that anti-deepfake technologies accepted deepfake videos as real 86% of the time.
However, there are some telltale signs of deepfakes, including abnormal blinking, an unnaturally soft focus around skin or hair, and unusual lighting.
In its announcement, the FBI also offered a tip for spotting voice deepfake technology. "In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually," the agency wrote.
The FBI said people or companies who have identified deepfake attempts should report such cases to its complaint website.