Hiring employees remotely through video calls carries a new risk. The FBI has warned of an increase in cases in which applicants use deepfakes to impersonate other people and apply for remote jobs, typically in the technology sector.
According to the FBI, it is companies themselves that have reported cases of applicants using face-swapping systems during interviews. Specifically, these are companies seeking talent in areas such as "information technology and computer programming, database and other software-related job functions," according to the agency. Some of these positions also provide access to large amounts of private data, such as customer information, patents, and financial data.
A deepfake, which, as a reminder, consists of overlaying a virtual face with a practically real appearance, is often accompanied by voice spoofing, or "deep voice fakes," which users employ during interviews. The FBI warns, however, that it is possible to detect when someone is using a false identity: for example, when the audio is not coordinated with the gestures the person makes while speaking, or when other "auditory actions," such as sneezing or coughing, do not align with what appears on screen. This poorly processed or inaccurate rendering of human traits is known as the "uncanny valley."
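The mismatch the FBI describes can be framed as a synchronization check between what is heard and what is seen. The sketch below is purely illustrative and is not an FBI tool: the `sync_score` function, the input series (per-frame audio loudness and mouth openness, which in practice would come from signal processing and facial landmark tracking), and the 0.5 threshold are all assumptions for demonstration. Real detectors rely on learned audio-visual models rather than a simple correlation heuristic.

```python
# Illustrative heuristic: flag possible audio/video desynchronization by
# correlating per-frame audio energy with per-frame mouth openness.
# All inputs and the threshold below are hypothetical assumptions.

def sync_score(audio_energy, mouth_openness):
    """Pearson correlation between two equal-length per-frame series."""
    n = len(audio_energy)
    if n != len(mouth_openness) or n < 2:
        raise ValueError("need two equal-length series of at least 2 frames")
    mean_a = sum(audio_energy) / n
    mean_m = sum(mouth_openness) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(audio_energy, mouth_openness))
    var_a = sum((a - mean_a) ** 2 for a in audio_energy)
    var_m = sum((m - mean_m) ** 2 for m in mouth_openness)
    if var_a == 0 or var_m == 0:
        return 0.0  # a flat signal carries no sync information
    return cov / (var_a ** 0.5 * var_m ** 0.5)

def looks_desynced(audio_energy, mouth_openness, threshold=0.5):
    """Low correlation suggests the voice may not match the face."""
    return sync_score(audio_energy, mouth_openness) < threshold

# A consistent talking face: loud frames coincide with an open mouth.
print(sync_score([0.1, 0.9, 0.8, 0.1, 0.7],
                 [0.2, 0.8, 0.9, 0.1, 0.6]))   # high positive correlation
# A mismatched overlay: audio peaks while the mouth stays closed.
print(sync_score([0.1, 0.9, 0.8, 0.1, 0.7],
                 [0.9, 0.1, 0.2, 0.8, 0.3]))   # negative correlation
```

The design choice here is deliberate simplicity: a correlation over aligned time series captures the intuition of "audio not coordinated with gestures" in a few lines, while production systems would add temporal offsets, phoneme-to-viseme matching, and learned features.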
Deepfakes are getting harder to detect
However, spotting a fake video might not be such an easy task, especially considering how far deepfakes have advanced: it can now be extremely difficult to tell a real face from a virtual one. In fact, according to a study published in the journal PNAS, most people rate AI-generated faces as more trustworthy than real ones and cannot reliably identify which face was generated by artificial intelligence and which is simply a photograph of a person.
The study also warns of the dangers of this technology, which is not only advanced but easy to use, since there are tools that can generate virtual faces with very little effort. Among the risks of this practice, the most prominent are identity theft and the distribution of false content, such as the deepfake video of the president of Ukraine that Facebook had to remove.