Deepfakes use deep learning, a form of artificial intelligence, to replace one person's likeness with another's in video and other digital media. They are one of the most visible forms of what is now called "synthetic media": images, sounds, and videos that appear to have been created by traditional means but were actually generated by complex deep learning algorithms. Often, content created with this technology is very difficult to distinguish from the real thing.
How Deepfake Technology Works
The term "deepfake" comes from the underlying technology, "deep learning," a type of artificial intelligence. Deep learning algorithms, which teach themselves to solve problems from large amounts of data, are used to make fake media look convincingly realistic.
Deepfakes can be created in various ways. One of the most common uses deep neural networks with autoencoders to perform face swapping. You first need a target video to serve as the basis of the deepfake, as well as a collection of clips of the person you want to insert. The videos can be completely unrelated: the target might be a clip from a Hollywood movie, while the clips of the chosen subject might be random videos from YouTube.
An autoencoder is a deep learning program that examines these video clips to learn what a person looks like from different angles and under different lighting and environmental conditions. It then finds the common features and maps that person's face onto the person in the target video.
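To make the trick concrete, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder autoencoder setup commonly used for face swapping. The image size, latent dimension, and simple linear layers are illustrative assumptions, not a real production pipeline.

    import torch
    import torch.nn as nn

    latent = 256
    # One shared encoder learns a face representation common to both people.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, latent), nn.ReLU())
    # One decoder per identity learns to reconstruct only that person's face.
    decoder_a = nn.Sequential(nn.Linear(latent, 64 * 64 * 3), nn.Sigmoid())
    decoder_b = nn.Sequential(nn.Linear(latent, 64 * 64 * 3), nn.Sigmoid())

    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder_a.parameters())
        + list(decoder_b.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(faces_a, faces_b):
        # Each decoder reconstructs its own person, but both share the encoder,
        # which forces the latent code to capture pose and expression.
        recon_a = decoder_a(encoder(faces_a))
        recon_b = decoder_b(encoder(faces_b))
        loss = (loss_fn(recon_a, faces_a.flatten(1))
                + loss_fn(recon_b, faces_b.flatten(1)))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # The swap: encode a frame of person A, decode it with person B's decoder,
    # yielding B's face with A's pose and expression.
    frame_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
    swapped = decoder_b(encoder(frame_a)).view(1, 3, 64, 64)

The key design choice is the shared encoder: because both decoders read from the same latent space, feeding person A's encoding into person B's decoder produces a plausible face swap.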
How Are Deepfakes Used?
There are some interesting applications of automated face swapping, as in movies and games, for producing synthetic videos that look convincing and realistic. However, deepfake technology was first widely applied to create synthetic pornography. In fact, according to Deeptrace, 96% of the deepfake videos found online in 2019 were pornographic.
Since then, the technology has also been used to impersonate prominent public figures.
Is Deepfake Just Video?
Deepfake technology is not limited to video. Deepfake audio is a rapidly growing field with a wide variety of applications.
With deep learning algorithms, realistic voice clones can be made from just a few hours (or in some cases, minutes) of recordings of the person whose voice is being cloned. Once a voice model has been made, that person can be made to say anything.
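As a rough illustration of how this works, the sketch below (PyTorch) shows the common structure of speaker-conditioned synthesis: a speaker encoder compresses reference audio into a fixed "voice fingerprint," and a synthesizer generates new speech frames conditioned on it. All module names, shapes, and the random stand-in tensors are illustrative assumptions, not a real system.

    import torch
    import torch.nn as nn

    class SpeakerEncoder(nn.Module):
        """Maps a mel-spectrogram of reference speech to a fixed-size
        speaker embedding (the 'voice fingerprint')."""
        def __init__(self, n_mels=80, embed_dim=256):
            super().__init__()
            self.rnn = nn.GRU(n_mels, embed_dim, batch_first=True)

        def forward(self, mels):                 # mels: (batch, frames, n_mels)
            _, h = self.rnn(mels)                # h: (1, batch, embed_dim)
            return nn.functional.normalize(h[-1], dim=-1)

    class Synthesizer(nn.Module):
        """Generates mel frames for arbitrary text features, conditioned on
        the speaker embedding so the output mimics the cloned voice."""
        def __init__(self, text_dim=64, embed_dim=256, n_mels=80):
            super().__init__()
            self.proj = nn.Linear(text_dim + embed_dim, n_mels)

        def forward(self, text_feats, speaker_embed):
            # Broadcast the voice fingerprint across every text frame.
            cond = speaker_embed.unsqueeze(1).expand(-1, text_feats.size(1), -1)
            return self.proj(torch.cat([text_feats, cond], dim=-1))

    # Usage: minutes of reference audio -> embedding -> new speech frames.
    ref_mels = torch.randn(1, 500, 80)    # stand-in for real reference audio
    text_feats = torch.randn(1, 120, 64)  # stand-in for encoded target text
    embed = SpeakerEncoder()(ref_mels)
    fake_mels = Synthesizer()(text_feats, embed)  # (1, 120, 80) mel frames

In practice a vocoder would then turn the generated mel frames into a waveform; the point of the sketch is that the speaker's identity is reduced to one embedding that can be attached to any text.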
How to Detect Deepfakes?
As deepfakes become more common and online users gain experience detecting other kinds of fake news, society will likely adapt to spotting deepfake videos as well. In cybersecurity, however, detecting and preventing deepfake technology usually requires more innovation. There are several indicators that can give deepfakes away:
· Existing deepfakes have difficulty recreating faces realistically, resulting in videos where the person does not blink at all, or blinks too often or unnaturally. However, after researchers at the University at Albany published a study detecting this blinking abnormality, new deepfakes were released that no longer had the problem. (A rough sketch of such a blink check appears after this list.)
· Look for skin or hair problems, or faces that look blurrier than their surroundings. The focus may appear unnaturally soft.
· Does the lighting feel natural? Deepfake algorithms often retain the lighting of the clips used as models for the fake, which may not match the lighting in the target video.
· In some cases, the audio may not match the person, especially if the video was faked but the original audio was not as carefully manipulated.
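The blink check mentioned in the first item can be approximated with the eye-aspect-ratio (EAR) technique from facial-landmark research. The sketch below assumes the six landmarks per eye come from an external detector (such as dlib); the threshold and frame counts are illustrative assumptions.

    import numpy as np

    def eye_aspect_ratio(eye):
        """eye: six (x, y) landmarks around one eye. The EAR drops sharply
        toward zero when the eye closes."""
        eye = np.asarray(eye, dtype=float)
        v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
        return (v1 + v2) / (2.0 * h)

    def count_blinks(ear_series, threshold=0.21, min_frames=2):
        """Count blinks as runs of consecutive frames below the threshold.
        A face that never blinks over several minutes is suspicious."""
        blinks, run = 0, 0
        for ear in ear_series:
            if ear < threshold:
                run += 1
            else:
                if run >= min_frames:
                    blinks += 1
                run = 0
        return blinks

A detector would compute the EAR per frame from landmarks, then flag videos whose blink rate falls far outside the human norm of roughly 15 to 20 blinks per minute.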
Combating Deepfake Technology
As the techniques improve, deepfake scams will become more realistic, but you are not defenseless against them. Several startups have developed, and continue to develop, methods of detecting these fakes.
For example, Sensity has developed an antivirus-like platform for deepfakes that alerts users by email when they watch something bearing the telltale fingerprints of AI-generated synthetic media.
Operation Minerva takes a simpler approach to identifying deepfakes. Its algorithm compares potential deepfakes against known videos that have already been "digitally fingerprinted." It can, for example, recognize that a suspect video is a modified version of a video it has already cataloged.
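As a rough idea of what such digital fingerprinting can look like, the sketch below computes a simple perceptual "average hash" per frame and compares suspect frames against a catalog. Operation Minerva's actual algorithm is not public, so this is only an assumed, simplified stand-in.

    from PIL import Image

    def average_hash(image, size=8):
        """Shrink to a size x size grayscale thumbnail; each bit records
        whether a pixel is brighter than the frame's mean. This fingerprint
        survives re-encoding, resizing, and small edits."""
        small = image.convert("L").resize((size, size))
        pixels = list(small.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming_distance(h1, h2):
        """Number of differing bits; a small distance suggests the suspect
        frame is a modified copy of a cataloged frame."""
        return bin(h1 ^ h2).count("1")

    # Usage: hash frames sampled from a suspect video and from the catalog,
    # then flag pairs whose Hamming distance falls below a chosen cutoff:
    # suspect = average_hash(Image.open("suspect_frame.png"))

A matching service would sample frames at intervals, hash them, and look up near matches in its fingerprint database, which is far cheaper than comparing raw video.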
Deepfake Video and Audio Detection
AWS, Facebook, Microsoft, the Partnership on AI's Media Integrity Steering Committee, and academics came together to create the Deepfake Detection Challenge (DFDC). The goal of the challenge was to spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media.
As this technology advances, it will become harder and harder to tell what is real and what is not. That is why it is so important to verify, rather than simply trust, what we see online before sharing any video on social media.
Experts predict that as the technology improves, deepfakes will become more sophisticated and pose a greater threat to the public, enabling election interference, political tension, and other criminal activity.