Q2321999
English
Associated text
Is It Live, or Is It Deepfake?
It’s been four decades since society was in awe of the quality of
recordings available from a cassette tape recorder. Today we
have something new to be in awe of: deepfakes. Deepfakes
include hyperrealistic videos that use artificial intelligence (AI) to
create fake digital content that looks and sounds real. The word is
a portmanteau of “deep learning” and “fake.” Deepfakes are
everywhere: from TV news to advertising, from national election
campaigns to wars between states, and from cybercriminals’
phishing campaigns to insurance claims that fraudsters file. And
deepfakes come in all shapes and sizes — videos, pictures, audio,
text, and any other digital material that can be manipulated with
AI. One estimate suggests that deepfake content online is
growing at the rate of 400% annually.
There appear to be legitimate uses of deepfakes, such as in the
medical industry to improve the diagnostic accuracy of AI
algorithms in identifying periodontal disease or to help medical
professionals create artificial patients (from real patient data) to
safely test new diagnoses and treatments or help physicians
make medical decisions. Deepfakes are also used to entertain, as
seen recently on America’s Got Talent, and there may be future
uses where deepfake could help teachers address the personal
needs and preferences of specific students.
Unfortunately, there is also the obvious downside, where the
most visible examples represent malicious and illegitimate uses.
Examples already exist.
Deepfakes also involve voice phishing, also known as vishing,
which has been among the most common techniques for
cybercriminals. This technique involves using cloned voices over
the phone to exploit the victim’s professional or personal
relationships by impersonating trusted individuals. In March
2019, cybercriminals were able to use a deepfake to fool the CEO
of a U.K.-based energy firm into making a US$234,000 wire
transfer. The British CEO who was victimized thought that the
person speaking on the phone was the chief executive of the
firm’s German parent company. The deepfake caller asked him to
transfer the funds to a Hungarian supplier within an hour,
emphasizing that the matter was extremely urgent. The
fraudsters used AI-based software to successfully imitate the
German executive’s voice. […]
What can be done to combat deepfakes? Could we create
deepfake detectors? Or create laws or a code of conduct that
probably would be ignored?
There are tools that can analyze the blood flow in a subject’s face
and then compare it to human blood flow activity to detect a
fake. Also, the European Union is working on addressing
manipulative behaviors.
There are downsides to both categories of solutions, but clearly
something needs to be done to build trust in this emerging and
disruptive technology. The problem isn’t going away. It is only
increasing.
Authors
Nir Kshetri, Bryan School of Business and Economics, University of
North Carolina at Greensboro, Greensboro, NC, USA
Joanna F. DeFranco, Software Engineering, The Pennsylvania
State University, Malvern, PA, USA
Jeffrey Voas, NIST, USA
Adapted from: https://www.computer.org/csdl/magazine/co/2023/07/10154234/1O1wTOn6ynC
When the authors refer to the use of deepfake in education (2nd
paragraph), they state that ultimately teachers may find it: