English Questions - Text Interpretation | Reading Comprehension for Civil Service Exams (Concurso)

6,243 questions were found

Q2321997 English
Is It Live, or Is It Deepfake?


It’s been four decades since society was in awe of the quality of recordings available from a cassette tape recorder. Today we have something new to be in awe of: deepfakes. Deepfakes include hyperrealistic videos that use artificial intelligence (AI) to create fake digital content that looks and sounds real. The word is a portmanteau of “deep learning” and “fake.” Deepfakes are everywhere: from TV news to advertising, from national election campaigns to wars between states, and from cybercriminals’ phishing campaigns to insurance claims that fraudsters file. And deepfakes come in all shapes and sizes — videos, pictures, audio, text, and any other digital material that can be manipulated with AI. One estimate suggests that deepfake content online is growing at the rate of 400% annually.


There appear to be legitimate uses of deepfakes, such as in the medical industry to improve the diagnostic accuracy of AI algorithms in identifying periodontal disease or to help medical professionals create artificial patients (from real patient data) to safely test new diagnoses and treatments or help physicians make medical decisions. Deepfakes are also used to entertain, as seen recently on America’s Got Talent, and there may be future uses where deepfakes could help teachers address the personal needs and preferences of specific students.


Unfortunately, there is also the obvious downside, where the most visible examples represent malicious and illegitimate uses. Examples already exist.


Deepfakes also involve voice phishing, also known as vishing, which has been among the most common techniques for cybercriminals. This technique involves using cloned voices over the phone to exploit the victim’s professional or personal relationships by impersonating trusted individuals. In March 2019, cybercriminals were able to use a deepfake to fool the CEO of a U.K.-based energy firm into making a US$243,000 wire transfer. The British CEO who was victimized thought that the person speaking on the phone was the chief executive of the firm’s German parent company. The deepfake caller asked him to transfer the funds to a Hungarian supplier within an hour, emphasizing that the matter was extremely urgent. The fraudsters used AI-based software to successfully imitate the German executive’s voice. […]


What can be done to combat deepfakes? Could we create deepfake detectors? Or create laws or a code of conduct that probably would be ignored?


There are tools that can analyze the blood flow in a subject’s face and then compare it to human blood flow activity to detect a fake. Also, the European Union is working on addressing manipulative behaviors.


There are downsides to both categories of solutions, but clearly something needs to be done to build trust in this emerging and disruptive technology. The problem isn’t going away. It is only increasing.


Authors


Nir Kshetri, Bryan School of Business and Economics, University of North Carolina at Greensboro, Greensboro, NC, USA


Joanna F. DeFranco, Software Engineering, The Pennsylvania State University, Malvern, PA, USA

Jeffrey Voas, NIST, USA


Adapted from: https://www.computer.org/csdl/magazine/co/2023/07/10154234/1O1wTOn6ynC
Based on the text, mark the statements below as true (T) or false (F).

( ) Deepfakes are circumscribed to certain areas of action.
( ) The sole aim of deepfake technology is to spread misinformation.
( ) Evidence shows that even high-ranking executives can be easy targets for vishing techniques.

The statements are, respectively:
Alternatives
Q2320214 English

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence language models and give answers to users’ questions, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real-world consequences if systems are not designed with security in mind. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leave systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithms.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

According to the text, attacks, scams and data theft are actions that should be:
Alternatives
Q2320213 English

READ THE TEXT ABOVE (“Chatbots could be used to steal data, says cybersecurity agency”) AND ANSWER THE QUESTION:

The newspaper headline expresses the agency’s:
Alternatives
Q2320212 English

READ THE TEXT ABOVE (“Chatbots could be used to steal data, says cybersecurity agency”) AND ANSWER THE QUESTION:

Based on the text, mark the statements below as true (T) or false (F).

( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.

The statements are, respectively:
Alternatives
Q2320146 English
The Audio-Lingual Method, like the Direct Method, is also an oral-based approach. However, it is very different in that rather than emphasizing vocabulary acquisition through exposure to its use in situations, the Audio-Lingual Method drills students in the use of grammatical sentence patterns. It also, unlike the Direct Method, has a strong theoretical base in linguistics and psychology. Charles Fries (1945) of the University of Michigan led the way in applying principles from structural linguistics in developing the method, and for this reason it has sometimes been referred to as the 'Michigan Method'. Later in its development, principles from behavioral psychology (Skinner 1957) were incorporated. It was thought that the way to acquire the sentence patterns of the target language was through conditioning: helping learners to respond correctly to stimuli through shaping and reinforcement. Learners could overcome the habits of their native language and form the new habits required to be target language speakers.


LARSEN-FREEMAN, Diane. Techniques and Principles in Language Teaching. 3rd ed. Oxford; New York: Oxford University Press, 2011.
About the Audio-Lingual Method, its typical features are:
Alternatives
Answers
1371: E
1372: C
1373: C
1374: D
1375: A