Questões de Concurso Comentadas sobre inglês
Foram encontradas 12.328 questões
( ) Deepfakes are circumscribed to certain areas of action.
( ) The sole aim of deepfake technology is to spread misinformation.
( ) Evidence shows that even high-ranking executives can be easy targets to vishing techniques.
The statements are, respectively:
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.
The statements are, respectively:
I. child > children
II. country > countries
III. sheep > sheep
IV. day > days
V. stereo > stereos
Which combination of words below follow the same rules, in the same order?
I. “What will you have been doing?” is in the simple future and future perfect tense.
II. “I was studying English when you called yesteday” is in the past continuous.
III. “She wrote last night” is in simple past.
IV. “Have they ever been abroad?” is in the past perfect.
V. “What are you doing now?” is in the present continuous.
Which ones are correct?
Identify the type of Figurative Language used in the following sentences:
I. I am a deeply superficial person.
II. Round the rugged rocks the ragged rascal ran.
III. Mellow wedding bells.
IV. The mind is an ocean.
Select the alternative that identifies correctly them:
• Get enough sleep. Good sleep improves your brain performance, mood and overall health. Consistently poor sleep is associated with anxiety, depression, and other mental health conditions.
I - “Mental illnesses are disorders, ranging from mild to severe, that affect a person’s thinking, mood, and/or behavior.”
II - "According to the National Institute of Mental Health, nearly one-in-five adults live with a mental illness."
• Get enough sleep. Good sleep improves your brain performance, mood and overall health. Consistently poor sleep is associated with anxiety, depression, and other mental health conditions.
• Get enough sleep. Good sleep improves your brain performance, mood and overall health. Consistently poor sleep is associated with anxiety, depression, and other mental health conditions.
I. The structure “there are new tools” is in the Simple Past Tense.
II. The structure “evidence-based treatments”is a nominal group connected to “social support systems that help people feel better” and the headnoun is “systems”.
III. The word “pursue” can be replace by “seek”.
IV. In the expression “that help people feel better” it refers to “social support systems”, “evidence-based treatments” and “new tools”.
Which ones are correct?
• Get enough sleep. Good sleep improves your brain performance, mood and overall health. Consistently poor sleep is associated with anxiety, depression, and other mental health conditions.
Julgue o item subsequente.
Intonation, the rise and fall of pitch in speech, plays a
crucial role in conveying the speaker's attitude, mood, and
intended meaning in American English. Different
intonation patterns can distinguish between statements,
questions, and exclamations, contributing significantly to
effective communication.