Questões de Concurso
Sobre interpretação de texto | reading comprehension em inglês
Foram encontradas 9.434 questões
( ) Deepfakes are circumscribed to certain areas of action.
( ) The sole aim of deepfake technology is to spread misinformation.
( ) Evidence shows that even high-ranking executives can be easy targets to vishing techniques.
The statements are, respectively:
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.
The statements are, respectively:
• Get enough sleep. Good sleep improves your brain performance, mood and overall health. Consistently poor sleep is associated with anxiety, depression, and other mental health conditions.
I. The structure “there are new tools” is in the Simple Past Tense.
II. The structure “evidence-based treatments”is a nominal group connected to “social support systems that help people feel better” and the headnoun is “systems”.
III. The word “pursue” can be replace by “seek”.
IV. In the expression “that help people feel better” it refers to “social support systems”, “evidence-based treatments” and “new tools”.
Which ones are correct?
• Get enough sleep. Good sleep improves your brain performance, mood and overall health. Consistently poor sleep is associated with anxiety, depression, and other mental health conditions.
Julgue o item subsequente.
Proficient interpretation of texts necessitates a
meticulous analysis of contextual elements. Unraveling
the intricacies of the setting, cultural background, and
historical context enhances readers' ability to discern
implicit meanings, tones, and underlying messages within
diverse written materials.
Julgue o item subsequente.
A nuanced understanding of literary devices is paramount
in text interpretation. Identifying and comprehending
metaphors, similes, and symbolism enriches the reader's
experience by revealing layers of meaning beneath the
surface of the text. This skillful interpretation unveils the
author's artistic choices and contributes to a deeper
appreciation of the written work.
Julgue o item subsequente.
Central to effective text interpretation is the discernment
of the author's purpose and perspective. Scrutinizing
linguistic choices, tonal variations, and structural
elements provides valuable insights into the author's
intentions, allowing readers to engage meaningfully with
the material and appreciate the text's overarching
significance.
Read Text II and answer the question that follows.
Text II
June 15, 2023 - Debates over Diversity, Equity and Inclusion (DEI) efforts are currently thriving, including debates over the degree to which corporate diversity efforts are valuable, whether chief diversity officers can succeed, and whether corporate diversity commitments can produce lasting change.
Over the past year, at least a dozen U.S. state legislatures have proposed or passed laws targeting DEI efforts, including laws aimed at limiting DEI roles and efforts in businesses and higher education and laws eliminating DEI spending, trainings, and statements at public institutions. Moreover, with the U.S. Supreme Court poised to address affirmative action in two cases involving the consideration of race in higher education admissions this summer, debates in the U.S. regarding DEI initiatives are likely far from over.
At the same time, DEI-related legal requirements continue to grow in other jurisdictions, and with global financial institutions facing expanding environmental, social, and governance (ESG)- related trends and regulations in the EU and other jurisdictions, as well as global expectations regarding their role in ESG, including DEI-related corporate developments and initiatives, these matters are likely to continue to work their way into capital allocations and the costs of doing business, as well as into the expectations of certain stakeholders.
This widening gap between global expectations and regulation regarding DEI-related matters and the concerns of some constituents in the U.S. over the role of DEI in corporate decision-making is likely to continue growing for the foreseeable future, putting companies between the proverbial rock and hard place.
What these developments make clear is that corporate DEI efforts are, and likely have been for some time, riskier than many companies may initially appreciate. And the risks associated with DEI initiatives are only positioned to grow and expand as companies look to thread the DEI needle and make a broader and potentially more divergent set of stakeholders happy, or at least less annoyed, with their DEI-related commitments and initiatives. In this article, we discuss the top four legal risks that companies often fail to address in their DEI efforts.
[…]
(From https://www.reuters.com/legal/legalindustry/diversity-matters-four-scarylegal-risks-hiding-your-dei-program-2023-06-15/)
Read Text II and answer the question that follows.
Text II
June 15, 2023 - Debates over Diversity, Equity and Inclusion (DEI) efforts are currently thriving, including debates over the degree to which corporate diversity efforts are valuable, whether chief diversity officers can succeed, and whether corporate diversity commitments can produce lasting change.
Over the past year, at least a dozen U.S. state legislatures have proposed or passed laws targeting DEI efforts, including laws aimed at limiting DEI roles and efforts in businesses and higher education and laws eliminating DEI spending, trainings, and statements at public institutions. Moreover, with the U.S. Supreme Court poised to address affirmative action in two cases involving the consideration of race in higher education admissions this summer, debates in the U.S. regarding DEI initiatives are likely far from over.
At the same time, DEI-related legal requirements continue to grow in other jurisdictions, and with global financial institutions facing expanding environmental, social, and governance (ESG)- related trends and regulations in the EU and other jurisdictions, as well as global expectations regarding their role in ESG, including DEI-related corporate developments and initiatives, these matters are likely to continue to work their way into capital allocations and the costs of doing business, as well as into the expectations of certain stakeholders.
This widening gap between global expectations and regulation regarding DEI-related matters and the concerns of some constituents in the U.S. over the role of DEI in corporate decision-making is likely to continue growing for the foreseeable future, putting companies between the proverbial rock and hard place.
What these developments make clear is that corporate DEI efforts are, and likely have been for some time, riskier than many companies may initially appreciate. And the risks associated with DEI initiatives are only positioned to grow and expand as companies look to thread the DEI needle and make a broader and potentially more divergent set of stakeholders happy, or at least less annoyed, with their DEI-related commitments and initiatives. In this article, we discuss the top four legal risks that companies often fail to address in their DEI efforts.
[…]
(From https://www.reuters.com/legal/legalindustry/diversity-matters-four-scarylegal-risks-hiding-your-dei-program-2023-06-15/)
(Adapted from https://www.theguardian.com/environment/2023/sep/03/its-
dangerous-work-new-generation-of-indigenous-activists-battle-to-save-the-amazon)
(Adapted from https://www.theguardian.com/environment/2023/sep/03/its-
dangerous-work-new-generation-of-indigenous-activists-battle-to-save-the-amazon)
(Adapted from https://www.theguardian.com/environment/2023/sep/03/its-
dangerous-work-new-generation-of-indigenous-activists-battle-to-save-the-amazon)
(Adapted from https://www.theguardian.com/environment/2023/sep/03/its-
dangerous-work-new-generation-of-indigenous-activists-battle-to-save-the-amazon)
( ) Indigenous reporters have been currently keen on providing their eye-witness accounts.
( ) The patrollers put themselves in jeopardy when they undertake their fact-finding missions.
( ) The activist journalist mentioned is incognizant of modern surveillance technology.
The statements are, respectively
Read the text to answer question
The Use of Emerging Technologies in Teaching English as an Applied Language
Today’s digital age and its emerging technologies, with the latest achievements of artificial intelligence and big data processing, have unprecedently affected education processes and pedagogy, including the strategies and approaches related to foreign language (FL) teaching and learning. Present day graduates belong to Generation Z, who are known for being digitally literate, technologically savvy, and having grown up with digital tools. In addition, they are going to be soon followed by Generation Alpha, whose members are characterized as permanently connected and who are able to make their own decisions based on the use of technologies and also being able to manage their digital identities or visuals. Thus, the present day foreign language education should be technology-based since technology has become an integral part of the life of the current generation, and also, today’s language learning environment which is no longer solely constricted to the traditional or formal school learning environment. In this respect, foreign language teachers face a serious challenge in integrating different kinds of technologies into their teaching realities as they have to satisfy the learning needs of the two generations. However, in order to keep up with their digitally informed students and engage them in learning a foreign language, they must use recent technologies, such as chatbots or virtual reality. They also have to evaluate which of these technologies could generate some impact in their classes, analyze their potential, and utilize all of the benefits they bring. Moreover, they ought to assess the potential risks these technologies could pose.
In addition, the teachers must always consider the added value of the selected tools for the students’ learning and their learning outcomes, which is not an easy task as the research into the practical utilizations of digital technologies with clear pedagogical outcomes is, surprisingly, scarce. It must not be forgotten that FL teachers should also promote not only the students’ knowledge acquisition in various learning contexts, but they should also enhance the skills that appear to be crucial for the 21st century, such as critical thinking, creativity, communication, or collaboration skills. Moreover, to be able to motivate their students to use these technologies in FL learning, they themselves must have a positive attitude to their use in FL classrooms, as well as possess the relevant subject, technological, and pedagogical knowledge.
(Available on: https://www.mdpi.com/2079-8954/11/1/42. Adapted.)
I. Teachers should have total command of technological tools in order to turn their employment reliable and worthwhile.
II. Current teaching of English as an applied language is always technology grounded.
III. Generations Z and Alpha partake the feature of being digitally literate.
Read the text to answer question
We real cool
(Gwendolyn Brooks.)
The Pool Players.
Seven at the Golden Shovel.
We real cool. We
Left school. We
Lurk late. We
Strike straight. We
Sing sin. We
Thin gin. We
Jazz June. We
Die soon.
(Available on: https://www.poetryfoundation.org/poetrymagazine/poems/28112/we-real-cool.)
“We real cool” was issued in 1960, but it reveals very contemporary aspects though, EXCEPT:
Group 1: coal – festival – meal – litoral – petal – seal – vial – goal
Group 2: arrival – denial – refusal – burial – betrayal – approval – survival – referral
The teacher’s word choice had a didactic intention as he/she planned to ask questions aiming at guiding students’ observations and insights. Choose the item displaying the criterion that justifiesthe teacher’s word choice in both groups.