Questões de Concurso
Para auxiliar de fiscalização
Foram encontradas 634 questões
Resolva questões gratuitamente!
Junte-se a mais de 4 milhões de concurseiros!
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.
The statements are, respectively:
A quantidade mínima de números inteiros diferentes nessa lista é:
A altura do cone mede:
Considere o conjunto A = {2, 3, 4, 6, 7, 8}.
O número de subconjuntos de A com 3 elementos, sendo pelo
menos um elemento ímpar, é:
A vez seguinte em que esses médicos estiveram juntos no plantão foi dia:
O valor de N é:
A soma de todos esses números é:
Paula tem 11 moedas de R$ 0,50 a mais do que de R$ 0,10 e o valor total de suas 100 moedas é R$ 63,70.
A quantidade de moedas de R$ 1,00 que Paula tem a mais do que moedas de R$ 0,10 é:
As faces coladas possuem mesmo número e a soma das seis faces laterais visíveis é 18.
O número das faces coladas é:
Luiza ganha 50% a mais do que Márcia, e Joana ganha 20% a mais do que Márcia.
Assim, Luiza ganha p% a mais do que Joana.
O valor de p é:
Alberto e Roberto possuem o mesmo IMC. Alberto tem 70 kg de peso e 1,70 m de altura e Roberto tem 1,90 m de altura.
O peso de Roberto é de, aproximadamente:
A frase em que isso foi feito de forma adequada, é:
Nesse caso, a linguagem é veículo de mentiras porque:
“Mal colocou o papel na máquina, o menino começou a empurrar uma cadeira pela sala, fazendo um barulho infernal.
̶- Para com esse barulho, meu filho – falou, sem se voltar.
Com três anos já sabia reagir como homem ao impacto das grandes injustiças paternas: não estava fazendo barulho, estava só empurrando uma cadeira.
̶- Pois então para de empurrar a cadeira.
̶- Eu vou embora – foi a resposta.”
Sobre os componentes desse segmento, é correto afirmar que: