Questões de Concurso
Comentadas para tce-sp
Foram encontradas 961 questões
Resolva questões gratuitamente!
Junte-se a mais de 4 milhões de concurseiros!
Nesse contexto, os parâmetros modificados são:
Nesse contexto, o conjunto de circuitos integrados de apoio ao processador localizados na placa-mãe é o:
Daniel NÃO pode inverter, na fonte de alimentação AT, os conectores:
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.
( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.
The statements are, respectively:
As faces coladas possuem mesmo número e a soma das seis faces laterais visíveis é 18.
O número das faces coladas é:
Nesse caso, a linguagem é veículo de mentiras porque:
I. O alinhamento da TI com o Negócio não requer a adoção de um Modelo Organizacional. II. Se a organização não tem uma estratégia de negócio bem definida, não há saída para a TI. III. Não basta elaborar um plano estratégico alinhado. Para implementá-lo é preciso acompanhá-lo. Com respeito ao PETI, é correto o que consta em
Pode ocorrer a existência de um backdoor não associada a uma invasão, na situação de: I. instalação por meio de um cavalo de tróia. II. inclusão como consequência da instalação e má configuração de um programa de administração remota, por exemplo, backdoor incluído em um produto de software de um fabricante. , III. A ocorrência do backdoor é restrita ao sistema operacional Windows. Está correto o que consta em
“ou todas as operações da transação são refletidas corretamente no banco de dados ou nenhuma o será” é a
I. Utiliza o protocolo CIFS sobre rede Ethernet/TCP-IP. II. Bloco de dados é o tipo de informação que trafega na rede entre servidores e storage. III. Independe do sistema operacional do servidor; é responsabilidade do próprio storage formatar, particionar e distribuir informações nos seus discos. IV. A segurança dos dados na rede é alta, implementada numa rede não compartilhada. As características de arquiteturas de armazenamento de dados apresentadas nos itens I a IV, correspondem correta e respectivamente, a