Exam Questions (Questões de Concurso)
For the position of inspection assistant (auxiliar de fiscalização)
634 questions were found
The types of servers A and B, and the name of the process by which server B is updated, are, respectively:
The DHCP server is configured in non-authoritative mode; therefore, the DHCP server:
The types of proxy servers A and B are, respectively:
The IP address allocation for the TCE SP network is:
Currently, the battery technology for portable computers that offers a higher energy density than the other technologies, besides being lighter (a highly desirable trait in portable computers), is:
In this context, using the color code for analog audio connectors, Andrew arranged the devices and their connectors, respectively, in the colors pink, blue, green, black, orange, and gray, which designate, in this order:
Moacir solved the problem using the:
Regarding coolers, it is correct to state that:
Regarding the Windows 11 Start menu, Jony states that the menu is composed of:
1. it is a separate chip on the motherboard;
2. Windows 11 requires version 2.0; and
3. it is used by services such as BitLocker Drive Encryption and Windows Hello to create and securely store cryptographic keys, and to confirm that the device's operating system and firmware are what they should be and have not been tampered with.
The technology that has the characteristics listed above is the:
According to the characteristics of SSDs, JJ is correct when he states that SSDs:
The period of time elapsed between two successive memory access operations, whether writes or reads, is the:
Ferreira was referring to the:
In this context, the standard originally created by hardware manufacturers to define power-saving modes for the computer is the:
In this context, according to the fan installation rules for tower cases, the fans must be correctly installed on the front panel, the rear panel, and the side panel, respectively, in the modes:
In this context, the modified parameters are:
In this context, the set of integrated circuits on the motherboard that supports the processor is the:
Daniel must NOT swap, on the AT power supply, the connectors:
READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency
The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.
The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence language models and give answers to users' questions, through “prompt injection” attacks that would make them behave in an unintended manner.
The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
According to the NCSC, prompt injection attacks can also have real-world consequences if systems are not designed with security in mind. The vulnerability of chatbots and the ease with which prompts can be manipulated could lead to attacks, scams and data theft. Large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.
The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”
The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leave systems vulnerable can be mitigated by designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithms.
Adapted from: The Guardian, Wednesday 30 August 2023, page 4.