Questões de Concurso Comentadas para tce-sp

Foram encontradas 961 questões

Resolva questões gratuitamente!

Junte-se a mais de 4 milhões de concurseiros!

Q2320219 Arquitetura de Computadores
A solução tradicional para armazenar grande volume de dados é observar a hierarquia das tecnologias de memória computacional. À medida que descemos na hierarquia, alguns parâmetros modificam-se.

Nesse contexto, os parâmetros modificados são:
Alternativas
Q2320218 Arquitetura de Computadores
A placa-mãe é a maior placa de circuito impresso dentro do computador e serve como base para a conexão de todos os dispositivos do computador. Na placa-mãe há muitos circuitos integrados e diversos componentes eletrônicos.

Nesse contexto, o conjunto de circuitos integrados de apoio ao processador localizados na placa-mãe é o:
Alternativas
Q2320217 Arquitetura de Computadores
Daniel, técnico em hardware, foi designado para trocar uma fonte de alimentação, do tipo AT, em um computador que servia para treinamento dos novos técnicos. Se Daniel for desatento e inverter a posição dos conectores de encaixe na placa-mãe, ele pode promover a queima do equipamento.

Daniel NÃO pode inverter, na fonte de alimentação AT, os conectores:
Alternativas
Q2320216 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

“If” in “if they find a combination of words” (5th paragraph) signals a:
Alternativas
Q2320215 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

In “Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard” (4th paragraph), “such as” introduces a(n):
Alternativas
Q2320214 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

According to the text, attacks, scams and data theft are actions that should be:
Alternativas
Q2320213 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

The newspaper headline expresses the agency’s:
Alternativas
Q2320212 Inglês

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

Based on the text, mark the statements below as true (T) or false (F).

( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.

The statements are, respectively:
Alternativas
Q2320204 Raciocínio Lógico
Em um dado comum, a soma dos números de duas faces opostas é sempre 7. A figura abaixo mostra dois dados sobre um plano colados por uma face.

Imagem associada para resolução da questão


As faces coladas possuem mesmo número e a soma das seis faces laterais visíveis é 18.
O número das faces coladas é:
Alternativas
Q2320199 Português
Uma afirmação contida em um livro bastante antigo dizia: “Se quiser expressar a verdade, cale, porque quem fala, mente!”

Nesse caso, a linguagem é veículo de mentiras porque:
Alternativas
Q2320198 Português
Em todas as frases abaixo ocorre a substituição do termo sublinhado por nova palavra com um prefixo; a frase em que essa troca ocorre de forma adequada, é:
Alternativas
Q2320195 Português
Em todas as opções abaixo há duas frases, que foram reescritas em uma só frase, subordinando-se a segunda à primeira; a frase em que isso foi feito de forma correta, é:
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1229147 Redes de Computadores
Um endereço IP localizado entre 240.0.0.0 e 247.255.255.255 é da classe
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1229118 Governança de TI
Considere:  
I. O alinhamento da TI com o Negócio não requer a adoção de um Modelo Organizacional.   II. Se a organização não tem uma estratégia de negócio bem definida, não há saída para a TI.   III. Não basta elaborar um plano estratégico alinhado. Para implementá-lo é preciso acompanhá-lo.    Com respeito ao PETI, é correto o que consta em
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1228981 Banco de Dados
O SGBD deve incluir software de controle de concorrência ao acesso dos dados, garantindo, em qualquer tipo de situação, a escrita/leitura de dados sem erros. Tal característica do SGBD é denominada
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1228739 Segurança da Informação
Considere:  
Pode ocorrer a existência de um backdoor não associada a uma invasão, na situação de:     I. instalação por meio de um cavalo de tróia.   II. inclusão como consequência da instalação e má configuração de um programa de administração remota, por exemplo, backdoor incluído em um produto de software de um fabricante. ,   III. A ocorrência do backdoor é restrita ao sistema operacional Windows.   Está correto o que consta em
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1228599 Banco de Dados
A propriedade das transações de um SGBD que garante:  
“ou todas as operações da transação são refletidas corretamente no banco de dados ou nenhuma o será” é a
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1223542 Redes de Computadores
Considere:  
I. Utiliza o protocolo CIFS sobre rede Ethernet/TCP-IP.   II. Bloco de dados é o tipo de informação que trafega na rede entre servidores e storage.   III. Independe do sistema operacional do servidor; é responsabilidade do próprio storage formatar, particionar e distribuir informações nos seus discos.   IV. A segurança dos dados na rede é alta, implementada numa rede não compartilhada.   As características de arquiteturas de armazenamento de dados apresentadas nos itens I a IV, correspondem correta e respectivamente, a
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1223514 Redes de Computadores
No ambiente SAN (Storage Area Network), o modelo do Fibre Channel define uma arquitetura de múltiplas camadas para o transporte dos dados pela rede, numeradas de FC-0 a FC-4. A camada que define como os blocos de dados enviados dos aplicativos de nível superior serão segmentados em uma sequência de quadros, a fim de serem repassados para a camada de transporte, incluindo ainda classes de serviços e mecanismo para controle de fluxo, é a camada
Alternativas
Ano: 2009 Banca: FCC Órgão: TCE-SP
Q1223487 Redes de Computadores
A arquitetura de armazenamento de dados NAS (Network Attached Storage) caracteriza-se pela
Alternativas
Respostas
101: D
102: E
103: A
104: B
105: A
106: C
107: C
108: D
109: E
110: C
111: C
112: A
113: E
114: C
115: A
116: A
117: A
118: A
119: C
120: E