Public Examination Questions: TCE-SP 2023, Auxiliar Técnico da Fiscalização - TI

80 questions were found

Q2320207 Mathematics
Consider a cubic die with faces numbered 1 to 6 such that, when rolled, every face is equally likely. When this die is rolled 3 consecutive times, the probability that the sum of the numbers rolled equals 7 is N / 216.

The value of N is:
Alternatives
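Since the three rolls are independent and equally likely, N is simply the number of ordered triples summing to 7 out of the 216 possible outcomes; a brute-force Python check (illustrative, not part of the original question):

```python
from itertools import product

# Enumerate all 6^3 = 216 equally likely ordered outcomes of three rolls
# and count those whose sum is 7.
favorable = sum(1 for roll in product(range(1, 7), repeat=3) if sum(roll) == 7)

print(favorable)  # the value of N
```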
Q2320208 Mathematics
A hospital has several on-call physicians, among them A, B, and C. Physician A is on call every 6 days, physician B every 5 days, and physician C every 4 days. These three physicians were on call together on July 20.

The next time these physicians were on call together was on:
Alternatives
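The three schedules realign every lcm(6, 5, 4) days; a quick check with Python's standard library (Python 3.9+ for multi-argument `lcm`; the year 2023 is assumed from the exam date):

```python
from datetime import date, timedelta
from math import lcm

# The three on-call cycles coincide every lcm(6, 5, 4) days.
interval = lcm(6, 5, 4)
next_shared = date(2023, 7, 20) + timedelta(days=interval)

print(interval, next_shared)  # → 60 2023-09-18
```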
Q2320209 Logical Reasoning

Consider the set A = {2, 3, 4, 6, 7, 8}.

The number of 3-element subsets of A containing at least one odd element is:

Alternatives
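Counting by complement, the answer is C(6,3) minus the number of all-even subsets, C(4,3); a brute-force sketch:

```python
from itertools import combinations

A = {2, 3, 4, 6, 7, 8}

# Subsets of size 3 with at least one odd element, counted directly.
with_odd = [s for s in combinations(sorted(A), 3) if any(x % 2 for x in s)]

print(len(with_odd))  # equals C(6,3) - C(4,3)
```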
Q2320210 Mathematics
A cylinder and a cone have the same volume. The cylinder is 60 cm tall, and the radius of the cone's base is twice the radius of the cylinder's base.

The height of the cone measures:
Alternatives
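Equating pi r^2 h_cyl with (1/3) pi (2r)^2 h_cone shows the result is independent of r; a numeric check with an arbitrary radius:

```python
from math import pi

r = 1.0                                   # arbitrary cylinder radius
cylinder_volume = pi * r**2 * 60          # V = pi r^2 h
cone_height = 3 * cylinder_volume / (pi * (2 * r) ** 2)  # from V = (1/3) pi R^2 h

print(cone_height)  # → 45.0
```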
Q2320211 Mathematics
A list of 2023 positive integers has a unique mode, which occurs exactly 23 times.

The minimum number of distinct integers in this list is:
Alternatives
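Besides the mode, each value may appear at most 22 times (otherwise the mode would not be unique), so the remaining 2000 entries need at least ceil(2000/22) distinct values; in Python:

```python
from math import ceil

total, mode_count = 2023, 23
remaining = total - mode_count       # 2000 entries besides the mode
max_per_other = mode_count - 1       # each other value at most 22 times

distinct = 1 + ceil(remaining / max_per_other)
print(distinct)  # → 92
```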
Q2320212 English

READ THE TEXT AND ANSWER THE QUESTION:
Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leave systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithms.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

Based on the text, mark the statements below as true (T) or false (F).

( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.

The statements are, respectively:
Alternatives
Q2320213 English

(This question refers to the same text, “Chatbots could be used to steal data, says cybersecurity agency”, reproduced in full above.)

The newspaper headline expresses the agency’s:
Alternatives
Q2320214 English

(This question refers to the same text, “Chatbots could be used to steal data, says cybersecurity agency”, reproduced in full above.)

According to the text, attacks, scams and data theft are actions that should be:
Alternatives
Q2320215 English

(This question refers to the same text, “Chatbots could be used to steal data, says cybersecurity agency”, reproduced in full above.)

In “Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard” (4th paragraph), “such as” introduces a(n):
Alternatives
Q2320216 English

(This question refers to the same text, “Chatbots could be used to steal data, says cybersecurity agency”, reproduced in full above.)

“If” in “if they find a combination of words” (5th paragraph) signals a:
Alternatives
Q2320217 Computer Architecture
Daniel, a hardware technician, was assigned to replace an AT-type power supply in a computer used for training new technicians. If Daniel is careless and reverses the position of the connectors when plugging them into the motherboard, he may burn out the equipment.

On the AT power supply, Daniel must NOT reverse the connectors:
Alternatives
Q2320218 Computer Architecture
The motherboard is the largest printed circuit board inside the computer and serves as the base for connecting all of the computer's devices. It carries many integrated circuits and various electronic components.

In this context, the set of integrated circuits on the motherboard that supports the processor is the:
Alternatives
Q2320219 Computer Architecture
The traditional solution for storing large volumes of data is to follow the hierarchy of computer memory technologies. As we move down the hierarchy, certain parameters change.

In this context, the parameters that change are:
Alternatives
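The classic trade-off behind this question can be shown with order-of-magnitude figures: moving down the hierarchy, access time and capacity grow while cost per bit falls. The values below are illustrative only and vary widely across systems:

```python
# Illustrative orders of magnitude for the memory hierarchy; real values
# vary widely by system. Moving down: slower, larger, cheaper per bit.
levels = [
    # (level,               access time,   typical capacity)
    ("registers",           "< 1 ns",      "hundreds of bytes"),
    ("cache (SRAM)",        "1-10 ns",     "KB to MB"),
    ("main memory (DRAM)",  "10-100 ns",   "GB"),
    ("SSD / magnetic disk", "0.1-10 ms",   "hundreds of GB to TB"),
    ("tape / optical",      "seconds",     "TB and beyond"),
]

for name, access, capacity in levels:
    print(f"{name:22s} {access:12s} {capacity}")
```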
Q2320220 Computer Architecture
Case fans are small fans that improve airflow inside the computer, bringing cool air in and removing hot air. Excess heat shortens component lifespan and can cause components to malfunction. When a fan is installed blowing air from outside the case inward, it is said to operate in intake mode. When a fan is installed pulling air from inside the case outward, it is said to operate in exhaust mode.

In this context, following the fan installation rules for tower cases, the fans on the front panel, rear panel, and side panel must be installed, respectively, in the modes of:
Alternatives
Q2320221 Computer Architecture
In an ideal scenario, we want the processor to be as fast as possible while running cool and consuming almost nothing. Obviously, raising processor performance almost always means generating more heat and drawing more power. In recent years, manufacturers have also begun to worry about the processor's heat dissipation and power consumption.

In this context, the standard originally created by hardware manufacturers to define power-saving modes for the computer is:
Alternatives
Q2320222 Computer Architecture
Ferreira, an experienced IT technician, will demonstrate his hardware knowledge by giving a talk on RAID (redundant array of independent disks). RAID is a way of building a storage subsystem out of several individual disks in order to gain safety, through data redundancy, and performance. Ferreira emphasized that there is a relatively new RAID level, supported by only some controllers, that uses twice the parity bits, guaranteeing data integrity even if up to 2 disks fail at the same time.

Ferreira was referring to:
Alternatives
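The dual-parity level Ferreira describes extends ordinary striped parity, in which a single XOR parity block per stripe lets any one lost block be rebuilt, with a second, independently computed (Reed-Solomon) parity block, which is what allows two simultaneous disk failures to be survived. The single-parity building block can be sketched in Python (illustrative byte strings only, not any real controller's layout):

```python
from functools import reduce

# Three data blocks of one stripe; parity is the byte-wise XOR of all of them.
stripe = [b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripe))

# Simulate losing stripe[1]: XOR-ing the surviving blocks with the parity
# reconstructs the missing block exactly.
survivors = [stripe[0], stripe[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

print(rebuilt == stripe[1])  # → True
```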
Q2320223 Computer Architecture
Two operations can be performed on a memory: write (recording) and read. For information to be stored in a memory (a write operation), an available location must be defined and identified precisely and uniquely (by a number, for example). In this context, memories have certain parameters.

The period of time elapsed between two successive memory access operations, whether writes or reads, is the:
Alternatives
Q2320224 Computer Architecture
JJ, a highly experienced IT technician, stated that an SSD (Solid State Drive) performs the same function as a hard disk but, instead of using mechanical parts inside, uses electronic components (NAND flash or 3D XPoint/Optane memories), which is why it is much faster than hard disks.

Given the characteristics of SSDs, JJ is correct when he states that SSDs:
Alternatives
Q2320225 Computer Architecture
Observe the following characteristics of a computing technology:

1. it is a separate chip on the motherboard;
2. Windows 11 requires version 2.0; and
3. it is used by services such as BitLocker Drive Encryption and Windows Hello to create and securely store cryptographic keys, and to confirm that the device's operating system and firmware are what they should be and have not been tampered with.

The technology with the characteristics listed is:
Alternatives
Q2320226 Operating Systems
Jony, an IT technician, was instructed to deploy a customized Start menu layout on his Windows 11 devices. Customizing the Start layout is common when you have similar devices used by many users or want to pin specific apps.

Regarding the Windows 11 Start menu, Jony states that the menu is composed of:
Alternatives
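As background to the scenario above: on Windows 11, a pinned Start layout can be deployed through a LayoutModification.json file, the provisioning mechanism Microsoft documents for this purpose. A minimal sketch (the app IDs below are illustrative examples, not part of the original question):

```json
{
  "pinnedList": [
    { "desktopAppId": "MSEdge" },
    { "packagedAppId": "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App" },
    { "desktopAppLink": "%ALLUSERSPROFILE%\\Microsoft\\Windows\\Start Menu\\Programs\\File Explorer.lnk" }
  ]
}
```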
Answers
21: B
22: A
23: A
24: C
25: C
26: D
27: C
28: C
29: A
30: B
31: A
32: E
33: D
34: A
35: B
36: E
37: E
38: A
39: D
40: D