Commented Exam Questions for External Control Analyst - Information Technology

Q2524549 Computer Networks
Normally, LANs do not operate in isolation. They are connected to one another or to the Internet. To interconnect LANs, or LAN segments, we use connecting devices, which can operate at different layers of the TCP/IP architecture.
Match each connecting device to the layer at which it operates.
1. Physical 2. Data Link 3. Network

( ) Hub ( ) Router ( ) Switch ( ) Bridge
Select the option that indicates the correct matching, in the order presented.
Alternatives
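As a commented-question aid, a minimal Python sketch of the classic textbook mapping assumed here (hubs at the physical layer, bridges and layer-2 switches at the data link layer, routers at the network layer); the dictionary below is illustrative and not part of the original question:

# Illustrative reference table (not part of the original question):
# layer 1 = physical, 2 = data link, 3 = network.
DEVICE_LAYER = {
    "hub": 1,      # repeats bits; no addressing at all
    "bridge": 2,   # filters and forwards frames by MAC address
    "switch": 2,   # essentially a multiport bridge
    "router": 3,   # forwards packets by IP address
}

for device, layer in DEVICE_LAYER.items():
    print(f"{device:>6} -> layer {layer}")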
Q2524548 Computer Networks

Regarding the IGMP protocol, analyze the items below:


I. It helps a multicast router create and keep up to date a list of loyal members for each router interface.

II. In IGMP, a membership report message is sent twice, one after the other.

III. An IP packet that carries an IGMP packet has a value equal to 2 in its TTL field.


Which of the items above is/are correct?

Alternatives
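For context on item I, a minimal Python sketch (the group address and port are arbitrary, illustrative values) of how an application joins a multicast group; it is this join request that makes the host's IP stack send IGMP membership reports toward the local multicast router:

import socket
import struct

GROUP = "239.1.2.3"   # hypothetical multicast group address
PORT = 5007           # hypothetical UDP port

# Create a UDP socket and bind it to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Asking the kernel to join the group is what triggers the IGMP
# membership report(s) on the local network.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print("Joined", GROUP, "- waiting for one datagram...")
data, addr = sock.recvfrom(1024)
print("Received", len(data), "bytes from", addr)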
Q2524547 Operating Systems
Regarding the creation of links between files in Linux operating environments, mark V for a true statement and F for a false one.
( ) It is not possible to create hard links to a directory or to a special file, only to regular files.
( ) Symbolic links can only be used if both files are in the same file system.
( ) Even if the original file is deleted, its contents will still be available as long as at least one hard link exists.
The statements are, respectively:
Alternatives
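A minimal, self-contained Python sketch of the behavior behind the third statement (file names and contents are illustrative): after the original directory entry is removed, the hard link still reaches the data, while the symbolic link becomes dangling:

import os
import tempfile

# Work inside a throwaway directory so the demo is self-contained.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "original.txt")
hard = os.path.join(workdir, "hard_link.txt")
soft = os.path.join(workdir, "sym_link.txt")

with open(original, "w") as f:
    f.write("hello from the original file\n")

os.link(original, hard)      # hard link: another directory entry for the same inode
os.symlink(original, soft)   # symbolic link: a separate file that stores a path

os.remove(original)          # remove the original directory entry

# The hard link still reaches the data (the inode's link count is still > 0)...
with open(hard) as f:
    print(f.read(), end="")

# ...while the symbolic link now points at a missing path.
try:
    with open(soft) as f:
        f.read()
except FileNotFoundError:
    print("symbolic link is dangling after the original was removed")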
Q2524546 Software Engineering
Evaluate the following statements in the context of deployment practices and methodologies in mobile development:
I. Applying Continuous Integration (CI) to responsive mobile applications is a practice that aims to merge and test code only at the end of each sprint;
II. Continuous Delivery (CD) is part of the "Mobile First" philosophy and allows every code change to be automatically made available to end users, without the need for approval from operations teams;
III. DevSecOps is the integration of security into the software development process from the start, without compromising delivery speed.
Which of the statements above is/are correct?
Alternatives
Q2524544 Databases

Consider a relational database at a court in which no user, other than the DBA, initially holds any privileges over its objects.

The DBA granted the right to create tables to user USR_0010, who in turn created the tables DADOS_PROCESSO and DADOS_PARTE.

Right after creating them, user USR_0010 executed the following SQL DCL (Data Control Language) commands in the database management system, concerning users USR_0011 and USR_0100:

GRANT SELECT, UPDATE ON DADO_PARTE TO USR_0011;
GRANT SELECT ON DADOS_PARTE TO USR_0100;
GRANT SELECT, INSERT, DELETE, UPDATE ON DADOS_PROCESSO TO USR_0011;
GRANT SELECT, UPDATE ON DADOS_PROCESSO TO USR_0100 WITH GRANT OPTION;


Next, user USR_0100 executed the following command:


GRANT UPDATE ON DADOS_PROCESSO TO USR_00101;


Finally, the DBA executed the command:


REVOKE UPDATE ON DADOS_PROCESSO FROM USR_0100;


Considering this scenario, which situation is valid for the permissions on the tables DADOS_PROCESSO and DADOS_PARTE?

Alternatives
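The scenario hinges on what happens to the privilege that USR_0100 had passed on (thanks to WITH GRANT OPTION) once the DBA revokes it. As a sketch, assuming the commands are replayed on a PostgreSQL instance (the connection parameters are hypothetical and unquoted identifiers are folded to lower case), the resulting permission state can be inspected through information_schema.table_privileges:

import psycopg2  # assumes the psycopg2 driver and a reachable PostgreSQL server

# Hypothetical DBA connection, purely for illustration.
conn = psycopg2.connect("dbname=tribunal user=dba host=localhost")
cur = conn.cursor()

# table_privileges lists grantor, grantee, privilege and whether the
# grantee may pass the privilege on (is_grantable).
cur.execute("""
    SELECT grantor, grantee, table_name, privilege_type, is_grantable
    FROM information_schema.table_privileges
    WHERE table_name IN ('dados_processo', 'dados_parte')
    ORDER BY table_name, grantee, privilege_type;
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()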
Q2524543 Databases
The EXPLAIN command in PostgreSQL, version 16, plays a crucial role in analyzing and optimizing the performance of SQL queries. Understanding how this command works and how to read its output is essential for tuning system performance.
In this context, the EXPLAIN command
Alternatives
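A minimal sketch of how the command is typically used from application code (the connection string, table name and filter are illustrative assumptions); plain EXPLAIN shows the planner's estimated plan without running the query, while EXPLAIN ANALYZE also executes it and reports actual times and row counts:

import psycopg2  # assumes the psycopg2 driver and a reachable PostgreSQL 16 server

conn = psycopg2.connect("dbname=tribunal user=auditor host=localhost")  # hypothetical
cur = conn.cursor()

# EXPLAIN ANALYZE runs the query and returns the plan, one text line per row.
cur.execute("EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM dados_processo WHERE id = 42;")
for (line,) in cur.fetchall():
    print(line)

cur.close()
conn.close()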
Q2524542 Programming
In structured programming, repetition loops are fundamental for executing a given sequence of instructions several times, making it easier to automate repetitive tasks. Two of the most common loops are the "for" and "while" statements, each with its own specific characteristics.
Select the option that correctly describes the differences between the "for" and "while" loops in structured programming.
Alternatives
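A minimal Python sketch of the usual contrast (the values are arbitrary): the for loop is counter-controlled and iterates over a known sequence, while the while loop is condition-controlled and leaves initialization and update to the programmer:

# Counter-controlled repetition: the for loop iterates over a known sequence.
total = 0
for i in range(1, 6):
    total += i
print("for loop sum:", total)

# Condition-controlled repetition: the while loop repeats as long as the
# test is true; initialization and update are explicit.
total = 0
i = 1
while i <= 5:
    total += i
    i += 1
print("while loop sum:", total)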
Q2524540 IT Governance
Select the option that correctly describes a key characteristic of COBIT 2019, ITIL v4, and PMBOK.
Alternatives
Q2524539 IT Governance
Select the option that describes the main objective of segregation-of-duties controls in an organization's processes for defining, implementing, and managing IT policies.
Alternatives
Q2524538 IT Governance
Consider the practices for assigning IT roles in the management of responsibilities from the perspectives of COBIT, ITIL, and PMBOK.
When applying these practices, it is correct that one should
Alternatives
Q2524537 IT Governance
When an organization seeks to implement an IT Strategic Plan (PETI), it is essential to consider the strategic alignment between the IT area and the business.
Mark the action that ensures this alignment and supports IT governance.
Alternatives
Q2524536 IT Governance
An IT company is considering outsourcing some of its IT functions to improve efficiency and reduce costs.
Select the option that correctly describes a critical step that must be addressed before implementing IT outsourcing.
Alternatives
Q2524535 IT Governance
Considering project management and IT management methodologies, COBIT, ITIL, and PMBOK have distinct approaches to risk management.
Select the option that correctly describes the fundamental difference between these three frameworks.
Alternatives
Q2517170 English
READ THE TEXT AND ANSWER QUESTION:


Artificial intelligence and the future of humanity

Thinking and learning about artificial intelligence are the mental equivalent of a fission chain reaction. The questions get really big, really quickly.

The most familiar concerns revolve around short-term impacts: the opportunities for economic productivity, health care, manufacturing, education, solving global challenges such as climate change and, on the flip side, the risks of mass unemployment, disinformation, killer robots, and concentrations of economic and strategic power.

Each of these is critical, but they’re only the most immediate considerations. The deeper issue is our capacity to live meaningful, fulfilling lives in a world in which we no longer have intelligence supremacy.

As long as humanity has existed, we’ve had an effective monopoly on intelligence. We have been, as far as we know, the smartest entities in the universe.

At its most noble, this extraordinary gift of our evolution drives us to explore, discover and expand. Over the past roughly 50,000 years—accelerating 10,000 years ago and then even more steeply from around 300 years ago—we’ve built a vast intellectual empire made up of science, philosophy, theology, engineering, storytelling, art, technology and culture.

If our civilisations—and in varying ways our individual lives—have meaning, it is found in this constant exploration, discovery and intellectual expansion.

Intelligence is the raw material for it all. But what happens when we’re no longer the smartest beings in the universe? We haven’t yet achieved artificial general intelligence (AGI)—the term for an AI that could do anything we can do. But there’s no barrier in principle to doing so, and no reason it wouldn’t quickly outstrip us by orders of magnitude.

Even if we solve the economic equality questions through something like a universal basic income and replace notions of ‘paid work’ with ‘meaningful activity’, how are we going to spend our lives in ways that we find meaningful, given that we’ve evolved to strive and thrive and compete?


Adapted from https://www.aspistrategist.org.au/artificialintelligence-and-the-future-of-humanity/
The text ends in a note of
Alternatives
Q2517169 English
READ THE TEXT AND ANSWER QUESTION:
(The text is the same one reproduced in Q2517170 above.)
The word “roughly” in “Over the past roughly 50,000 years” (5th paragraph) indicates a(n)
Alternatives
Q2517168 English
READ THE TEXT AND ANSWER QUESTION:
(The text is the same one reproduced in Q2517170 above.)
According to the text, the word that “this extraordinary gift” (5th paragraph) refers to is our
Alternatives
Q2517167 English
READ THE TEXT AND ANSWER QUESTION:
(The text is the same one reproduced in Q2517170 above.)
The opposite of “the smartest” (4th paragraph) is
Alternatives
Q2517166 English
READ THE TEXT AND ANSWER QUESTION:
(The text is the same one reproduced in Q2517170 above.)
In the second paragraph, “on the flip side” means
Alternatives
Q2517165 English
READ THE TEXT AND ANSWER QUESTION:
(The text is the same one reproduced in Q2517170 above.)
The expression “such as” in “such as climate change” (2nd paragraph) can be replaced without significant change in meaning by
Alternatives
Q2517164 English
READ THE TEXT AND ANSWER QUESTION:
(The text is the same one reproduced in Q2517170 above.)
The first sentence presents a
Alternatives
Answers
21: A
22: B
23: D
24: C
25: C
26: B
27: A
28: A
29: B
30: B
31: E
32: C
33: A
34: D
35: B
36: E
37: C
38: E
39: A
40: D