Civil Service Exam Questions

19,558 questions were found


Q2322012 Computer Networks
Marcos, an analyst at TCE SP, was informed that the intern Mário could not send a file to his supervisor, Jair. After running some tests, Marcos identified that:

• Machines connected to the same switch already communicate with one another;

• Mário's machine is on VLAN 20, on switch Alfa, port 8;

• Jair's machine connects through port 10 of switch Bravo, on VLAN 20;

• Both switches have machines on VLAN 20 and VLAN 200; and

• The interconnection between the switches uses port 24 on each device.

To solve the problem, Marcos must configure the port:
Alternatives
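The scenario above is the classic case of an inter-switch link that must carry traffic for more than one VLAN, which IEEE 802.1Q handles by tagging each frame with its VLAN ID. As a hypothetical illustration (the function name and defaults are mine, not part of the question), the 4-byte tag a trunk port inserts can be built with the standard library alone:

```python
import struct

def dot1q_tag(vlan_id: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag a trunk port inserts into a frame."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 1-4094")
    # TCI = 3-bit priority, 1-bit drop-eligible flag, 12-bit VLAN ID.
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)  # TPID 0x8100 + TCI

print(dot1q_tag(20).hex())  # 81000014
```

For VLAN 20 the tag is `81000014`: the TPID `0x8100` followed by a TCI whose low 12 bits carry the VLAN ID, which is how frames from both VLANs can share the link between ports 24.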
Q2322011 Computer Networks
Luiza wants to start a VoIP call with João, so her SIP message was:

INVITE sip:[email protected] SIP/2.0
Via: SIP/2.0/UDP 167.180.112.24
From: sip:[email protected]
To: sip:[email protected]
CALL-ID: [email protected]
Content-Type: application/sdp
Content-Length: 885

c=IN IP4 167.180.112.24
m=audio 38060 RTP/AVP 0

Regarding the message above, it is correct to state that:
Alternatives
Q2322010 Computer Networks
SNMP is used to convey information and commands between a managing entity and an agent that executes them, on the entity's behalf, inside a managed network device. In some cases, overhead arises from the multiple messages that must be sent.

To avoid this overhead, the PDU that should be used is:
Alternatives
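The overhead in question comes from walking a MIB one object per request/response exchange. A back-of-envelope sketch (function names are mine, and real traffic also depends on message sizes) compares that pattern against a bulk-capable PDU that fetches up to `max_repetitions` objects per round trip:

```python
import math

def messages_get_next(n_objects: int) -> int:
    # One request + one response for every object retrieved.
    return 2 * n_objects

def messages_get_bulk(n_objects: int, max_repetitions: int) -> int:
    # One request + one response per round of up to max_repetitions objects.
    return 2 * math.ceil(n_objects / max_repetitions)

# Retrieving 100 objects: 200 messages one-at-a-time vs. 4 in bulk rounds of 50.
print(messages_get_next(100), messages_get_bulk(100, 50))  # 200 4
```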
Q2322009 Computer Networks
Analyst João identified that machine ALFA, whose IPv6 address is ::ffff:c0a8:acf5, is generating anomalous traffic, and he needs to discover ALFA's MAC address in order to take the necessary measures.

Considering the use of IPv6, João must use:
Alternatives
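A side observation that helps reason about the question: the `::ffff:0:0/96` prefix marks ::ffff:c0a8:acf5 as an IPv4-mapped IPv6 address, and the embedded IPv4 address can be recovered directly with Python's standard library:

```python
import ipaddress

addr = ipaddress.IPv6Address("::ffff:c0a8:acf5")
# The low 32 bits (0xc0a8acf5) are an embedded IPv4 address.
print(addr.ipv4_mapped)  # 192.168.172.245
```

So the node João is hunting for is reachable as 192.168.172.245 in IPv4 terms, which matters because address-to-MAC resolution differs between IPv4 (ARP) and native IPv6 (Neighbor Discovery).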
Q2322008 Computer Networks
A certain router interconnects several links, each running a different link-layer protocol with a different MTU. On receiving an IPv4 datagram from one link, the router found that the outgoing link's MTU is smaller than the length of the IP datagram.

In this scenario, the router must:
Alternatives
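When the outgoing MTU is smaller than the datagram (and the DF flag is not set), an IPv4 router fragments it: every fragment except the last must carry a payload that is a multiple of 8 bytes, because the fragment-offset field counts in 8-byte units. A small sketch (function name mine) with the textbook example of a 4,000-byte datagram, i.e. a 3,980-byte payload, crossing a 1,500-byte MTU link:

```python
def fragment_sizes(payload_len: int, mtu: int, header_len: int = 20):
    """Split an IPv4 payload into (offset-in-8-byte-units, payload-bytes) fragments."""
    # Largest payload per fragment that fits the MTU and is a multiple of 8.
    max_payload = ((mtu - header_len) // 8) * 8
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_payload, payload_len - offset)
        frags.append((offset // 8, size))
        offset += size
    return frags

print(fragment_sizes(3980, 1500))  # [(0, 1480), (185, 1480), (370, 1020)]
```

Each fragment gets its own 20-byte IP header; the receiving host, not the routers along the way, reassembles them.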
Q2322007 Information Security
Renata provides network services to company B. She identified that many users were still using Telnet for remote login and began deploying SSH, which secures remote login and also offers a more general client/server capability that can be used for network functions such as file transfer and e-mail. A correct SSH implementation relies on several protocols.

To multiplex the encrypted tunnel into several existing logical channels, Renata must use the protocol:
Alternatives
Q2322006 Information Security
The “Ping of Death” is a denial-of-service (DoS) attack in which the attacker tries to disrupt a target machine by sending a packet larger than the maximum allowed size, causing the target to freeze or crash. To prevent this attack, the Security Department of the Tribunal de Contas do Estado de São Paulo (TCE SP) decided to block PING requests, preventing other computers from using that command to gather information about the server. The block will be applied with iptables.

Given that the server's interface is eth0, the command that applies the blocking rule in iptables is:
Alternatives
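Without asserting which alternative the exam expects, one common shape for such a rule drops inbound ICMP echo-requests on the given interface. The sketch below only assembles the command string; actually running it would require root privileges and the real `iptables` binary:

```python
# Hypothetical sketch: assemble one common form of the blocking rule.
# -A INPUT            append to the inbound chain
# -i eth0             match traffic arriving on interface eth0
# -p icmp             match the ICMP protocol
# --icmp-type echo-request   match only PING requests (ICMP type 8)
# -j DROP             silently discard matching packets
rule = ["iptables", "-A", "INPUT", "-i", "eth0",
        "-p", "icmp", "--icmp-type", "echo-request", "-j", "DROP"]
print(" ".join(rule))
```

Dropping only `echo-request` (rather than all ICMP) keeps other ICMP messages, such as the "fragmentation needed" errors used by path-MTU discovery, working.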
Q2322005 Information Security
Wallace, a public servant at the Tribunal de Contas do Estado de São Paulo (TCE SP), is implementing a policy-based access control system. On Linux, the module that provides this is SELinux. Wallace is assigning security labels to every process and system resource. After applying the security directives, Wallace wrote a report describing the implemented policies and their benefits for operating system security.

Among those benefits is that:
Alternatives
Q2322004 Information Security
Martin is a cryptanalyst hired by company Z to identify the attack carried out against its network. After a while, Martin had the following information:

- the encryption algorithm
- the ciphertext
- a plaintext message chosen by the cryptanalyst, together with the corresponding ciphertext produced by the secret key
- a ciphertext chosen by the cryptanalyst, together with the corresponding decrypted plaintext produced by the secret key

Based on the information the cryptanalyst obtained, the attack can be identified as:
Alternatives
Q2322003 Basic Computing
During a routine check, the Information Security Division of the Tribunal de Contas do Estado de São Paulo (TCE SP) identified an ongoing virus attack that was deleting files on the server. Because of the backup method in use, it was noticed that the same files were also being deleted from the backup. TCE SP therefore decided to replace the backup type it was using with cloud backup. The type of backup TCE SP ran before implementing cloud backup is:
Alternatives
Q2322002 English
Is It Live, or Is It Deepfake?


It’s been four decades since society was in awe of the quality of recordings available from a cassette recorder tape. Today we have something new to be in awe of: deepfakes. Deepfakes include hyperrealistic videos that use artificial intelligence (AI) to create fake digital content that looks and sounds real. The word is a portmanteau of “deep learning” and “fake.” Deepfakes are everywhere: from TV news to advertising, from national election campaigns to wars between states, and from cybercriminals’ phishing campaigns to insurance claims that fraudsters file. And deepfakes come in all shapes and sizes — videos, pictures, audio, text, and any other digital material that can be manipulated with AI. One estimate suggests that deepfake content online is growing at the rate of 400% annually.


There appear to be legitimate uses of deepfakes, such as in the medical industry to improve the diagnostic accuracy of AI algorithms in identifying periodontal disease or to help medical professionals create artificial patients (from real patient data) to safely test new diagnoses and treatments or help physicians make medical decisions. Deepfakes are also used to entertain, as seen recently on America’s Got Talent, and there may be future uses where deepfake could help teachers address the personal needs and preferences of specific students.


Unfortunately, there is also the obvious downside, where the most visible examples represent malicious and illegitimate uses. Examples already exist.


Deepfakes also involve voice phishing, also known as vishing, which has been among the most common techniques for cybercriminals. This technique involves using cloned voices over the phone to exploit the victim’s professional or personal relationships by impersonating trusted individuals. In March 2019, cybercriminals were able to use a deepfake to fool the CEO of a U.K.-based energy firm into making a US$234,000 wire transfer. The British CEO who was victimized thought that the person speaking on the phone was the chief executive of the firm’s German parent company. The deepfake caller asked him to transfer the funds to a Hungarian supplier within an hour, emphasizing that the matter was extremely urgent. The fraudsters used AI-based software to successfully imitate the German executive’s voice. […]


What can be done to combat deepfakes? Could we create deepfake detectors? Or create laws or a code of conduct that probably would be ignored?


There are tools that can analyze the blood flow in a subject’s face and then compare it to human blood flow activity to detect a fake. Also, the European Union is working on addressing manipulative behaviors.


There are downsides to both categories of solutions, but clearly something needs to be done to build trust in this emerging and disruptive technology. The problem isn’t going away. It is only increasing.


Authors


Nir Kshetri, Bryan School of Business and Economics, University of North Carolina at Greensboro, Greensboro, NC, USA


Joanna F. DeFranco, Software Engineering, The Pennsylvania State University, Malvern, PA, USA

Jeffrey Voas, NIST, USA


Adapted from: https://www.computer.org/csdl/magazine/co/2023/07/10154234/1O1wTOn6ynC
The aim of the last paragraph is to:
Alternatives
Q2322001 English
The word “downsides” in “There are downsides to both categories” (7th paragraph) means: 
Alternatives
Q2322000 English
In the question “Or create laws or a code of conduct that probably would be ignored?” (5th paragraph), the authors imply that these laws and code of conduct may be:
Alternatives
Q2321999 English
When the authors refer to the use of deepfake in education (2nd paragraph), they state that ultimately teachers may find it:
Alternatives
Q2321998 English
In the 1st sentence (“It’s been four decades since society was in awe of the quality of recordings available from a cassette recorder tape”), the reaction of society is described as being one of: 
Alternatives
Q2321997 English
Based on the text, mark the statements below as true (T) or false (F).

( ) Deepfakes are circumscribed to certain areas of action.
( ) The sole aim of deepfake technology is to spread misinformation.
( ) Evidence shows that even high-ranking executives can be easy targets to vishing techniques.

The statements are, respectively:
Alternatives
Q2321989 Portuguese
The Greek root logia can mean either “language” or “study”; the option in which this root has the meaning of “language” is:
Alternatives
Q2321988 Portuguese
Punctuation is an important element in the syntactic organization of sentences; the option below in which every punctuation mark is used appropriately is:
Alternatives
Q2321986 Portuguese
The modifier “lenta e delicadamente” was added to the sentence “O candidato a chef saboreava o manjar que lhe fora servido”.

The option in which this addition was made inappropriately is:
Alternatives
Q2321981 Portuguese
The sentence in which the use or absence of the preposition is appropriate and correct is:
Alternatives
Answers
1381: B
1382: E
1383: C
1384: B
1385: D
1386: B
1387: C
1388: C
1389: E
1390: D
1391: A
1392: C
1393: B
1394: A
1395: D
1396: E
1397: D
1398: B
1399: D
1400: D