Questões de concurso para Analista de Informação

Foram encontradas 246 questões


Q379799 Segurança da Informação
O sistema de detecção de intrusão (IDS) é um componente essencial em um ambiente cooperativo, por sua capacidade de detectar diversos ataques e intrusões. O IDS, que pode ser de vários tipos, deve ter sua localização definida com cuidado. O tipo HIDS (sistema de detecção de intrusão baseado em host) apresenta, como um de seus pontos fortes, a:
Alternativas
Q379798 Redes de Computadores
Firewall representa um dispositivo instalado entre a rede interna de uma organização e a internet para implementar segurança da rede. A figura que segue ilustra um firewall proxy.

[imagem-001.jpg — figura: firewall proxy posicionado entre a rede interna e a internet]
Quando o processo de cliente-usuário envia uma mensagem, o firewall proxy executa um processo de servidor na camada X, para receber a solicitação. O servidor abre o pacote e determina se a solicitação é legítima. Se for, o servidor atua como um processo de cliente e envia a mensagem para o verdadeiro servidor na empresa. Se não for legítima, essa mensagem é eliminada, e é enviada uma outra mensagem de erro para o usuário externo. Dessa maneira, as solicitações dos usuários são filtradas na camada X, tomando-se por base o conteúdo. A camada X é conhecida como de:
Alternativas
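A filtragem pelo conteúdo que o enunciado descreve pode ser esboçada em Python. O exemplo é hipotético e meramente ilustrativo: a política de filtragem e os nomes de função são suposições do editor, não o gabarito da questão.

```python
def filtrar_requisicao(raw: bytes) -> bool:
    """Inspeção no nível da aplicação: o proxy abre a mensagem e decide,
    pelo CONTEÚDO, se a solicitação é legítima (política hipotética)."""
    try:
        linha = raw.split(b"\r\n", 1)[0].decode()
        metodo, caminho, versao = linha.split()
    except (ValueError, UnicodeDecodeError):
        return False  # mensagem malformada: elimina e devolve erro ao usuário
    # Política de exemplo: só métodos de leitura, sem caminhos suspeitos.
    return metodo in {"GET", "HEAD"} and versao.startswith("HTTP/") and ".." not in caminho

print(filtrar_requisicao(b"GET /index.html HTTP/1.1\r\nHost: x\r\n\r\n"))  # True
print(filtrar_requisicao(b"DELETE /tudo HTTP/1.1\r\n\r\n"))                # False
```

Como no enunciado, a solicitação reprovada não é repassada ao servidor real: o proxy a descarta e responde com erro ao usuário externo.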
Q379797 Redes de Computadores
802.1X diz respeito ao controle de acesso a redes baseado em portas. Nesse contexto, EAP é uma sigla para Extensible Authentication Protocol, conforme a RFC 3748, e representa um protocolo que funciona como:
Alternativas
Q379796 Redes de Computadores
WPA2 é uma sigla que representa um padrão para redes wireless, criado como uma evolução natural para o WEP, empregando um sistema de criptografia com chave simétrica considerado complexo, mas seguro. A referência do WPA2 no IEEE e o sistema de criptografia empregado são, respectivamente:
Alternativas
Q379795 Redes de Computadores
Multi-Protocol Label Switching (MPLS) é uma tecnologia criada pelo IETF, que usa hardware de comutação de alta velocidade para transportar datagramas IP, visando padronizar uma tecnologia base que integre o paradigma de encaminhamento de rótulos com o roteamento de camada de rede. VPNs representam uma aplicação importante para a MPLS, tendo em vista segurança e custo, sendo os rótulos ou etiquetas utilizados com a principal finalidade:
Alternativas
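O encaminhamento por rótulos citado no enunciado pode ser esboçado de forma didática: cada LSR consulta apenas o rótulo de entrada, sem examinar o cabeçalho IP, e troca o rótulo antes de enviar ao próximo salto. Os rótulos e interfaces abaixo são hipotéticos.

```python
# Tabela de comutação de rótulos (LFIB) de um roteador, com valores hipotéticos:
# rótulo de entrada -> (interface de saída, rótulo de saída).
lfib = {
    100: ("if1", 200),  # chega com rótulo 100 -> sai pela if1 com rótulo 200
    200: ("if2", 300),
}

def comutar(rotulo_entrada):
    # Encaminhamento por consulta exata ao rótulo (label swap), sem
    # busca de maior prefixo no cabeçalho IP.
    interface_saida, rotulo_saida = lfib[rotulo_entrada]
    return interface_saida, rotulo_saida

print(comutar(100))  # ('if1', 200)
```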
Q379794 Redes de Computadores
O padrão TCP/IP para gerenciamento de rede é o Simple Network Management Protocol (SNMP). Na estrutura de gerenciamento, MIB constitui uma sigla que se refere a:
Alternativas
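A estrutura a que a sigla do enunciado se refere pode ser ilustrada, em miniatura, como um mapeamento de identificadores de objeto (OIDs) para valores. Os OIDs abaixo são os do grupo system do MIB-II; os valores associados são hipotéticos.

```python
# Visão em miniatura de uma base de informações de gerenciamento:
# objetos gerenciados são endereçados por OIDs (valores hipotéticos).
mib = {
    "1.3.6.1.2.1.1.1.0": "sysDescr: roteador de borda",
    "1.3.6.1.2.1.1.3.0": 123456,  # sysUpTime, em centésimos de segundo
}

def snmp_get(oid):
    # Um GET do SNMP devolve o valor do objeto pedido (None se inexistente
    # nesta visão simplificada).
    return mib.get(oid)

print(snmp_get("1.3.6.1.2.1.1.1.0"))  # sysDescr: roteador de borda
```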
Q379793 Redes de Computadores
Uma rede de computadores utiliza, dentre as máquinas conectadas, um microcomputador do tipo “diskless”. Essa máquina realiza o boot a partir do servidor de rede e, nesse processo, emprega um protocolo da arquitetura TCP/IP, fornecendo o endereço MAC para receber em resposta o endereço IP. Esse protocolo opera na camada internet e é conhecido pela sigla:
Alternativas
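O mecanismo de boot descrito pode ser ilustrado, de forma bastante simplificada, pela consulta que o servidor faz a uma tabela de endereços físicos: a estação diskless informa apenas o seu MAC e recebe o IP correspondente. As entradas abaixo são hipotéticas, apenas para fixar a ideia.

```python
# Tabela mantida no servidor: endereço físico (MAC) -> endereço IP.
# Entradas hipotéticas, apenas ilustrativas.
tabela = {"00:1a:2b:3c:4d:5e": "10.0.0.42"}

def responder(mac):
    # A estação envia o MAC; o servidor devolve o IP mapeado
    # (None: nenhum mapeamento conhecido).
    return tabela.get(mac)

print(responder("00:1a:2b:3c:4d:5e"))  # 10.0.0.42
```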
Q379792 Redes de Computadores
A rede de computadores do TCMRJ integra 7 sub-redes, configuradas por meio do esquema de máscara de tamanho fixo. A alternativa que apresenta a faixa total de endereços de host que inclui o IP 143.226.120.195 com uma máscara representativa de 12 bits de sub-rede é:
Alternativas
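A aritmética do enunciado pode ser conferida com o módulo ipaddress da biblioteca padrão. Suposição do editor: endereço classe B (prefixo padrão /16) mais 12 bits de sub-rede resulta em um prefixo /28 (máscara 255.255.255.240); a conta abaixo é uma conferência, não o gabarito oficial.

```python
import ipaddress

# /16 (classe B) + 12 bits de sub-rede => /28; blocos de 16 endereços.
rede = ipaddress.ip_network("143.226.120.195/28", strict=False)
hosts = list(rede.hosts())

print(rede.network_address)    # 143.226.120.192 (endereço de rede)
print(hosts[0], hosts[-1])     # 143.226.120.193 143.226.120.206 (faixa de hosts)
print(rede.broadcast_address)  # 143.226.120.207 (broadcast)
```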
Q379791 Inglês

Even though it makes lots of sense, implementing encryption in the enterprise has its drawbacks, ranging from performance degradation and a false sense of security to complexity and cost. These potential obstacles, in turn, make many businesses balk. They find themselves faced with a serious and complex dilemma. If encryption is used, costs increase, performance suffers, and the network is saddled with numerous complexities, making it very difficult to manage. If encryption is not used, costs are lower; however, the network is extremely vulnerable.

One advantage to encryption is that it separates the security of data from the security of the device where the data resides or the medium through which data is transmitted. When data itself is encrypted, it allows administrators to use unsecured means to store and transport data, since security is encompassed in the encryption. Other key advantages to implementing encryption include the elimination of the pain that comes with data breach disclosures, the provision of strong protection for intellectual property, and the fulfillment of myriad regulatory compliance requirements. Nevertheless, just a cursory look at the intricacies behind encryption algorithms and keys is all that is needed to rapidly understand that this is about as close to rocket science.

Take encryption keys. One of the main drawbacks of encryption is the fact that management of encryption keys must be an added administrative task for often overburdened IT staff. In fact, the security of data becomes the security of the encryption key. “Lose that key, and you effectively lose your data!”
An adequate title for the three paragraphs above, which would summarize their content, could be:
Alternativas
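A frase final do texto (“Lose that key, and you effectively lose your data”) pode ser ilustrada com uma cifra de fluxo didática e insegura: uma vez cifrado o dado, tudo passa a depender da chave. A construção abaixo é inventada apenas para o exemplo, não é um algoritmo real de produção.

```python
import hashlib

def cifra_xor(chave, dados):
    """Cifra de fluxo DIDÁTICA (insegura!): o fluxo de chave vem de
    hashes sucessivos da chave; cifrar e decifrar são a mesma operação."""
    saida, bloco_n = bytearray(), 0
    while len(saida) < len(dados):
        bloco = hashlib.sha256(chave + bloco_n.to_bytes(4, "big")).digest()
        trecho = dados[len(saida):len(saida) + 32]
        saida += bytes(a ^ b for a, b in zip(trecho, bloco))
        bloco_n += 1
    return bytes(saida)

segredo = b"conteudo do relatorio"
cifrado = cifra_xor(b"chave-1", segredo)
print(cifra_xor(b"chave-1", cifrado) == segredo)  # True: a chave certa recupera
print(cifra_xor(b"chave-2", cifrado) == segredo)  # False: perdeu a chave, perdeu o dado
```

Ou seja, como diz o texto, a segurança do dado cifrado se reduz à segurança (e à guarda) da chave.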
Q379790 Inglês

The implication of the phrase “Lose the key and you effectively lose your data!” is that the:
Alternativas
Q379789 Inglês

The last paragraph claims that encryption “is about as close to rocket science” because the process:
Alternativas
Q379788 Inglês

The expression ‘data breach disclosures’ as used in Paragraph 2 means a/an:
Alternativas
Q379787 Inglês

The dilemma referred to in Paragraph 1 involves at least a choice between:
Alternativas
Q379786 Inglês
Text 1: Software That Fixes Itself

A professor of computer science at the Massachusetts Institute of Technology (MIT) has claimed to have developed software that can find and fix certain types of software bugs within a matter of minutes. Normally when a potentially harmful vulnerability is discovered in a piece of software, it usually takes nearly a month on average for human engineers to come up with a fix and to push the fix out to affected systems. The professor, however, hopes that the new software, called Fixer, will speed this process up, making software significantly more resilient against failure or attack.

Fixer works without assistance from humans and without access to a program’s underlying source code. Instead, the system monitors the behavior of a binary. By observing a program’s normal behavior and assigning a set of rules, Fixer detects certain types of errors, particularly those caused when an attacker injects malicious input into a program. When something goes wrong, Fixer throws up the anomaly and identifies the rules that have been violated. It then comes up with several potential patches designed to push the software into following the violated rules. (The patches are applied directly to the binary, bypassing the source code.) Fixer analyzes these possibilities to decide which are most likely to work, then installs the top candidates and tests their effectiveness. If additional rules are violated, or if a patch causes the system to crash, Fixer rejects it and tries another.

Fixer is particularly effective when installed on a group of machines running the same software. In that case, what Fixer learns from errors on one machine is used to fix all the others. Because it doesn’t require access to source code, Fixer could be used to fix programs without requiring the cooperation of the company that made the software, or to repair programs that are no longer being maintained.

But Fixer’s approach could result in some hiccups for the user. For example, if a Web browser had a bug that made it unable to handle URLs past a certain length, Fixer’s patch might protect the system by clipping off the ends of URLs that were too long. By preventing the program from failing, it would also put a check on it working full throttle.

Paragraph 4 suggests that a possible hiccup inherent to Fixer’s approach is that its:
Alternativas
Q379785 Inglês

In Paragraph 2, the phrase “It then comes up with several potential patches...” can be understood as “The Fixer:
Alternativas
Q379784 Inglês

According to Paragraph 2, Fixer works:
Alternativas
Q379783 Inglês

The word ‘resilient’ in “making software significantly more resilient against failure or attack” (Paragraph 1) could best be replaced by :
Alternativas
Q379782 Inglês

In the first paragraph the professor claims that Fixer can:
Alternativas
Q379781 Português

Leia o texto abaixo e responda, a seguir, a questão proposta:


    Muita gente deu risada, nos últimos dias, com uma pesquisa italiana a respeito do melhor jeito de dar promoções no trabalho. Os três cientistas da Universidade de Catânia concluíram que promover funcionários com base no mérito não é a melhor estratégia. Em vez disso, a empresa que quisesse acumular a maior “quantidade” possível de competência deveria promover os funcionários na louca, aleatoriamente. Num dos cenários do estudo, também seria bom negócio promover sempre os piores funcionários.
    Talvez você esteja pensando “opa, meu trabalho já aplica isso aí…” Pois é, os italianos tocaram num tema interessante no tal estudo. Como eles chegaram a essa conclusão? Eles aplicaram uma piadinha de administração dos anos 60, o “Princípio de Peter”. Segundo esse princípio, o funcionário vai sendo promovido (ou seja, removido) enquanto for competente, até chegar a um nível em que é incompetente (e ali permanece, pelo menos por um tempo, antes de ser demitido). Em outras palavras, a empresa sempre tira as pessoas dos cargos em que elas são boas e as leva para outros, em que elas podem ser péssimas. O autor dessa sacada, o psicólogo canadense Laurence Peter, considerava que as habilidades exigidas numa empresa não se acumulavam e que as habilidades de um nível hierárquico não eram semelhantes às exigidas no nível abaixo. Ou seja, só porque você era o melhor vendedor da empresa, não quer dizer que será um bom coordenador de vendedores - o que faz muito sentido.
    Os pesquisadores italianos Alessandro Pluchino, Andrea Rapisarda e Cesare Garofalo testaram o Princípio de Peter num modelo matemático, para simular uma empresa com 180 funcionários e seis níveis hierárquicos. Eles experimentaram a lógica do senso comum, de que a pessoa leva com ela a maior parte da competência mostrada no cargo anterior, e a lógica do Princípio de Peter, de que mostrar competência no novo cargo não tem nada a ver com o cargo anterior. Para cada lógica, experimentaram três diferentes políticas de promoção dos funcionariozinhos virtuais: promover sempre os melhores, promover sempre os piores e promover aleatoriamente.
    Na média dos seis resultados, a promoção aleatória foi a melhor para acumular competência na empresa. Também foi possível, pela lógica de Peter, ter bom resultado promovendo sempre os piores (já que os melhores continuavam fazendo o que faziam bem) e intercalando promoções dos melhores e dos piores.
    Organizações e equipes de todos os tamanhos deveriam tentar fugir do Princípio de Peter. Até as empresas já têm algumas ideias novas: testar o funcionário com desafios do cargo que ele deve assumir, antes de promovê-lo; permitir que o funcionário que já mostrou competência possa ser testado em mais de uma função; sair da estrutura de pirâmide e tentar jeitos novos de se organizar, com menos hierarquia e mais flexibilidade; ter caminhos variados para o funcionário avançar na carreira.
    O trabalho recebeu em Harvard um prêmio Ig Nobel, dado a pesquisas excêntricas “que fazem pessoas rir antes de pensar”, como é definido pela entidade que o concede, a Improbable Research (“Pesquisa Improvável”).
    E você, acha que o mundo consegue escapar do Princípio de Peter?


(Giffon, Carlosi. A empresa em que os piores funcionários ganham as promoções. In: Época. 2.10.2010)

“O autor dessa sacada, o psicólogo canadense Laurence Peter, considerava que as habilidades exigidas numa empresa não se acumulavam e que as habilidades de um nível hierárquico não eram semelhantes às exigidas no nível abaixo.”

Reescreve-se essa frase do texto em cada alternativa abaixo. A nova redação é gramaticalmente inaceitável em:
Alternativas
Q379780 Português


Há equívoco quanto à conjugação verbal na seguinte alternativa:
Alternativas
Respostas
201: E
202: B
203: C
204: B
205: D
206: E
207: B
208: D
209: C
210: B
211: D
212: E
213: B
214: C
215: A
216: E
217: C
218: D
219: B
220: D