Exam Questions
On text interpretation | reading comprehension in English
10,134 questions found
There's lots of talk about network virtualization benefits, but are virtual network appliances all they're cracked up to be? Only in some scenarios.
Network virtualization benefits can be plentiful, but only in certain scenarios. Learn where virtual network appliances can work -- and where they can't.
If virtualization enables servers to be spun up and down on demand for cost efficiency and agility, wouldn't it make sense to implement virtual network components too? After all, virtual servers need to communicate inbound and outbound and still be firewall-protected and load balanced. That would seem to be best addressed by virtual network appliances that can be spun up on demand, right? Only in some scenarios.
Many networking vendors have already begun to minimize development cost by using Intel-based platforms and commodity hardware. Examples of this range from the Cisco ASA firewall to F5 load balancers and Vyatta routers. The obvious next step for some of these vendors has been to offer their products in virtual appliance packaging. F5 took a small step forward with the Local Traffic Manager - Virtual Edition (LTM VE), while Vyatta claims to offer a full range of virtual appliance solutions. VMware was somewhat late to the game, but it also offers virtualized firewalls (vShield Zones and vShield App) and routers/load balancers (vShield Edge).
Virtual network appliances: What's the catch?
The problem is that unlike servers, networking appliances commonly perform I/O-intensive tasks, moving large amounts of data between network interfaces with minimal additional processing, relying heavily on dedicated hardware. All high-speed routing and packet forwarding, as well as encryption (both IPsec and SSL) and load balancing, rely on dedicated silicon. When a networking appliance is repackaged into a virtual machine format, the dedicated hardware is gone, and all these tasks must now be performed by the general-purpose CPU, sometimes resulting in an extreme reduction in performance.
Implementing routers, switches or firewalls in a virtual appliance would just burn the CPU cycles that could be better used elsewhere -- unless, of course, you’ve over-provisioned your servers and have plenty of idle CPU cycles, in which case something has gone seriously wrong with your planning.
To make matters worse, the hypervisor software used in server virtualization solutions also virtualizes the network interfaces. That means that every I/O access path to virtualized hardware from the networking appliance results in a context switch to higher privilege software (the hypervisor), which uses numerous CPU cycles to decode what needs to be done and emulate the desired action. Also, data passed between virtual machines must be copied between their address spaces, adding further latency to the process.
There is some help: the VMware hypervisor offers the DVFilter API, which allows a loadable kernel module to inspect and modify network traffic either within the hypervisor (vNetwork Data Path Agent) or in combination with a virtual machine (vNetwork Control Path Agent). The loadable kernel module significantly reduces the VM context-switching overhead.
Where virtual network appliances can work
There are some use cases in which virtual network appliances make perfect sense. For instance, you could virtualize an appliance that performs lots of CPU-intensive processing with no reliance on dedicated hardware. Web application firewalls (WAFs) and complex load balancers are perfect examples (no wonder they’re commonly implemented as loadable modules in Apache Web servers or as Squid reverse proxy servers).
Also, if you’re planning to roll out multi-tenant cloud, the flexibility gained by treating networking appliances as click-to-deploy Lego bricks might more than justify the subpar performance. This is especially so if you charge your users by their actual VM/CPU usage, in which case you don’t really care how much CPU they’re using.
Virtualized networking also makes sense when firewall and routing functions are implemented as part of the virtual switch in each hypervisor. This could result in optimal traffic flow between virtual machines (regardless of whether they belong to the same IP subnet or not) and solve the problem of traffic trombones. Unfortunately, it seems that Cisco is still the only vendor that extends the VMware hypervisor switch using the Virtual Ethernet Module (VEM) functionality. While numerous security solutions already deploy the VMsafe APIs, the networking appliances I’ve seen so far (including the vShield Edge from VMware) rely on virtual machines to forward traffic between virtual (or physical) LANs.
Obviously the networking vendors have a very long way to go before reaching the true potential of virtualized networking.
Available at: http://searchnetworking.techtarget.com/tip/Virtual-network-appliances-Benefits-and-drawbacks
SearchNetworking - TechTarget - Text by Ivan Pepelnjak (March 2011)
The required hardware and the expected application performance are, respectively:
Barack Obama just became the first US president to write a line of computer code (assuming George W. Bush never secretly indulged in PHP). At the White House yesterday, Obama sat down with students who were learning the fundamentals of JavaScript, the popular programming language used to create most web pages.
The line he wrote was:
moveForward(100);
“So I make the F in higher case?” Obama asked, correctly observing that JavaScript is case sensitive. “Semicolon?” (That semicolon is optional, but Obama apparently has a knack for recognizing JavaScript best practices.)
Obama was playing with a Code.org tutorial based on the popular Disney movie Frozen. In his line of code, the President called a function, moveForward, pre-defined by Code.org for the exercise.
Calling a function in JavaScript is simple: write its name exactly as it has been defined, followed by parentheses that contain its “arguments.” In this case, a single argument tells the program how many pixels to move a Frozen character forward. Because it’s measured in pixels, the argument has to be a number. If Obama had written moveForward(“three steps”), the program would have failed, offering only a cryptic error message and exposing the president to the near-perpetual state of frustration most software developers live in.
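To make the mechanics concrete, here is a minimal sketch in plain JavaScript. The body of moveForward below is a hypothetical stand-in of our own -- Code.org's real function animates a character on screen -- but the calling convention, a single numeric argument in parentheses, is the one described above.

    // Hypothetical stand-in for Code.org's exercise function; the real
    // one moves a Frozen character, this one only reports the move.
    function moveForward(pixels) {
      if (typeof pixels !== "number") {
        throw new TypeError("moveForward expects a number of pixels");
      }
      console.log("Moved forward " + pixels + " pixels");
    }

    moveForward(100);            // the President's line -- works
    moveForward("three steps");  // throws: the argument must be a number

Run as-is, the second call stops the script with a TypeError, the kind of terse failure the paragraph above alludes to.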
“This is Elsa?” Obama asked, referring to the movie’s main character.
Obama was promoting Computer Science Education Week and Code.org’s Hour of Code campaign, which encourages kids to try programming for at least one hour. “It turns out the concepts are not that complicated,” Obama told the students at the White House, though his attempt to explain it suggested otherwise:
“The basic concept behind coding is that you take zeros and ones, you take two numbers, yes or no, and those can be translated into electrical messages that then run through the computer…. So all it’s doing is it’s saying yes or no over and over again, and the computer’s powerful enough that it can read a really long set of instructions really quickly.”
Something like that.
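What he was reaching for is binary representation: every value the machine handles bottoms out in a string of bits. A two-line JavaScript illustration of the idea (illustrative only, not part of the Code.org exercise):

    // 100 -- the argument in Obama's line -- as the machine stores it:
    console.log((100).toString(2));      // "1100100" (binary)
    console.log(parseInt("1100100", 2)); // 100 (and back to decimal)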
Available at: http://qz.com/308904/heres-the-first-line-of-code-ever-written-by-a-us-president/
Quartz (December 9, 2014) - Text by Zachary M. Seward
The Naval Nuclear Propulsion Program (NNPP) started in 1948. Since that time, the NNPP has provided safe and effective propulsion systems to power submarines, surface combatants, and aircraft carriers. Today, nuclear propulsion enables virtually undetectable US Navy submarines, including the sea-based leg of the strategic triad, and provides essentially inexhaustible propulsion power independent of forward logistical support to both our submarines and aircraft carriers. Over forty percent of the Navy's major combatant ships are nuclear-powered, and because of their demonstrated safety and reliability, these ships have access to seaports throughout the world.
The NNPP has consistently sought the best way to affordably meet Navy requirements by evaluating, developing, and delivering a variety of reactor types, fuel systems, and structural materials. The Program has investigated many different fuel systems and reactor design features, and has designed, built, and operated over thirty different reactor designs in over twenty plant types to employ the most promising of these developments in practical applications. Improvements in naval reactor design have allowed increased power and energy to keep pace with the operational requirements of the modern nuclear fleet, while maintaining a conservative design approach that ensures reliability and safety to the crew, the public, and the environment. As just one example of the progress that has been made, the earliest reactor core designs in the NAUTILUS required refueling after about two years, while modern reactor cores can last the life of a submarine, or over thirty years, without refueling. These improvements have been the result of prudent, conservative engineering, backed by analysis, testing, and prototyping.
The NNPP was also a pioneer in developing basic technologies and transferring technology to the civilian nuclear electric power industry. For example, the Program demonstrated the feasibility of commercial nuclear power generation in this country by designing, constructing, and operating the Shippingport Atomic Power Station in Pennsylvania and showing the feasibility of a thorium-based breeder reactor.
In: Report on Low Enriched Uranium for Naval Reactor Cores. Page 1. Report to Congress, January 2014. Office of Naval Reactors. US Dept. of Energy. DC 2058 http://fissilematerials.org/library/doe14.pdf
I. investigates more efficient fuels and reactors for the Navy.
II. is concerned about how to spend the financial resources received.
III. has also contributed with the civilian power industry.
The correct assertion(s) is(are)
NASA Researchers Studying Advanced Nuclear Rocket Technologies
January 9, 2013
By using an innovative test facility at NASA’s Marshall Space Flight Center in Huntsville, Ala., researchers are able to use non-nuclear materials to simulate nuclear thermal rocket fuels - ones capable of propelling bold new exploration missions to the Red Planet and beyond. The Nuclear Cryogenic Propulsion Stage team is tackling a three-year project to demonstrate the viability of nuclear propulsion system technologies. A nuclear rocket engine uses a nuclear reactor to heat hydrogen to very high temperatures, which expands through a nozzle to generate thrust. Nuclear rocket engines generate higher thrust and are more than twice as efficient as conventional chemical rocket engines.
The team recently used Marshall’s Nuclear Thermal Rocket Element Environmental Simulator, or NTREES, to perform realistic, non-nuclear testing of various materials for nuclear thermal rocket fuel elements. In an actual reactor, the fuel elements would contain uranium, but no radioactive materials are used during the NTREES tests. Among the fuel options are a graphite composite and a “cermet” composite - a blend of ceramics and metals. Both materials were investigated in previous NASA and U.S. Department of Energy research efforts.
Nuclear-powered rocket concepts are not new; the United States conducted studies and significant ground testing from 1955 to 1973 to determine the viability of nuclear propulsion systems, but ceased testing when plans for a crewed Mars mission were deferred.
The NTREES facility is designed to test fuel elements and materials in hot flowing hydrogen, reaching pressures up to 1,000 pounds per square inch and temperatures of nearly 5,000 degrees Fahrenheit - conditions that simulate space-based nuclear propulsion systems to provide baseline data critical to the research team.
“This is vital testing, helping us reduce risks and costs associated with advanced propulsion technologies and ensuring excellent performance and results as we progress toward further system development and testing,” said Mike Houts, project manager for nuclear systems at Marshall.
A first-generation nuclear cryogenic propulsion system could propel human explorers to Mars more efficiently than conventional spacecraft, reducing crews’ exposure to harmful space radiation and other effects of long-term space missions. It could also transport heavy cargo and science payloads. Further development and use of a first-generation nuclear system could also provide the foundation for developing extremely advanced propulsion technologies and systems in the future - ones that could take human crews even farther into the solar system.
Building on previous, successful research and using the NTREES facility, NASA can safely and thoroughly test simulated nuclear fuel elements of various sizes, providing important test data to support the design of a future Nuclear Cryogenic Propulsion Stage. A nuclear cryogenic upper stage - its liquid-hydrogen propellant chilled to super-cold temperatures for launch - would be designed to be safe during all mission phases and would not be started until the spacecraft had reached a safe orbit and was ready to begin its journey to a distant destination. Prior to startup in a safe orbit, the nuclear system would be cold, with no fission products generated from nuclear operations, and with radiation below significant levels.
“The information we gain using this test facility will permit engineers to design rugged, efficient fuel elements and nuclear propulsion systems,” said NASA researcher Bill Emrich, who manages the NTREES facility at Marshall. “It’s our hope that it will enable us to develop a reliable, cost-effective nuclear rocket engine in the not-too-distant future."
The Nuclear Cryogenic Propulsion Stage project is part of the Advanced Exploration Systems program, which is managed by NASA’s Human Exploration and Operations Mission Directorate and includes participation by the U.S. Department of Energy. The program, which focuses on crew safety and mission operations in deep space, seeks to pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future vehicle development and human missions beyond Earth orbit.
Marshall researchers are partnering on the project with NASA’s Glenn Research Center in Cleveland, Ohio; NASA’s Johnson Space Center in Houston; Idaho National Laboratory in Idaho Falls; Los Alamos National Laboratory in Los Alamos, N.M.; and Oak Ridge National Laboratory in Oak Ridge, Tenn.
The Marshall Center leads development of the Space Launch System for NASA. The Science & Technology Office at Marshall strives to apply advanced concepts and capabilities to the research, development and management of a broad spectrum of NASA programs, projects and activities that fall at the very intersection of science and exploration, where every discovery and achievement furthers scientific knowledge and understanding, and supports the agency’s ambitious mission to expand humanity’s reach across the solar system. The NTREES test facility is just one of numerous cutting-edge space propulsion and science research facilities housed in the state-of-the-art Propulsion Research & Development Laboratory at Marshall, contributing to development of the Space Launch System and a variety of other NASA programs and missions.
Available at: http://www.nasa.gov
“Nuclear-powered rocket concepts are not new.”
Choose the alternative in which the extract is in the same verb tense as the one above.
I. Engines powered by expanded hydrogen work better than regular chemical engines.
II. A CERMET composite is made of ceramics, metal and graphite.
III. The Nuclear Cryogenic Propulsion Stage created the technology that took human crews to Mars.
According to the text, the correct assertion(s) is(are)
Read the text below and answer the questions that follow.
How Telecommuting Works
Telecommuting, which is growing in popularity, allows employees to avoid long commutes.
“Brring,” the alarm startles you out of a deep sleep. It’s 8 a.m. on Monday morning. Time to head to the office. You roll out of bed, brush your teeth and stumble your way to the kitchen to grab some coffee.
Moments later, you head to the office, still wearing your pajamas and fluffy slippers. Luckily for you, you don’t have to go far – you work at home. Telecommuting, or working at home, has grown in popularity over the last 20 years.
On an increasing basis, workers are saying “no” to long commutes and opting to work at home. In fact, the U.S. Census Bureau reports that the number of employees working from home grew by 23 percent from 1990 to 2000.
Telecommuting workers revel in making their own schedule – allowing them to schedule work around family and personal commitments. With the ready availability of technology tools, like the Internet and home computers, companies are more willing to let employees work from home.
(Adapted from: http://home.howstuffworks.com/telecommuting.htm. Accessed 18 January 2014)
The technology tools mentioned in the text refer to:
According to the text, workers: