Thursday, May 08, 2008

"BLUEFIRE", Power 575 Hydro- Cluster, Delivered To The National Center for Atmospheric Research (NCAR)

16 Dual-Core POWER6 CPUs Flanked By 64 DIMMs
BOULDER—The National Center for Atmospheric Research (NCAR) has taken delivery of a new IBM supercomputer that will advance research into severe weather and the future of Earth's climate. The supercomputer, known as a Power 575 Hydro-Cluster, is the first in a highly energy-efficient class of machines to be shipped anywhere in the world.

Scientists at NCAR and across the country will use the new system to accelerate research into climate change, including future patterns of precipitation and drought around the world, changes to agriculture and growing seasons, and the complex influence of global warming on hurricanes. Researchers also will use it to improve weather forecasting models so society can better anticipate where and when dangerous storms may strike.

Named "bluefire," the new supercomputer has a peak speed of more than 76 teraflops (76 trillion floating-point operations per second). When fully operational, it is expected to rank among the 25 most powerful supercomputers in the world and will more than triple NCAR's sustained computing capacity.

"Bluefire is on the leading edge of high-performance computing technology," says Tom Bettge, director of operations and services for NCAR's Computational and Information Systems Laboratory. "Increasingly fast machines are vital to research into such areas as climate change and the formation of hurricanes and other severe storms. Scientists will be able to conduct breakthrough calculations, study vital problems at much higher resolution and complexity, and get results more quickly than before."

Researchers will rely on bluefire to generate the climate simulations necessary for the next report on global warming by the Intergovernmental Panel on Climate Change (IPCC), which conducts detailed assessments under the auspices of the United Nations. The IPCC was a recipient of the 2007 Nobel Peace Prize.

"NCAR has a well-deserved reputation for excellence in deploying supercomputing resources to address really difficult challenges," says Dave Turek, vice president of deep computing at IBM. "Bluefire will substantially expand the organization's ability to investigate climate change, severe weather events, and other important subjects."

Bluefire by the numbers

Bluefire is the second phase of a system called the Integrated Computing Environment for Scientific Simulation (ICESS) at NCAR. After undergoing acceptance testing, it will begin full-scale operations in August. Bluefire, which replaces three supercomputers with an aggregate peak speed of 20 teraflops, will provide supercomputing support for researchers at NCAR and other organizations through 2011.

An IBM Power 575 supercomputer, bluefire houses the new POWER6 microprocessor, which has a clock speed of 4.7 gigahertz. The system consists of 4,064 processors, 12 terabytes of memory, and 150 terabytes of FAStT DS4800 disk storage.
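The quoted peak speed follows directly from these numbers. As a back-of-envelope check (the four-flops-per-cycle figure is an assumption about the POWER6's two fused multiply-add units, not something stated in the article):

    # Sanity check of bluefire's quoted 76-teraflop peak speed.
    # Assumption: each POWER6 core completes up to 4 floating-point
    # operations per cycle (two fused multiply-add units, 2 flops each).
    processors = 4064            # processor count quoted in the article
    clock_hz = 4.7e9             # POWER6 clock speed, 4.7 GHz
    flops_per_cycle = 4          # assumed, not stated in the article

    peak_teraflops = processors * clock_hz * flops_per_cycle / 1e12
    print(f"Peak speed: {peak_teraflops:.1f} teraflops")
    # -> Peak speed: 76.4 teraflops, matching "more than 76 teraflops"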

Bluefire relies on a unique, water-based cooling system that is 33 percent more energy efficient than traditional air-cooled systems. Heat is removed from the electronics by water-chilled copper plates mounted in direct contact with each POWER6 microprocessor chip. As a result of this water-cooled system and POWER6 efficiencies, bluefire is three times more energy efficient per rack than its predecessor.

"We're especially pleased that bluefire provides dramatically increased performance with much greater energy efficiency," Bettge says.

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under primary sponsorship by the National Science Foundation (NSF). Opinions, findings, conclusions, or recommendations expressed in this document are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, NASA, or other funding agencies.

Related sites on the World Wide Web

Bluefire Home Page (includes fact sheets and additional images)


Climate Computer To Consume Less Than 4 Megawatts Of Power And Achieve A Peak Performance Of 200 Petaflops


BERKELEY, Calif. — Three researchers from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have proposed an innovative way to improve global climate change predictions by using a supercomputer with low-power embedded microprocessors, an approach that would overcome limitations posed by today’s conventional supercomputers.

In a paper published in the May issue of the International Journal of High Performance Computing Applications, Michael Wehner and Lenny Oliker of Berkeley Lab’s Computational Research Division, and John Shalf of the National Energy Research Scientific Computing Center (NERSC) lay out the benefits of a new class of supercomputers for modeling climate conditions and understanding climate change. Using the embedded microprocessor technology found in cell phones, iPods, toaster ovens, and most other modern-day electronic conveniences, they propose designing a cost-effective machine for running these models and improving climate predictions.

In April, Berkeley Lab signed a collaboration agreement with Tensilica®, Inc. to explore such new design concepts for energy-efficient high-performance scientific computer systems. The joint effort is focused on novel processor and system architectures using large numbers of small processor cores, connected together with optimized links and tuned to the requirements of highly parallel applications such as climate modeling.

Understanding how human activity is changing global climate is one of the great scientific challenges of our time. Scientists have tackled this issue by developing climate models that use historical data on the factors that shape the Earth’s climate, such as rainfall, hurricanes, sea surface temperatures, and carbon dioxide in the atmosphere. One of the greatest challenges in creating these models, however, is to develop accurate cloud simulations.

Although cloud systems have been included in climate models in the past, they lack the details that could improve the accuracy of climate predictions. Wehner, Oliker and Shalf set out to establish a practical estimate for building a supercomputer capable of creating climate models at 1-kilometer (km) scale. A cloud system model at the 1-km scale would provide rich details that are not available from existing models.

To develop a 1-km cloud model, scientists would need a supercomputer that is 1,000 times more powerful than what is available today, the researchers say. But building a supercomputer powerful enough to tackle this problem is a huge challenge.
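The factor of 1,000 is consistent with a simple resolution-scaling argument. As a rough illustration (the specific grid spacings below are assumptions for the sketch, not figures taken from the paper): if the finest global models of the day ran near 10-kilometer grid spacing, and cost grows with the cube of the refinement factor, then reaching 1 km costs about a thousand times more:

    # Rough resolution-scaling sketch (illustrative assumptions,
    # not numbers from the paper itself).
    current_grid_km = 10.0   # assumed spacing of today's finest global models
    target_grid_km = 1.0     # 1-km cloud-resolving target
    refinement = current_grid_km / target_grid_km   # 10x finer grid

    # Cost ~ refinement**3: refinement**2 more horizontal grid points,
    # times refinement more (shorter) time steps for numerical stability.
    cost_factor = refinement ** 3
    print(f"Estimated compute increase: {cost_factor:,.0f}x")
    # -> Estimated compute increase: 1,000x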

Historically, supercomputer makers have built larger and more powerful systems by increasing the number of conventional microprocessors — usually the same kinds of microprocessors used to build personal computers. Although this approach is feasible for building computers large enough to solve many scientific problems, using it to build a system capable of modeling clouds at a 1-km scale would cost about $1 billion. The system would also require 200 megawatts of electricity to operate, enough to power a small city of 100,000 residents.

In their paper, “Towards Ultra-High Resolution Models of Climate and Weather,” the researchers present a radical alternative that would cost less to build and require less electricity to operate. They conclude that a supercomputer using about 20 million embedded microprocessors would deliver the needed performance and cost $75 million to construct. This “climate computer” would consume less than 4 megawatts of power and achieve a peak performance of 200 petaflops.
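Dividing the article's system totals down to a single processor, and setting them against the conventional design described above, shows how the numbers fit together (simple arithmetic using only figures quoted in the text):

    # Arithmetic check on the proposed "climate computer" figures.
    processors = 20e6      # about 20 million embedded microprocessors
    peak_flops = 200e15    # 200 petaflops peak
    power_watts = 4e6      # under 4 megawatts
    cost_dollars = 75e6    # $75 million to construct

    per_core_gflops = peak_flops / processors / 1e9   # -> 10 gigaflops
    per_core_mw = power_watts / processors * 1e3      # -> 200 milliwatts
    print(f"{per_core_gflops:.0f} gigaflops, {per_core_mw:.0f} mW per processor")

    # Versus the conventional design quoted earlier: $1 billion, 200 MW.
    print(f"~{1e9 / cost_dollars:.0f}x cheaper, "
          f"{200e6 / power_watts:.0f}x less power")

The per-processor results line up with the Tensilica description below: a few hundred milliwatts and billions of floating-point operations per second each.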

“Without such a paradigm shift, power will ultimately limit the scale and performance of future supercomputing systems, which will therefore fail to meet the demanding computational needs of important scientific challenges like climate modeling,” Shalf said.

The researchers arrive at their findings by extrapolating performance data from the Community Atmospheric Model (CAM). CAM, developed at the National Center for Atmospheric Research in Boulder, Colorado, is a series of global atmosphere models commonly used by weather and climate researchers.

The “climate computer” is not merely a concept. Wehner, Oliker and Shalf, along with researchers from UC Berkeley, are working with scientists from Colorado State University to build a prototype system in order to run a new global atmospheric model developed at Colorado State.

“What we have demonstrated is that in the exascale computing regime, it makes more sense to target machine design for specific applications,” Wehner said. “It will be impractical from a cost and power perspective to build general-purpose machines like today’s supercomputers.”

Under the agreement with Tensilica, the team will use Tensilica’s Xtensa LX extensible processor cores as the basic building blocks in a massively parallel system design. Each processor will dissipate only a few hundred milliwatts of power, yet deliver billions of floating-point operations per second and be programmable using standard programming languages and tools. This equates to an order-of-magnitude improvement in floating-point operations per watt compared to conventional desktop and server processor chips. The small size and low power of these processors allow tight integration at the chip, board, and rack level, and scaling to millions of processors within a power budget of a few megawatts.
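To put the efficiency claim in concrete terms, one can set the per-processor figures implied above (about 10 gigaflops at about 200 milliwatts) against a conventional server chip of the era. The server baseline here is an assumption for illustration, roughly 50 peak gigaflops at around 100 watts, not a figure from the article:

    # Flops-per-watt comparison; the server-chip baseline is assumed.
    embedded_gf_per_watt = 10.0 / 0.2    # ~10 GF at ~200 mW, derived above
    server_gf_per_watt = 50.0 / 100.0    # assumed 2008-era server chip

    print(f"Embedded: {embedded_gf_per_watt:.0f} GF/W, "
          f"server: {server_gf_per_watt:.1f} GF/W, "
          f"ratio: ~{embedded_gf_per_watt / server_gf_per_watt:.0f}x")

Under these assumed numbers the gap is at least the order-of-magnitude improvement the article cites.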

Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California. Visit our Website at www.lbl.gov.