Supercomputers to Solve Giant Problems

Today's supercomputers are marvels of computational power, used to solve some of the world's biggest scientific problems.

Current models are tens of thousands of times faster than the average desktop PC. They achieve these speeds through parallel processing, in which many processors perform computations simultaneously.
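
To make the idea of parallel processing concrete, here is a minimal Python sketch (purely illustrative, not code from any actual supercomputer): one job is split into chunks, several worker processes compute their chunks at the same time, and the partial answers are combined at the end.

```python
# A toy example of parallel processing: one problem is split into chunks,
# and several worker processes compute their chunks at the same time.
# The workload (summing squares) is purely illustrative.

from multiprocessing import Pool

def sum_of_squares(chunk):
    """Each worker process handles one chunk of the overall problem."""
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = range(10_000_000)
    chunk_size = len(numbers) // 4

    # Split the problem into four pieces, one per worker process.
    chunks = [numbers[i:i + chunk_size] for i in range(0, len(numbers), chunk_size)]

    # The four workers run simultaneously on separate processor cores.
    with Pool(processes=4) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)

    print(sum(partial_sums))  # combine the partial answers into the final result
```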

Costing tens of millions of dollars, they fill enormous rooms that are cooled to prevent their thousands of microprocessor cores from overheating as they perform trillions, or even thousands of trillions, of calculations per second. These bodybuilders of the computer world are used for everything from forecasting the weather and uncovering the origins of the universe to testing nuclear weapons and modeling the patterns of protein folding that make life possible.

What sets supercomputers apart is the size and difficulty of the tasks they can tackle and solve, said Jack Wells, director of science at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory in Tennessee.

"Supercomputers can do supersize problems," Wells said.

"Supercomputers are often built from the same components as regular computers, but those components are integrated so they can work together," Wells continued.

The first supercomputers were developed in the 1960s, designed by electrical engineer Seymour Cray at Control Data Corporation (CDC). In 1964, the company released the CDC 6600, often considered the world's first supercomputer. Cray later formed his own company, which made the Cray-1 in 1976 and the Cray-2 in 1985.

Cray's early supercomputers had only a few processors, but by the 1990s, the United States and Japan were building machines with thousands of processors. The Intel Paragon took the lead in 1993; Fujitsu's Numerical Wind Tunnel became the fastest supercomputer in 1994 with 166 processors, followed by the Hitachi SR2201 in 1996 with more than 2,000 processors. As of June 2013, China's Tianhe-2 was the world's fastest supercomputer.

Supercomputer performance is measured in "FLOPS," short for floating-point operations per second. Today's machines can reach speeds measured in petaFLOPS, or quadrillions of FLOPS.

China's Tianhe-2 achieves 33.86 petaFLOPS, the Cray Titan reaches 17.59 petaFLOPS, and IBM's Sequoia ranks third at 17.17 petaFLOPS.
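
For a rough sense of scale, the back-of-the-envelope calculation below compares Tianhe-2's 33.86 petaFLOPS with an ordinary desktop; the 100-gigaFLOPS desktop figure is an assumption for illustration, not a benchmark result.

```python
# Back-of-the-envelope comparison of supercomputer and desktop speeds.
# The desktop figure is an assumed value for illustration, not a benchmark.

TIANHE_2_FLOPS = 33.86e15   # 33.86 petaFLOPS (quadrillions of operations per second)
DESKTOP_FLOPS = 100e9       # assume roughly 100 gigaFLOPS for a typical desktop CPU

# Number of operations in a job that keeps Tianhe-2 busy for one hour...
job_operations = TIANHE_2_FLOPS * 3600

# ...and how long the desktop would need to churn through the same job.
desktop_seconds = job_operations / DESKTOP_FLOPS
print(f"Desktop time: {desktop_seconds / (3600 * 24 * 365):.1f} years")
```

Under that assumption, an hour of Tianhe-2's work would keep the desktop busy for nearly 39 years.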

Solving Big Problems

Researchers have harnessed the number-crunching power of supercomputers to work on complex problems in fields ranging from astrophysics to neuroscience.

These computing behemoths have been used to answer questions about the creation of the universe during the Big Bang. Researchers at the Texas Advanced Computing Center (TACC) simulated how the first galaxies formed, and scientists at NASA Ames Research Center in Mountain View, California, simulated the birth of stars. Using computers like IBM's Roadrunner at Los Alamos National Laboratory, physicists have probed the mysteries of dark matter, the unseen substance that makes up roughly 25 percent of the mass of the universe.

Weather forecasting is another area that relies heavily on supercomputing. Forecasters used the TACC supercomputer Ranger to determine the path of Hurricane Ike in 2008, improving the five-day hurricane forecast by 15 percent. Climate scientists use supercomputers to model global climate change, a challenging task involving hundreds of variables.

Testing nuclear weapons has been banned in the United States since 1992, but supercomputer simulations ensure that the nation's nukes remain safe and functional. IBM's Sequoia supercomputer at Lawrence Livermore National Laboratory in California is designed to replace live nuclear tests with improved simulations. The machine became fully operational in 2012.

Neuroscientists have turned their attention to the daunting task of modeling the human brain. The Blue Brain project at the École Polytechnique Fédérale de Lausanne in Switzerland, led by Henry Markram, aims to create a complete, virtual human brain. The project scientists are using an IBM Blue Gene supercomputer to simulate the molecular structures of real mammalian brains. In 2006, Blue Brain successfully simulated a complete column of neurons in the rat brain.

Sharing the Computing Load

A supercomputer typically consists of a large data center filled with many machines that are physically linked together. Distributed computing, by contrast, could also be considered a form of supercomputing; it consists of many individual computers connected by a network (such as the internet) that devote some portion of their processing power to a large problem.

A well-known example is the SETI@home (Search for Extraterrestrial Intelligence at home) project, in which millions of people run a program on their computers that looks for signs of intelligent life in radio signals.
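
The sketch below illustrates the general work-unit pattern behind such volunteer computing projects, not SETI@home's actual software: a coordinator splits a large dataset into independent units, each unit is analyzed on its own (here by local processes standing in for volunteer machines on a network), and the partial results are merged.

```python
# A rough sketch of the work-unit pattern used by volunteer computing projects:
# a coordinator splits a big dataset into independent units, each unit is
# analyzed on its own, and the partial results are merged. Real projects send
# units to volunteer machines over a network; here local processes stand in
# for those machines. This is not SETI@home's actual software or protocol.

from concurrent.futures import ProcessPoolExecutor
import random

def analyze_unit(work_unit):
    """Stand-in for the analysis a volunteer machine performs on its unit."""
    unit_id, samples = work_unit
    return unit_id, max(samples)  # e.g., report the strongest "signal" seen

if __name__ == "__main__":
    # Coordinator: split a large batch of data into independent work units.
    signal = [random.random() for _ in range(100_000)]
    unit_size = 10_000
    work_units = [
        (i // unit_size, signal[i:i + unit_size])
        for i in range(0, len(signal), unit_size)
    ]

    # "Volunteers": each unit can be processed independently, in any order.
    with ProcessPoolExecutor() as executor:
        results = dict(executor.map(analyze_unit, work_units))

    print(max(results.values()))  # merge the partial results
```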

"In the future, supercomputers will edge toward exascale (exaFLOPS) capabilities, or about 50 times faster than current systems," Wells said. This will require greater energy, so energy efficiency will likely become an important goal of future systems. Another trend will be integrating large amounts of data for applications like discovering new materials and biotechnologies, closed Wells.