Supercomputers achieve high performance by spreading calculations across a large number of processors.
The saga of supercomputing started with Seymour Cray who designed the CDC 6600 series in the sixties, starting his own company, Cray Research Inc., in 1972.
The delivery of the first Cray 1 vector computer to the Los Alamos Scientific Laboratory in 1976 marked the beginning of the modern era of supercomputing and demonstrated a clear advantage over scalar systems.
The introduction of the Cray 1 saw the advent of vector computers in the seventies and early eighties. Raw performance soon had to be weighed against applications, price and development environments.
Then came the mini-supercomputer, which caught the fancy of industrial users by targeting the gap between traditional scalar mainframes and Cray-class vector systems.
The nineties brought in massively parallel processing (MPP) systems and microprocessor-based symmetric multiprocessor (SMP) systems.
The idea was to leverage distributed memory to build parallel systems without obvious limits on the number of processors.
High-volume, large-scale applications and massively parallel servers came into vogue before the cluster concept took hold at the onset of the 2000s.
Clusters of workstations (COW) and PC clusters reshaped the architectural scenario in HPC (High Performance Computing).
Supercomputers, their architectures, platforms and applications kept expanding in scope. From science to finance, the range of fields that leverage the power and performance of supercomputers for mathematically complex calculation is now broad and deep.
With giants like the Earth Simulator (by NEC), IBM BlueGene and the DARPA High Productivity Computing Systems (HPCS) program, the direction is now towards petaflop levels.
Supercomputers have travelled successfully from monolithic architectures to clusters, supported in turn by a large industrial and commercial market and the use of standard components.
Supercomputers have also been facing challenges in large-scale space and power requirements, and in programmability (from a software perspective) that could extend the domain of their applications.
The use of distributed-memory systems has led to the introduction of new programming models. Today, with MPP, the so-called performance gap between supercomputers and mainstream computers is closing continuously.
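The article does not name a particular programming model, but message passing in the MPI style is the model most closely associated with distributed-memory machines. The sketch below, using mpi4py (an assumed tool, not mentioned in the text), illustrates the basic pattern: each process works on its own share of a calculation and the partial results are combined with a single collective operation.

```python
# Minimal sketch of the message-passing style used on distributed-memory
# systems. Requires mpi4py and an MPI runtime; run e.g. with
#   mpiexec -n 4 python pi_mpi.py
# (the file name and the problem chosen are illustrative only).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id
size = comm.Get_size()   # total number of processes

# Each process integrates its own slice of 4/(1+x^2) over [0, 1];
# the slices together approximate pi (midpoint rule).
n = 1_000_000
local_sum = 0.0
for i in range(rank, n, size):          # interleaved work distribution
    x = (i + 0.5) / n
    local_sum += 4.0 / (1.0 + x * x)

# Combine the partial sums on rank 0 with one collective reduction.
pi = comm.reduce(local_sum / n, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~ {pi:.6f} computed on {size} processes")
```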
Maybe that explains DARPA's (Defense Advanced Research Projects Agency) avowed goal for its High Productivity Computing Systems (HPCS) program, which puts the thrust on new computer architectures with high performance and productivity by the end of the decade.
The goal of installing a system by 2009 that can sustain petaflop/s performance levels on real applications is certainly the next step into a new era of supercomputing.
The what and how of TOP500
The influential TOP500 list is a compilation of the world’s most powerful supercomputers, which currently has IBM in the top four positions.
In 1993, Professor Hans Werner Meuer started the TOP500 project together with Erich Strohmaier and Jack Dongarra. In the TOP500 list, the most powerful computers in the world are ranked by their performance on the Linpack benchmark.
The list is generated and released twice a year.
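For reference, the Linpack benchmark measures the rate at which a system solves a dense n-by-n system of linear equations Ax = b, crediting roughly 2/3·n³ + 2·n² floating-point operations. The toy sketch below (a single-node NumPy approximation, an assumption for illustration; the actual TOP500 runs use the tuned, distributed HPL code) shows how a Gflop/s figure follows from that operation count and the elapsed time.

```python
# Toy illustration of how a Linpack-style Gflop/s figure is obtained:
# time the solution of a dense n x n system Ax = b and divide the
# conventional operation count (2/3*n^3 + 2*n^2) by the elapsed time.
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)          # LU factorisation + triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"n = {n}: {flops / elapsed / 1e9:.2f} Gflop/s "
      f"(residual {np.linalg.norm(A @ x - b):.2e})")
```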
Earlier, taking stock of supercomputing progress and market penetration was done through the Mannheim supercomputer statistics. Various new approaches were tried to compile better statistics about supercomputers, including counting systems and processors and compiling different lists of systems.
The outcome of these studies was the TOP500 project. It was born out of the basic idea of giving any type of system the chance to be counted as a supercomputer if it could demonstrate performance levels worthy of the label.
The actual performance level necessary for this label would have to be adjusted over time as general performance levels increased.
With this the TOP500 came to life. Ever since June 1993, the ranking has been assembled twice a year, listing the 500 most powerful installed computer systems.
The TOP500 is based on information obtained from manufacturers, customers and users of such systems. The latest list shows five new entrants in the Top 10, which includes sites in the United States, Germany, India and Sweden.
The 30th edition of the TOP500 list was released in November 2007 at SC ’07, the international conference on high performance computing, networking, storage and analysis, in Reno, Nevada.
The No. 1 position was again claimed by the BlueGene/L System, a joint development of IBM and the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA).
At No. 2 is a brand-new first installation of a newer version of the same type of IBM system.
The No. 3 system is not only new, but also the first system for a new supercomputing centre, the New Mexico Computing Applications Centre (NMCAC) in Rio Rancho, N.M. And of course, for the first time ever, India placed a system in the Top 10, with a Hewlett-Packard Cluster Platform 3000 BL460c system installed by Computational Research Laboratories, a wholly owned subsidiary of Tata Sons Ltd in Pune, India.
Business Quotient – More flops mean more hits
During the last few years, a new geographical trend has been emerging with respect to the countries using supercomputers.
An increasing number of supercomputers are being installed in upcoming Asian countries such as China, South Korea and India.
While this can be interpreted as a reflection of the increasing economic stamina of these countries, it also shows that such countries can now buy, or even build, cluster-based systems themselves, as pointed out in a report by Erich Strohmaier, Future Technology Group, Lawrence Berkeley National Laboratory, May 2005.
Three decades after the introduction of the Cray 1, the HPC market has changed quite a bit, the report adds. It used to be a market for systems clearly different from any other computer systems.
HPC has ceased to be an isolated niche market for specialized systems. Market and cost pressures have driven the majority of customers away from specialized, highly integrated traditional supercomputers towards clustered systems built from commodity components.
The overall market for the very high-end systems itself is relatively small. It cannot easily support specialized niche market manufacturers, which poses a problem for customers with applications requiring highly integrated supercomputers.
Together with reduced system efficiencies, reduced productivity and a lack of supporting software infrastructure, this has created a strong interest in new computer architectures.
(Source: 20 Years Supercomputer Market Analysis by Erich Strohmaier, Future Technology Group, Lawrence Berkeley National Laboratory, and www.top500.org)