Supercomputers, the best in history and their components

Supercomputers are what we colloquially know as “NASA computers”, referring to a machine so powerful that it is unique in the world at any given time. Its enormous computing power is used to solve technical and scientific problems that would not otherwise be possible.

The fact that they are unique constructions means that they are often not limited to being a cluster of server hardware; there have been cases in history in which hardware components such as new processor architectures were created for them. Furthermore, a large part of the technological advances that we have seen appear in PCs originated in the development of a supercomputer and were then implemented at scale.

In short, when we talk about supercomputers we are referring to the most powerful hardware of each moment in the history of computing.

The CDC 6600, the first supercomputer in history

CDC-6600

We owe the concept of the supercomputer to the computer scientist Seymour Cray, who was the first to propose the basic structure of one. Cray worked at Control Data Corporation (CDC), and when the company didn’t let him realize his invention he threatened to leave; in the end they gave in, and this allowed him to create the first supercomputer in history, the CDC 6600.

The CDC 6600 was the most powerful supercomputer from 1964 to 1969. It was a complex piece of hardware for the time, composed of 400,000 transistors in total, with a clock speed of 40 MHz and a floating-point unit delivering 3 MFLOPS. Let’s not forget that the first home computers that came out a decade later ran at speeds between 1 MHz and 4 MHz, were made up of a few thousand transistors, and lacked a floating-point unit.

The most powerful computer on the market at the time was the IBM 7030, and the CDC 6600 outperformed it in every respect, making Seymour Cray and his designs a benchmark in high-performance computing. But the CDC 6600 was only the beginning of the story.

Cray-1, the supercomputer that saw the birth of the SIMD unit

Cray-1 supercomputer

Today SIMD units are found in the CPUs of devices of all kinds, but we owe their existence to the second supercomputer in history, which was also designed by Seymour Cray, this time under the company bearing his surname, Cray Research, as the first of its supercomputers.

The Cray-1 was launched in 1975, used an 80 MHz CPU, and had a built-in 64-bit precision floating-point SIMD unit, an enormous leap that took the 3 MFLOPS of the CDC 6600 to 160 MFLOPS in the Cray-1. To give you an idea of what this meant, it was not until the mid-90s that a PC CPU matched the Cray-1’s floating-point power, and not until the arrival of Intel’s SSE technology and AMD’s 3DNow! that we saw floating-point SIMD units in PC CPUs.
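To picture what such a unit does, here is a minimal sketch in C using the SSE2 intrinsics of a modern x86 PC, purely as an illustration of the principle: a single instruction operates on two 64-bit floats at once. The Cray-1’s vector unit worked on much longer vectors, so this is not a model of Cray hardware.

```c
#include <emmintrin.h> // SSE2 intrinsics: 128-bit SIMD on doubles
#include <stdio.h>

int main(void) {
    double a[4] = {1.0, 2.0, 3.0, 4.0};
    double b[4] = {10.0, 20.0, 30.0, 40.0};
    double c[4];

    // SIMD loop: each 128-bit SSE2 instruction adds two doubles at once,
    // halving the number of iterations a scalar loop would need.
    for (int i = 0; i < 4; i += 2) {
        __m128d va = _mm_loadu_pd(&a[i]); // load two doubles
        __m128d vb = _mm_loadu_pd(&b[i]);
        __m128d vc = _mm_add_pd(va, vb);  // one instruction, two additions
        _mm_storeu_pd(&c[i], vc);         // store both results
    }

    for (int i = 0; i < 4; i++)
        printf("%.1f\n", c[i]); // 11.0 22.0 33.0 44.0
    return 0;
}
```

Since SSE2 is part of the baseline x86-64 instruction set, the sketch compiles as-is with `gcc example.c` on a 64-bit PC.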

In 1982 Cray Research launched an improved model of its supercomputer in the form of the Cray X-MP, where the initials “MP” stand for multiprocessor: it had not one but four processors, each reaching 105 MHz, for a combined 820 MFLOPS of power. But its swan song came in the form of the Cray-2, launched in 1985, which raised the power to 1.9 GFLOPS.

Cray-2, NASA’s supercomputer

Cray-2 supercomputer

We owe the “NASA supercomputer” meme to the Cray-2, created for the famous space agency and deployed in 1985; it was both a swan song for Cray Research and the last of its supercomputers. Cray Research increased the CPU count of this supercomputer to eight CPUs and added a series of additional processors responsible for handling access to memory, storage and the I/O interfaces. Its computing power? 1.9 GFLOPS, so it wasn’t such a big leap, but its most distinctive trait was the fact that it was liquid-cooled.

However, the end of the Cold War was approaching, and for the design of its supercomputers Cray Research depended on the enormous defense budget of the United States military; moreover, its CPUs were of enormous size and impossible to transfer to other markets. In other words, when the Iron Curtain fell and interest in having defense supercomputers waned, Cray lost the biggest clients that had allowed it not only to survive, but to develop its new processors.

ASCI Red, supercomputers reach the teraFLOP

ASCI Red supercomputer

It soon became clear that less complex processors were the key to creating a supercomputer, since nobody was willing to spend such huge amounts of capital any longer once the Cold War was over. A paradigm shift was therefore necessary, and it came in the form of using much simpler processors, such as those used in PCs and servers, for the creation of supercomputers.

If we talk about PC CPUs, one of the most important was Intel’s Pentium Pro, since it introduced concepts such as the use of a second-level cache, the ability to use more than one processor, and out-of-order execution. Well, the company founded by Gordon Moore began the design of ASCI Red, which was a beast for the time, composed of no fewer than 9,298 Intel Pentium Pro CPUs accompanied by 1,212 GB of RAM, plus additional processors for support tasks.

It was the first supercomputer in history with the ability to reach 1 TFLOPS of power. However, unlike Seymour Cray’s designs, the Pentium Pro is not a CPU that stands out for having a SIMD unit; in fact, it lacked one. ASCI Red stands out instead for being the first supercomputer in history to use a PC CPU for its construction.
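As a rough sketch of where a figure like 1 TFLOPS comes from, the C snippet below simply multiplies CPU count by clock speed by floating-point results per cycle. The 200 MHz clock and one result per cycle are illustrative assumptions, not figures given in this article.

```c
#include <stdio.h>

int main(void) {
    // Back-of-the-envelope peak throughput for an ASCI Red-like machine.
    // Assumed for illustration: 200 MHz Pentium Pro parts and one
    // floating-point result per clock cycle per CPU.
    double cpus            = 9298.0;
    double clock_hz        = 200e6;
    double flops_per_cycle = 1.0;

    double peak_flops = cpus * clock_hz * flops_per_cycle;
    printf("Peak: %.2f TFLOPS\n", peak_flops / 1e12); // prints ~1.86 TFLOPS
    return 0;
}
```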

IBM Blue Gene and the NEC Earth Simulator, legendary supercomputers

IBM Blue Gene supercomputer

Once the teraFLOP had been reached, the next challenge was to reach the PetaFLOP, that is, 1,000 times the computing power of the ASCI Red designed by Intel, and there was room to achieve it. As one of the companies that took on this challenge, IBM decided to use its PowerPC processors to create its Blue Gene, a project that began in 1999 and didn’t finish until November 2004.

The first Blue Gene, called BlueGene/L, was made up of no fewer than 131,072 CPUs, an astronomical figure that allowed it to reach 70.72 TFLOPS of power, something that an NVIDIA RTX 3090 doesn’t reach by itself. With that figure it managed to surpass NEC’s Earth Simulator, which had been the most powerful supercomputer up to that point.

The Earth Simulator was a joint development between NEC and the government of Japan, with a capacity of almost 40 TFLOPS, designed with climate prediction in mind. It differed from the Blue Gene in that it was based, like the Crays, on processors with large SIMD units, while the IBM design used PowerPC CPUs without such units inside. So the IBM design was closer to the ASCI Red, while the Earth Simulator was closer to the early Crays.

The IBM Roadrunner, the PetaFLOP is finally reached

IBM Roadrunner supercomputer

In 2001, IBM began development of a processor called the Cell Broadband Engine, which became famous as the main CPU of the PlayStation 3 console, but which was also used to create the IBM Roadrunner, a supercomputer that combined AMD Opteron CPUs with a variant of the CBEA used in the PlayStation 3, which made use of vector or SIMD processors called SPEs inside.

The IBM Roadrunner consisted of 6,912 dual-core AMD Opteron CPUs and 12,960 Cell Broadband Engine processors, a much lower count than the Blue Gene, but that didn’t prevent it from breaking the 1 PetaFLOP power barrier. Although the CBEA is a CPU in its own right, in the Roadrunner it was used as a support processor to speed up the parallel parts of the code, and it was a precursor to the use of GPUs for these tasks in supercomputers.
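The division of labor just described, with the Opteron running the serial control code while the CBEA handles the parallel parts, can be sketched in C using OpenMP target offload as a stand-in. Roadrunner itself was programmed with IBM’s Cell SDK rather than OpenMP, so this is only an illustration of the host-plus-accelerator pattern, not of the real machine’s software.

```c
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // The host CPU (the Opteron's role) runs the serial code and hands
    // this data-parallel loop to an accelerator (the CBEA's role here).
    // Without an offload-capable compiler the loop simply runs on the host.
    #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i]; // SAXPY-style parallel work

    printf("y[0] = %.1f\n", y[0]); // prints 4.0
    return 0;
}
```

Built with `gcc -fopenmp`, the same source runs on the host CPU or, with an offload-capable toolchain, on an attached accelerator.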

The first supercomputer to use GPUs

Cray Titan

Nowadays, most supercomputers are designed with CPUs and GPUs inside, but as you may well know, this was not always the case: it was not until 2008 that the first supercomputer appeared that made use of a GPU to perform its calculations, although it did so in a rather modest system compared to the IBM Roadrunner.

The TSUBAME was created by the Tokyo Institute of Technology and made use of the first generation of NVIDIA Tesla cards to reach 170 TFLOPS, thus beating the Blue Gene and the Earth Simulator. Graphics processors had been gaining the ability to run increasingly complex algorithms since the introduction of shader units, and with the NVIDIA G80 architecture they came to be used to accelerate scientific computing algorithms.

However, a GPU-based supercomputer didn’t take the top spot until 2013, when a resurrected Cray launched its Titan, made up of a combination of AMD Opteron CPUs and the NVIDIA Tesla GPUs that existed at the time. The computing power obtained? 10 PetaFLOPS. We are talking about a jump of more than 50 times in power in just five years, demonstrating the enormous efficiency of GPUs in computing.

The ExaFLOP era, the new barrier about to be overcome

Aurora supercomputer

Today we are entering the ExaFLOP era, and the goal is to reach 1 million TeraFLOPS of power with supercomputers. This has brought with it an obsession with reducing the energy cost of communication, which has led to the development of advanced packaging and interconnection techniques in order to reach that figure without sending power consumption through the roof.

We will see the first supercomputers under this new paradigm from 2022, with the El Capitan supercomputer built solely with AMD technology and, on the other hand, Aurora with CPU and GPU technology from Intel. The two represent a kind of cold war between both companies, and their development has influenced and will influence the architectures and designs we will have in our PCs in the future.