When the first computers were built, famously they would fill entire rooms yet had computing power comparable to a modern-day calculator. In the decades since, the chips that power computers have become both smaller and – following Moore’s Law – exponentially more powerful.
Today, the computers that sit on our desks are more powerful than Jack Kilby, the father of the modern microchip, could ever have imagined. In this broad sweep of history, the progress that has been made in computing power is incredible.
It’s not just PCs that have experienced an explosion in their processing power, though. In recent years, innovations in chip design and manufacture have allowed for the next step in supercomputing: exascale.
The supersize world of supercomputers
Supercomputers rest on many of the same fundamental principles as desktop computers, but they differ greatly in how they process data and in what they’re used for.
The devices most people use every day, be that a laptop, desktop, phone, or tablet, use sequential computing. Put simply, this means one operation is performed after another in sequence. For most use cases this is perfectly fine, even if what’s being asked is quite demanding – such as some scientific calculations.
Supercomputers use parallel processing, where multiple computations are undertaken at once to solve much bigger problems faster. The exact speed increase depends on how many processes can be parallelized, in accordance with Amdahl’s Law, but it can be up to 20 times faster than a standard computer with similar specifications. That said, such a comparison would be increasingly hard to make given the advances in supercomputing architecture overall, from network fabric through to chips, cooling, and beyond.
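To make the Amdahl’s Law point concrete, here is a minimal Python sketch of the speedup formula; the parallel fractions and processor counts are illustrative assumptions rather than figures for any real supercomputer.

```python
# A minimal sketch of Amdahl's Law: the theoretical speedup from running a
# workload on n processors when only a fraction p of it can be parallelised.
# The values below are illustrative assumptions, not measured figures.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup for parallel fraction p (0..1) on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        for n in (16, 1_024, 1_000_000):
            print(f"p={p:.2f}, n={n:>9,}: speedup ≈ {amdahl_speedup(p, n):.1f}x")
    # Even with a million processors, a job that is only 90% parallelisable
    # tops out at roughly a 10x speedup - the serial portion dominates.
```

As the output shows, it is the serial fraction of a job, not the raw processor count, that ultimately limits how much faster a parallel machine can be.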
Despite being far more advanced than early computers, supercomputers do bear a resemblance to them in one way: they are exceptionally large. Consisting of rows and rows of specialised hardware, their layout is one those first computing pioneers would, superficially at least, recognise. One major difference, however, is that these machines don’t merely fill an entire room – they occupy an entire specially built or adapted building.
Taking the next step: exascale computing
Until recently, supercomputer performance topped out in the petaflop range – 10¹⁵ floating-point operations per second (flops). The first computer to break the petascale barrier was Roadrunner, an IBM-built supercomputer featuring a mix of IBM and AMD chips that came online in 2008 and reached a top performance of 1.456 petaflops. This was followed by the Cray-built supercomputer Jaguar, which also featured AMD chips and had a peak performance of 1.75 petaflops.
More recent supercomputers like Lumi and Tuolumne have a sustained performance of hundreds of petaflops, which sounds – and in many ways is – impressive. However, recent advances in technology have allowed for even greater power.
Exascale, meaning a system capable of 10¹⁸ flops – a thousand times the petascale threshold – is the next step in supercomputing power.
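To give a sense of that scale, the short Python sketch below compares how long a fixed workload would take at petascale versus exascale rates; the workload size is an arbitrary assumption chosen purely for illustration.

```python
# Rough scale comparison between petascale (1e15 flops) and exascale (1e18 flops).
# The workload size is an arbitrary illustrative figure, not a real benchmark.

PETAFLOPS = 1e15  # floating-point operations per second at petascale
EXAFLOPS = 1e18   # floating-point operations per second at exascale

workload_ops = 1e21  # hypothetical job needing 10^21 floating-point operations

petascale_seconds = workload_ops / PETAFLOPS  # 1,000,000 s, about 11.6 days
exascale_seconds = workload_ops / EXAFLOPS    # 1,000 s, about 17 minutes

print(f"Petascale: {petascale_seconds / 86_400:.1f} days")
print(f"Exascale:  {exascale_seconds / 60:.1f} minutes")
```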
This is still something of an emerging technology – there are only three exascale supercomputers as measured by the Linpack benchmark, all of which are in the USA.
The first of these to come online was Frontier. Hosted at the Oak Ridge Leadership Computing Facility and jointly operated by the Oak Ridge National Laboratory (ORNL) and the US Department of Energy, Frontier became operational in 2022. It’s based on the HPE Cray EX, a liquid-cooled, blade-based, high-density system, and is powered by 9,472 optimised third-generation AMD Epyc 64-core 2GHz CPUs and 37,888 AMD Instinct MI250X GPUs.
It achieved a performance of 1.1 exaflops when it became fully operational in 2022, as measured by the Top500, putting it at the top of the list of the world’s fastest supercomputers. This has since increased to 1.35 exaflops as of November 2024, with a theoretical peak of 2.05 exaflops.
It’s also surprisingly energy efficient; while it consumes almost twice the power of its predecessor, Summit (21 MW vs 13 MW, respectively), it’s approximately nine times more powerful. Indeed, when it first came online, it was second in the Green500 chart of energy efficient supercomputers with an energy efficiency of 52.2 GFlops/watt.
Although it has since been surpassed by a number of other, less powerful supercomputers, such as Adastra 2 and Capella, it’s still in the top 50 and is one of the most powerful to rank there, alongside HPC6, Lumi, and El Capitan – the fastest, most powerful supercomputer in the world.
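The Green500 figure is, in essence, sustained Linpack performance divided by power draw. The sketch below reproduces that arithmetic for Frontier using the rounded figures quoted above, so the result only approximates the officially measured 52.2 GFlops/watt.

```python
# Energy efficiency as ranked by the Green500: sustained flops per watt.
# Inputs are the rounded figures quoted in this article, so the output is
# only an approximation of the officially measured 52.2 GFlops/watt.

rmax_flops = 1.1e18  # Frontier's sustained performance, ~1.1 exaflops
power_watts = 21e6   # Frontier's approximate power draw, ~21 MW

gflops_per_watt = (rmax_flops / 1e9) / power_watts
print(f"~{gflops_per_watt:.1f} GFlops/watt")  # ~52.4 with these rounded inputs
```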
Raising the bar
In November 2024, El Capitan dislodged Frontier from the number one spot on the Top500, with a performance level of 1.74 exaflops and a theoretical peak of 2.75 exaflops.
Like Frontier, El Capitan is built on hardware from HPE and AMD, namely the HPE Cray EX255A architecture with 43,308 Epyc 24-core 1.8GHz CPUs and 43,808 Instinct MI300A GPUs, for a combined core count of over 11 million.
Despite requiring more power than Frontier (30 MW vs 21 MW), it’s more energy efficient, sitting at number 18 in the November 2024 Green500 rankings with an energy efficiency of 58.89 GFlops/watt.
Unlike Frontier, which is available to researchers from around the world to use, El Capitan is exclusively for use by the US National Nuclear Security Administration (NNSA). Housed at the Lawrence Livermore National Laboratory (LLNL), its primary focus is to support the maintenance and management of the USA’s nuclear stockpile, which includes simulating nuclear testing. Additional uses, according to LLNL, are modeling high-energy-density physics experiments like fusion reactions and exploring in detail how materials behave in extreme conditions.
The past two years have seen an incredible leap forward in the processing power of supercomputers. Combined with the generative AI revolution, it’s an exciting time to be a researcher or an IT professional.