The nuclear arms race between the United States of America and the former Soviet Union effectively ended in 1991 with the collapse of the latter. Today, such competition between sovereign states has moved into the realm of science and technology: a global race in which countries embark on quests to build the world’s fastest supercomputer.
According to Wikipedia’s definition, a supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Its massive compute power allows it to simulate and model real-life systems that are impossible to replicate in laboratories due to scale and cost.
With such simulations, supercomputers reduce the time spent on scientific research and advance our understanding of the world, while also providing immediate benefits – the United States' National Oceanic and Atmospheric Administration uses supercomputers to produce more accurate weather forecasts, which can have life-and-death consequences when dangerous storms strike.
To organisations and governments with a stake in the field, being top dog in the supercomputing realm is an important goal. According to the TOP500 project, which tracks the performance of supercomputers, the fastest of the lot today is the Cray XK7 Titan at the Oak Ridge National Laboratory (ORNL) in the United States. The laboratory comes under the purview of the US Department of Energy, and the US had held the top spot since 2004.
However, in 2010, China managed to snatch the honor of having the fastest supercomputer in the world from the US. It was a shocking defeat, as China’s Tianhe-1A took the crown from the US’ Jaguar system, which was also based at ORNL. Another Asian power then delivered a wake-up call to the United States when Japan’s K Computer topped the list, relegating the US to third place.
With this sudden turn of events, it seemed that the dominance of the United States in the realm of supercomputing was gravely under threat from the East. At the same time, there was also a paradigm shift in the engineering principles of supercomputing. Instead of just gunning for best performance, there is now a pressing need to address the power consumption of these energy-hungry computing monstrosities. Take, for example, the K Computer from Japan: it required 9.89MW (megawatts) of power during operation, which translates to the energy consumption of almost 10,000 suburban homes.
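As a rough sanity check on that household comparison, we can treat an average suburban home as drawing about 1kW continuously – an assumed ballpark figure for illustration, not one taken from this article:

```python
# Sanity check: how many average homes does 9.89MW correspond to?
k_computer_power_w = 9.89e6   # the K Computer's draw, 9.89MW in watts

# Assumed average continuous household draw (~1kW); this figure is an
# assumption for illustration, not from the article.
avg_home_draw_w = 1_000

homes = k_computer_power_w / avg_home_draw_w
print(f"Equivalent to about {homes:,.0f} average homes")
# → Equivalent to about 9,890 average homes
```

This lands at roughly 9,890 homes, consistent with the "almost 10,000" figure quoted above.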
By June 2012, the United States had regained the crown with IBM’s Sequoia supercomputer, and its lead was further cemented by the Cray XK7 Titan, which consumes about 8.9MW of power while delivering roughly 1.7 times the performance of the K Computer, making it a relatively power-efficient supercomputer. Now, we wait with bated breath as the race shifts towards building a supercomputer that runs on less power but delivers more computing prowess. Whoever wins the race for the moment, the backers of supercomputing development argue that the ultimate beneficiary is the common man, who stands to benefit from the innovations derived from this competition.
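The power-efficiency point can be made concrete with a back-of-envelope performance-per-watt comparison. The sustained Linpack (Rmax) figures below come from public TOP500 lists rather than from this article and should be read as assumptions for illustration; the power figures are the ones quoted in the text:

```python
# Back-of-envelope performance-per-watt comparison.
# Rmax (sustained Linpack) figures are assumed from public TOP500
# lists, not from this article; power figures are from the text.
systems = {
    # name: (Rmax in petaflops, power draw in megawatts)
    "Titan (Cray XK7)": (17.59, 8.9),
    "K Computer": (10.51, 9.89),
}

for name, (pflops, megawatts) in systems.items():
    # 1 PFLOPS per MW = 1e15 FLOPS / 1e6 W = 1e9 FLOPS/W = 1 GFLOPS/W,
    # so dividing petaflops by megawatts yields gigaflops per watt.
    gflops_per_watt = pflops / megawatts
    print(f"{name}: {gflops_per_watt:.2f} GFLOPS per watt")
```

Under these assumptions Titan works out to nearly twice the computing output per watt of the K Computer, which is exactly the kind of metric the race is shifting towards.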
Wong Chung Wee / Tech Writer
Chung Wee has a penchant for recycling hardware hand-me-downs due to his strong 'waste not, want not' belief.