As smartphones and computers have gotten smaller, technology companies have come under pressure to keep up and produce computer chips that are faster than before, yet the same size as, or smaller than, their predecessors. One company, however, still believes that bigger is better.
Cerebras Systems has designed a goliath computer chip that may be capable of sailing past the competition in specific tasks, according to a collaboration between Cerebras and the National Energy Technology Laboratory (NETL), described in a preprint paper (not yet peer-reviewed) published on arXiv and presented at the SC20 conference this week. Designed to "revolutionize deep learning," the CS-1 measures 8.5 inches (21.6 centimeters) across, packs in 1.2 trillion transistors, and, according to the company, ran a combustion simulation 200 times faster than the Joule 2.0 supercomputer (the 82nd fastest in the world).
In fact, the chip may be so fast that it can simulate an event faster than the event unfolds in real time.
With the chip outperforming expectations in simulations, the company claims the CS-1 can "tell you what is going to happen in the future faster than the laws of physics produce that same result." If the faster-than-real-time performance holds true, the chip may find use in a wide array of applications – one possibility is power plants. Running constant simulations alongside live monitoring, a CS-1-integrated system could predict the outcome of operating under certain conditions before those conditions actually occur. That could be revolutionary for threat-warning systems and scenario modeling.
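To make the idea concrete, here is a minimal toy sketch (not Cerebras code – the heating model, function names, and every constant are hypothetical) of how a simulator that runs faster than real time could warn an operator before a limit is crossed:

```python
import math

# Hypothetical first-order heating model: a component warms toward a hot
# ambient temperature. All constants are made up for illustration.
def simulate_temperature(t_seconds, t0=300.0, rate=0.02, ambient=900.0):
    """Temperature (kelvin) at time t: T(t) = ambient + (t0 - ambient) * exp(-rate * t)."""
    return ambient + (t0 - ambient) * math.exp(-rate * t_seconds)

# Because the model evaluates almost instantly, the monitor can scan the
# whole horizon long before the real process gets there.
def forecast_threshold_crossing(threshold, horizon_s, step_s=1.0):
    """Return the first simulated time the threshold is reached, or None."""
    t = 0.0
    while t <= horizon_s:
        if simulate_temperature(t) >= threshold:
            return t
        t += step_s
    return None

# With these made-up constants, the monitor learns 35 simulated seconds
# ahead of time that the 600 K limit will be hit.
warning_time = forecast_threshold_crossing(threshold=600.0, horizon_s=120.0)
```

The real CS-1 workload is a vastly larger fluid-dynamics problem, of course, but the principle – simulate ahead of the physics, then act on the forecast – is the same.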
So, how is the chip capable of such impressive performance?
Well, the CS-1 is just really, really big. Computer chips use transistors, which are tiny semiconductors that amplify or switch electronic signals, to perform complex calculations. The more transistors, the more things the chip can do at the same time.
Simply put, technology companies constantly aim to reduce the size of their components so they can pack more transistors in, effectively increasing the speed and raw processing power of their chips.
The world’s second-largest chip, the NVIDIA A100 GPU, packs in a massive 54 billion transistors. That is a huge number, but it pales next to the CS-1’s 1.2 trillion.
However, do not expect to see a chip of this kind in your computer any time soon. A singular chip like this is built for very specific applications, and tasks not tailored to its architecture would likely see very different results. It also draws an impressive amount of power: although that is only a fraction of the Joule 2.0’s consumption, 20 kilowatts for a single chip is quite the hog.
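The caveat about tasks not tailored to the architecture follows from Amdahl's law: no matter how many parallel units a chip has, the speedup is capped by the fraction of the work that must run serially. A quick sketch (the fractions and unit count are illustrative, not measured):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n the number of parallel units.
def amdahl_speedup(parallel_fraction, n_units):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# A well-suited workload (99.9% parallel) exploits a huge chip...
well_suited = amdahl_speedup(0.999, 400_000)   # just under 1000x
# ...but a poorly suited one (50% parallel) barely benefits at all.
poorly_suited = amdahl_speedup(0.5, 400_000)   # just under 2x
```

This is why a chip built around massive parallelism can post a 200x result on one simulation yet look unremarkable on a workload dominated by serial steps.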
Despite its highly niche skill set, it will be fascinating to see whether single-chip designs become the new standard for scientific modeling, or whether the massive new supercomputers on the horizon prove better suited to the wide array of tasks demanded of a supercomputer.