At GTC 2017, NVIDIA debuted its blazingly fast Volta-based Tesla V100 GPU, which can churn out 120 Tensor TFLOPS, and reserved the first batch for its own DGX-1 supercomputers to give them a powerful boost.
The Tokyo Institute of Technology has announced plans for a new AI supercomputer called Tsubame3.0. The new system will use NVIDIA’s Tesla P100 GPUs to double performance over its predecessor, Tsubame2.5.
Purpose-built and engineered for deep learning, the NVIDIA DGX-1 is a rack-mounted server that packs eight Pascal-based Tesla P100 GPUs to deliver 170 TFLOPS of FP16 processing throughput, and much more. Check out who it was designed for and its full set of specs!
President Obama has just signed an executive order creating the National Strategic Computing Initiative (NSCI), in hopes of putting the U.S. ahead in the global supercomputing race. More details after the jump.
nCore HPC has rolled out its BrownDwarf supercomputer, a heterogeneous system that combines ARM processors with digital signal processors (DSPs) to carry out high-performance computing tasks at significantly reduced power levels.
The Swiss National Supercomputing Centre (CSCS) is upgrading its Piz Daint supercomputer with NVIDIA Tesla K20X GPUs, with assistance from supercomputer maker Cray. After the upgrade, Piz Daint will be Switzerland's first petascale supercomputer.
Intel has begun shipping its 60-core Xeon Phi coprocessor to select customers. First unveiled in June this year, the coprocessor comes in the form of a PCIe expansion card and operates independently of the host, running its own Linux operating system that manages its x86 cores.