Tachyum unveils the world’s first universal processor platform, with 10x the performance of conventional processors
Christos Kozyrakis
Silicon Valley startup Tachyum Inc., which is backed by IPM Growth, has unveiled its new processor family – codenamed “Prodigy” – which combines the advantages of CPUs, GP-GPUs and specialised AI chips in a single universal processor platform, delivering ten times the processing power per watt and capable of running the world’s most complex compute tasks.
With its disruptive architecture, Prodigy will enable a super-computational system for real-time full capacity human brain neural network simulation by 2020.
Tachyum’s universal processor offers programming ease comparable to a CPU with performance and efficiency comparable to a GP-GPU – a general-purpose processor that can handle hyperscale workloads, AI, HPC and other demanding applications with ease.
A typical hyperscale data centre using servers equipped with Prodigy will provide ten times the compute performance at the same power budget. Prodigy will reduce data centre TCO (total cost of ownership) by a factor of four; conversely, a Prodigy-based data centre delivering the same performance as conventional servers can be built in as little as 1% of the space and consume one-tenth the energy.
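The same-power and same-performance figures above both follow from the claimed 10x performance-per-watt advantage; a minimal sketch of that arithmetic (all figures are the article's claims, not measured data):

```python
# Illustrative arithmetic based on the article's claimed 10x
# performance-per-watt advantage (not measured data).
conventional_perf_per_watt = 1.0   # baseline, arbitrary units
prodigy_perf_per_watt = 10.0       # claimed 10x advantage

# Same power budget -> ten times the compute performance.
same_power_speedup = prodigy_perf_per_watt / conventional_perf_per_watt

# Same total performance -> one-tenth the energy.
energy_fraction = conventional_perf_per_watt / prodigy_perf_per_watt

print(same_power_speedup)  # 10.0
print(energy_fraction)     # 0.1
```

The 4x TCO reduction and 1% floor-space figures are separate claims in the article and do not follow from this ratio alone.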
AI use cases in particular will benefit from Prodigy’s extreme compute muscle. AI has rapidly evolved into several distinct disciplines, including convolutional networks, deep learning AI, symbolic AI, general AI and bio AI, each running distinctly different algorithms with different processing requirements.
Similarly, human brain simulation is highly sought after by R&D projects because of its promise for deriving insights from massive data sets. The Tachyum Universal Processor Platform is an ideal tool for efforts like the real-time Human Brain Project, where there’s a need for more than 10¹⁹ Flops (10,000,000,000,000,000,000 floating-point operations per second – 10 exaflops), as well as providing the computational power for scientific and engineering solutions that cannot be provided by today’s systems.
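A quick unit check on the Human Brain Project figure quoted above, confirming that 10¹⁹ floating-point operations per second is the same as 10 exaflops:

```python
# Unit check: the article's 10^19 Flops figure expressed in exaflops.
flops_needed = 10**19   # floating-point operations per second
exaflop = 10**18        # one exaflop, by definition

print(flops_needed // exaflop)  # 10 -> i.e. 10 exaflops
```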
“Rather than build separate infrastructures for AI, HPC and conventional compute, the Prodigy chip will deliver all within one unified simplified environment, so for example AI or HPC algorithms can run while a machine is otherwise idle or underutilised,” said Tachyum CEO Dr. Radoslav ‘Rado’ Danilak. “Instead of supercomputers with a price tag in the hundreds of millions, Tachyum will make it possible to empower hyperscale datacentres to produce more work in a radically more efficient and powerful format, at a lower cost.”
Tachyum’s architecture overcomes limitations of semiconductor device physics that were thought to be unsolvable. Tachyum in essence solved the performance problem of connecting very fast transistors with very slow wires – a constraint of standard processor design that has stifled semiconductor innovation for years and stymied Silicon Valley engineers, even though the nanometre-sized transistors in use today are far faster than those of the past.
“Despite efficiency gains from virtualisation, cloud computing, and parallelism, there are still critical problems with datacentre resource utilisation particularly at a size and scale of hundreds of thousands of servers,” said Christos Kozyrakis, professor of electrical engineering and computer science at Stanford, who leads the university’s Multiscale Architecture & Systems Team (MAST), a research group for cloud computing, energy-efficient hardware, and operating systems.
“Tachyum’s breakthrough processor architecture will deliver unprecedented performance and productivity.” Kozyrakis is a corporate advisor to Tachyum.
Because Prodigy delivers an efficiency per watt that is an order of magnitude better than today’s CPUs, Tachyum’s new design addresses what many identify as one of the most critical challenges facing hyperscale enterprises today: energy consumption. Global datacentres currently consume 40% more electricity than the entire United Kingdom, and demand is doubling every five years.