Cambridge, UK, August 5, 2019 – Myrtle, a world leader in the creation of optimized AI implementations for speech applications in data centers, released a set of performance numbers that showcase the cost and performance benefits offered by FPGAs for implementing speech inference. Myrtle’s AI solution running on the new high-performance Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) D5005 accelerator requires less data center infrastructure and consumes less electricity than traditional methods. This reduces costs and removes growth constraints for businesses offering speech services such as transcription, translation, synthesis or voice assistance in on-premises or cloud-based data centers.

The results derive from the collaboration between Intel and Myrtle to optimize a recurrent neural network (RNN) for speech inference on the Intel FPGA PAC D5005. Highlights include running more than four thousand voice channels concurrently on a single FPGA, delivering a six-fold improvement in performance per watt over general-purpose GPUs at one thirtieth of the GPU’s latency.

“The industry has to take new approaches to produce machine learning solutions that meet customers’ stringent latency, power and cost constraints,” said Peter Baldwin, CEO of Myrtle. “We are delighted to be releasing industry-leading performance metrics on Intel’s latest Programmable Acceleration Card, so customers preserve their investment in hardware as machine learning models evolve.”

Myrtle’s expertise in hardware-software codesign and in the quantization, sparsity and compression of machine learning models has been recognized by the MLPerf consortium. Myrtle owns the MLPerf speech transcription workload and has open-sourced its code to help the industry benchmark new edge and data center hardware more consistently.

More details about how to achieve a step-change improvement in data center inference performance can be found at www.myrtle.ai and on Intel’s FPGA AI partners web page. Contact Myrtle today at speech@myrtle.ai to evaluate the solution.

About Myrtle
Myrtle is a world leader in the creation of high-performance, energy-efficient computing solutions for deep learning inference on next-generation data center hardware. Myrtle’s industry-leading RNN technology enables companies to cost-efficiently implement and scale speech applications on cloud or on-premises infrastructure. Myrtle is a partner in Intel’s Design Solutions Network (DSN). For more information, please visit www.myrtle.ai and follow us on Twitter.

Contact:
speech@myrtle.ai