Myrtle AI

The MAU Accelerator is a low latency inference accelerator for data center machine learning workloads. It achieves both deterministic low tail latency and high throughput, without trading one off against the other. This enables higher quality models to be deployed, providing better services and customer experiences, while delivering significant savings in infrastructure costs and energy consumption.

Key Benefits

  • Deterministic low tail latency
  • Improved latency-bounded throughput
  • Reduced infrastructure costs
  • Enables use of higher quality models under a given latency bound
  • Reduced energy consumption

Low latency inference acceleration for real-time, memory-bounded workloads including:

  • Speech transcription
  • Natural language processing
  • Speech synthesis
  • Time series analysis
  • Payment & trading fraud detection
  • Recommendation systems

Rapid & Easy Deployment

The MAU Accelerator runs on data center servers enhanced by accelerator cards from Intel and Xilinx. These accelerator cards are available today, both in the cloud and for on-premise data centers, facilitating rapid implementation at scale. Neural network models created in popular frameworks with ONNX support, such as TensorFlow, PyTorch, or MXNet, can easily be deployed on the MAU Accelerator, which is supported by ONNX Runtime.

Compelling Advantages

Speech Transcription Example

  • 165x higher performance than a CPU-only solution
  • 2.1x higher performance per watt than a GPU solution
  • 29x lower latency than a GPU solution

Natural Language Processing Example

  • 2.2x lower cost than a CPU-only solution
  • 7.7x smaller carbon footprint than a CPU-only solution

Speech Synthesis Example

  • 8x higher latency bounded throughput than a GPU solution
  • Enables deployment of more advanced models at the same throughput

Implementation Feature: Speech Synthesis

The MAU Accelerator can be used to deliver high fidelity speech synthesis at very high throughput, running WaveNet on an Intel® Stratix® 10 NX FPGA.

  • Best in class vocoder model for near-human-quality speech synthesis
  • Low, deterministic tail latency
  • 8x throughput advantage over a GPU solution
  • Significant CapEx and energy savings

For more information, please see our blog, our demo video, and our white paper.

Featured Documents & Videos

  • Demonstration Video
  • Solution Brief
  • Product Overview

To evaluate what the MAU Accelerator can do for your business, contact us.

