Rapid advances in the development of ever-more-complex machine learning algorithms, together with an explosion in the use of machine learning to introduce AI into all walks of life, are massively increasing demand for compute capacity and the energy required to power it. The need for low latency in real-time applications exacerbates this trend. Such growth is unsustainable with traditional software-only solutions.
Our vision is to produce machine learning inference solutions that meet these exacting demands using minimal compute capacity and minimal energy, thus advancing the vast potential of machine learning to enhance our lives without costing the planet.
New open engineering consortium established to accelerate innovation in ML
Achieves high-fidelity speech synthesis at lower cost using the Intel® Stratix® 10 NX FPGA
Enables large cost savings through a straightforward scale-up of existing infrastructure
Myrtle.ai launches new domain-specific, sparsity-exploiting inference accelerator at Xilinx Developer Forum, 12 November 2019
Delivers very high throughput within tight latency bounds
Reduces costs and removes growth constraints for businesses offering speech services
Myrtle.ai selected to provide artificial intelligence benchmark code for an internationally recognised competition
MLPerf selects myrtle.ai to provide benchmark code for speech-to-text