Rapid advances in ever-more-complex machine learning algorithms, and an explosion in the use of machine learning to bring AI into all walks of life, are massively increasing demand for compute capacity and the energy required to power it. The need for low latency in real-time applications exacerbates that trend. If data centers were a country, they would be the fourth-largest consumer of power, behind only China, the US and the EU, while performance and power consumption remain significant hurdles to the adoption of machine learning at the edge. This growth is unsustainable with traditional software-only solutions.
Our vision is to produce machine learning inference solutions that meet these exacting demands using minimum compute capacity and minimum energy, advancing the vast potential of machine learning to enhance our lives without costing the planet.
Myrtle.ai launches new domain-specific, sparsity-exploiting inference accelerator at Xilinx Developer Forum, 12 November 2019
Delivers very high throughput within tight latency bounds
Reduces costs and removes growth constraints for businesses offering speech services
Myrtle.ai selected to provide artificial intelligence benchmark code for internationally recognised competition
MLPerf selects myrtle.ai to provide benchmark code for Speech To Text