Runs ML inference directly on the network in a SmartNIC.
VOLLO® inference accelerator on the AMD Alveo™ V80 compute accelerator card.
Unrivalled latencies achievable by financial services (FSI) firms with no FPGA design expertise
Enables financial firms to make faster and more intelligent ML decisions
New open engineering consortium established to accelerate innovation in ML
Enables large cost savings with a straightforward scale-up of existing infrastructure
Delivers very high throughput within tight latency bounds
Reduces costs and removes growth constraints for businesses offering speech services
MLPerf selects myrtle.ai to provide benchmark code for speech-to-text