A machine learning inference accelerator for the finance industry

Designed for fintech

VOLLO™ is designed to achieve the best latency, throughput, quality, and energy- and space-efficiency metrics for the STAC-ML Markets (Inference) benchmarks. VOLLO can also accelerate a wide range of similar models developed by financial companies themselves.

Unrivalled low latency

VOLLO achieves latencies as low as 24 microseconds for the LSTM-based neural network models defined in the STAC-ML benchmarks.

Simple to install

VOLLO runs on an industry-standard FHFL PCIe accelerator card. The IA-840f card is powered by an Intel® Agilex™ FPGA and built by BittWare, a Molex company.

High accuracy

High accuracy is achieved through the use of floating-point formats in all operations. Models can be trained in FP32 or bfloat16 and run on VOLLO without the need for retraining or accuracy compromises.
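To illustrate the claim, a minimal PyTorch sketch of running the same FP32-trained weights in bfloat16 without retraining (the model here is an arbitrary toy example, not a VOLLO-specific configuration):

```python
import torch

# A small FP32-trained model stand-in (illustrative only).
model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)

# Inference in the original FP32 precision.
y_fp32 = model(x)

# Inference in bfloat16: cast the weights and inputs, no retraining.
y_bf16 = model.to(torch.bfloat16)(x.to(torch.bfloat16))

# The outputs agree to within bfloat16 rounding error.
max_diff = (y_fp32 - y_bf16.float()).abs().max().item()
```

bfloat16 keeps FP32's 8-bit exponent and truncates only the mantissa, which is why FP32-trained weights can be cast down directly with a small, bounded loss of precision.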

High throughput and power-efficiency

Designed to be installed in a server co-located in a stock exchange, VOLLO achieves very high throughput and low energy consumption in a 1U server. This significantly reduces the costs incurred in running co-located servers.

Simple to program

Models can be trained in PyTorch or TensorFlow before being exported in ONNX format into the VOLLO tool suite, making it simple to program from your existing ML development environment.

Flexible for future-proofing

The flexibility of FPGA technology ensures that not only can VOLLO be software-configured with users’ LSTM model configurations, but significant architectural innovations can also be adopted quickly with optimal compute structures.