
AI inference for network security

Deploy the latest AI models at the highest performance to defend against evolving threats



The security challenge

Effective anomaly detection requires spotting brief, subtle deviations in massive streams of network traffic, user behavior, and system activity. AI now outperforms traditional methods by identifying malicious patterns faster and adapting to evolving threats. Deploying AI models at the network endpoint ensures rapid detection and containment, but the real challenge is achieving low-latency, line-rate inference on huge data flows to counter increasingly sophisticated attacks.

The VOLLO solution 

VOLLO makes it simple to deploy the latest AI models quickly and flexibly on SmartNICs that ingest network traffic directly. By performing AI inference inline at high data rates, it allows advanced algorithms to run exactly where they’re most effective. VOLLO delivers the lowest-latency machine learning inference available – outperforming GPU-based platforms and even custom AI ASICs.

The benefits of using VOLLO

Lowest latency

The lowest latency inference for your AI models, validated by independent audits. This is achieved using FPGA technology, which overcomes the latency limitations of processor-based architectures

Future-proof acceleration

New models can be programmed in under a second, and the flexibility of FPGA technology means that future AI-driven network security models can be adopted just as quickly

Easy to adopt

VOLLO compiles PyTorch and TensorFlow models to FPGA and requires no FPGA expertise or tools to use

Flexible

Can be software-configured to run multiple models simultaneously. Supports a wide variety of operations and layer types – from decision trees, LSTMs, MLPs, and CNNs to streaming architectures such as the structured state-space models S4 and Mamba. Models can run concurrently and be switched in under a second

Quick to deploy

VOLLO can be programmed onto PCIe cards for rapid adoption on existing infrastructure or imported into your proprietary FPGA design as a netlist

VOLLO features

Evaluate the potential

Our cycle-level, bit-accurate simulator lets you obtain a performance estimate for your own models

Program using your existing ML framework

Models can be developed in PyTorch or TensorFlow before being exported in ONNX format into the VOLLO tool suite

Deploy with ease

Choose between rapid deployment on a PCIe card and highly integrated deployment with an FPGA netlist

Maintain privacy

Your models and data remain secure on your own premises

Use Cases

Improve the performance of your network security AI

Strengthen your cybersecurity with VOLLO. Run your threat detection and response at lower latency, cutting your response time to an attack. VOLLO’s scalability enables two deployment modes:


  • Inline detection at the edge: VOLLO on SmartNICs allows anomaly detection models to run directly in the network datapath, eliminating the need to ship traffic to external servers or accelerators
  • Centralized, high-capacity detection: larger FPGA boards can support richer ensembles and heavier models while preserving ultra-low latency

“We’ve found VOLLO to be capable of accelerating a wide range of machine learning models with extremely low latency. When run behind our custom SmartNIC platform, VOLLO enables our customers to maintain the highest levels of security on their networks and make more intelligent decisions than before at unrivalled speeds.”

– Telesoft Technologies

Built for performance

VOLLO runs on FPGA-based platforms, enabling them to execute combinations of advanced AI models, such as ensembles of anomaly detection networks, at low latency. This translates directly into earlier detection, faster responses, and greater threat resilience.

VOLLO is not limited to high-performance FPGA boards – it also scales down efficiently to SmartNIC-class devices used in inline cybersecurity applications:

  • Napatech NT400D11 SmartNIC (AGF014): Supports models up to ~3M parameters with latencies as low as 2.16 μs (463,000 inferences/s). Even near the parameter ceiling, inference remains below 21 μs, well within strict inline processing budgets.
  • Napatech NT400D13 SmartNIC (AGF027): Roughly twice the size, extending capacity to ~8M parameters. Achieves latencies as low as 1.83 μs (545,000 inferences/s) and scales gracefully to 8.3M parameters in just 23 μs.
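As a rough sanity check on the figures above, the quoted throughputs sit close to the reciprocal of the quoted latencies – consistent with roughly one inference in flight at a time (an assumption for this back-of-the-envelope check, not a statement about how the hardware is pipelined):

```python
# Back-of-the-envelope check: with one inference in flight at a time,
# throughput is approximately the reciprocal of latency.
def inferences_per_second(latency_us: float) -> float:
    """Convert a per-inference latency in microseconds to inferences/s."""
    return 1.0 / (latency_us * 1e-6)


# Latencies quoted above for the two SmartNICs.
for card, latency_us in [("NT400D11", 2.16), ("NT400D13", 1.83)]:
    print(f"{card}: {inferences_per_second(latency_us):,.0f} inferences/s")
```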

Why myrtle.ai?

We enable organizations to meet their inference performance goals, no matter the scale, complexity or industry

Expertise you can rely on


We are a team of hardware/software co-design specialists, infrastructure experts and machine learning scientists – we understand your challenges and can deliver the solutions you need

Trusted partner to leading companies


We are relied upon by companies at the top of their game because we make it possible for them to deploy complex machine learning models that run in microseconds

Frictionless deployment


We enable effortless iteration and deployment of machine learning models, freeing engineers to advance development

Increase the performance of your machine learning models

Discover how myrtle.ai can help you access low latency inference and deploy complex machine learning models that run in microseconds