The recommendation challenge
Recommendation models power search, ads and news feeds, but their memory-intensive lookups create CPU bottlenecks. This slows inference and delays accurate recommendations, risking missed customer interactions.
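The bottleneck described above comes from the sparse-feature stage of a typical recommendation model: gathering rows from embedding tables far larger than CPU caches, so each lookup is effectively a random DRAM access. A minimal illustrative sketch (table sizes and shapes are hypothetical, not from any real deployment):

```python
import numpy as np

# Illustrative sizes only: a real production table can run to billions
# of rows, dwarfing CPU caches.
NUM_ROWS, EMB_DIM = 1_000_000, 64
BATCH, IDS_PER_SAMPLE = 128, 32

# Large embedding table of learned feature vectors.
table = np.random.rand(NUM_ROWS, EMB_DIM).astype(np.float32)

# Sparse feature IDs for one batch of recommendation requests.
ids = np.random.randint(0, NUM_ROWS, size=(BATCH, IDS_PER_SAMPLE))

# Gather then pool: almost no arithmetic per byte fetched, so the CPU
# spends its time waiting on memory rather than computing.
pooled = table[ids].sum(axis=1)   # shape: (BATCH, EMB_DIM)
```

Because the gather-and-pool step is memory-bound rather than compute-bound, adding CPU cores does little; this is the class of workload a memory-oriented accelerator targets.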

The SEAL solution
SEAL is programmable to accelerate multiple workloads, delivering significant throughput gains for industry-relevant models under real-world conditions. The more SEAL Accelerator modules you use, the greater the throughput.
The benefits and features of using SEAL
Designed to impress
Proven to deliver a 5x throughput improvement on industry-standard recommendation model benchmarks under real-world traffic
Low latency
Highly deterministic, latency-bounded throughput with sub-two-millisecond latencies across all batch sizes when using five or more SEAL Accelerator modules
Widely compatible
Compatible with all major deep learning frameworks, SEAL accelerates all number formats, bit depths and quantization schemes
Flexible
SEAL supports recommendation model co-location and sharding with zero performance degradation and is complementary to next-gen AI inference accelerators
Use Cases
Increase the performance of your memory-intensive workloads
SEAL increases throughput and removes bottlenecks, strengthening the revenue-generating potential of your deep learning models by delivering accurate and timely recommendations to your target audience.
- Analytics: speed up database lookups
- IoT devices: efficiently process vast amounts of data
- Personalization: deliver personalized recommendations at scale
Why myrtle.ai?
High performance
Proven to deliver a 5x throughput improvement on industry-standard recommendation model benchmarks under real-world traffic
Low latency
Highly deterministic, latency-bounded throughput with sub-two-millisecond latencies across all batch sizes when using five or more SEAL Accelerator modules
Widely compatible
Compatible with all major deep learning frameworks, SEAL accelerates all number formats, bit depths and quantization schemes
Increase the performance of your machine learning models
Discover how myrtle.ai can help you access low-latency inference and deploy complex machine learning models that run in microseconds