Myrtle AI


We work with a range of technology partners to share knowledge and expertise in machine learning and acceleration.

We partner with Xilinx and Intel, the two companies that pioneered FPGAs and FPGA-enabled accelerator cards for data centers. This keeps us at the forefront of digital design and, specifically, hardware-accelerated machine learning. We also partner with leading companies in specific application areas and with cloud providers, which ensures our products and solutions are fit for purpose and available to a broad market of users. Sharing our expertise with other industry leaders has led us to take a leading role in defining machine learning benchmarks within an organisation backed by all the market leaders.

We’re an Intel Gold Partner. We work within their Design Solutions Network (DSN) program, delivering solutions for Intel’s range of data center acceleration cards based on Arria 10 and Stratix 10 FPGAs. Our MAU™ Accelerator runs on the Intel FPGA PAC D5005 accelerator card.

We’re a Xilinx Alliance Partner and target a range of Xilinx boards. Our MAU™ Accelerator runs on the Xilinx Alveo U250 accelerator card and is currently available on the Nimbix cloud. Alveo is Xilinx’s range of adaptable accelerator cards for data center workloads. We also deploy on Xilinx UltraScale+ cards in the Amazon AWS F1 cloud.

Jaguar Land Rover is a leader in autonomous driving, and its Cortex project aims to improve current systems and develop the technology that can take autonomy further off-road. We’re very proud to have been chosen as a core part of Cortex. In collaboration with the University of Birmingham, the project combines machine learning with advanced radar and camera data to realize Level 4 autonomous driving in poor weather and off-road conditions. The project recently featured in a Wired magazine article.


We are proud to be an Amazon Web Services Partner. We deploy object detection, image segmentation and image recognition solutions on Amazon F1 instances. Originally developed for edge devices, these solutions run on data center Xilinx UltraScale+ boards and can be viewed on the AWS Marketplace. These demonstrations show how inference can be run securely on re-programmable silicon between decryption/encryption stages.

MLPerf is an industry-led machine learning benchmarking effort covering the seven principal real-world uses of machine learning. As one of only 10 global benchmark owners, we own the benchmark for the speech transcription workload. The code we have open-sourced as part of our commitment is being used to benchmark new edge and data center hardware. MLPerf is also part of MLCommons, whose mission is to accelerate ML innovation and increase its positive impact on society. MLCommons aims to do this by creating public resources and industry-scale public datasets, and by supporting outreach activities.

We recognize the trademarks of the third-party companies referenced above.

