Our Vision

The collective power usage of data centers around the world is growing rapidly. Taken as a whole, data centers are currently the fourth-largest consumer of power, behind only China, the US and the EU, and that growth is unsustainable on current hardware. We believe re-programmable silicon is at the heart of the low-power future of the data center. We are in good company with this view: every server Microsoft deploys into its Azure data centers contains the re-programmable silicon we program, for exactly this reason.

Machine learning, our flagship expertise, is expected to fuel a significant proportion of data center growth. Our vision is to produce data center solutions that accelerate machine learning whilst reducing its power consumption: advancing the vast potential of machine learning to enhance our lives without costing the planet.

Our Approach

Machine learning models change so fast that hardware optimized for today's models can quickly become inefficient. We view machine learning optimization as a three-part problem: models must be designed with close attention to their numerics, including their quantization, sparsity and compression, and with a clear understanding of the hardware cost of implementing them on each of the available platforms.
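As a rough illustration of what we mean by paying attention to numerics, the sketch below shows one common technique, symmetric post-training int8 quantization of a weight matrix. This is an illustrative Python/NumPy example only, not Myrtle code; the function names and shapes are assumptions for the sake of the example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map the largest magnitude to 127."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights so the accuracy impact can be checked."""
    return q.astype(np.float32) * scale

# Example: a small random weight matrix stored in a quarter of the memory.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```

Quantizing weights like this cuts memory and bandwidth by 4x versus float32, but the acceptable loss of precision, and the cost of the arithmetic itself, depends on the model and the target hardware, which is exactly the trade-off co-design has to resolve.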

Only by understanding machine learning model behavior, efficiencies in model computation and the optimal design of computation hardware can we generate truly optimized solutions. We advocate algorithm-accelerator co-design to create world-class solutions: every model has unique properties that can be capitalized upon. New data center boards from Xilinx and Intel allow optimized data structures to be continually redesigned and redeployed, so that new machine learning models can be tuned for both performance and power consumption.

We offer a step-change improvement in your data center inference performance: license our hardware accelerators and use our globally recognized machine learning expertise to access the benefits of next-generation hardware.

Our ability to heavily optimize machine learning models means that they are shrinking rapidly in size, taking our hardware designs towards the edge. As the owner of a global edge workload benchmark, we are now able to target the hardware designs we generate at edge FPGA and ASIC components.

Yesterday’s programmers of reconfigurable systems were highly trained digital designers working in languages like Verilog. We are at the forefront of the revolution that puts software engineers in the vanguard of harnessing reconfigurable technology in the cloud: mapping their algorithms onto a mixture of compute resources to achieve previously impossible levels of performance and energy efficiency across new execution scenarios. This is the future of the cloud and of our ever more interconnected world.

Key Staff

Peter Baldwin

CEO

Peter is known for his data center software. He directly produced and supported simulation visual effects on twenty major Hollywood movies. He has a maths PhD and a special interest in the mathematical foundations of deep learning.

David Page

Chief Scientist

David has a background in mathematics and theoretical physics. He completed a PhD at Durham University and postdoctoral research at the University of Toronto. After a period in industry managing quantitative analysis teams, he returned to research in machine learning and theoretical neuroscience. His interest is in understanding the types of algorithms that allow intelligent agents to operate safely and effectively in complex, dynamic environments. This requires a deeper understanding of the statistical properties of learning algorithms, in order to guarantee robustness and safety, as well as algorithmic advances to handle the rich problem-solving capabilities such behavior demands.

Liz Corrigan

Senior Engineering Manager

Liz runs Myrtle’s engineering operations and has been leading teams that create FPGA-based products and systems for over 15 years. She has developed state-of-the-art mobile telecommunications equipment, shipped defence systems into theatre and led verification activities for security-critical applications. Liz is a Chartered Engineer with a technical background in RTL design, verification and electronic systems design.

Brian Tyler

Commercial Director

Brian has held C-level roles in management, sales & marketing at several international software and hardware companies.

Graham Hazel

GPU Lead

Graham’s team accelerates our machine learning training and rapidly prototypes FPGA designs. Prior to joining Myrtle, Graham worked at the semiconductor company ARM.

Ollie Bunting

FPGA Lead

Ollie leads our FPGA group and has extensive experience in embedded systems, including high-speed packet inspection and high-grade cryptography.

Christiaan Baaij

Lead Architect

Christiaan is an ENIAC award winner and a senior functional-programming developer on Myrtle’s neural network hardware team.

Jonathan Shipton

Software Lead

Jonathan is a Cambridge University-educated computer scientist and an expert in functional programming and its applications to low-level systems.

Sam Davis

Technical Lead - Machine Learning

Sam leads our machine learning engineering effort into new model topologies and scalable training. He is currently the chair of the Speech Working Group for the MLPerf.org benchmarking consortium and the owner of the official MLPerf transcription repository, which is used to benchmark all inference and training hardware for this category.

Ian Ferguson

West Coast Evangelist

Before joining us, San Francisco-based Ian was VP of Worldwide Marketing and Strategic Alliances for a global fabless semiconductor company, where he defined, articulated and executed the delivery of disruptive hardware and software technology to cloud infrastructure companies and enterprises.

Trusted By