We perform bespoke acceleration of clients’ inference models on FPGAs and other programmable platforms, giving them a latency advantage over their competitors or a cost benefit over alternative approaches. These designs can remain programmable, so they benefit from the latest developments in ML, or be converted to an ASIC for greater performance and lower unit cost. For a confidential discussion of whether this could benefit you, please contact us here.
Our work has benefited clients in a wide range of markets, including finance, aerospace & defense, speech, social media and automotive.
We optimize ML inference for efficient hyperscale deployment of a wide range of workloads using our patented MAU Accelerator™ technologies and proven design techniques. These techniques, and the benefits they deliver, are described in this white paper.
For more information regarding some of the solutions we have developed with clients, please see the following:
Speech synthesis: see our blog, our demo video and our white paper.
Speech transcription: see our Achronix White Paper, Intel White Paper and Intel Solution Brief.