Providers and users of services that run on deep neural networks (DNNs), such as speech transcription, natural language processing, and recommendation systems, can cut costs and scale their offerings with Myrtle.ai technology. Myrtle.ai creates DNNs that are optimized for specific inferencing workloads and deployable in today’s cloud and on-premise data centers as well as in edge applications.
With the rapid growth of DNN-based AI, including speech services and recommendation systems, businesses that rely on these technologies to engage with their customers are struggling to scale their IT resources and to manage the resulting rise in energy demand. To address this, cloud providers and enterprises running their own data centers are adopting FPGA accelerator cards such as the Programmable Acceleration Cards (PACs) from Intel® or the Alveo™ cards from Xilinx®.
Myrtle.ai combines its highly efficient MAU Accelerator™ tiles in a proprietary, scalable architecture to build DNNs that are optimized for specific workloads and run on these FPGAs. The unrivalled efficiency Myrtle.ai achieves comes from exploiting unstructured sparsity and optimal quantization. This white paper describes these techniques and the compelling benefits they deliver over CPUs and GPUs.
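The two techniques named above, unstructured sparsity and quantization, can be illustrated with a minimal NumPy sketch. The helper names below are hypothetical and this is not Myrtle.ai's implementation; it merely shows the general idea: zero out the smallest-magnitude weights individually (unstructured pruning), then map the survivors to low-precision integers so the accelerator can skip the zeros and multiply narrow integers instead of 32-bit floats.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights (unstructured sparsity)."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

w_sparse, mask = magnitude_prune(w, sparsity=0.9)
q, scale = quantize_int8(w_sparse)

# Only the ~10% surviving weights need to be stored and multiplied;
# a sparsity-aware accelerator skips the zeros entirely.
print(f"non-zero fraction: {mask.mean():.2f}")
print(f"max dequantization error: {np.abs(q * scale - w_sparse).max():.4f}")
```

In practice the pruned model is fine-tuned to recover accuracy, and the sparsity and bit-width actually chosen depend on the workload; the sketch only shows why the memory and compute savings are roughly proportional to the fraction of weights removed.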
The reprogrammable nature of FPGAs enables Myrtle.ai to future-proof your DNN. As research uncovers new models and optimization techniques, the FPGA can be reprogrammed to take advantage of them, widening its technology advantage over fixed-function hardware accelerators.
Whether you need a solution for a data center or edge application, you can evaluate the competitive advantage Myrtle.ai can bring to your business by contacting us today.