By Dan McNamara
Intel® field programmable gate arrays (FPGAs) continue to gain momentum in the marketplace. Paired with Intel® processors, FPGAs are uniquely positioned to accelerate growth across a range of use cases from the cloud to the edge, unlocking the power of data to transform our world.
FPGAs are the Swiss Army knife of semiconductors because of their flexibility. These devices can be reprogrammed at any time – even after equipment has been shipped to customers. FPGAs contain a mixture of logic, memory and digital signal processing (DSP) blocks that can implement any desired function with extremely high throughput and in real time. This makes FPGAs ideal for many critical cloud and edge applications.
The Internet of Things (IoT) is projected to reach up to 50 billion smart devices in 2020 – about six smart devices for every human on Earth. Each person will generate about 1.5 GB of data daily, while each smart connected machine will generate as much as 50 GB daily. Storing, processing and analyzing this massive amount of data in real time and in a power-efficient manner to extract business intelligence is precisely the capability that FPGAs bring to cloud and edge computing.
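A quick back-of-envelope check of the figures above (the 2020 world population of roughly 7.8 billion is an assumption added here, not from the original text):

```python
# Back-of-envelope check of the IoT projections above.
devices = 50e9            # projected smart devices in 2020
population = 7.8e9        # assumed world population in 2020
per_person_gb = 1.5       # GB generated per person per day
per_machine_gb = 50.0     # GB generated per smart connected machine per day

devices_per_person = devices / population

# Total daily data volume, converted from gigabytes to exabytes (1 EB = 1e9 GB).
daily_data_eb = (population * per_person_gb + devices * per_machine_gb) / 1e9

print(f"{devices_per_person:.1f} smart devices per person")
print(f"{daily_data_eb:.0f} EB of data generated per day")
```

The device count alone confirms the "about six per human" figure; the aggregate data volume shows why power-efficient, real-time processing matters at this scale.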
The recent announcement that Intel FPGAs are powering artificial intelligence (AI) in Microsoft Azure* is a perfect example. The foundation is Project Brainwave*, Microsoft's principal architecture for serving real-time AI, which is used in Bing* intelligent search and is now offered in Azure and at the edge.
Whether in the cloud or at the edge, Intel FPGAs offer a low-latency, power-efficient path to real-time AI without the need to batch calculations into smaller processing chunks. For example, FPGA-powered AI can achieve extremely high throughput running ResNet-50, an industry-standard deep neural network that requires almost 8 billion calculations, without batching. This is achievable in FPGAs because the programmable hardware, including logic, DSP and embedded memory, allows any desired logic function to be easily programmed and optimized for area, performance or power. Since this fabric is implemented in hardware, it can be customized and can perform parallel processing, which makes it possible to achieve orders-of-magnitude performance improvements over traditional software or GPU design methodologies.
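To see why batching-free execution matters, a simple throughput model helps. The ~8 billion operations per ResNet-50 inference comes from the text above; the 40 TOPS sustained-throughput figure is a hypothetical assumption for illustration, not a published specification:

```python
# Illustrative latency/throughput model for batch-free inference.
# ops_per_image is from the article; sustained_ops_per_sec is an
# assumed figure for a hypothetical accelerator.
ops_per_image = 8e9            # ~8 billion calculations per ResNet-50 inference
sustained_ops_per_sec = 40e12  # assumed sustained throughput (40 TOPS)

# With no batching, each image is processed as soon as it arrives,
# so latency and throughput are directly related.
latency_s = ops_per_image / sustained_ops_per_sec
images_per_sec = sustained_ops_per_sec / ops_per_image

print(f"per-image latency: {latency_s * 1e3:.2f} ms")
print(f"throughput: {images_per_sec:.0f} images/s")
```

On a batching architecture, high throughput typically requires grouping many requests, which inflates the latency of each individual request; the parallel FPGA fabric avoids that trade-off by keeping single-request latency low at full throughput.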
Enterprise applications are also leveraging this same capability. Dell EMC* and Fujitsu* are putting the Intel Arria® 10 GX Programmable Acceleration Cards (PAC) into off-the-shelf servers for enterprise data centers. These accelerator cards are designed to work with Intel Xeon® processors across workloads such as real-time data analytics, AI, video transcoding, financial services, cybersecurity and genomics. These data-intensive workloads face an explosion of data and benefit from the real-time and parallel processing that FPGAs offer. Intel has fostered an expansive partner ecosystem to develop full turnkey solutions across these workloads using the Acceleration Stack for Intel Xeon CPU with FPGAs.
Levyx* – a big data company led by former financial services industry executives – uses the Intel PAC based on Arria 10 FPGAs to accelerate financial backtesting, a commonly used technique for predicting the performance of computational trading strategies across financial instruments, including a full range of securities, options and derivatives. It is a highly parallel, data- and compute-intensive workload that can often take many hours, or even days, to execute. Using FPGAs, Levyx achieved 850 percent faster performance on financial backtesting. The attached graphic shows data across 50 algorithm simulations on 20 stock trading symbols. The results are compelling.
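For readers unfamiliar with backtesting, the sketch below shows the shape of the workload: replaying a trading rule over historical prices to measure how it would have performed. This is a generic, minimal example with synthetic data and a simple moving-average crossover rule; it is not Levyx's implementation, and all names and parameters here are illustrative:

```python
# Minimal backtest sketch: a long/flat moving-average crossover strategy
# replayed over synthetic prices. Real backtesting runs thousands of
# (strategy, symbol) combinations like this one, which is why the
# workload parallelizes so well.
import random

def moving_average(prices, window):
    """Simple trailing moving average; one value per full window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def backtest(prices, fast=5, slow=20, cash=1000.0):
    """Replay the crossover rule over prices; return final equity."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    offset = slow - fast  # align the two moving-average series
    shares = 0.0
    for i in range(len(slow_ma)):
        price = prices[i + slow - 1]
        if fast_ma[i + offset] > slow_ma[i] and shares == 0:
            shares, cash = cash / price, 0.0      # fast MA above slow: go long
        elif fast_ma[i + offset] < slow_ma[i] and shares > 0:
            cash, shares = shares * price, 0.0    # fast MA below slow: go flat
    return cash + shares * prices[-1]

# Synthetic daily price path (random walk) standing in for market data.
random.seed(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

print(f"final equity: {backtest(prices):.2f}")
```

Each simulation is independent of the others, so many can run concurrently; that independence is what lets an FPGA's parallel fabric compress hours of sequential simulation into far less time.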