FPGA
Xilinx FPGAs Accelerate Artificial
Intelligence Inferencing at SK Telecom
Daniel Eaton, Xilinx
The massive parallelism and reconfigurability of Xilinx
Kintex UltraScale FPGAs enabled SK Telecom to improve
both the speech-recognition accuracy and the response time
of its NUGU voice-activated assistant, while also gaining the
flexibility to evolve its Automatic Speech Recognition (ASR)
platform with the advancing state of the art in Artificial Intelligence.
Artificial Intelligence (AI) is rapidly penetrating markets for
online consumer services, and major players are moving quickly
to upgrade their data centres accordingly. Low-latency inference
is a key requirement to ensure that applications such as voice
recognition deliver a seamless user experience. While cost,
power consumption, and time to market are all key concerns,
flexibility is essential to keep pace with the rapid
technological advances in AI without incurring high ownership
costs.
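Why latency, and not just raw throughput, is the critical metric for an interactive service like a voice assistant can be illustrated with a small sketch. The figures below are purely hypothetical stand-ins for a batch-oriented accelerator versus a low-latency streaming one; they are not SK Telecom measurements.

```python
import random
import statistics

def percentile(samples, p):
    """Return the p-th percentile of samples (nearest-rank on a sorted copy)."""
    s = sorted(samples)
    k = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[k]

# Hypothetical per-request inference latencies in milliseconds.
# A batch-oriented design trades latency for throughput; a streaming
# design keeps per-request latency low and stable.
random.seed(0)
batch_accel = [20 + random.expovariate(1 / 15) for _ in range(1000)]
stream_accel = [4 + random.expovariate(1 / 2) for _ in range(1000)]

for name, lat in [("batch-oriented", batch_accel), ("streaming", stream_accel)]:
    print(f"{name}: mean={statistics.mean(lat):.1f} ms, "
          f"p99={percentile(lat, 99):.1f} ms")
```

For a voice assistant, the tail (p99) latency is what users actually feel: a design that wins on aggregate throughput can still produce occasional multi-hundred-millisecond stalls that break the conversational experience.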
Figure 1. SK Telecom’s NPU in Xilinx Kintex UltraScale FPGAs
powers the AIX accelerator for NUGU’s ASR servers.
Large cloud and telecom operators agree that traditional
hardware platforms for neural networks cannot meet the
demands of large-scale commercial deployment. One such
company, SK Telecom, has successfully deployed AI accelerators
at its data centres in South Korea, achieving
very high performance with low latency by working with
Xilinx to maximize the benefits of FPGA parallelism and power
efficiency. At the same time, FPGAs give it the flexibility to upgrade
the accelerators quickly with even more advanced neural networks
as AI technology continues to advance at a rapid
pace.
Figure 2. SK Telecom data: throughput vs. number of
channels, comparing GPU and FPGA accelerators with CPU-only
servers.
4 Embedded September 2019 www.eenewsembedded.com News @eeNewsEurope