Posted Aug 12, 2018, 2:49 AM by MUHAMMAD MUN`IM AHMAD ZABIDI (updated Sep 19, 2018, 10:01 PM)
| Company   | Project |
| --------- | ------- |
| Alibaba   |         |
| Amazon    |         |
| Baidu     |         |
| Google    |         |
| Microsoft |         |
| Tencent   |         |
AI hardware implementations and the main weakness of each:
- ASIC: slow to market; must find a large enough market to justify the cost
- CPU: very energy-inefficient
- GPU: great for training but very inefficient for inference
- DSP: not enough performance; high cache miss rate
When to Use FPGAs
- Transistor Efficiency & Extreme Parallelism
  - Bit-level operations (see the sketch after this list)
  - Variable-precision floating point
- Power-Performance Advantage
  - >2x compared to many-core (Intel MIC) or GPGPU
  - Unused LUTs are powered off
- Technology Scaling better than CPU/GPU
  - FPGAs are not frequency or power limited yet
  - 3D has great potential
- Dynamic reconfiguration
  - Flexibility for application tuning at run-time vs. compile-time
- Additional advantages when FPGAs are network connected ...
  - Allows network as well as compute specialization
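To make the bit-level and variable-precision point concrete, here is a minimal, illustrative C++ sketch of an N-bit multiply-accumulate. The IntN type and the 7-bit width are invented for this example; in a real FPGA flow the same idea is expressed with an HLS arbitrary-precision type (e.g. a 7-bit ap_int), so the synthesized datapath is exactly as wide as the data requires, whereas a CPU or GPU rounds every operand up to the next native 8/16/32-bit width.

```cpp
#include <cstdint>
#include <iostream>

// Illustrative only: models an N-bit signed value the way an FPGA datapath
// would size it, by masking and sign-extending after every operation.
// (A real HLS flow would use an arbitrary-precision type such as ap_int<N>.)
template <int N>
struct IntN {
    static_assert(N > 0 && N <= 32, "illustration covers 1..32 bits");
    int32_t v = 0;

    explicit IntN(int64_t x = 0) : v(wrap(x)) {}

    // Keep only the low N bits, then sign-extend back to a full-width int.
    static int32_t wrap(int64_t x) {
        const int64_t mask = (int64_t(1) << N) - 1;
        int64_t low = x & mask;
        if (low & (int64_t(1) << (N - 1)))   // N-bit sign bit set?
            low -= (int64_t(1) << N);
        return static_cast<int32_t>(low);
    }
};

// N-bit multiply-accumulate: result = acc + a * b, wrapped to N bits.
template <int N>
IntN<N> mac(IntN<N> acc, IntN<N> a, IntN<N> b) {
    return IntN<N>(int64_t(acc.v) + int64_t(a.v) * int64_t(b.v));
}

int main() {
    // A 7-bit MAC: on an FPGA this occupies a 7-bit-wide slice of logic;
    // on a CPU or GPU the same operation ties up a full 32-bit ALU lane.
    IntN<7> acc(0), a(13), b(3);
    acc = mac(acc, a, b);                         // 13 * 3 = 39, fits in 7 bits
    std::cout << "7-bit acc = " << acc.v << "\n"; // prints 39
    acc = mac(acc, a, b);                         // 39 + 39 = 78 overflows 7 bits
    std::cout << "7-bit acc = " << acc.v << "\n"; // wraps to 78 - 128 = -50
    return 0;
}
```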
When to Use GPGPUs
- Extreme FLOPS & Parallelism
  - Double-precision floating-point leadership
  - Hundreds of GPGPU cores
- Programming Ease & Software Group Interest
  - CUDA & extensive libraries (see the sketch after this list)
  - OpenCL
  - IBM Java (coming soon)
- Bandwidth Advantage on Power (IBM Power systems)
  - Start with PCIe Gen3 x16, then move to NVLink
- Leverage the existing GPGPU ecosystem and development base
  - Lots of existing use cases to build on
  - Heavy HPC investment in GPGPU
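As a concrete instance of the "CUDA & extensive libraries" point, the sketch below calls cuBLAS, NVIDIA's GPU BLAS library, to run a double-precision AXPY (y = alpha*x + y). The file name, vector size, and scalar are arbitrary choices for illustration; it assumes a machine with the CUDA toolkit and a CUDA-capable GPU, and error checking is omitted for brevity.

```cpp
// Minimal cuBLAS sketch: double-precision AXPY (y = alpha*x + y) on the GPU.
// Build (assuming the CUDA toolkit is installed):
//   nvcc daxpy_cublas.cu -lcublas -o daxpy_cublas
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;           // 1M elements (arbitrary size)
    const double alpha = 2.0;

    // Host data: x = 1.0 and y = 3.0 everywhere, so every result should be 5.0.
    std::vector<double> x(n, 1.0), y(n, 3.0);

    // Allocate device buffers and copy the inputs across PCIe (or NVLink).
    double *d_x = nullptr, *d_y = nullptr;
    cudaMalloc((void**)&d_x, n * sizeof(double));
    cudaMalloc((void**)&d_y, n * sizeof(double));
    cudaMemcpy(d_x, x.data(), n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    // One library call replaces a hand-written kernel: cuBLAS chooses the
    // launch configuration and spreads the work across the GPU's cores.
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasDaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
    cublasDestroy(handle);

    // Copy the result back and spot-check one element.
    cudaMemcpy(y.data(), d_y, n * sizeof(double), cudaMemcpyDeviceToHost);
    std::printf("y[0] = %f (expected 5.0)\n", y[0]);

    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```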