The GPU Overshadows the CPU

Ask a teenager about GPUs (Graphics Processing Units) and you might get a surprisingly informed response. As I watch my kids, nephews, and their friends build “gaming PCs,” they all seem quite current on the relative performance of AMD vs. Nvidia, the merits of GPU memory, power issues, etc. (And one important side effect: a fairly healthy family ecosystem of hand-me-down GPUs.)

While it’s great to run Battlefield 4 at 60fps on ultra detail across three HD monitors, what’s most interesting is how GPU capabilities are generalizing beyond graphics. This is one of my absolute favorite disruption patterns: “commoditization + crossover,” where a technology is commoditized by demand for one application and then applied elsewhere.

GPUs began as very specialized (and expensive) 2D & 3D hardware accelerators. Things began to change in the 1990s, driven by demand for 3D games, first in arcade units and consoles, and then PCs. In 1999, Nvidia coined the term “GPU,” starting a consumer-driven, 15-year-plus price/performance ramp with no end in sight.

GPUs are also getting much more generalized. The first, fairly rigid 3D-transform computation pipelines have gradually given way to more general stream processors. So-called graphics “shaders” are now nearly fully programmable: GPU developers write compute “kernels” in C-like languages (such as OpenGL’s GLSL or DirectX’s HLSL) that then run on hundreds or thousands of compute units on the GPU. And more recent technologies, such as Nvidia’s CUDA and the OpenCL platform, dispense with the graphics-centric worldview entirely, treating the GPU as a general-purpose parallel processor.
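To make that concrete, here’s a minimal sketch of what one of these kernels looks like in CUDA C (my own illustrative example, not from any particular codebase): a single multiply-add, written once, executed by every thread.

```cuda
// A minimal CUDA kernel sketch (illustrative): every GPU thread computes
// one element of y = a*x + y. Launched across thousands of threads, these
// few lines sweep an entire array in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Derive this thread's global index from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard threads that fall past the array end
        y[i] = a * x[i] + y[i];
}
```

The host side decides how many of these threads actually run via the launch configuration, as in the sketch a bit further below.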

Because of their parallel architecture, GPUs have continued to scale while single-threaded CPU performance has effectively flattened. For certain “embarrassingly parallel” problems, where the same operation is applied independently across large amounts of data, they are hard to beat. For example, $350 gets you ~3.4 trillion floating-point ops/second, roughly 42,000x the original Cray supercomputer (which sustained on the order of 80 megaflops)! Amazon offers GPU instances, and even Intel has conceded in a way: on a modern x86 multi-core processor, almost two-thirds of the die area is GPU.
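For a sense of how an “embarrassingly parallel” job actually gets onto the hardware, here’s a small host-side sketch, again my own illustrative code with assumed sizes: pick a thread-block size, launch enough blocks to cover every element, and let the GPU schedule the rest.

```cuda
// Illustrative host program (assumed sizes): squaring ~4.2M floats on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// The "repeated operation applied to large amounts of data":
// one thread squares one element.
__global__ void square(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i];
}

int main()
{
    const int n = 1 << 22;                              // ~4.2 million elements
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));    // allocate on the GPU
    cudaMemset(d_data, 0, n * sizeof(float));           // zero-fill for the demo

    // 256 threads per block; round up so every element is covered.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    square<<<blocks, threads>>>(d_data, n);
    cudaDeviceSynchronize();                            // wait for the GPU

    printf("ran %d blocks x %d threads\n", blocks, threads);
    cudaFree(d_data);
    return 0;
}
```

The same few lines scale from thousands to millions of elements just by changing n; the hardware, not the programmer, worries about scheduling.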

It’s not surprising to see GPU horsepower applied to more and more non-graphics applications, such as simulating physics, aligning genome sequences, and training deep neural networks.  I think this pattern will continue, with the GPU firmly entrenched in computing systems as a highly scalable vector co-processor.
