Jeffrey Rowe has more than 40 years of experience in all aspects of industrial design, mechanical engineering, and manufacturing. On the publishing side, he has written well over 1,000 articles for CAD, CAM, CAE, and other technical publications, as well as consulting in many capacities.
The Continuing Importance of GPUs For More Than Just Pretty Pictures
March 16th, 2017 by Jeff Rowe
While central processing units (CPUs) seem to get all the glory for computing horsepower, graphics processing units (GPUs) have become the processor of choice for many types of massively parallel computation.
As the boundaries of computing are pushed in areas such as speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas, researchers continue to look for new and better ways to extend and expand computing capabilities. For decades this has been accomplished via high-performance computing (HPC) clusters, which use huge amounts of expensive processing power to solve problems.
Researchers at the University of Illinois had studied the possibility of using graphics processing units (GPUs) in desktop supercomputers to speed processing of tasks such as image reconstruction, but it was a computing group at the University of Toronto that demonstrated a way to significantly advance computer vision using GPUs. By plugging in GPUs, previously used primarily for graphics, it became possible to achieve huge performance gains on computing neural networks, and these gains were reflected in superior results in computer vision.
CPU vs. GPU
Unlike CPU applications, programs running on graphics processing units (GPUs) currently have no direct access to files on the host OS file system. And although the power, functionality, and utility of today’s GPUs now extend far beyond graphics processing, the coprocessor-style GPU programming model still requires developers to explicitly manage the movement of data between its “home” in the CPU’s main memory and the GPU’s local memory.
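The coprocessor-style workflow described above can be sketched in a few lines. This is an illustrative model only: the function names (`copy_to_device`, `copy_to_host`) are hypothetical stand-ins for the explicit transfer calls a real GPU API such as CUDA or OpenCL would provide, and the “device memory” here is simulated with a plain dictionary.

```python
import numpy as np

device_memory = {}  # stands in for the GPU's local memory

def copy_to_device(name, host_array):
    """Explicit host -> device transfer (analogous to a host-to-device memcpy)."""
    device_memory[name] = host_array.copy()

def device_kernel(out_name, a_name, b_name):
    """A 'kernel' that operates only on data already resident on the device."""
    device_memory[out_name] = device_memory[a_name] + device_memory[b_name]

def copy_to_host(name):
    """Explicit device -> host transfer (analogous to a device-to-host memcpy)."""
    return device_memory[name].copy()

# The developer, not the runtime, orchestrates every movement of data:
a = np.arange(4, dtype=np.float32)  # [0, 1, 2, 3]
b = np.ones(4, dtype=np.float32)
copy_to_device("a", a)
copy_to_device("b", b)
device_kernel("c", "a", "b")        # compute happens in "device" memory
result = copy_to_host("c")
print(result)  # [1. 2. 3. 4.]
```

The point of the sketch is the choreography, not the arithmetic: nothing on the device side can touch host data until it has been explicitly copied over, which is exactly the burden the coprocessor model places on developers.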
What does a GPU do differently than a CPU and why don’t we use them for everything? Find out below from Jem Davies, VP of Technology at ARM.
CPU vs GPU (What’s the Difference?)
GPU architectures have their roots in basic graphical rendering operations, such as shading. In 1999, Nvidia introduced the GeForce 256, often referred to as the world’s first GPU – a specialized processor, built into a video card or onto a motherboard, dedicated to accelerating graphics rendering.
A big advantage of GPUs is their superior processor-to-memory bandwidth, and that bandwidth advantage translates directly into superior application performance. The key is that GPUs deliver more floating-point operations per second (FLOPS) while using fewer watts of electricity per computational operation.
GPUs deliver superior performance and better architectural support for neural networks. Those advantages carry over to the increasingly broad variety of applications that depend on trained neural nets, such as self-driving vehicles.
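The reason neural nets map so well onto GPUs is that their workload is dominated by dense matrix multiplies: every sample in a batch undergoes the identical multiply-accumulate pattern. A minimal sketch of one fully connected layer’s forward pass, written here in NumPy on the CPU, shows the operation a GPU library would accelerate (the layer sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fully connected layer: batch of 64 samples, 128 inputs, 32 outputs.
batch, n_in, n_out = 64, 128, 32
x = rng.standard_normal((batch, n_in)).astype(np.float32)   # input batch
w = rng.standard_normal((n_in, n_out)).astype(np.float32)   # weights
b = np.zeros(n_out, dtype=np.float32)                       # biases

# The whole batch is one dense matrix multiply plus a ReLU activation --
# the same arithmetic repeated across thousands of independent elements.
y = np.maximum(x @ w + b, 0.0)
print(y.shape)  # (64, 32)
```

Stacking many such layers, and repeating the computation millions of times during training, is what turns this batched, uniform arithmetic into a natural fit for a GPU’s thousands of cores.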
At this time, GPU technology is advancing far faster than that of conventional CPUs.
GPU-accelerated computing has now grown into a mainstream movement supported by the latest operating systems from Apple and Microsoft. The reason for the wide and mainstream acceptance is that the GPU is a computational powerhouse, and its capabilities are growing faster than those of the x86 CPU.
As GPU hardware becomes increasingly general-purpose, it is quickly outgrowing the traditional, constrained GPU-as-coprocessor programming model.
In the following video, Mythbusters hosts Adam Savage and Jamie Hyneman demonstrate the power of GPU computing.
Mythbusters Demo GPU versus CPU
Much of the momentum behind GPUs has come from Nvidia, which has introduced increasingly sophisticated GPUs, including the new Pascal architecture, designed for demanding workloads such as deep learning. Its latest GPU, the Tesla P100, packs 15 billion transistors onto a single chip – twice as many as previous processors.
While GPU technology is advancing quickly, several challenges remain. For example, programming GPUs is still relatively difficult, and that difficulty is only compounded when these devices are assembled in multi-GPU clusters.
Another challenge is how to better integrate GPUs with CPUs. The two types of processors are seldom integrated in the same package, and they usually lack high-bandwidth communication between them. This limits the number of applications and capabilities that run well on these systems.
“The CPU (central processing unit) has often been called the brains of the PC. But increasingly, that brain continues to be enhanced by another part of the PC – the GPU (graphics processing unit), which is its soul.” (Source: Nvidia advertisement, 2009)
Obviously, all PCs have chips that render display images to monitors, but not all of these chips are created equal. For example, Intel’s integrated graphics controller provides basic graphics adequate only for productivity applications like Microsoft Office, low-resolution video, and basic games.
How GPU Computing Works (Source: Nvidia)
The GPU is in a class by itself – it goes far beyond basic graphics controller functions, and is a programmable and powerful computational device in its own right.
Currently, Nvidia GPUs sit on separate chips, usually connected to the CPU via an I/O bus (PCIe). Because every byte of data must cross this relatively slow bus, it only pays to send large tasks to the GPU. Future systems will integrate GPUs and CPUs in one package.
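A simple cost model shows why only large tasks justify the trip across the bus: the GPU must amortize the transfer time against its compute advantage. The throughput numbers below (roughly PCIe 3.0 x16 effective bandwidth and ballpark CPU/GPU FLOPS) are illustrative assumptions, not measured figures.

```python
def offload_wins(n_bytes, flop, pcie_bps=12e9, gpu_flops=9e12, cpu_flops=0.5e12):
    """Crude model: is copy-over + GPU compute + copy-back faster than the CPU?"""
    gpu_time = 2 * n_bytes / pcie_bps + flop / gpu_flops  # transfer both ways
    cpu_time = flop / cpu_flops                           # no transfer needed
    return gpu_time < cpu_time

# Small task: the PCIe transfer dominates, so the CPU wins.
print(offload_wins(n_bytes=1e6, flop=1e6))    # False

# Large, compute-heavy task: the GPU wins despite the bus.
print(offload_wins(n_bytes=1e8, flop=1e12))   # True
```

This amortization problem is exactly what disappears once CPU and GPU share one package and one memory system, which is why integrated designs are the stated direction for future systems.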
The unique capabilities of GPUs have been described in the following way: “GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place.”
Architecturally, the CPU is composed of just a few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The ability of a GPU with 100+ cores to process thousands of threads can accelerate some software by 100x over a CPU alone. What’s more, the GPU achieves this acceleration while being more power- and cost-efficient than a CPU.
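The “same operation over huge batches” pattern can be demonstrated even without a GPU. Below, the scalar loop processes one element at a time, like a single CPU thread, while the vectorized form applies the identical multiply across the whole batch at once – the access pattern a GPU’s thousands of threads exploit. (NumPy’s vectorization here is a CPU-side stand-in for that parallelism, not actual GPU execution.)

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar loop: one multiply at a time.
t0 = time.perf_counter()
out_scalar = [a[i] * b[i] for i in range(n)]
t_scalar = time.perf_counter() - t0

# Vectorized: the same multiply applied uniformly across the whole batch.
t0 = time.perf_counter()
out_vec = a * b
t_vec = time.perf_counter() - t0

print(f"scalar: {t_scalar:.4f}s  vectorized: {t_vec:.4f}s")
```

The results are identical, but the batched form is orders of magnitude faster; a GPU pushes the same idea much further by running thousands of such uniform operations in hardware simultaneously.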
The combination of a CPU with a GPU can deliver the best balance of system performance, price, and power.
GPUs, then, are a likely gateway to the future of computing, and GPU technology is an important part of pushing the limits and potential of computing — much more than just displaying pretty pictures.