GPUs are more powerful than CPUs for many kinds of work, and they are becoming more powerful all the time. So why do our computers still revolve around CPUs? There are a few reasons. First, CPUs are cheaper to buy and maintain than GPUs. Second, CPUs can handle more complex tasks than GPUs. Third, CPUs can be integrated into devices like smartphones and laptops, while discrete GPUs require separate hardware. But all of that is changing. GPU prices have fallen dramatically in recent years, and processors with built-in graphics cores (like some AMD Ryzen 7 chips) are becoming increasingly affordable. Meanwhile, advances in artificial intelligence (AI) mean that more tasks are being performed by machine learning algorithms, which typically run much faster on a GPU than on a CPU. Sooner or later, we are likely to see GPUs widely adopted for tasks like rendering 3D graphics and performing AI calculations. In the meantime, though, there is still good reason to put a GPU to work alongside your CPU, especially if you want to get the most out of your device.


Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader Ell keeps up with tech news and is curious why we’re not using more GPU-based systems:

It seems to me that these days lots of calculations are done on the GPU. Obviously graphics are done there, but using CUDA and the like, AI, hashing algorithms (think Bitcoins) and others are also done on the GPU. Why can’t we just get rid of the CPU and use the GPU on its own? What makes the GPU so much faster than the CPU?

Why indeed? What makes the CPU unique?

The Answer

SuperUser contributor DragonLord offers a well-supported overview of the differences between GPUs and CPUs:

The detailed answer: GPGPU is still a relatively new concept. GPUs were initially used for rendering graphics only; as technology advanced, the large number of cores in GPUs relative to CPUs was exploited by developing computational capabilities for GPUs so that they can process many parallel streams of data simultaneously, no matter what that data may be. While GPUs can have hundreds or even thousands of stream processors, they each run slower than a CPU core and have fewer features (even though they are Turing complete and can be programmed to run any program a CPU can run). Features missing from GPUs include interrupts and virtual memory, which are required to implement a modern operating system.

In other words, CPUs and GPUs have significantly different architectures that make them better suited to different tasks. A GPU can handle large amounts of data in many streams, performing relatively simple operations on them, but is ill-suited to heavy or complex processing on a single or few streams of data. A CPU is much faster on a per-core basis (in terms of instructions per second) and can perform complex operations on a single or few streams of data more easily, but cannot efficiently handle many streams simultaneously.
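
To make the contrast concrete, here is a minimal sketch in CUDA (which the question mentions). The function names and the choice of vector addition are illustrative, not part of DragonLord's answer. The CPU version walks the data with one fast core; the GPU version assigns one array element to each of thousands of lightweight threads, every one performing the same simple operation.

    // CPU version: a single fast core processes the elements one at a time.
    void add_arrays_cpu(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] + b[i];
    }

    // GPU version (CUDA): each thread computes exactly one element.
    // Thousands of these threads execute the same simple operation in parallel.
    __global__ void add_arrays_gpu(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
        if (i < n)                                      // guard: thread count may exceed n
            out[i] = a[i] + b[i];
    }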

As a result, GPUs are not suited to tasks that cannot be parallelized or that do not benefit significantly from parallelism, including many common consumer applications such as word processors. Furthermore, because GPUs use a fundamentally different architecture, an application must be programmed specifically for a GPU to run on one, and significantly different techniques are required to do so. These techniques include new programming languages, modifications to existing languages, and new programming paradigms better suited to expressing a computation as a parallel operation performed by many stream processors, as the sketch below illustrates. For more information on the techniques needed to program GPUs, see the Wikipedia articles on stream processing and parallel computing.
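
As a rough illustration of how different those techniques are, here is what launching the hypothetical add_arrays_gpu kernel from the earlier sketch could look like on the host side, assuming the CUDA runtime API. The explicit device allocations, host-to-device copies, and thread-block launch geometry have no counterpart in ordinary sequential CPU code.

    #include <cuda_runtime.h>

    void run_add_on_gpu(const float *a, const float *b, float *out, int n) {
        float *d_a, *d_b, *d_out;
        size_t bytes = (size_t)n * sizeof(float);

        // GPU memory is separate from system memory and must be managed explicitly.
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_out, bytes);
        cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

        // The launch configuration maps the work onto blocks of threads,
        // a concept with no equivalent in sequential CPU programming.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        add_arrays_gpu<<<blocks, threadsPerBlock>>>(d_a, d_b, d_out, n);

        // Copy the result back to system memory and release the device buffers.
        cudaMemcpy(out, d_out, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_a);
        cudaFree(d_b);
        cudaFree(d_out);
    }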

AMD is pioneering a processor design called the Accelerated Processing Unit (APU), which combines conventional x86 CPU cores with GPUs. This could allow the CPU and GPU components to work together and improve performance on systems with limited space for separate components. As technology continues to advance, we will see an increasing degree of convergence of these once-separate parts. However, many tasks performed by PC operating systems and applications are still better suited to CPUs, and much work is needed to accelerate a program using a GPU. Since so much existing software uses the x86 architecture, and because GPUs require different programming techniques and are missing several important features needed for operating systems, a general transition from CPU to GPU for everyday computing is extremely difficult.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.