GPGPU-Ch2-Programming-Model
Last updated: March 3, 2024 (evening)
Complete notes can be found here (Mandarin):
Brief overview of Chapter 2:
Chapter 2 introduces the GPU programming model and explains how GPUs sustain such high computational throughput. The key enabler is massive multithreading: a GPU runs thousands of threads concurrently, and it can afford to because thread creation is cheap, scheduling is handled by hardware with very low overhead, and threads can synchronize quickly. Together, these properties make extremely fine-grained parallelism practical.
Imagine a bustling city where every inhabitant has a specific task. Neighborhoods correspond to thread blocks, whose threads work closely together on tightly related pieces of a job (fine-grained parallelism); districts correspond to independent thread blocks, which advance the same overall goal without coordinating with one another (coarse-grained parallelism); and whole regions correspond to independent grids, each taking on a different large-scale project (task parallelism). The sketch after this paragraph shows how the hierarchy looks in code.
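As a minimal CUDA sketch of that hierarchy (the kernel name, the block size of 256, and the reduction pattern are my own illustration, not taken from the book): threads within one block cooperate through shared memory and a barrier, while blocks execute independently of each other.

```cuda
// Threads in a block cooperate on one tile (fine-grained parallelism);
// blocks run independently of one another (coarse-grained parallelism).
// Assumes a launch with exactly 256 threads per block.
__global__ void tileSum(const float *in, float *blockSums, int n) {
    __shared__ float tile[256];        // visible only within this block
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                   // barrier across the block's threads

    // Tree reduction inside the block: threads cooperate step by step.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        blockSums[blockIdx.x] = tile[0];   // one partial sum per block
}
```

A launch such as `tileSum<<<numBlocks, 256>>>(in, blockSums, n)` creates one independent block per tile, and launching several unrelated kernels (separate grids) is the task parallelism the chapter describes.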
The chapter then shines a spotlight on SIMD (Single Instruction, Multiple Data), an execution style in which one instruction operates on many data elements at once, a natural fit for workloads that repeat the same operation across large arrays. Modern GPUs are built on SIMD hardware, but programming interfaces like CUDA and OpenCL hide that complexity behind a more flexible model: the programmer writes scalar threads, each free to follow its own control-flow path and to access whatever memory addresses it needs.
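A short sketch of that flexibility (the kernel below is my illustration, not an example from the book): each thread branches on its own data, something a strict lock-step SIMD model cannot express directly. Under the hood, threads of a warp that diverge execute both paths under a mask, but the programming model lets us write the code as if every thread were independent.

```cuda
// Each scalar thread takes its own control-flow path.
__global__ void clampNegatives(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;            // per-thread early exit
    if (data[i] < 0.0f)            // threads in the same warp may diverge here
        data[i] = 0.0f;            // only the "negative" lanes execute this
}
```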
Next, we explore how GPUs fit into the bigger picture alongside CPUs. Whether the GPU is a discrete card with its own memory or integrated on the same chip as the CPU, a computation follows the same choreography: allocate device memory, transfer the input data, launch a kernel (the function that actually runs on the GPU), and copy the results back. The chapter's running example is SAXPY (single-precision y = a*x + y), which it splits across thousands of threads, one array element per thread, to show how naturally data-parallel work maps onto a GPU.
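Here is a minimal CUDA version of SAXPY in that spirit (a sketch: error checking is omitted and the sizes and values are illustrative):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SAXPY: y[i] = a * x[i] + y[i], computed one element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                              // guard: the grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // 1. Allocate device memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);

    // 2. Transfer inputs from host to device.
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 3. Launch the kernel across thousands of threads.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    // 4. Copy the result back from device to host.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);           // expect 2*1 + 2 = 4

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```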
The chapter then digs into GPU instruction sets, comparing NVIDIA's approach with AMD's. On the NVIDIA side it introduces two levels: PTX, a virtual instruction set that the compiler emits, and SASS, the native machine code of a specific GPU generation; PTX is translated into SASS before execution. The process is a bit like translating a novel through successive languages, each version closer to what the hardware actually runs.
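For a flavor of what that looks like, the body of the SAXPY kernel above lowers to PTX roughly along the lines of the fragment below. This is a hand-simplified sketch, not actual compiler output; register names and instruction selection vary with the nvcc version and target architecture. Real PTX and SASS can be inspected with `nvcc -ptx` and `cuobjdump -sass`, respectively.

```
// Hand-simplified PTX sketch of the SAXPY kernel body (illustrative only).
mad.lo.s32     %r4, %r2, %r1, %r3;   // i = blockIdx.x * blockDim.x + threadIdx.x
setp.ge.s32    %p1, %r4, %r5;        // predicate: i >= n ?
@%p1 bra       DONE;                 // skip the body if out of range
ld.global.f32  %f2, [%rd6];          // load x[i]
ld.global.f32  %f3, [%rd7];          // load y[i]
fma.rn.f32     %f4, %f2, %f1, %f3;   // a * x[i] + y[i] (fused multiply-add)
st.global.f32  [%rd7], %f4;          // store the result back to y[i]
DONE:
ret;
```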
Lastly, the comparison with AMD's Graphics Core Next (GCN) ISA offers a glimpse of a different design point for the same problem; GCN programs, for instance, mix dedicated scalar instructions with wavefront-wide vector instructions. Seeing two vendors' answers side by side highlights the constant innovation, and the tight interplay between software and hardware, behind the fast graphics and computation we rely on in gaming, scientific research, and much more.