Chapter 1 Introduction
- 1.1 Think Parallel
- 1.2 Performance
- 1.3 Motivation: Pervasive Parallelism
- 1.4 Structured Pattern-Based Programming
- 1.5 Parallel Programming Models
- 1.6 Organization of this Book
- 1.7 Summary
All computers are now parallel. Specifically, all modern computers support parallelism in hardware through at least one parallel feature, including vector instructions, multithreaded cores, multicore processors, multiple processors, graphics engines, and parallel co-processors.
In short, parallel programming is programming.
1.1 Think Parallel
Thinking parallel means identifying the operations in a problem that can be performed simultaneously. If the algorithm's original iterative (serial) form does not expose that independence, we need to recast it into an appropriate parallel structure, as in the sketch below.
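A minimal sketch, assuming C++17 parallel algorithms (the function names here are illustrative, not from the book): the serial loop carries a dependence through the running total, while the same computation expressed as a reduction leaves the evaluation order to the runtime.

```cpp
#include <execution>
#include <numeric>
#include <vector>

// Serial form: each iteration depends on the previous one through `total`.
double sum_serial(const std::vector<double>& v) {
    double total = 0.0;
    for (double x : v) total += x;  // loop-carried dependence
    return total;
}

// Parallel structure: a reduction. Because addition is associative, the
// runtime may combine partial sums in any order, on any number of threads.
double sum_parallel(const std::vector<double>& v) {
    return std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
}
```

Both functions compute the same sum (up to floating-point reassociation); the difference is that the reduction no longer dictates a serial evaluation order.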
1.2 Performance
Two problems commonly limit performance:
- Computation may not be the bottleneck. Instead, access to memory or communication may be.
- The potential for scaling performance is constrained by the algorithm's span: the longest chain of operations that must run one after another, also called the critical path (see the bound sketched after this list).
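The book quantifies this later with the work-span model; the bound itself is standard. If $T_1$ is the total work and $T_\infty$ is the span, the speedup achievable on $P$ workers satisfies

$$\text{speedup} = \frac{T_1}{T_P} \le \min\!\left(P,\ \frac{T_1}{T_\infty}\right)$$

For example, if 1% of the work lies on the critical path ($T_\infty = 0.01\,T_1$), no number of workers can deliver more than a 100x speedup.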
This book focuses on the shared memory machine model.
1.3 Motivation: Pervasive Parallelism
1.3.1 Hardware Trends Encouraging Parallelism
Three walls:
- Power wall
- Instruction-level parallelism (ILP) wall
- Memory wall
1.3.2 Observed Historical Trends in Parallelism
1.3.3 Need for Explicit Parallel Programming
Serial traps: the long-sustained illusion of serial execution has built traps into our tools and ways of thinking. For example, pointers work well in serial programs, but in parallel programs possible aliasing makes it hard to prove that operations are independent, as the sketch below illustrates.
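A minimal sketch of this trap (the function and names are illustrative): because the two pointers may refer to overlapping memory, neither the compiler nor a reader can assume the iterations are independent.

```cpp
#include <cstddef>

// Looks embarrassingly parallel, but is not provably so: if a == b + 1,
// iteration i writes the value that iteration i + 1 reads, creating a
// serial dependence chain. Pointer aliasing hides this from the compiler.
void shift(float* a, const float* b, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        a[i] = b[i] + 1.0f;  // independent only if a and b do not overlap
}
```

A serial program runs correctly either way, so the trap stays invisible until the loop is parallelized or vectorized.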
To escape these traps, we need to:
- Create appropriate tools.
- Think in parallel.
Two types of parallelism:
- Mandatory parallelism: part of the program's semantics; the program behaves incorrectly if its activities do not run concurrently.
- Optional parallelism: used purely for performance; a serial execution of the same program is still correct (see the sketch after this list).
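A hedged illustration of optional parallelism, using standard C++ `std::async` (the task and values are made up for the example): the launch policy permits, but does not require, a second thread, so a purely serial execution is equally valid.

```cpp
#include <future>
#include <iostream>

int work(int x) { return x * x; }

int main() {
    // The combined policy lets the runtime either spawn a thread (async)
    // or run the task lazily on this thread when .get() is called (deferred).
    std::future<int> a =
        std::async(std::launch::async | std::launch::deferred, work, 3);
    int b = work(4);                   // runs on the calling thread
    std::cout << a.get() + b << "\n";  // always prints 25
    return 0;
}
```

Because a valid serial schedule exists, the parallelism affects only performance, never the answer; mandatory parallelism offers no such serial fallback.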