5. Amdahl and Gustafson

Amdahl’s Law


The speedup you gain from parallelizing a program is fundamentally limited by the portion of the program that must run sequentially (can’t be parallelized).

Formula: Speedup = 1 / (s + (1 - s)/n)

Where:

  • s = the fraction of the program that is strictly sequential
  • n = number of processors
  • 1 - s = the parallelizable fraction

Consequence: as n → ∞, speedup is capped at 1/s, so all non-parallel parts of the program, regardless of size, must be minimized.
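A short sketch makes the ceiling visible, assuming the formula above and an illustrative serial fraction of 5% (s = 0.05):

```python
def amdahl_speedup(s, n):
    """Speedup on n processors for a program whose serial fraction is s."""
    return 1.0 / (s + (1.0 - s) / n)

# Speedup flattens out well below n as processors are added:
for n in (1, 4, 16, 64, 1024):
    print(f"n={n:5d}  speedup={amdahl_speedup(0.05, n):5.2f}")
```

Even with 1024 processors the speedup stays under the 1/s = 20x ceiling.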

Gustafson’s Law


As you add more processors, you can scale the problem size proportionally — the serial fraction becomes negligible relative to the growing parallel workload, yielding near-linear speedup.

Formula: Speedup = s + n(1 - s) = n - (n - 1)s

Where:

  • n = number of processors
  • s = fraction of work that must remain serial

Derivation:

  • Starting point: you have a program running on n processors. Normalize the total parallel execution time to 1.
    • s = time spent on serial work
    • 1 - s = time spent on parallel work
  • Parallelized execution time: T(n) = s + (1 - s) = 1
  • Now ask: how long would this same workload take on 1 processor?
    • The serial part still takes s
    • The parallel part took 1 - s across n processors, so on 1 processor it would take n(1 - s)
  • Speedup: T(1) / T(n) = (s + n(1 - s)) / 1 = n - (n - 1)s
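The derivation maps directly to a few lines of Python (a sketch; the values s = 0.05 and n = 64 are illustrative):

```python
def gustafson_speedup(s, n):
    """Scaled speedup: compare the same workload on n processors vs. 1."""
    t_parallel = s + (1 - s)        # normalized time on n processors (= 1)
    t_single = s + n * (1 - s)      # serial part unchanged; parallel part replayed n times
    return t_single / t_parallel    # = s + n*(1 - s) = n - (n - 1)*s

print(gustafson_speedup(0.05, 64))  # near-linear: about 60.85 on 64 processors
```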

Intuition: instead of asking “how much faster can we solve a fixed problem?”, ask “how much more work can we do in the same time?” — speedup scales with problem size, not just processor count.


Amdahl vs Gustafson

  • Amdahl fixes the problem size → pessimistic, speedup is capped by the serial fraction.
  • Gustafson scales the problem size with processors → optimistic, near-linear speedup is achievable as long as the parallel portion grows.
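The contrast is easy to see side by side; this sketch uses an illustrative serial fraction s = 0.1 and the two formulas from above:

```python
def amdahl(s, n):
    # Fixed problem size: speedup is capped at 1/s.
    return 1 / (s + (1 - s) / n)

def gustafson(s, n):
    # Problem size scales with n: speedup grows almost linearly.
    return n - (n - 1) * s

s = 0.1
for n in (8, 64, 512):
    print(f"n={n:4d}  Amdahl={amdahl(s, n):6.2f}  Gustafson={gustafson(s, n):7.1f}")
```

With s = 0.1, Amdahl's speedup can never exceed 10x, while Gustafson's keeps growing with n.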