- load balancing (spreading requests across available processing units)
- scatter and gather (workers receive copies of the request, and respond with subsets of data, collated by the requester)
- result cache (cache computed results for reuse; think memcached)
- shared space (workers process the request until a solution is reached)
- pipe and filter (requests get transformed into other requests until a terminal state is reached)
- map/reduce (usually described as a way to use a distributed filesystem to reduce I/O requirements, but in practice it looks a lot like the scatter and gather approach - I'm not sure what the real differences are)
- bulk synchronous parallel (each worker performs identical steps until a terminal condition is reached)
- execution orchestrator (sort of like BPEL: an orchestration coordinator tells workers what to do and in what order)
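To make the scatter and gather pattern concrete, here's a minimal in-process sketch - the shards and the query are hypothetical, and threads stand in for what would be networked workers in a real grid:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data shards: each worker owns one subset of a catalog.
SHARDS = [
    {"apple": 3, "pear": 1},
    {"apple": 2, "banana": 5},
    {"pear": 4},
]

def worker(shard, query):
    # Each worker answers the same request against only its own subset.
    return {k: v for k, v in shard.items() if k == query}

def scatter_gather(query):
    # Scatter: every worker receives a copy of the request.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(worker, SHARDS, [query] * len(SHARDS))
    # Gather: the requester collates the partial responses.
    total = {}
    for part in partials:
        for k, v in part.items():
            total[k] = total.get(k, 0) + v
    return total

print(scatter_gather("apple"))  # {'apple': 5}
```

The requester never cares which shard held what - it only sees the collated answer, which is the whole point of the pattern.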
Map/Reduce is a buzzword nowadays - you can't throw a stick without hitting some grid guy claiming his grid does map/reduce. The result cache is also popular, with tons of products out there; load balancing is probably the simple man's first answer to scaling anything.
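The resemblance to scatter and gather is easy to see in the toy word-count example everyone uses: map scatters work over input splits, reduce gathers and collates partial results. The main thing real systems add is the shuffle in between (grouping values by key) plus running map tasks near the data on the distributed filesystem. A minimal in-process sketch:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit (key, value) pairs from one input split.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine all values for a single key.
    return key, sum(values)

documents = ["a b a", "b c", "a"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
result = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
print(result)  # {'a': 3, 'b': 2, 'c': 1}
```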
The rest are EAI patterns - well known, but rarely explained well or concisely.
It'd be interesting to see what the scalability people would do to show each of these patterns in action, and why they do them the way they do.
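To take up that invitation for at least one of them, pipe and filter fits in a few lines - the stages here (parse, keep the evens, sum them) are made up for illustration, but the shape is the pattern: a request keeps getting transformed until a terminal stage produces the result.

```python
# Pipe and filter: a request flows through a chain of filters,
# each transforming it, until the terminal stage yields the answer.
def parse(raw):
    return [int(tok) for tok in raw.split(",")]

def keep_even(numbers):
    return [n for n in numbers if n % 2 == 0]

def total(numbers):
    return sum(numbers)

PIPELINE = [parse, keep_even, total]

def run(request):
    for stage in PIPELINE:
        request = stage(request)
    return request

print(run("1,2,3,4,5,6"))  # 12
```

Swapping, inserting, or removing a filter doesn't touch the others, which is why the pattern shows up so often in integration work.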