The basic architecture described earlier executes an instruction in several execution phases, typically fetching the instruction code, decoding the instruction, fetching the operands, executing the instruction, and storing the results. Each of these phases employs only some parts of the processor and leaves the other parts idle. The idea of instruction pipelining is to process several instructions concurrently, so that each instruction is in a different execution phase and thus employs different parts of the processor. Instruction pipelining yields an increase in speed and efficiency.
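To make the overlap concrete, the following sketch simulates five instructions flowing through a five stage pipeline with the phases listed above. The stage names and the program itself are invented for this illustration and do not model any particular processor.

#include <stdio.h>

int main (void)
{
    /* Stage names follow the phases listed above: fetch, decode, operand fetch, execute, store. */
    const char *stages [] = { "FE", "DE", "OF", "EX", "ST" };
    const int num_stages = 5;
    const int num_instructions = 5;

    /* In cycle c, instruction i occupies stage c - i, so in every cycle each busy stage works on a different instruction. */
    for (int cycle = 0; cycle < num_instructions + num_stages - 1; cycle ++)
    {
        printf ("cycle %d:", cycle + 1);
        for (int instruction = 0; instruction < num_instructions; instruction ++)
        {
            int stage = cycle - instruction;
            if (stage >= 0 && stage < num_stages)
                printf (" I%d:%s", instruction + 1, stages [stage]);
        }
        printf ("\n");
    }
    return (0);
}

Running the sketch prints, for every clock cycle, which instruction occupies which stage; from the fifth cycle on, all five stages are busy at once.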
For illustration, Intel Pentium 3 processors sported a 10 stage pipeline, early Intel Pentium 4 processors extended the pipeline to 20 stages, and late Intel Pentium 4 processors use 31 stages. AMD Opteron processors use a 12 stage pipeline for fixed point instructions and a 17 stage pipeline for floating point instructions. Note that it is not really correct to specify a single pipeline length, since the number of stages an instruction passes through depends on the particular instruction.
One factor that makes instruction pipelining more difficult is conditional jumps. The pipeline fetches instructions in advance, but when a conditional jump is reached, it is not yet known whether the jump will be taken and therefore which instructions should be fetched after it. One solution is statistical prediction of conditional jumps. (AMD Athlon processors and Intel Pentium processors do this. AMD Hammer processors keep track of the past results of 65536 conditional jumps to facilitate the prediction.) Another solution is to prefetch all possible branches and discard the incorrect ones.
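The following sketch illustrates one common form of statistical prediction, a table of two bit saturating counters indexed by the low bits of the jump address. The table size of 65536 only mirrors the number quoted above, and the processors named above use more elaborate mechanisms, so this is a sketch of the principle rather than a description of any particular predictor.

#include <stdbool.h>
#include <stdio.h>

#define TABLE_SIZE 65536

/* One two bit counter per table entry, values 0 and 1 predict not taken, values 2 and 3 predict taken. */
static unsigned char table [TABLE_SIZE];

static bool predict (unsigned int address)
{
    return (table [address % TABLE_SIZE] >= 2);
}

static void update (unsigned int address, bool taken)
{
    unsigned char *counter = &table [address % TABLE_SIZE];
    if (taken && *counter < 3) (*counter) ++;
    if (!taken && *counter > 0) (*counter) --;
}

int main (void)
{
    /* A jump at a made up address that is taken nine times out of ten. */
    int correct = 0;
    for (int i = 0; i < 1000; i ++)
    {
        bool taken = (i % 10 != 0);
        if (predict (0x1234) == taken) correct ++;
        update (0x1234, taken);
    }
    printf ("correct predictions: %d of 1000\n", correct);
    return (0);
}

A jump that is taken most of the time is soon predicted correctly most of the time, which lets the pipeline fetch the right instructions in advance.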
A further increase in speed and efficiency can be achieved by replicating parts of the processor and executing several instructions concurrently, called superscalar execution. Superscalar execution is made difficult by dependencies between instructions, either when several concurrently executing instructions require the same parts of the processor, or when an instruction uses the results of another concurrently executing instruction. Both collisions can be solved by delaying some of the concurrently executing instructions, which decreases the yield of superscalar execution.
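The following sketch illustrates the second kind of dependency from the software side; the function names are made up for the illustration. In sum_chained every addition needs the result of the previous one, while in sum_split the two partial sums are independent and a superscalar processor can execute the two additions of each iteration concurrently.

#include <stdio.h>

/* Every addition waits for the result of the previous one. */
double sum_chained (const double *data, int count)
{
    double sum = 0;
    for (int i = 0; i < count; i ++)
        sum += data [i];
    return (sum);
}

/* The two partial sums are independent of each other. */
double sum_split (const double *data, int count)
{
    double sum_even = 0;
    double sum_odd = 0;
    for (int i = 0; i + 1 < count; i += 2)
    {
        sum_even += data [i];
        sum_odd += data [i + 1];
    }
    if (count % 2) sum_even += data [count - 1];
    return (sum_even + sum_odd);
}

int main (void)
{
    double data [1000];
    for (int i = 0; i < 1000; i ++) data [i] = i;
    printf ("%f %f\n", sum_chained (data, 1000), sum_split (data, 1000));
    return (0);
}

Both functions compute the same result; only the dependency structure differs, and it is the dependency structure that decides how much superscalar execution can help.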
An alternative solution to the collisions is replicating the contested parts of the processor. For illustration, Intel Core Duo processors are capable of executing four instructions at once under ideal conditions. Together with instruction pipelining, AMD Hammer processors can have up to 72 instructions in various stages of execution.
An alternative solution to the collisions is reordering the instructions. This may not always be possible within one thread of execution, as the instructions of a single thread typically work on the same data. (Intel Pentium Pro processors do this.)
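The following sketch shows the idea of reordering on a trivial example with made up function names; an optimizing compiler or an out of order processor would typically perform such reordering itself. Moving the independent load of c between the load of a and its use gives the processor useful work to do while the first load completes.

#include <stdio.h>

/* The addition that uses a follows the load of a immediately and has to wait for it. */
void compute_in_order (const int *x, const int *y, int i, int *b, int *c)
{
    int a = x [i];
    *b = a + 1;
    *c = y [i];
}

/* The independent load of c is moved between the load of a and its use. */
void compute_reordered (const int *x, const int *y, int i, int *b, int *c)
{
    int a = x [i];
    *c = y [i];
    *b = a + 1;
}

int main (void)
{
    int x [4] = { 10, 20, 30, 40 };
    int y [4] = { 1, 2, 3, 4 };
    int b, c;
    compute_in_order (x, y, 2, &b, &c);
    printf ("%d %d\n", b, c);
    compute_reordered (x, y, 2, &b, &c);
    printf ("%d %d\n", b, c);
    return (0);
}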
An alternative solution to the collisions is splitting the instructions into micro instructions that are scheduled independently with a smaller probability of collisions. (AMD Athlon processors and Intel Pentium Pro processors do this.)
An alternative solution to the collisions is mixing instructions from several threads of execution. This is especially attractive because instructions from different threads typically work on different data. (Intel Xeon processors do this.)
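The following sketch uses POSIX threads to show why instructions from different threads mix well; the data and function names are invented for the illustration. The two threads sum different arrays, so there are no dependencies between their instructions, and a processor that mixes instructions from several threads can keep its replicated parts busy.

#include <pthread.h>
#include <stdio.h>

#define COUNT 1000000

struct work { long long data [COUNT]; long long sum; };

static struct work work_one, work_two;

/* Each thread sums its own array, independently of the other thread. */
static void *sum (void *arg)
{
    struct work *work = (struct work *) arg;
    work->sum = 0;
    for (int i = 0; i < COUNT; i ++)
        work->sum += work->data [i];
    return (NULL);
}

int main (void)
{
    for (int i = 0; i < COUNT; i ++)
    {
        work_one.data [i] = i;
        work_two.data [i] = 2 * i;
    }

    pthread_t thread_one, thread_two;
    pthread_create (&thread_one, NULL, sum, &work_one);
    pthread_create (&thread_two, NULL, sum, &work_two);
    pthread_join (thread_one, NULL);
    pthread_join (thread_two, NULL);

    printf ("%lld %lld\n", work_one.sum, work_two.sum);
    return (0);
}

The sketch is compiled with the -pthread option; whether the two threads actually share one processor core is decided by the operating system and the hardware.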
An alternative solution to the collisions is using the previous values of the results. This is especially attractive because the processor remains simple and the compiler can reorder the instructions as necessary without burdening the programmer. (MIPS RISC processors do this.)
References.
Agner Fog: Software Optimization Resources. http://www.agner.org/optimize