1.3: Instruction pipelining and instruction-level parallelism (ILP)
Introduction
This section introduces learners to instruction pipelining and instruction-level parallelism (ILP), that is, how many of the operations in a computer program can be performed simultaneously.
Activity Details
Instruction-level parallelism (ILP) is a measure of how many of the operations in a computer program can be performed simultaneously; this potential overlap among instructions is what constitutes instruction-level parallelism. Instruction pipelining is one of the main techniques for exploiting it.
There are two approaches to instruction level parallelism:
- Hardware
- Software
The hardware approach exploits dynamic parallelism, discovered at run time, whereas the software approach relies on static parallelism, discovered at compile time. For example, the Pentium processor discovers parallel execution sequences dynamically at run time, while the Itanium processor relies on static, compiler-determined parallelism.
Example \(\PageIndex{1}\)
Consider the following program:
1. e = a + b
2. f = c + d
3. m = e * f
Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are completed. However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously. If we assume that each operation can be completed in one unit of time then these three instructions can be completed in a total of two units of time, giving an ILP of 3/2.
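The two-time-unit schedule described above can be reproduced with a small sketch (the `schedule` helper and its greedy strategy are illustrative, not part of any real compiler): each cycle, it issues every operation whose dependencies have already completed, assuming unlimited issue width.

```python
def schedule(ops, deps):
    """Greedy list scheduling: each cycle, issue every operation
    whose dependencies have already completed (unlimited width)."""
    done, cycles = set(), []
    while len(done) < len(ops):
        ready = [o for o in ops if o not in done
                 and all(d in done for d in deps.get(o, []))]
        cycles.append(ready)
        done.update(ready)
    return cycles

ops = ["e = a + b", "f = c + d", "m = e * f"]
# operation 3 depends on the results of operations 1 and 2
deps = {"m = e * f": ["e = a + b", "f = c + d"]}

print(schedule(ops, deps))
# [['e = a + b', 'f = c + d'], ['m = e * f']]  -- two time units
print(len(ops) / len(schedule(ops, deps)))    # ILP = 3/2 = 1.5
```

Three instructions complete in two cycles, giving exactly the ILP of 3/2 computed in the example.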
A goal of compiler and processor designers is to identify and take advantage of as much ILP as possible.
Ordinary programs are typically written under a sequential execution model where instructions execute one after the other and in the order specified by the programmer. ILP allows the compiler and the processor to overlap the execution of multiple instructions or even to change the order in which instructions are executed.
How much ILP exists in programs is very application specific. In certain fields, such as graphics and scientific computing the amount can be very large. However, workloads such as cryptography may exhibit much less parallelism.
Micro-architectural techniques that are used to exploit ILP include:
- Instruction pipelining, where the execution of multiple instructions can be partially overlapped.
- Superscalar execution, VLIW, and the closely related explicitly parallel instruction computing concepts, in which multiple execution units are used to execute multiple instructions in parallel.
- Out-of-order execution, where instructions execute in any order that does not violate data dependencies. Note that this technique is independent of both pipelining and superscalar execution. Current implementations of out-of-order execution dynamically (i.e., while the program is executing and without any help from the compiler) extract ILP from ordinary programs. An alternative is to extract this parallelism at compile time and somehow convey this information to the hardware. Due to the complexity of scaling the out-of-order execution technique, the industry has re-examined instruction sets which explicitly encode multiple independent operations per instruction.
- Register renaming, a technique used to avoid unnecessary serialization of program operations imposed by the reuse of registers by those operations; it is used to enable out-of-order execution.
- Speculative execution, which allows the execution of complete instructions or parts of instructions before it is certain whether this execution should take place. A commonly used form of speculative execution is control flow speculation, where instructions past a control flow instruction (e.g., a branch) are executed before the target of the control flow instruction is determined. Several other forms of speculative execution have been proposed and are in use, including speculative execution driven by value prediction, memory dependence prediction and cache latency prediction.
- Branch prediction, which is used to avoid stalling while control dependencies are resolved. Branch prediction is used together with speculative execution.
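Of the techniques above, register renaming is easy to illustrate concretely. The sketch below (the `rename` helper and physical-register naming are hypothetical, for illustration only) maps each architectural destination register to a fresh physical register, so that reuse of a register name no longer serializes independent instructions, while true dependencies are preserved.

```python
from itertools import count

def rename(instructions):
    """Rename architectural registers to fresh physical registers.
    instructions: list of (dest, src1, src2, ...) register-name tuples."""
    phys = count()   # supplies fresh physical register numbers
    rat = {}         # register alias table: architectural -> physical
    renamed = []
    for dest, *srcs in instructions:
        # read sources through the current mapping first (RAW preserved)
        new_srcs = [rat.get(s, s) for s in srcs]
        rat[dest] = f"p{next(phys)}"   # allocate a fresh destination
        renamed.append((rat[dest], *new_srcs))
    return renamed

prog = [("r1", "r2", "r3"),   # r1 = r2 + r3
        ("r4", "r1", "r5"),   # r4 = r1 - r5  (true dependence on r1)
        ("r1", "r6", "r7")]   # r1 = r6 * r7  (reuses the name r1)

print(rename(prog))
# [('p0', 'r2', 'r3'), ('p1', 'p0', 'r5'), ('p2', 'r6', 'r7')]
```

After renaming, the third instruction writes `p2` instead of overwriting `r1`, so it can execute out of order with respect to the first two; only the true dependence (the read of `p0`) remains.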
Dataflow architectures are another class of architectures where ILP is explicitly specified. In recent years, ILP techniques have been used to provide performance improvements in spite
of the growing disparity between processor operating frequencies and memory access times (early ILP designs such as the IBM System/360 Model 91 used ILP techniques to overcome the limitations imposed by a relatively small register file). Presently, a cache miss to main memory costs several hundred CPU cycles. While in principle it is possible to use ILP to tolerate even such memory latencies, the associated resource and power dissipation costs are disproportionate. Moreover, the complexity and often the latency of the underlying hardware structures result in reduced operating frequency, further reducing any benefits. Hence, the aforementioned techniques prove inadequate to keep the CPU from stalling for off-chip data. Instead, the industry is heading towards higher levels of parallelism that can be exploited through techniques such as multiprocessing and multithreading.
Superscalar architectures
A superscalar architecture is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor; it is designed to improve the performance of the execution of scalar instructions. A scalar is a variable that can hold only one atomic value at a time, e.g., an integer or a real. A scalar architecture processes one data item at a time, as the computers we have discussed up to now do.
Examples of non-scalar variables:
- Arrays
- Matrices
- Records
In a superscalar architecture (SSA), several scalar instructions can be initiated simultaneously and executed independently. Pipelining also allows several instructions to be executed at the same time, but they have to be in different pipeline stages at a given moment. SSA includes all the features of pipelining but, in addition, there can be several instructions executing simultaneously in the same pipeline stage. SSA therefore introduces a new level of parallelism, called instruction-level parallelism.
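The difference between scalar (single-issue) and superscalar (multi-issue) execution can be sketched by limiting how many ready instructions start per cycle. The `issue_schedule` helper below is an illustrative model, not a real processor: an instruction is ready once its dependencies have completed in an earlier cycle, and at most `width` ready instructions issue per cycle.

```python
def issue_schedule(ops, deps, width):
    """Issue at most `width` ready ops per cycle; an op is ready once
    all of its dependencies completed in an earlier cycle."""
    done, cycles = set(), []
    while len(done) < len(ops):
        ready = [o for o in ops if o not in done
                 and all(d in done for d in deps.get(o, []))]
        issued = ready[:width]   # issue width caps parallelism
        cycles.append(issued)
        done.update(issued)
    return cycles

ops = ["i1", "i2", "i3", "i4"]          # four independent instructions
print(len(issue_schedule(ops, {}, 1)))  # scalar issue: 4 cycles
print(len(issue_schedule(ops, {}, 2)))  # 2-wide superscalar: 2 cycles
```

With no dependencies among the four instructions, doubling the issue width halves the number of cycles, which is exactly the extra level of parallelism a superscalar design adds on top of pipelining.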
Conclusion
This section covered instruction pipelining and instruction-level parallelism (ILP), that is, how many of the operations in a computer program can be performed simultaneously.
Assessment
1. Define, and give an example of, instruction-level parallelism (ILP).
ILP is a measure of how many of the operations in a computer program can be performed simultaneously; the potential overlap among instructions is called instruction-level parallelism. The basic idea is to execute several instructions in parallel: we perform different operations (fetch, decode, ...) on several different instructions at the same time. The amount of ILP is mostly determined by the number of true (data) dependencies and procedural (control) dependencies relative to the number of other instructions.
e.g.
- A: ADD R1 = R2 + R3
- B: SUB R4 = R1 - R5
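In this example, B reads R1, which A writes, so B has a true (read-after-write) dependence on A and the two cannot execute in parallel. A minimal sketch of this check (the `raw_dependent` helper and the tuple encoding are illustrative assumptions):

```python
def raw_dependent(first, second):
    """True if `second` reads a register that `first` writes
    (a read-after-write, i.e. true data, dependence).
    Each instruction is a (dest, srcs) tuple of register names."""
    dest_a, _ = first
    _, srcs_b = second
    return dest_a in srcs_b

A = ("R1", ("R2", "R3"))    # A: ADD R1 = R2 + R3
B = ("R4", ("R1", "R5"))    # B: SUB R4 = R1 - R5

print(raw_dependent(A, B))  # True: B must wait for A's result
```

Such true dependencies cannot be removed by renaming or reordering; they set the lower bound on how many cycles the instruction stream needs.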
ILP is traditionally understood as "extracting parallelism from a single instruction stream working on a single stream of data".