Pipeline burst cache
In computer engineering, the pipeline burst cache is a cache memory whose development was an integral part of the development of the superscalar architecture. It was introduced in the mid-1990s as a replacement for the synchronous burst cache and the asynchronous cache, and it remains in use in computers today. It increases the speed of cache operation by minimizing wait states, thereby making better use of the processor's computing speed. By combining the techniques of pipelining and bursting, it provides high-performance memory access. It works on the principle of parallelism, the very principle on which the superscalar architecture rests. Pipeline burst caches can be found in DRAM controllers and chipset designs.[1]
Introduction
In a processor-based system, the processor is always faster than the main memory. As a result, unnecessary wait states occur when instructions or data are fetched from main memory, which degrades the performance of the system. A cache memory is introduced to increase the efficiency of the system and to make full use of the processor's computational speed.[2]
The performance of the processor is strongly influenced by the methods used to transfer data and instructions to and from it: the less time the transfers take, the better the processor performs.
The pipeline burst cache is a storage area for a processor that is designed to be read from or written to in a pipelined succession of four data transfers. As the term "pipelining" suggests, the transfers after the first one begin before the first transfer has arrived at the processor. It was developed as an alternative to the asynchronous cache and the synchronous burst cache.
It was first implemented by Intel in 1996 in the Pentium microprocessor.
Principles of Operation
The pipeline burst cache is based on two principles of operation: burst mode and pipelining.
Burst Mode
In this mode, the memory contents are prefetched before they are requested.
In a typical cache, each line is 32 bytes wide, meaning that transfers to and from the cache occur 32 bytes (256 bits) at a time. The data path, however, is only 8 bytes wide, so four transfers are needed to move a single cache line. Without burst mode, each of these transfers would require a separate address to be supplied. But since the transfers come from consecutive memory locations, there is no need to specify a different address after the first one. Using the bursting technique, the successive data transfers take place without the remaining addresses being specified, which improves speed, as sketched below.[3]
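The following sketch illustrates the idea of a burst transfer. The names (LINE_SIZE, BUS_WIDTH, read_burst) and the toy memory are purely illustrative, not part of any real hardware interface: a 32-byte cache line moves over an 8-byte data path in four beats, while the address is supplied only once, for the first beat.

```python
# A minimal sketch of a burst read: one 32-byte cache line moves over an
# 8-byte-wide data path in four beats, but the address is sent only once.
# LINE_SIZE, BUS_WIDTH and read_burst are illustrative names, not a real API.

LINE_SIZE = 32   # bytes per cache line
BUS_WIDTH = 8    # bytes transferred per beat

def read_burst(memory: bytes, start: int) -> bytes:
    """Return one cache line, fetched as consecutive 8-byte beats.

    Only `start` is supplied; the addresses of the remaining beats
    (start + 8, start + 16, start + 24) are implied by the burst.
    """
    line = b""
    for beat in range(LINE_SIZE // BUS_WIDTH):   # four beats in total
        offset = start + beat * BUS_WIDTH        # address is derived, not re-sent
        line += memory[offset:offset + BUS_WIDTH]
    return line

# Example: fetch the line starting at address 64 from a toy 256-byte memory.
memory = bytes(range(256))
print(read_burst(memory, 64))
```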
Pipelining Mode
In this mode, one memory value can be accessed in the cache at the same time that another memory value is accessed in DRAM. The transfer of data and instructions to or from the cache is divided into stages, and each stage is kept busy by some operation at all times, much like an assembly line. This overcomes the drawback of purely sequential memory operations, which waste time and reduce the effective processor speed.[4]
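A rough cycle-count model of this overlap is sketched below. The latencies (CACHE_CYCLES, DRAM_CYCLES) are assumed values chosen for illustration, not measurements of any particular memory system; the point is only that overlapping the next DRAM fetch with the current cache access means that only the first request pays the full DRAM latency.

```python
# A minimal sketch, with made-up latencies, of why overlapping (pipelining)
# cache and DRAM accesses saves time compared with strictly sequential access.

CACHE_CYCLES = 1   # cycles to read a value already in the cache (assumed)
DRAM_CYCLES = 5    # cycles to fetch a value from DRAM (assumed)

def sequential(n_requests: int) -> int:
    """Each request waits for the previous DRAM fetch to finish."""
    return n_requests * (DRAM_CYCLES + CACHE_CYCLES)

def pipelined(n_requests: int) -> int:
    """While one value is read from the cache, the next DRAM fetch is
    already in flight, so only the first request pays the full latency."""
    return DRAM_CYCLES + n_requests * CACHE_CYCLES

for n in (1, 4, 16):
    print(n, "requests:", sequential(n), "cycles sequential,",
          pipelined(n), "cycles pipelined")
```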
Operation
With the two principles of operation described above, a pipeline burst cache is implemented. In this cache, transferring data to or from a new location takes multiple cycles for the initial transfer, but each subsequent transfer in the burst completes in a single cycle.[5][6]
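As a worked sketch of this behaviour, the comparison below assumes the common textbook 3-1-1-1 burst timing (3 cycles for the first transfer of a line, then 1 cycle for each of the remaining three). The figures are typical illustrative values, not measurements of a specific part.

```python
# A minimal sketch of the cycle count for one cache-line fill, assuming an
# illustrative 3-1-1-1 burst timing versus paying the full latency every time.

def line_fill_cycles(first: int, subsequent: int, beats: int = 4) -> int:
    """Total cycles to fill a cache line made up of `beats` transfers."""
    return first + (beats - 1) * subsequent

pipeline_burst = line_fill_cycles(first=3, subsequent=1)   # 3-1-1-1 -> 6 cycles
no_burst       = line_fill_cycles(first=3, subsequent=3)   # 3-3-3-3 -> 12 cycles

print("Pipeline burst:", pipeline_burst, "cycles per line fill")
print("Without bursting:", no_burst, "cycles per line fill")
```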
Trade-Off
The circuitry involved in this cache is complex because pipelining and burst mode are used simultaneously. Hence, more time is initially required to set up the "pipeline".
The pipeline burst cache is generally preferred over the asynchronous and synchronous burst caches for higher-speed operation. The synchronous burst cache is typically preferred for bus speeds up to 66 MHz; above this, the pipeline burst cache is used. In current computers with clock speeds of around 2 GHz or more, the pipeline burst cache is widely used.[7][8]
The table below illustrates the type of cache used at different bus speeds.[9]
| Bus Speed (MHz) | 33 | 50 | 60 | 66 | 75 | 83 | 100 |
|---|---|---|---|---|---|---|---|
| Cache Used | Asynchronous | Synchronous | Synchronous | Synchronous | Pipelined Burst | Pipelined Burst | Pipelined Burst |
References
- ↑ "Network dictionary".
- ↑ "How cache works".
- ↑ "Cache Bursting". Pcguide.
- ↑ "Modes of operation".
- ↑ "Operation".
- ↑ "Pipeline Burst Cache". Pcguide.
- ↑ "Timing Comparison".
- ↑ "Timing Comparison".
- ↑ "Cache comparison". PCGuide.