Associative Memory
When page tables are used, several memory accesses are needed to reach a
single byte of physical memory. With a two-level page table, three accesses
are required: one to the page directory, one to the page table, and one to
the byte itself; a three-level table requires four accesses. This slows down
memory access and reduces overall system performance.
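The cost of the two-level walk can be sketched as follows. This is an illustrative simulation, assuming the IA-32 address split for 4 KiB pages (10-bit directory index, 10-bit table index, 12-bit offset); the dictionaries standing in for the directory and table are hypothetical.

```python
def translate(directory, va):
    """Two-level page-table walk; each step below is one memory access."""
    dir_i = (va >> 22) & 0x3FF    # top 10 bits select the directory entry
    tab_i = (va >> 12) & 0x3FF    # next 10 bits select the page-table entry
    offset = va & 0xFFF           # low 12 bits: offset within the page
    table = directory[dir_i]      # access 1: read the page-directory entry
    frame = table[tab_i]          # access 2: read the page-table entry
    return frame | offset         # access 3 occurs when this address is read

# Hypothetical setup: directory entry 1 points to a table whose entry 5
# maps to frame 0x00300000.
table = {5: 0x00300000}
directory = {1: table}
va = (1 << 22) | (5 << 12) | 0x10
physical = translate(directory, va)   # 0x00300010
```

The two table lookups are exactly the extra memory accesses the text describes; only the final access touches the byte the program actually asked for.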
As already noted, the 90/10 rule indicates that most memory accesses of a
process fall within a small subset of its pages, and the composition of this
subset changes rather slowly. One way to improve the performance of a
page-based memory organization is therefore to cache the addresses of the
memory frames corresponding to this subset of pages.
To solve this problem, associative memory, also called a translation cache
(translation look-aside buffer, or TLB), was proposed. In high-speed memory
(faster than main memory), a set of entries is maintained (different
architectures provide from 8 to 2048 associative-memory entries; in IA-32
there were 32 such entries before the Pentium 4, and 128 starting with the
Pentium 4). Each entry of the translation cache corresponds to one entry of
the page table.
Now, when a physical address is generated, the corresponding page-table
entry is first looked up in the cache (in IA-32, by the directory and table
fields of the virtual address); if it is found, the address of the
corresponding frame is immediately available and can be used to access
memory. If there is no matching entry in the cache, memory is accessed
through the page table, and the page-table entry is then stored in the
cache in place of the oldest entry.
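The lookup-then-fallback behavior above can be sketched as a small simulation. The class below is a hypothetical model, not a hardware description: it uses oldest-entry (FIFO) replacement as the text describes, while real TLBs differ in replacement policy and associativity.

```python
from collections import OrderedDict

class TLB:
    """Toy translation cache: maps page number -> frame number,
    discarding the oldest entry when the cache is full."""

    def __init__(self, capacity=32):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order = age order

    def lookup(self, page, page_table):
        if page in self.entries:           # hit: no page-table access needed
            return self.entries[page]
        frame = page_table[page]           # miss: walk the page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry
        self.entries[page] = frame         # cache the new translation
        return frame

    def flush(self):
        """On a context switch the whole cache must be cleared."""
        self.entries.clear()
```

The `flush` method models the context-switch problem discussed next: because entries are tagged only by page number, translations cached for one process would be wrong for another.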
Unfortunately, on a context switch in the IA-32 architecture the entire
cache must be flushed, since each process has its own page table, and the
same page numbers in different processes may correspond to different frames
in physical memory. Flushing the translation cache is a very slow operation
that should be avoided whenever possible.
An important characteristic of the translation cache is its hit rate: the
percentage of lookups for which the required page-table entry is found in
the cache, so that no extra memory accesses are needed. It is known that 32
entries already provide a 98% hit rate. Note also that at this hit rate the
performance penalty of a two-level page table compared to a single-level
one is only 28%, a loss that is outweighed by the savings gained in memory
allocation.
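The effect of the hit rate on average access time can be estimated with a standard effective-access-time calculation. The timings below (100 ns per memory access, 20 ns per TLB probe) are assumed for illustration and are not figures from the text, so the resulting percentage differs from the 28% quoted above.

```python
def effective_access_time(hit_rate, mem_ns=100, tlb_ns=20, levels=2):
    """Average cost of one memory access with a TLB in front of a
    multi-level page table (assumed timings, for illustration only)."""
    hit_cost = tlb_ns + mem_ns                  # translation found in TLB
    miss_cost = tlb_ns + (levels + 1) * mem_ns  # full table walk + data access
    return hit_rate * hit_cost + (1 - hit_rate) * miss_cost

# With a 98% hit rate and a two-level table, the average access takes
# 0.98 * 120 + 0.02 * 320 = 124 ns, a 24% slowdown over a bare 100 ns
# access under these assumed timings.
eat = effective_access_time(0.98)
```

The formula makes the trade-off explicit: the rarer the miss, the closer the multi-level table comes to the speed of direct physical addressing.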