Objectives: Introduction to Virtual Memory and Paging
ref: [O’H&Bryant, sect 10.1–10.7] or [Tanenbaum, sect 6.1.1–6.1.2]; also [PeANUt Spec, sect 3]
l to understand the concepts of virtual memory and paging, why they are needed
and some basic issues involving them
l to understand how page tables can be used to implement virtual memory
l to understand on a conceptual level a concrete (but illustrative) VM implementation
n note: you are not asked to write PeANUt VM interrupt (trap) handlers this year!
n you should, however, think about how you might do something similar for
rPeANUt (the only fundamental difference between the two is the 10-bit vs 16-bit
address space)
Virtual Memory
l motivation: (multiple) users regularly need to run jobs whose capacity exceeds that
of physical memory (main memory)
n one of the first (and most important) instances of virtualization
n also, in a multiprocessing environment, each program ‘sees’ memory in the
same way, regardless of where it is executing in physical memory
l virtual memory is a technique whereby program-addressable memory is made to
appear to be larger than physical memory
l needed because there is a memory hierarchy:
n many different media for the storage of data
n generally, there is a trade-off between speed and capacity (fast memories tend to
be small; large memories tend to be slow)
medium            access time (nsec)   typical size
registers         ∼1                   ∼ 2 KB
cache memory      ∼10                  ∼ 2 MB
physical memory   100                  ∼ 2 GB
disk              1,000,000            > 20 GB
[O’H&Bryant, fig 6.21]
Virtual Memory – Nomenclature
l address space: the range of addresses accessible to the programmer
l logical/virtual addresses: addresses as seen by the programmer
l physical addresses: actual addresses in main memory
l the Memory Management Unit (MMU) translates virtual (logical) addresses into physical addresses
[figure: the CPU issues (virtual) addresses within a 4GB address space; the MMU translates them before data is transferred to or from the 16MB of (fast) main memory, with the remainder of the 4GB space held on (slow) disk]
All problems in computer science can be solved by another level of indirection, . . .
Paging
l how do we share main memory between competing chunks of the (virtual memory)
address space?
l one solution is called paging
n break all memory into equal-sized chunks called pages
n when accessing a virtual address, check if the corresponding page is in main
memory
(if not, move it into main memory and then access it)
l exploits locality of (address) references:
n memory accesses tend not to be random (within a program they are often
sequential; see the small example after this list)
u consider in particular accesses involved with instruction fetching
n rule of thumb: a program spends about 90% of its time in only 10% of the code
(accessing only 10% of its data)
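As a small illustration (my own example, not from the reference texts): the C loop below walks an array in order, so consecutive accesses almost always land on a page that is already resident, and a paging system only has to bring in a new page occasionally.

#include <stdio.h>

#define N 4096

/* Sequential access exhibits spatial locality: assuming 4 KB pages and
 * 4-byte ints, a new page is touched only once every 1024 iterations,
 * so almost all accesses hit a page that is already in main memory. */
int sum_sequential(const int a[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)       /* consecutive addresses, same page most of the time */
        sum += a[i];
    return sum;
}

int main(void)
{
    static int a[N];                  /* zero-initialised; spans only a handful of pages */
    printf("%d\n", sum_sequential(a, N));
    return 0;
}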
Paging Issues
l how big should a page be? Influenced by disk technology
n needs to be large enough to amortize costs of overheads (disk block seek time;
also page book-keeping costs)
n but if too large, causes fragmentation
l what does main memory look like? It consists of a mixed group of pages, with each
page occupying a slot (page frame)
(e.g. PeANUt VM)
l what does (disk) virtual memory look like? It consists of all of the pages
l programmer’s view of paging:
n is oblivious to it: all program addresses are virtual
n can only see performance degradation (when paging requires many disk
accesses, called swapping)
Virtual Memory Issues
l what pages should be resident in main memory (MM) at any time?
n the pages that will be most used (in the near future)
n different paging policies give an approximation to ‘most used’
l hardware support: upon a page fault (access of data in a page not currently in
physical memory), the CPU must generate an interrupt (exception is a more
precise term)
l OS support: the OS must install an interrupt handler routine for a page fault
l data consistency:
n upon a page fault, a page currently in main memory usually has to be removed:
u if simply thrown out, data may be lost
u if always written back to disk (upon each store), will be too slow!
n solution: write the victim back to disk only if it has been written to (made dirty); see the sketch after this list
u hence the MMU must record this for each page (‘Dirty bit’)
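A rough C sketch of how a page-fault handler might apply the dirty-bit rule when it has to evict a victim, using a simple FIFO choice of victim; the data structures and helper functions here are illustrative assumptions, not the actual PeANUt or OS code.

#include <stdbool.h>

#define NUM_FRAMES 8                  /* assumed: a PeANUt-like main memory of 8 page frames */

struct frame_info {
    bool occupied;                    /* does this frame currently hold a page?            */
    bool dirty;                       /* has that page been written to since being loaded? */
    int  page;                        /* which virtual page it holds                       */
};

static struct frame_info frames[NUM_FRAMES];
static int next_victim;               /* FIFO pointer: frames are reused in round-robin order */

/* Stubs standing in for the real transfers between main and secondary memory. */
static void write_page_to_disk(int page, int frame) { (void)page; (void)frame; }
static void read_page_from_disk(int page, int frame) { (void)page; (void)frame; }

/* Choose a frame for an incoming page; the victim is written back only if dirty. */
int allocate_frame(int incoming_page)
{
    int f = next_victim;
    next_victim = (next_victim + 1) % NUM_FRAMES;

    if (frames[f].occupied && frames[f].dirty)        /* clean victims are simply discarded */
        write_page_to_disk(frames[f].page, f);

    read_page_from_disk(incoming_page, f);
    frames[f] = (struct frame_info){ .occupied = true, .dirty = false, .page = incoming_page };
    return f;
}

On a real machine the dirty flag would be read from the MMU's page table entry rather than a separate side table; the point is only that a clean victim can be thrown away for free.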
Virtual Memory in PeANUt
l our previous model of the PeANUt considered memory to be a black box
n data was written or read by the memory automatically
n suppose paging is implemented: how would you know if the data is in main
memory or on the disk?
l a closer look at PeANUt memory:
n main memory (MM): 256 cells, in 8 × 32-cell page frames
n secondary memory (SM): 1024 cells, in 32 × 32-cell pages
l VM is controlled by a Memory Management Unit (MMU)
n not explicitly shown in the PeANUt architecture
n uses a page table with 32 entries (one per page)
n translates virtual addresses into physical addresses (a sketch of the address split
appears below)
n (together with the OS) moves pages between main memory and disk whenever
necessary
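To make the address arithmetic concrete, here is a minimal C sketch (not part of the PeANUt toolchain) of how a 10-bit virtual address splits into a 5-bit page number and a 5-bit offset, and how a 3-bit frame number recombines with that offset to give an 8-bit physical address.

#include <stdio.h>

/* PeANUt-style geometry: 1024-cell virtual space and 256-cell main memory,
 * with 32-cell pages, giving 32 pages (5-bit page number) and 8 frames (3 bits). */
#define OFFSET_BITS 5
#define OFFSET_MASK 0x1F              /* low 5 bits: offset within a page */

unsigned page_number(unsigned vaddr)  { return (vaddr >> OFFSET_BITS) & 0x1F; }
unsigned page_offset(unsigned vaddr)  { return vaddr & OFFSET_MASK; }

/* Combine a 3-bit frame number with the 5-bit offset into an 8-bit physical address. */
unsigned physical_address(unsigned frame, unsigned offset)
{
    return (frame << OFFSET_BITS) | offset;
}

int main(void)
{
    unsigned va = 0x0AA;              /* binary 00101 01010: page 5, offset 10 */
    printf("page %u, offset %u\n", page_number(va), page_offset(va));
    printf("physical 0x%02X\n",       /* frame 011 gives 011 01010 = 0x6A */
           physical_address(3, page_offset(va)));
    return 0;
}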
The PeANUt Virtual Memory System
[figure: the PeANUt virtual memory system. A 10-bit virtual address is split into a 5-bit page number and a 5-bit offset; the page number indexes the 32-entry page table, which maps each page either to one of the 8 frames of the 256-cell (fast) main memory or to the 1024-cell (slow) disk. Worked example: for virtual address 00101 01010 (page 5, offset 01010), if/when the page is present, say in frame 011, entry 5 of the page table will hold present bit 1 and frame 011, so the access goes to physical address 011 01010]
Page Tables
l purpose: give information on the status of each of the pages of the virtual memory
n is it present in main memory? Present bit
n if so, in which page frame is it? Frame number field (3 bits)
n if so, has data been written to it? Dirty bit
[page table entry layout: other information | D | P | Frame; a C sketch of one entry follows]
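A minimal C sketch of one such entry; the P, D and 3-bit Frame fields come from the slide above, but the exact bit positions and the 8-bit width are assumptions for illustration, not the PeANUt specification.

#include <stdint.h>

/* One page table entry packed into a byte.  Assumed layout: bit 0 = Present,
 * bit 1 = Dirty, bits 2-4 = frame number, remaining bits = other information. */
typedef uint8_t pte_t;

#define PTE_PRESENT 0x01
#define PTE_DIRTY   0x02

static inline int      pte_present(pte_t e) { return (e & PTE_PRESENT) != 0; }
static inline int      pte_dirty(pte_t e)   { return (e & PTE_DIRTY) != 0; }
static inline unsigned pte_frame(pte_t e)   { return (e >> 2) & 0x7; }

/* Build an entry, e.g. pte_make(3, 1, 0) for a clean page resident in frame 011. */
static inline pte_t pte_make(unsigned frame, int present, int dirty)
{
    return (pte_t)(((frame & 0x7) << 2)
                   | (present ? PTE_PRESENT : 0)
                   | (dirty   ? PTE_DIRTY   : 0));
}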
l where does this page table reside?
n possibly in special hardware
n in PeANUt: part of main memory, at VM addresses 0–31 (large: 1/8 of main
memory!)
l Discussion point: on real computers, the MMU will cache recent page table entries
(in some small table within the CPU). Why is this important?
Also, what if the (a) page table and (b) page fault handler code is not in main
memory?
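On the discussion point: without such a cache (usually called a TLB), every load or store would cost an extra memory access just to fetch the page table entry, roughly doubling memory traffic. The sketch below shows one illustrative direct-mapped design; it is not a description of any particular MMU, and walk_page_table stands in for the slow, in-memory page table lookup.

#include <stdbool.h>

#define TLB_SIZE  4                   /* assumed: a tiny direct-mapped translation cache */
#define NUM_PAGES 32

struct tlb_entry { bool valid; unsigned page; unsigned frame; };
static struct tlb_entry tlb[TLB_SIZE];

static unsigned page_table[NUM_PAGES];            /* frame number for each page (simplified) */

/* The slow path: reading the page table entry costs a memory access of its own. */
static unsigned walk_page_table(unsigned page) { return page_table[page % NUM_PAGES]; }

unsigned translate(unsigned page)
{
    struct tlb_entry *e = &tlb[page % TLB_SIZE];
    if (e->valid && e->page == page)              /* hit: no extra memory access needed */
        return e->frame;

    unsigned frame = walk_page_table(page);       /* miss: pay for the page table read */
    *e = (struct tlb_entry){ .valid = true, .page = page, .frame = frame };
    return frame;
}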
How Does PeANUt Virtual Memory Work?
l to read some data (at say address 00101 01010)
1. the address is split into two parts: a page number (00101) and a page offset
(01010)
2. the status of the page is checked in the page table (entry 5), by examining its
P, D and Frame fields (see the page table entry layout above)
3. A) if page is in main memory (P is set):
n look up its page frame number (say page frame 3 (011))
B) otherwise, a page fault occurs:
(e.g. PF handler: vmfifo.ass)
n select a page frame number (say 011) to put the page into
n may require the removal of another page (the ‘victim’), if one is present in frame 3
u copying back to secondary memory if ‘dirty’
n fetch the page (page 5) from secondary memory into main memory
4. combine the page frame number with the page offset to produce an 8-bit address
(011 01010), giving the location of the cell in main memory
l to write some data: almost the same procedure as above
(must also set the dirty bit (D) in the page table)
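Putting the whole procedure together, here is a hedged C sketch of the read and write paths; the data structures, the trivial fault handler and the function names are illustrative assumptions layered on the four steps above, not PeANUt's actual memory system.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PAGES  32
#define NUM_FRAMES 8
#define PAGE_SIZE  32

/* Illustrative state: 256-cell main memory plus a 32-entry page table. */
static uint16_t main_memory[NUM_FRAMES * PAGE_SIZE];
static struct { bool present, dirty; unsigned frame; } page_table[NUM_PAGES];

/* Stand-in for the page fault handler: a real one would choose a victim
 * (e.g. the FIFO sketch earlier), write it back if dirty, and copy the
 * page in from secondary memory.  Here it just assigns a frame. */
static unsigned handle_page_fault(unsigned page)
{
    page_table[page].present = true;
    page_table[page].dirty   = false;
    return page % NUM_FRAMES;
}

/* Steps 1-4 of the slide: split the address, check the page table,
 * fault if necessary, then recombine frame and offset. */
static uint16_t *locate(unsigned vaddr, bool is_write)
{
    unsigned page   = (vaddr >> 5) & 0x1F;        /* step 1: 5-bit page number */
    unsigned offset =  vaddr       & 0x1F;        /*         5-bit page offset */

    if (!page_table[page].present)                /* step 3B: page fault       */
        page_table[page].frame = handle_page_fault(page);

    if (is_write)                                 /* writes also set the dirty bit (D) */
        page_table[page].dirty = true;

    unsigned frame = page_table[page].frame;      /* step 3A: frame number lookup */
    return &main_memory[(frame << 5) | offset];   /* step 4: 8-bit physical address */
}

uint16_t vm_read(unsigned vaddr)              { return *locate(vaddr, false); }
void     vm_write(unsigned vaddr, uint16_t v) { *locate(vaddr, true) = v; }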