TLB Structure and Major Issues
What’s Inside a TLB Entry?
A TLB is typically:

- Small (32–128 entries)
- Fully associative
  - Any translation can go in any slot
  - Hardware searches all entries in parallel
A basic TLB entry looks like:
| Field | Meaning |
|---|---|
| VPN | Virtual Page Number |
| PFN | Physical Frame Number |
| Valid bit | Is this entry usable? |
| Protection bits | R/W/X permissions |
| ASID (optional) | Identifies process |
| Dirty bit | Page modified? |
| Other bits | Cache/coherence control |
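The fields above can be sketched as a C struct. The widths and names here are illustrative, not taken from any particular architecture:

```c
#include <stdbool.h>
#include <stdint.h>

/* One TLB entry (illustrative field widths, not real hardware). */
typedef struct {
    uint32_t vpn;    /* Virtual Page Number */
    uint32_t pfn;    /* Physical Frame Number */
    bool     valid;  /* Is this entry currently filled? */
    uint8_t  prot;   /* R/W/X protection bits */
    uint8_t  asid;   /* Address Space Identifier (optional) */
    bool     dirty;  /* Has the page been written? */
} tlb_entry_t;
```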
Important: TLB Valid Bit ≠ Page Table Valid Bit
They mean different things:
Page Table Valid Bit

- If invalid → page not allocated
- Access → segmentation fault
- OS may kill process

TLB Valid Bit

- Only means: “Is this entry currently filled?”
- Used to clear TLB entries
- Especially important during context switches
TLB Issue: Context Switches
The TLB contains translations for the currently running process.
Problem:
If Process P1 runs, then we switch to Process P2:
- VPN 10 in P1 → PFN 100
- VPN 10 in P2 → PFN 170
Without extra information, hardware cannot tell which mapping is correct.
That would break isolation between processes.
The Core Problem
How do we manage TLB contents during context switches?

Solution #1: Flush the TLB
On every context switch:
- Set all valid bits to 0
- Clear entire TLB
✅ Correct
No wrong translations used.
❌ Costly
New process must rebuild its TLB from scratch.
Many TLB misses occur.
Frequent context switches → performance drops.
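Solution #1 amounts to a short loop over the TLB. A minimal sketch, assuming the TLB is modeled as an array of entries (all names here are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_SIZE 64              /* typical small size; illustrative */

typedef struct {
    uint32_t vpn, pfn;
    bool valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];

/* On every context switch: mark every entry empty so the incoming
 * process can never consume the outgoing process's translations. */
void tlb_flush(void) {
    for (int i = 0; i < TLB_SIZE; i++)
        tlb[i].valid = false;
}
```

Correctness is trivial; the cost shows up afterwards, as the new process misses on every page it touches until the TLB warms up again.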
Solution #2: Address Space Identifiers (ASIDs)
Better solution:
Add an ASID field to each TLB entry.
Now an entry looks like:
| VPN | PFN | Valid | Prot | ASID |
Each translation is tagged with the process identity.
Now both P1 and P2 entries can coexist:
Hardware checks:
- VPN match
- ASID match
No confusion.
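With ASIDs, the hardware's match rule compares both fields. A minimal lookup sketch, again assuming an array-of-entries model (names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_SIZE 64

typedef struct {
    uint32_t vpn, pfn;
    bool valid;
    uint8_t asid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];

/* Hit only when BOTH the VPN and the ASID match, so P1's and P2's
 * entries for the same VPN can coexist without confusion. */
bool tlb_lookup(uint32_t vpn, uint8_t asid, uint32_t *pfn_out) {
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn && tlb[i].asid == asid) {
            *pfn_out = tlb[i].pfn;
            return true;            /* TLB hit */
        }
    }
    return false;                   /* TLB miss */
}
```

(In real hardware this search happens in parallel across all entries, not as a loop.)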
Advantage
- No flush needed
- TLB entries survive context switches
- Much better performance
Limitation
ASID size is limited.
Example:
- 8-bit ASID → max 256 active address spaces

If more processes exist:

- OS must reuse ASIDs carefully
- Possibly flush entries
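One simple approach to the limited ASID space: hand out identifiers from a counter, and when all 255 usable IDs have been given out, flush the TLB so old tags can be recycled safely. A simplified sketch (real kernels use more refined "generation" schemes; all names here are hypothetical):

```c
#include <stdint.h>

static uint8_t next_asid = 1;   /* 0 reserved; usable IDs are 1..255 */
static int flushes = 0;         /* counts recycling events (illustration) */

static void tlb_flush_all(void) {
    flushes++;                  /* a real kernel would invalidate every entry */
}

/* Hand out ASIDs sequentially; when the 8-bit space is exhausted,
 * flush the TLB so stale tags can never match a recycled ID. */
uint8_t asid_alloc(void) {
    if (next_asid == 0) {       /* uint8_t counter wrapped past 255 */
        tlb_flush_all();
        next_asid = 1;
    }
    return next_asid++;
}
```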
Shared Pages
Sometimes two processes intentionally share memory:
Example:
- Shared code pages
- Shared libraries
Possible TLB entries: different virtual pages → same physical page.
This reduces memory usage.
Some systems also use a Global bit (G):
- If set → entry shared across all processes
- ASID ignored
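A lookup that honors the Global bit only checks the ASID when G is clear. A self-contained sketch (hypothetical names):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_SIZE 64

typedef struct {
    uint32_t vpn, pfn;
    bool valid;
    bool global;    /* G bit: entry shared by all processes */
    uint8_t asid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];

/* An entry hits if the VPN matches and either G is set (ASID ignored)
 * or the ASID matches the current process. */
bool tlb_lookup(uint32_t vpn, uint8_t asid, uint32_t *pfn_out) {
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn &&
            (tlb[i].global || tlb[i].asid == asid)) {
            *pfn_out = tlb[i].pfn;
            return true;
        }
    }
    return false;
}
```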
TLB Replacement Policy
TLB is small → must evict entries.
The Big Question:
Which entry should we replace?
LRU (Least Recently Used)

- Evict the entry that has gone unused the longest
- Exploits temporal locality
- Generally good

⚠ But it can behave badly on certain access patterns: a loop over n+1 pages with a TLB of size n makes LRU miss on every single access.
Random Replacement
- Replace a random entry
- Simple
- Avoids pathological worst cases
Often surprisingly effective.
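The pathological LRU case is easy to demonstrate with a tiny simulation: a 4-entry TLB and a loop touching 5 pages never hits, because LRU always evicts exactly the page that is needed next. A sketch (all names hypothetical):

```c
#include <stdbool.h>

#define TLB_SIZE 4

/* Simulate an LRU-managed TLB: slots[] holds cached page numbers,
 * age[] records when each slot was last used. */
static int slots[TLB_SIZE], age[TLB_SIZE], filled = 0, tick = 0;

/* Returns true on a hit; on a miss, fills a free slot or evicts
 * the least recently used entry. */
bool access_page(int page) {
    tick++;
    for (int i = 0; i < filled; i++)
        if (slots[i] == page) { age[i] = tick; return true; }  /* hit */
    if (filled < TLB_SIZE) {                  /* free slot available */
        slots[filled] = page; age[filled] = tick; filled++;
    } else {                                  /* evict least recently used */
        int lru = 0;
        for (int i = 1; i < TLB_SIZE; i++)
            if (age[i] < age[lru]) lru = i;
        slots[lru] = page; age[lru] = tick;
    }
    return false;                             /* miss */
}
```

Looping over pages 0..4 with this 4-slot LRU yields a 0% hit rate; random replacement would typically hit at least some of the time on the same pattern.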
Real Example: MIPS TLB
Example architecture: MIPS R4000

Key properties:

- 32-bit virtual addresses
- 4KB pages
- 64-bit TLB entries
- Software-managed TLB
Fields include:
| Field | Purpose |
|---|---|
| VPN (19 bits) | Virtual page number |
| PFN (24 bits) | Physical frame number |
| G bit | Global shared page |
| ASID (8 bits) | Process ID |
| C bits | Cache control |
| D bit | Dirty (written?) |
| V bit | Valid |
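These fields sum to 57 bits, so they fit in a 64-bit entry with room to spare. A hypothetical packing into one 64-bit word (the bit positions below are chosen for illustration and are not the actual R4000 register layout):

```c
#include <stdint.h>

/* Illustrative bit positions only; NOT the real R4000 layout. */
#define VPN_SHIFT  45   /* 19 bits: 45..63 */
#define PFN_SHIFT  21   /* 24 bits: 21..44 */
#define ASID_SHIFT 13   /*  8 bits: 13..20 */
#define C_SHIFT    10   /*  3 bits: 10..12 */
#define G_BIT  (1ULL << 2)
#define D_BIT  (1ULL << 1)
#define V_BIT  (1ULL << 0)

uint64_t pack_entry(uint32_t vpn, uint32_t pfn, uint8_t asid,
                    uint8_t c, int g, int d, int v) {
    return ((uint64_t)(vpn & 0x7FFFF)  << VPN_SHIFT)
         | ((uint64_t)(pfn & 0xFFFFFF) << PFN_SHIFT)
         | ((uint64_t)asid             << ASID_SHIFT)
         | ((uint64_t)(c & 0x7)        << C_SHIFT)
         | (g ? G_BIT : 0) | (d ? D_BIT : 0) | (v ? V_BIT : 0);
}
```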
Special Features
- Some TLB entries are reserved (“wired”) for the OS
- Used for:
  - Kernel code
  - TLB miss handler
- Prevents infinite TLB miss loops
OS TLB Instructions
Because it's software-managed, MIPS provides:
- TLBP → Probe TLB for a matching entry
- TLBR → Read entry
- TLBWI → Write specific (indexed) entry
- TLBWR → Write random entry
These are privileged instructions.
If user processes could modify the TLB:
- They could remap memory
- Break isolation
- Take over the system
Replacement + Context Switch + ASID Summary
| Issue | Solution |
|---|---|
| Old translations during context switch | Flush or ASID |
| Limited TLB size | Replacement policy |
| Shared memory | Multiple VPNs → same PFN |
| TLB miss in handler | Wired entries |
“RAM Isn’t Always RAM” (Culler’s Law)
Even though RAM is called Random Access Memory, access time isn’t always uniform.
Why? Because:

- If the translation is in the TLB → fast
- If TLB miss → page table access needed
- Possibly multiple memory accesses
If a program:

- Randomly accesses many pages
- Exceeds TLB coverage
Performance can collapse.
Accessing memory randomly can be much slower than accessing nearby memory.
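The slowdown can be estimated with a simple effective-access-time calculation: a TLB hit costs one memory access, while a miss adds an extra access to walk the page table (assuming a single-level table; multi-level tables cost even more). A sketch with illustrative numbers:

```c
/* Effective memory access time (ns): a TLB hit costs one memory
 * access; a miss adds one extra access to the page table.
 * Assumes a single-level page table; numbers are illustrative. */
double effective_access_ns(double hit_rate, double mem_ns) {
    double hit_cost  = mem_ns;
    double miss_cost = 2.0 * mem_ns;
    return hit_rate * hit_cost + (1.0 - hit_rate) * miss_cost;
}
```

At a 99% hit rate and 100 ns per memory access this gives about 101 ns per access; at a 50% hit rate it jumps to 150 ns, so a workload that defeats the TLB can pay a large penalty on every reference.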
Summary
The TLB is:

- Small
- Fast
- Critical for performance

But it introduces new challenges:

- Context switch handling
- Replacement decisions
- ASID management
- Sharing pages safely
Well-designed TLB management is essential to making virtual memory fast and scalable.