Translation Lookaside Buffer (TLB)
The Problem with Paging
Paging provides flexible and clean memory virtualization, but it introduces a serious performance issue.

Every memory access requires:

- A lookup in the page table (in memory)
- Then the actual data access

This means:

🔹 Two memory accesses per reference instead of one

Since page tables are stored in physical memory, each virtual address translation adds extra overhead, making paging potentially too slow for practical systems.
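To make the overhead concrete, here is a back-of-the-envelope sketch in C (the 100 ns latency is a hypothetical figure, not a measurement):

```c
#include <assert.h>

/* Hypothetical DRAM latency in nanoseconds (illustrative, not a real
   measurement). */
#define MEM_ACCESS_NS 100

/* Cost of one memory reference under basic paging: one access to read
   the page-table entry, then one access for the data itself. */
int paging_cost_ns(void) {
    return MEM_ACCESS_NS + MEM_ACCESS_NS;
}
```

With these assumed numbers, paging doubles the cost of every reference from 100 ns to 200 ns.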
The Core Challenge
How do we speed up address translation?
We need a way to:

- Avoid accessing the page table in memory on every reference
- Keep translation fast
- Preserve the benefits of paging
The solution is a special hardware structure called the:
Translation Lookaside Buffer (TLB)
A TLB is:

- A small, fast hardware cache
- Located inside the Memory Management Unit (MMU)
- A store of recent virtual-to-physical translations
You can think of it as:
🗂 A cache for page table entries
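A minimal sketch of what one cached entry might hold (the field names and widths here are illustrative; real TLB entry formats are hardware-specific):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One cached translation: virtual page number -> physical frame number,
   plus the bits needed to validate and protect the mapping. */
typedef struct {
    uint32_t vpn;    /* virtual page number (the lookup tag) */
    uint32_t pfn;    /* physical frame number */
    bool     valid;  /* does this entry hold a live translation? */
    uint8_t  prot;   /* protection bits, e.g. read/write/execute */
} TlbEntry;
```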
How the TLB Speeds Things Up
For every virtual memory reference:
- The hardware first checks the TLB
- If the translation is found (TLB hit):
  - Physical address is obtained immediately
  - No page table access needed
- If not found (TLB miss):
  - Hardware consults the page table in memory
  - Updates the TLB
  - Continues execution
Why TLB Is Crucial
Without a TLB:

- Paging would require an extra memory access for every reference
- Performance would degrade significantly

With a TLB:

- Most translations are served from the fast cache
- Address translation becomes nearly as fast as direct memory access
Because of this:
✅ TLBs make virtual memory practical and efficient.
TLB Basic Algorithm
Step-by-Step TLB Algorithm
Extract the Virtual Page Number (VPN)
From the virtual address:
- VPN identifies the virtual page.
- Offset remains unchanged for later use.
Look Up the TLB
Two possibilities:
✅ Case 1: TLB Hit (Fast Path)
If the VPN is found in the TLB:
✔ Protection Check
If allowed:
✔ Form Physical Address
- PFN comes from the TLB entry.
- Offset remains the same.
- Concatenate PFN + Offset.
✔ Access Memory
Translation completed quickly.
No page table access needed.
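The hit-path arithmetic can be shown with a small sketch, assuming 4 KB pages (12 offset bits) purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define OFFSET_BITS 12                        /* assume 4 KB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Split a virtual address into VPN and offset. */
uint32_t vpn_of(uint32_t vaddr)    { return vaddr >> OFFSET_BITS; }
uint32_t offset_of(uint32_t vaddr) { return vaddr & OFFSET_MASK; }

/* Hit path: concatenate the PFN from the TLB entry with the unchanged
   offset to form the physical address. */
uint32_t phys_addr(uint32_t pfn, uint32_t offset) {
    return (pfn << OFFSET_BITS) | offset;
}
```

For example, virtual address 0x313A splits into VPN 0x3 and offset 0x13A; if the TLB maps VPN 0x3 to PFN 0x7, the physical address is 0x713A.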
❌ Case 2: TLB Miss (Slow Path)
If the VPN is NOT found:
Now the hardware must consult the page table in memory.
Step A: Locate Page Table Entry (PTE)
This requires an extra memory access.
Step B: Check Validity & Protection
If invalid → segmentation fault
If access not allowed → protection fault
Step C: Update the TLB
If valid:
The translation is now cached.
Step D: Retry the Instruction
This time:
- The VPN will be found in the TLB.
- It becomes a TLB hit.
- Memory access completes quickly.
Summary of Control Flow
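That flow can be sketched as a small software simulation (a minimal sketch: the TLB size, the toy linear page table, and the round-robin replacement policy are illustrative assumptions; in real systems this logic runs in MMU hardware):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TLB_SIZE 16

typedef struct { uint32_t vpn, pfn; bool valid; } Entry;

static Entry    tlb[TLB_SIZE];
static uint32_t page_table[256]; /* toy linear page table: VPN -> PFN */
static int      next_victim = 0; /* trivial round-robin replacement */

/* Translate a VPN to a PFN, consulting the TLB first. */
uint32_t translate(uint32_t vpn, bool *hit) {
    for (int i = 0; i < TLB_SIZE; i++) {          /* 1. look up the TLB */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *hit = true;                          /* 2a. hit: fast path */
            return tlb[i].pfn;
        }
    }
    *hit = false;                                 /* 2b. miss: slow path */
    uint32_t pfn = page_table[vpn];               /* extra memory access */
    tlb[next_victim] = (Entry){ vpn, pfn, true }; /* update the TLB */
    next_victim = (next_victim + 1) % TLB_SIZE;
    return pfn;                                   /* the retry will hit */
}
```

The first access to a page takes the slow path and caches the translation; every later access to the same page is served from the TLB.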
Why TLB Is Important
Without TLB:
- Every memory reference → page table lookup
- 2 memory accesses minimum

With TLB:

- Most references → 1 fast lookup
- Only occasional misses require the slow path
Performance Insight
The TLB works like a cache:

- TLB hit → very fast
- TLB miss → expensive (extra memory access)

If misses are frequent, performance degrades significantly.
Thus:
The system relies on high TLB hit rates for good performance.
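This dependence on the hit rate can be made concrete with a simple effective-access-time estimate (the latency figures are hypothetical):

```c
#include <assert.h>

/* Hypothetical latencies in nanoseconds (illustrative values). */
#define TLB_NS 1    /* one TLB lookup */
#define MEM_NS 100  /* one memory access */

/* Effective access time: a hit pays the TLB lookup plus the data access;
   a miss additionally pays the page-table access. hit_rate is in [0, 1]. */
double effective_access_ns(double hit_rate) {
    double hit_cost  = TLB_NS + MEM_NS;           /* 101 ns */
    double miss_cost = TLB_NS + MEM_NS + MEM_NS;  /* 201 ns */
    return hit_rate * hit_cost + (1.0 - hit_rate) * miss_cost;
}
```

With these assumed numbers, a 99% hit rate keeps the average reference near 102 ns, while a cold TLB pays the full 201 ns.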
Key Takeaway
The TLB basic algorithm:
- Extract the VPN
- Check the TLB
- If hit → translate immediately
- If miss → access the page table
- Insert the translation into the TLB
- Retry the instruction
It transforms paging from slow but flexible into fast and practical virtual memory.
