Translation Lookaside Buffer (TLB)
The Problem with Paging

Paging provides flexible and clean memory virtualization, but it introduces a serious performance issue:

  • Every memory access requires:

    1. A lookup in the page table (in memory)

    2. Then the actual data access

This means:

🔹 Two memory accesses per reference instead of one

Since page tables are stored in physical memory, each virtual address translation adds extra overhead — making paging potentially too slow for practical systems.


The Core Challenge

How do we speed up address translation?

We need a way to:

  • Avoid accessing the page table in memory on every reference

  • Keep translation fast

  • Preserve the benefits of paging


The solution is a special hardware structure called the:

Translation Lookaside Buffer (TLB)

A TLB is:

  • A small, fast hardware cache

  • Located inside the Memory Management Unit (MMU)

  • A store of recent virtual-to-physical translations

You can think of it as:

🗂 A cache for page table entries
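
As a concrete sketch, each TLB entry can be pictured as a small record pairing a virtual page number with its physical frame number plus some bookkeeping bits. The field names and widths below are illustrative, not taken from any particular hardware:

```c
#include <stdint.h>
#include <stdbool.h>

/* One cached translation: a hypothetical TLB entry layout. */
typedef struct {
    uint32_t vpn;          /* virtual page number (the lookup tag) */
    uint32_t pfn;          /* physical frame number (the payload) */
    bool     valid;        /* does this slot hold a real translation? */
    uint8_t  protect_bits; /* e.g., read/write/execute permissions */
} TlbEntry;
```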


How the TLB Speeds Things Up

For every virtual memory reference:

  1. The hardware first checks the TLB

  2. If the translation is found (TLB hit):

    • Physical address is obtained immediately

    • No page table access needed

  3. If not found (TLB miss):

    • Hardware consults the page table in memory

    • Updates the TLB

    • Retries the instruction, which now hits in the TLB
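
The hit/miss check above can be sketched as a linear scan of a small, fully associative TLB. (Real hardware compares all entries in parallel; the array, names, and naive replacement policy below are purely illustrative.)

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_SIZE 16   /* small, as real TLBs are (tens to hundreds of entries) */

typedef struct {
    uint32_t vpn, pfn;
    bool valid;
} TlbEntry;

static TlbEntry tlb[TLB_SIZE];

/* Returns true on a TLB hit and writes the PFN; false on a miss. */
bool tlb_lookup(uint32_t vpn, uint32_t *pfn_out) {
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn_out = tlb[i].pfn;   /* hit: translation served from the cache */
            return true;
        }
    }
    return false;                    /* miss: caller must walk the page table */
}

/* Caches a translation after a miss (naive replacement: first free slot,
   else overwrite slot 0). */
void tlb_insert(uint32_t vpn, uint32_t pfn) {
    for (int i = 0; i < TLB_SIZE; i++) {
        if (!tlb[i].valid) {
            tlb[i] = (TlbEntry){ .vpn = vpn, .pfn = pfn, .valid = true };
            return;
        }
    }
    tlb[0] = (TlbEntry){ .vpn = vpn, .pfn = pfn, .valid = true };
}
```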


Why TLB Is Crucial

Without a TLB:

  • Paging would require extra memory access for every reference

  • Performance would degrade significantly

With a TLB:

  • Most translations are served from the fast cache

  • Address translation becomes nearly as fast as direct memory access

Because of this:

✅ TLBs make virtual memory practical and efficient.


TLB Basic Algorithm

Figure 19.1 shows a rough sketch of how hardware might handle a virtual address translation, assuming a simple linear page table (i.e., the page table is an array) and a hardware-managed TLB (i.e., the hardware handles much of the responsibility of page table accesses).


Step-by-Step TLB Algorithm

Extract the Virtual Page Number (VPN)

From the virtual address:

VPN = (VirtualAddress & VPN_MASK) >> SHIFT

  • VPN identifies the virtual page.

  • Offset remains unchanged for later use.

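For concreteness, with 32-bit virtual addresses and 4 KB pages (so a 12-bit offset), the masking above works out as follows. The address width and page size are assumptions chosen for illustration:

```c
#include <stdint.h>

#define SHIFT       12           /* log2(4096): 4 KB pages */
#define OFFSET_MASK 0xFFFu       /* low 12 bits select a byte within the page */
#define VPN_MASK    0xFFFFF000u  /* high 20 bits select the virtual page */

/* VPN = (VirtualAddress & VPN_MASK) >> SHIFT */
uint32_t vpn_of(uint32_t va)    { return (va & VPN_MASK) >> SHIFT; }

/* Offset = VirtualAddress & OFFSET_MASK */
uint32_t offset_of(uint32_t va) { return va & OFFSET_MASK; }
```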

Look Up the TLB

(Success, TlbEntry) = TLB_Lookup(VPN)

Two possibilities:


✅ Case 1: TLB Hit (Fast Path)

If the VPN is found in the TLB:

if (Success == True)

✔ Protection Check

if (CanAccess(TlbEntry.ProtectBits) == True)

If allowed:

✔ Form Physical Address

Offset = VirtualAddress & OFFSET_MASK
PhysAddr = (TlbEntry.PFN << SHIFT) | Offset

  • PFN comes from the TLB entry.

  • Offset remains the same.

  • Concatenate PFN + Offset.

✔ Access Memory

AccessMemory(PhysAddr)

Translation completed quickly.

No page table access needed.
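
Continuing the 32-bit / 4 KB-page example, forming the physical address on a hit is just the reverse shift on the PFN plus the untouched offset:

```c
#include <stdint.h>

#define SHIFT       12      /* log2(4096): 4 KB pages (illustrative) */
#define OFFSET_MASK 0xFFFu  /* low 12 bits: byte within the page */

/* Concatenate the PFN from the TLB entry with the page offset:
   PhysAddr = (PFN << SHIFT) | Offset */
uint32_t phys_addr(uint32_t pfn, uint32_t virtual_address) {
    uint32_t offset = virtual_address & OFFSET_MASK;
    return (pfn << SHIFT) | offset;
}
```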


❌ Case 2: TLB Miss (Slow Path)

If the VPN is NOT found:

else // TLB Miss

Now the hardware must consult the page table in memory.


Step A: Locate Page Table Entry (PTE)

PTEAddr = PTBR + (VPN * sizeof(PTE))
PTE = AccessMemory(PTEAddr)

This requires an extra memory access.


Step B: Check Validity & Protection

if (PTE.Valid == False)
    RaiseException(SEGMENTATION_FAULT)
else if (CanAccess(PTE.ProtectBits) == False)
    RaiseException(PROTECTION_FAULT)

If invalid → segmentation fault
If access not allowed → protection fault


Step C: Update the TLB

If valid:

TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)

The translation is now cached.


Step D: Retry the Instruction

RetryInstruction()

This time:

  • The VPN will be found in the TLB.

  • It becomes a TLB hit.

  • Memory access completes quickly.
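
Putting Steps A–D together, the miss path can be sketched as below. The PTBR, the PTE layout, and the fault names are simplified stand-ins for what real hardware provides; a boolean `can_access` replaces a full set of protection bits:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_PAGES 256   /* small address space, for illustration */

typedef struct {
    uint32_t pfn;
    bool     valid;
    bool     can_access;   /* stand-in for CanAccess(ProtectBits) */
} PTE;

/* Stand-in for the in-memory linear page table; PTBR points at its base. */
static PTE page_table[NUM_PAGES];
static PTE *ptbr = page_table;

typedef enum { TRANSLATE_OK, SEGMENTATION_FAULT, PROTECTION_FAULT } Fault;

/* Step A: index the page table (PTEAddr = PTBR + VPN * sizeof(PTE)).
   Step B: check validity and protection.
   On success the caller would do Step C (TLB_Insert) and Step D (retry). */
Fault walk_page_table(uint32_t vpn, uint32_t *pfn_out) {
    PTE pte = ptbr[vpn];              /* the extra memory access */
    if (!pte.valid)      return SEGMENTATION_FAULT;
    if (!pte.can_access) return PROTECTION_FAULT;
    *pfn_out = pte.pfn;
    return TRANSLATE_OK;
}
```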


Summary of Control Flow

Extract VPN
     ↓
 Check TLB
     ↓
┌───────────────┬────────────────┐
│ TLB Hit       │ TLB Miss       │
│               │                │
│ Form PA       │ Access Page    │
│ Access Memory │ Table in Mem   │
│               │ Update TLB     │
│               │ Retry Instr    │
└───────────────┴────────────────┘

Why TLB Is Important

Without TLB:

  • Every memory reference → Page table lookup

  • 2 memory accesses minimum

With TLB:

  • Most references → 1 fast lookup

  • Only occasional misses require slow path


Performance Insight

TLB works like a cache:

  • TLB Hit → Very fast

  • TLB Miss → Expensive (extra memory access)

If misses are frequent:

  • Performance degrades significantly.

Thus:

The system relies on high TLB hit rates for good performance.
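
One way to quantify this: if a memory access costs, say, 100 ns and the TLB lookup itself is effectively free, a hit costs one memory access while a miss costs two (page-table read plus data read), so the average cost depends only on the hit rate. The numbers here are illustrative, not measured:

```c
/* Effective access time for a memory reference:
   hits cost one memory access, misses cost two. */
double effective_access_ns(double hit_rate, double mem_ns) {
    return hit_rate * mem_ns + (1.0 - hit_rate) * (2.0 * mem_ns);
}
```

With a 99% hit rate this gives about 101 ns per reference, barely above raw memory speed; at a 50% hit rate it balloons to 150 ns, which is why high hit rates are essential.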


Key Takeaway

The TLB basic algorithm:

  1. Extract VPN

  2. Check TLB

  3. If hit → Translate immediately

  4. If miss → Access page table

  5. Insert into TLB

  6. Retry instruction

It transforms paging from slow but flexible into fast and practical virtual memory.
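
The six steps can be pulled together into one self-contained sketch. The TLB, page table, replacement policy, and address layout (32-bit addresses, 4 KB pages) are all simplified assumptions; a failed walk simply returns false where real hardware would raise an exception:

```c
#include <stdint.h>
#include <stdbool.h>

#define SHIFT       12          /* 4 KB pages */
#define OFFSET_MASK 0xFFFu
#define NUM_PAGES   256
#define TLB_SIZE    16

typedef struct { uint32_t vpn, pfn; bool valid; } TlbEntry;
typedef struct { uint32_t pfn; bool valid; } PTE;

static TlbEntry tlb[TLB_SIZE];
static PTE page_table[NUM_PAGES];
static int next_victim = 0;          /* trivial round-robin replacement */

/* Translate a virtual address; returns true on success and writes the
   physical address. Returning false stands in for raising a fault. */
bool translate(uint32_t va, uint32_t *pa_out) {
    uint32_t vpn = va >> SHIFT;                    /* 1. extract VPN */
    for (int i = 0; i < TLB_SIZE; i++)             /* 2. check TLB */
        if (tlb[i].valid && tlb[i].vpn == vpn) {   /* 3. hit: translate now */
            *pa_out = (tlb[i].pfn << SHIFT) | (va & OFFSET_MASK);
            return true;
        }
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return false;                              /* 4. miss: walk page table */
    tlb[next_victim] = (TlbEntry){ vpn, page_table[vpn].pfn, true };
    next_victim = (next_victim + 1) % TLB_SIZE;    /* 5. insert into TLB */
    return translate(va, pa_out);                  /* 6. retry: now a hit */
}
```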
