Multi-Level Feedback Queue (MLFQ)
Why Do We Need the Multi-Level Feedback Queue (MLFQ)?
1. The Core Scheduling Problem
In CPU scheduling, the operating system faces a fundamental dilemma:
How can we schedule processes efficiently without knowing how long they will run?
This problem arises because:
- Some processes are short and interactive (e.g., text editors, shells).
- Others are long-running and CPU-bound (e.g., scientific computations, data processing).
The OS does not know in advance:
- How long a process will run
- Whether it is interactive or CPU-intensive
This is known as the lack of perfect knowledge, which breaks many ideal scheduling algorithms.
2. Why Traditional Scheduling Algorithms Are Not Enough
Shortest Job First (SJF / STCF)
- Optimal for turnaround time
- Requires knowing job length in advance ❌
- Unrealistic in real systems
Round Robin (RR)
- Excellent response time
- Poor turnaround time
- CPU-bound jobs suffer ❌
Thus, we need a scheduler that:
- Feels responsive for users
- Is fair to long-running jobs
- Works without knowing job lengths
👉 This is exactly why MLFQ exists.
3. What Is MLFQ?
Multi-Level Feedback Queue (MLFQ) is a priority-based scheduler that:
- Uses multiple queues, each with a different priority
- Learns from process behavior
- Adjusts priorities dynamically using feedback
📌 Key idea:
“Observe the past behavior of a job to predict its future behavior.”
4. Design Goals of MLFQ
MLFQ tries to satisfy several conflicting goals simultaneously:
- Minimize response time for interactive jobs
- Minimize turnaround time for short jobs
- Ensure fairness and avoid starvation
This makes MLFQ one of the most practical and influential schedulers in OS design.
MLFQ: Basic Rules
Rule 1: Priority-Based Scheduling
If Priority(A) > Priority(B), A runs
- Higher-priority jobs always run first
- Lower-priority jobs wait
Rule 2: Round Robin Within Same Priority
If Priority(A) = Priority(B), run them in Round Robin
- Prevents starvation among jobs of equal priority
- Uses a time slice (quantum)
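Rules 1 and 2 can be sketched as a simple dispatch loop (a minimal illustration, not any real kernel's code; the three levels and job names are arbitrary):

```python
from collections import deque

# Three priority levels: index 0 is the highest. Each level is a FIFO
# queue, so equal-priority jobs take turns in round-robin order.
queues = [deque(), deque(), deque()]

def pick_next_job():
    """Rule 1: scan from the highest-priority queue downward and run
    the first job found. Rule 2: because each level is FIFO and a
    preempted job would be re-appended at the back, jobs within one
    level share the CPU round robin."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # nothing runnable

queues[1].append("A")  # medium priority
queues[2].append("B")  # low priority
queues[1].append("C")  # medium priority

print(pick_next_job())  # "A": highest non-empty queue, FIFO within it
```

Note that "B" cannot run until both medium-priority jobs are gone, which is exactly the starvation risk Rule 5 later addresses.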
Rule 3: New Jobs Start at the Top
When a job enters the system, it is placed at the highest priority queue
Why?
- New jobs are assumed to be short or interactive
- If they finish quickly → great!
- If not → they will be demoted
📌 This is how MLFQ approximates SJF without knowing job length
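Rule 3 amounts to a one-line admission policy (a sketch; the job names are made up for illustration):

```python
from collections import deque

queues = [deque(), deque(), deque()]  # index 0 = highest priority

def admit(job):
    """Rule 3: every new job enters at the top queue. A short
    interactive job finishes or blocks before using up its allotment
    and stays high; a long CPU-bound job is soon demoted by Rule 4."""
    queues[0].append(job)

admit("short_shell_command")
admit("long_simulation")
print(list(queues[0]))  # both start at the highest priority
```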
Rule 4: Priority Reduction Based on CPU Usage
If a job uses up its time allotment at a given level, its priority is reduced (i.e., it moves down one queue).
Important refinement:
- It does not matter how many times the job gives up the CPU
- Total CPU usage at that level is what matters
This prevents:
- Scheduler gaming
- CPU monopolization by tricking the scheduler with I/O
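The anti-gaming refinement comes down to what gets measured: total CPU consumed at a level, not whether the job survived individual time slices. A sketch, assuming a 50 ms allotment per level (an arbitrary example value):

```python
ALLOTMENT_MS = 50  # total CPU a job may use at one level (example value)

class Job:
    def __init__(self, name):
        self.name = name
        self.priority = 0        # 0 = highest queue
        self.used_at_level = 0   # CPU consumed at the current level

def charge(job, ran_ms, num_levels=3):
    """Rule 4: charge CPU time to the job. Demotion depends only on
    total usage at this level, so yielding for I/O just before a slice
    expires no longer resets the clock — gaming the scheduler fails."""
    job.used_at_level += ran_ms
    if job.used_at_level >= ALLOTMENT_MS and job.priority < num_levels - 1:
        job.priority += 1        # move down one queue
        job.used_at_level = 0    # fresh allotment at the new level

j = Job("gamer")
for _ in range(5):
    charge(j, 9)   # runs 9 ms, yields just before a 10 ms slice ends
charge(j, 9)       # 54 ms total at level 0 -> demoted anyway
print(j.priority)  # 1
```

Under the older per-slice rule, this job would have kept its top priority forever by yielding at 9 ms; accounting for the cumulative 54 ms demotes it regardless.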
Rule 5: Periodic Priority Boost
After a fixed time period S, move all jobs to the highest priority queue
Why is this needed?
Problem 1: Starvation
- Too many interactive jobs can starve CPU-bound jobs
Problem 2: Phase Changes
- CPU-bound jobs may later become interactive
✔ Priority boost solves both:
- Guarantees progress
- Allows behavior changes to be recognized
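The boost itself is simple: every S time units, empty the lower queues into the top one (a sketch; the period and job names are illustrative):

```python
from collections import deque

BOOST_PERIOD_MS = 100  # the period S (example value)

queues = [deque(), deque(), deque()]  # index 0 = highest priority
queues[1].append("A")
queues[2].extend(["B", "C"])

def priority_boost():
    """Rule 5: move every job back to the top queue. Starved
    CPU-bound jobs are guaranteed to run again, and a job whose
    behavior changed gets a fresh chance at high priority."""
    for q in queues[1:]:
        while q:
            queues[0].append(q.popleft())

priority_boost()
print(list(queues[0]))  # all jobs now share the highest priority
```

In a real scheduler this would be triggered by a timer every `BOOST_PERIOD_MS`; here it is called once to show the effect.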
Why MLFQ Works Well
MLFQ succeeds because it:
- Learns dynamically
- Makes no assumptions about job length
- Balances:
  - Interactivity
  - Throughput
  - Fairness

📌 This is why many real systems use MLFQ-like schedulers:
- BSD UNIX
- Solaris
- Windows NT
- Variants of Linux scheduling policies
Summary: Why We Need MLFQ
| Problem | MLFQ Solution |
|---|---|
| Unknown job length | Learn from execution |
| Interactive responsiveness | High-priority queues |
| Fair CPU sharing | Priority demotion |
| Starvation | Priority boosting |
| Scheduler gaming | Accurate CPU accounting |
Final Takeaway for Students
MLFQ is powerful because it adapts.
It watches how processes behave and schedules them accordingly—without requiring impossible foreknowledge.
This makes MLFQ one of the most important scheduling algorithms in operating systems.