Scheduling Real-Time Systems

These are lecture notes from my Computer Science course. For learning about real-time systems, I recommend Real-Time Systems and Programming Languages.


In general, a scheduling scheme provides two things:

  1. An algorithm for ordering the use of system resources.
  2. A means of predicting the worst-case behaviour of the system when that algorithm is applied.

Cyclic Executives

Most hard real-time systems use a cyclic executive, though there are plenty of criticisms of them, and larger/modern systems tend to use process-based systems instead.

Cyclic executives are fully deterministic, and do what they sound like: each minor cycle is just a sequence of procedure calls, and the minor cycles repeat within a major cycle. There is no actual concurrency at runtime, which is handy because you don’t need to worry about protecting shared variables.
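Since a minor cycle really is just a sequence of procedure calls, the whole scheme can be sketched in a few lines. Everything below (task names, cycle layout, the 25 ms minor cycle time) is made up for illustration:

```python
import time

trace = []  # records which task ran, just so we can see the ordering

# Hypothetical task procedures; in a real system these would be the
# application's periodic activities.
def task_a(): trace.append("a")
def task_b(): trace.append("b")
def task_c(): trace.append("c")

# Each minor cycle is a fixed sequence of procedure calls; the major
# cycle is the list of minor cycles, repeated forever.
MINOR_CYCLES = [
    [task_a, task_b],   # minor cycle 1
    [task_a, task_c],   # minor cycle 2
    [task_a, task_b],   # minor cycle 3
    [task_a],           # minor cycle 4
]
MINOR_CYCLE_TIME = 0.025  # assumed 25 ms minor cycle

def run_major_cycle():
    """One pass over all minor cycles: plain procedure calls, no concurrency."""
    for minor in MINOR_CYCLES:
        start = time.monotonic()
        for proc in minor:
            proc()
        # A real executive would detect overrun here; this sketch just
        # idles out the remainder of the minor cycle.
        remaining = MINOR_CYCLE_TIME - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
```

Note how task_a effectively runs with a 25 ms period and task_b with a 50 ms period, purely by where the calls are placed.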

Problems with them

  • All task periods must be a multiple of the cycle time.
  • Tasks with long periods cause trouble: the major cycle has to be stretched to accommodate them, and other tasks will have to wait for ages. It makes the bin-packing problem harder too.
  • Sporadic activities are difficult (impossible!) to incorporate.
  • Difficult to construct: finding a feasible schedule is equivalent to bin packing, which is NP-hard.

Task-Based Scheduling

Far more flexible. Concurrency is maintained at runtime, and tasks are supported by a real-time OS or run-time system.

Each task is either:

  • Runnable/running.
  • Suspended and waiting for a timing event.
  • Suspended and waiting for a non-timing event.

If you did OPS, this might sound familiar.

Three different scheduling approaches:

  • Fixed-Priority Scheduling (FPS): The most widely used approach, and the main focus of this course. Each task has a fixed priority, computed pre-run-time, and tasks execute in priority order. The priority is derived from a task’s temporal requirements, not its importance to the correct functioning of the system. Don’t forget that.
  • Earliest Deadline First (EDF): Not too important to know. Runnable tasks are executed in the order determined by their absolute deadlines. Pretty obvious really. Dynamic (absolute deadlines are computed at runtime).
  • Value-based Scheduling (VBS): Assign a value to each task and employ an on-line value based scheduling algorithm.


Most of the time we’ll be talking about preemptive schemes, where the scheduler immediately switches to a higher-priority task as soon as it becomes runnable. This is generally better, as the system is more responsive to higher-priority tasks, and hence they are ‘preferred’.
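The EDF decision above is simple enough to sketch directly: among the runnable tasks, dispatch the one with the earliest absolute deadline. The task names and deadlines here are made up:

```python
# Runnable tasks as (name, absolute deadline in ms) pairs; hypothetical figures.
runnable = [
    ("logger",  100),
    ("control",  40),
    ("display",  75),
]

def edf_pick(tasks):
    """EDF dispatch: pick the runnable task with the earliest absolute deadline."""
    return min(tasks, key=lambda t: t[1])[0]

print(edf_pick(runnable))  # -> control (deadline 40 is the earliest)
```

Under FPS the choice would instead be by the fixed, pre-computed priority, regardless of how close each deadline is.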

Scheduling Characteristics

Not sure about this.

  • Sufficient: if the test passes, all deadlines will be met.
  • Necessary: if the test fails, some deadline will be missed.
  • Exact: necessary and sufficient.
  • Sustainable: the system stays schedulable if conditions ‘improve’ (e.g. shorter computation times or longer periods).

Simple Task Model

  • Application is a fixed set of tasks.
  • Tasks have well known periods of operation.
  • Tasks are completely independent.
  • System overheads are ignored.
  • Task deadline = Task period.
  • Tasks have fixed worst-case execution time.

There’s a bunch of notation to do with scheduling (slide 21):

  • B: Worst-case blocking time for the task (if applicable)
  • C: Worst-case computation time (WCET) of the task
  • D: Deadline of the task
  • I: The interference time of the task
  • J: Release jitter of the task
  • N: Number of tasks in the system
  • P: Priority assigned to the task (if applicable)
  • R: Worst-case response time of the task
  • T: Minimum time between task releases/jobs (the task period)
  • U: The utilization of a task (equal to C/T)
  • a-z: The name of a task

Rate Monotonic Priority Assignment

Each task is assigned a (unique) priority based on its period; the shorter the period the higher the priority.

IMPORTANT: If you pick up a paper about this, 1 will be the highest priority and n the lowest. However, in Java and Ada, 1 is the lowest priority and n the highest, i.e. the bigger the integer, the higher the priority. The textbook also fits with Java and Ada (and this course).
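A quick sketch of rate-monotonic assignment under the bigger-integer-is-higher convention used in this course. The task names and periods are made up:

```python
# Hypothetical tasks and their periods T.
periods = {"pump": 50, "sensor": 10, "display": 200}

# Rate monotonic: the shorter the period, the higher the priority.
# Sort longest period first so the shortest period gets the biggest integer
# (the Java/Ada convention: bigger integer = higher priority).
ordered = sorted(periods, key=lambda task: periods[task], reverse=True)
priority = {task: i + 1 for i, task in enumerate(ordered)}

print(priority)  # sensor (T=10) gets the biggest number, i.e. highest priority
```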

Note: from here on I was absent for 2 lectures

Utilization-Based Analysis

In order to analyse simple task sets (where the deadline = the period) you can use utilization-based analysis (the formula on slide 24): under rate-monotonic priorities, the task set is schedulable if

U = sum(i, C_i / T_i) <= N * (2^(1/N) - 1)

The bound on the right depends on the number of tasks N: it starts at 100% for N = 1 and falls towards 69.3% (ln 2) as N grows. You can run this test on potential combinations of task period and computation time; if the total utilization is under the bound, all tasks are guaranteed to meet their deadlines.


This analysis/test isn’t exact (it is sufficient but not necessary) and isn’t general, but it is O(N), so it’s cheap to run.
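Assuming the standard N * (2^(1/N) - 1) bound, the test is a couple of lines. The task figures in the example are made up:

```python
import math

def utilization_test(tasks):
    """Utilization-based schedulability test for rate-monotonic priorities.

    tasks: list of (C, T) pairs -- worst-case computation time and period.
    Returns True if the set is guaranteed schedulable. Sufficient only:
    False does NOT prove a deadline will actually be missed.
    """
    n = len(tasks)
    u = sum(c / t for c, t in tasks)          # total utilization
    bound = n * (2 ** (1 / n) - 1)            # 1.0 for n=1, -> ln 2 as n grows
    return u <= bound

# Hypothetical three-task set: U = 0.2 + 0.2 + 0.167 = 0.567,
# bound for N=3 is about 0.780, so the test passes.
print(utilization_test([(10, 50), (8, 40), (5, 30)]))   # -> True
```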

Response-Time Analysis

Here, task i’s worst-case response time R_i is calculated first and then checked against its deadline. It is the task’s worst-case computation time plus the interference from higher-priority tasks.

During R_i, each higher-priority task j will execute a certain number of times, given by:

Number of Releases = ceil(R_i / T_j)

Total interference is then given by: ceil(R_i / T_j) * C_j

Makes sense, computation time multiplied by the number of releases (number of times the task runs).

Therefore, R_i is given by

R_i = C_i + sum(j in hp(i), ceil(R_i / T_j) * C_j)

where hp(i) is the set of tasks with priority higher than i.

I’m no maths person, and that looks like a bit of a weird equation (R_i on both sides?). You solve it by forming a ‘recurrence relation’: start with w^0 = C_i, compute w^(n+1) = C_i + sum(j in hp(i), ceil(w^n / T_j) * C_j), and iterate until w^(n+1) = w^n (that fixed point is R_i) or the value exceeds the deadline (the task is unschedulable). These equations are getting a bit hefty, so check the slides (slide 37). There’s also a pseudocode algorithm for calculating this.

This test can then be run on every process/period/computation-time/priority set. Unlike utilization-based analysis, this test is both sufficient and necessary: it is exact. But obviously it’s a bit more complicated than the utilization test.
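The recurrence is a short fixed-point iteration in code. The task figures in the example are made up:

```python
import math

def response_time(C_i, D_i, higher):
    """Response-time analysis for one task.

    C_i: worst-case computation time; D_i: deadline;
    higher: list of (C_j, T_j) pairs for all higher-priority tasks.
    Returns R_i if the task meets its deadline, else None.
    """
    w = C_i  # w^0 = C_i
    while True:
        # w^(n+1) = C_i + sum over hp(i) of ceil(w^n / T_j) * C_j
        w_next = C_i + sum(math.ceil(w / t_j) * c_j for c_j, t_j in higher)
        if w_next > D_i:
            return None    # response exceeds deadline: unschedulable
        if w_next == w:
            return w       # fixed point reached: this is R_i
        w = w_next

# Hypothetical task: C=5, D=20, with higher-priority tasks (C=3,T=7) and (C=1,T=12).
# Iteration: 5 -> 9 -> 12 -> 12, so R = 12, which meets the deadline.
print(response_time(5, 20, [(3, 7), (1, 12)]))  # -> 12
```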

Sporadic Tasks

Not sure about this.

Hard and Soft Tasks

In many situations the worst-case figures for sporadic tasks are considerably higher than the averages. For example, interrupts often arrive in bursts. Abnormal sensor readings might lead to significant additional computation to correct.

Therefore measuring schedulability using worst-case figures may lead to very low processor utilization being observed in the actual running system, because you’re just being too conservative.

Therefore you probably want to use average execution times and arrival rates instead. But that might mean you can’t meet all deadlines when the worst case does occur (this is known as transient overload).

Solve this by having hard and soft tasks, where hard tasks must never miss their deadlines. Sounds simple but probably isn’t.

Aperiodic Tasks

Aperiodic tasks have no minimum inter-arrival time, e.g. tasks triggered by external events.

These can be run at a priority below those assigned to hard tasks, so (in a pre-emptive system) they can’t steal resources from important tasks. However, this doesn’t provide adequate support for soft tasks, which will often miss their deadlines.

To improve this, you can use a ‘server’.

Execution-time Servers

  • Has capacity/budget of C that is available to its client tasks (usually aperiodic).
  • When a client runs it uses up the budget.
  • The server can replenish its budget.
  • If there is no budget, clients don’t run.
  • Therefore it protects other tasks from excessive aperiodic activity.

Essentially you’re wrapping aperiodic activity around something that regulates it, just in case there is too much of it.

There are different types of Execution-time Server.

  • Periodic Server: The budget is replenished at the start of each period, repeatedly. Not bandwidth-preserving: clients can only run at the start of a period and are then suspended, so capacity that isn’t used at that point is lost.
  • Deferrable Server: Clients can execute at any point while budget remains, and are suspended when the budget is exhausted. Retains capacity as long as possible (bandwidth-preserving).
  • Sporadic Server: Enforces a minimum separation time between releases of sporadic tasks.
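The budget mechanism itself is simple. Here is a minimal sketch assuming deferrable-server behaviour; the class name, interface and numbers are all mine, not from the slides:

```python
class DeferrableServer:
    """Toy execution-time server: clients spend a shared budget C,
    replenished in full at each server period boundary."""

    def __init__(self, capacity):
        self.capacity = capacity   # C: the budget per replenishment period
        self.budget = capacity

    def request(self, cost):
        """A client asks to run for `cost` time units.

        Returns False (client suspended) if the budget is exhausted,
        so excessive aperiodic activity can't starve other tasks."""
        if cost > self.budget:
            return False
        self.budget -= cost
        return True

    def replenish(self):
        """Called at each server period boundary: restore the full budget."""
        self.budget = self.capacity

server = DeferrableServer(capacity=5)
print(server.request(3))   # True: budget 5 -> 2
print(server.request(3))   # False: only 2 left, client must wait
server.replenish()
print(server.request(3))   # True again after replenishment
```

A periodic server would differ in when clients may run; a sporadic server would differ in how replenishment is scheduled.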


There was a proof that DMPO (deadline-monotonic priority ordering) is optimal. It’s quite long (slides 51 to 55).
