
Scheduling

This section describes the classes that control scheduling.

Definitions and Abbreviations

Schedulable objects have three execution states: executing, blocked, and eligible-for-execution.

Each type of schedulable object defines its own release events, for example, the release events for a periodic SO occur with the passage of time.

Release is the changing of the state of a schedulable object from blocked-for-release-event to eligible-for-execution. If the state of an SO is blocked-for-release-event when a release event occurs then the state of the SO is changed to eligible-for-execution. Otherwise, a state transition from blocked-for-release-event to eligible-for-execution is queued—this is known as a pending release. When the next transition of the SO into state blocked-for-release-event occurs, and there is a pending release, the state of the SO is immediately changed to eligible-for-execution. (Some actions implicitly clear any pending releases.)

Completion is the changing of the state of a schedulable object from executing to blocked-for-release-event. Each completion corresponds to a release. A real-time thread is deemed to complete its most recent release when it terminates.

Deadline refers to a time before which a schedulable object expects to complete. The ith deadline is associated with the ith release event and a deadline miss occurs if the ith completion would occur after the ith deadline.

Deadline monitoring is the process by which the implementation responds to deadline misses. If a deadline miss occurs for a schedulable object, the deadline miss handler, if any, for that SO is released. This behaves as if there were an asynchronous event associated with the SO, to which the miss handler was bound, and which was fired when the deadline miss occurred.

Periodic, sporadic, and aperiodic are adjectives applied to schedulable objects which describe the temporal relationship between consecutive release events. Let Ri denote the time at which the ith release event occurs for an SO. Ignoring the effect of release jitter: periodic means that there is a constant T > 0 (the period) such that Ri+1 = Ri + T for all i; sporadic means that there is a constant T > 0 (the minimum interarrival time, or MIT) such that Ri+1 ≥ Ri + T for all i; and aperiodic means that no constraint is placed on the times between consecutive release events.

The cost of a schedulable object is an estimate of the maximum amount of CPU time that the SO requires between a release and its associated completion.

The current CPU consumption of a schedulable object is the amount of CPU time that the SO has consumed since its last release.

A cost overrun occurs when the schedulable object's current CPU consumption becomes greater than, or equal to, its cost.

Cost monitoring is the process by which the implementation tracks CPU consumption and responds to cost overruns. If a cost overrun occurs for a schedulable object, the cost overrun handler, if any, for that SO is released. This behaves as if there were an asynchronous event associated with the SO, to which the overrun handler was bound, and which was fired when the cost overrun occurred. (Cost monitoring is an optional facility in an implementation of the RTSJ.)

The base priority of a schedulable object is the priority given in its associated PriorityParameters object; the base priority of a Java thread is the priority returned by its getPriority method.

The active priority of a schedulable object, or a Java thread, is the maximum of its base priority and any priority it has acquired due to the action of priority inversion avoidance algorithms (see the Synchronization Chapter).

A processing group is a collection of schedulable objects whose combined execution has further time constraints which the scheduler uses to govern the group's execution eligibility.

A scheduler manages the execution of schedulable objects: it detects deadline misses, and performs admission control and cost monitoring. It also manages the execution of Java threads.

The base scheduler is an instance of the PriorityScheduler class as defined in this specification. This is the initial default scheduler.

Overview

The scheduler required by this specification is fixed-priority preemptive with at least 28 unique priority levels. It is represented by the class PriorityScheduler and is called the base scheduler.

The schedulable objects required by this specification are defined by the classes RealtimeThread, NoHeapRealtimeThread, AsyncEventHandler, and BoundAsyncEventHandler. The base scheduler assigns processor resources according to the schedulable objects' release characteristics, execution eligibility, and processing group values. Subclasses of these schedulable objects are also schedulable objects and behave as the required classes do.

An instance of the SchedulingParameters class contains values of execution eligibility. A schedulable object is considered to have the execution eligibility represented by the SchedulingParameters object currently bound to it. For implementations providing only the base scheduler, the scheduling parameters object is an instance of PriorityParameters (a subclass of SchedulingParameters).

An instance of the ReleaseParameters class or its subclasses, PeriodicParameters, AperiodicParameters, and SporadicParameters, contains values that define a particular release characteristic. A schedulable object is considered to have the release characteristics of a single associated instance of the ReleaseParameters class. In all cases the base scheduler uses these values to perform its feasibility analysis over the set of schedulable objects and admission control for the schedulable object.

For a real-time thread the scheduler defines the behavior of the real-time thread's waitForNextPeriod and waitForNextPeriodInterruptible methods, and monitors cost overrun and deadline miss conditions based on its release parameters. For asynchronous event handlers, the scheduler monitors cost overruns and deadline misses.

Release parameters also govern the treatment of the minimum interarrival time for sporadic schedulable objects.

An instance of the ProcessingGroupParameters class contains values that define a temporal scope for a processing group. If a schedulable object has an associated instance of the ProcessingGroupParameters class, it is said to execute within the temporal scope defined by that instance. A single instance of the ProcessingGroupParameters class can be (and typically is) associated with many SOs. If the implementation supports cost monitoring, the combined processor demand of all of the SOs associated with an instance of the ProcessingGroupParameters class must not exceed the values in that instance (i.e., the defined temporal scope). The processor demand is determined by the Scheduler.
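
As a non-normative illustration of how these parameter objects combine, the following sketch creates a periodic real-time thread under the base scheduler. It assumes the javax.realtime classes named above with their RTSJ 1.0.1 constructor signatures; the 10 ms period and the class name PeriodicExample are arbitrary.

import javax.realtime.*;

public class PeriodicExample {
    public static void main(String[] args) {
        PriorityScheduler sched = PriorityScheduler.instance();
        // Execution eligibility: an instance of PriorityParameters, as
        // required when only the base scheduler is provided.
        SchedulingParameters scheduling =
            new PriorityParameters(sched.getNormPriority());
        // Release characteristics: a 10 ms period; the null arguments
        // select the documented defaults for start time, cost, deadline,
        // and the overrun and miss handlers.
        ReleaseParameters release = new PeriodicParameters(
            null,                     // start time: default
            new RelativeTime(10, 0),  // period: 10 ms
            null, null,               // cost, deadline: defaults
            null, null);              // overrun and miss handlers: none
        RealtimeThread rt = new RealtimeThread(scheduling, release) {
            public void run() {
                do {
                    // ... work for one release ...
                } while (waitForNextPeriod());  // block until next release
            }
        };
        rt.start();  // initial release per the start time semantics
    }
}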

Semantics and Requirements

This section establishes the semantics and requirements that are applicable across the classes of this chapter, and also defines the required scheduling algorithm. Semantics that apply to particular classes, constructors, methods, and fields will be found in the class description and the constructor, method, and field detail sections.

Semantics and Requirements Governing all Schedulers

  1. Schedulers other than the base scheduler may change the execution eligibility of the schedulable objects which they manage according to their scheduling algorithm.
  2. If an implementation provides any public schedulers other than the base scheduler it shall provide documentation describing each scheduler's semantics in language and constructs appropriate to the provided scheduling algorithms. This documentation must include the list of classes that constitute schedulable objects for the scheduler unless that list is the same as the list of schedulable objects for the base scheduler.
  3. This specification does not require any particular feasibility algorithm be implemented in the Scheduler object. The default algorithm always returns true for periodic and sporadic schedulable objects, as it assumes adequate resources; it always returns false for aperiodic schedulable objects, since no pool of resources would render such a load feasible.
  4. Implementations that provide a scheduler with a feasibility algorithm other than the default are required to document the behavior of that algorithm and any assumptions it makes.

Semantics and Requirements Governing the Base Scheduler

The semantics for the base scheduler assume a uni-processor execution environment. While implementations of the RTSJ are not precluded from supporting multi-processor execution environments, no explicit consideration for such environments has been given in this specification.

The base scheduler supports the execution of all schedulable objects and Java threads, but it only controls the release of periodic real-time threads and aperiodic asynchronous event handlers.

Priorities

The execution scheduling semantics described in this section are defined in terms of a conceptual model that contains a set of queues of schedulable objects that are eligible for execution. There is, conceptually, one queue for each priority. No implementation structures are necessarily implied by the use of this conceptual model. It is assumed that no time elapses during operations described using this model, and therefore no simultaneous operations are possible.

  1. The base scheduler must support at least 28 distinct values (real-time priorities) that can be stored in an instance of PriorityParameters in addition to the values 1 through 10 required to support the priorities defined by java.lang.Thread. The real-time priority values must be greater than 10, and they must include all integers from the base scheduler's getMinPriority() value to its getMaxPriority() value inclusive. The 10 priorities defined for java.lang.Thread must effectively have lower execution eligibility than the real-time priorities, but beyond this, their behavior is as defined by the specification of java.lang.Thread.
  2. Higher priority values in an instance of PriorityParameters have a higher execution eligibility.
  3. Assignment of any of the real-time priority values to any schedulable object controlled by the base priority scheduler is legal. It is the responsibility of application logic to make rational priority assignments.
  4. If two schedulable objects have different active priorities, the schedulable object with the higher active priority will always execute in preference to the schedulable object with the lower value when both are eligible for execution.
  5. A schedulable object that is executing will continue to execute until it either blocks, or is preempted by a higher-priority schedulable object.
  6. The base scheduler does not use the importance value in the ImportanceParameters subclass of PriorityParameters.
  7. The dispatching mechanism must allow the preemption of the execution of schedulable objects and Java threads at a point not governed by the preempted object.
  8. For schedulable objects managed by the base scheduler the implementation must not change the execution eligibility for any reason other than
    1. Implementation of a priority inversion avoidance algorithm or
    2. As a result of a program's request to change the priority parameters associated with one or more schedulable objects; e.g., by changing a value in a scheduling parameter object that is used by one or more schedulable objects, or by using setSchedulingParameters() to give a schedulable object a different SchedulingParameters value.
  9. Use of Thread.setPriority(), any of the methods defined for schedulable objects, or any of the methods defined for parameter objects must not affect the correctness of the priority inversion avoidance algorithms controlled by PriorityCeilingEmulation and PriorityInheritance - see the Synchronization chapter.
  10. A schedulable object that is preempted by a higher priority schedulable object is placed in the queue for its active priority, at a position determined by the implementation. The implementation must document the algorithm used for such placement. It is recommended that a preempted schedulable object be placed at the front of the appropriate queue.
  11. A real-time thread that performs a yield() is placed at the tail of the queue for its active priority level.
  12. A blocked schedulable object that becomes eligible for execution is added to the tail of the queue for that priority. This behavior also applies to the initial release of a schedulable object.
  13. For a schedulable object whose active priority is changed as a result of explicitly setting its base priority (through PriorityParameters setPriority() method, RealtimeThread's setSchedulingParameters() method, or Thread's setPriority() method), this schedulable object is added to the tail of the queue for its new priority level. Queuing when priorities are adjusted by priority inversion avoidance algorithms is governed by semantics specified in the Synchronization chapter.
  14. If schedulable object A managed by the base scheduler creates a Java thread, B, then the initial base priority of B is the priority value returned by the getMaxPriority method of B's java.lang.ThreadGroup object.
  15. For real-time threads managed by the base scheduler, priority limits set by java.lang.ThreadGroup objects are not enforced.
  16. PriorityScheduler.getNormPriority() shall be set to:
    ((PriorityScheduler.getMaxPriority() - PriorityScheduler.getMinPriority())/3) + PriorityScheduler.getMinPriority()
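
    For example (illustrative values only), on an implementation whose 28 real-time priorities span 11 through 38, integer arithmetic gives ((38 - 11) / 3) + 11 = 9 + 11 = 20 as the norm priority.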

Parameter Values

The scheduler uses the values contained in the different parameter objects associated with a schedulable object to control the behavior of the schedulable object. The scheduler determines what values are valid for the schedulable objects it manages, which defaults apply and how changes to parameter values are acted upon by the scheduler. Invalid parameter values result in exceptions, as documented in the relevant classes and methods.

  1. The default values for the base scheduler are:
    1. Scheduling parameters are copied from the creating SO if possible; if the creating SO does not have scheduling parameters the default is an instance of the default priority parameters value.
    2. Release parameters default to an instance of the default aperiodic parameters (see AperiodicParameters).
    3. Memory parameters default to null which signifies that memory allocation by the schedulable object is not constrained by the scheduler.
    4. Processing group parameters default to null which signifies that the schedulable object is not a member of any processing group and is not subject to processing group based limits on processor utilization.
    5. The default scheduling parameter values for parameter objects created by an SO controlled by the base scheduler are (see PriorityScheduler):

      Attribute                            Default Value
      Priority parameters: priority        norm priority
      Importance parameters: importance    no default; a value must be supplied
  2. All numeric or RelativeTime attributes in parameter values must be greater than or equal to zero.
  3. Values of period must be greater than zero.
  4. Deadline values in ReleaseParameters objects must be less than or equal to their period values (where applicable), but the deadline may be greater than the minimum interarrival time in a SporadicParameters object.
  5. Changes to scheduling, release, memory, and processing group parameters (by methods on the schedulable objects bound to the parameters or by altering the parameter objects themselves) have two effects:
    1. They immediately affect the feasibility test of the scheduler.
    2. They potentially modify the behavior of the scheduler with regard to those schedulable objects. When such changes in behavior take effect depends on the parameter in question, and the type of schedulable object, as described below.
  6. Changes to scheduling, release, memory, and processing group parameters are acted upon by the base scheduler as follows:
    1. Changes to scheduling parameters take effect immediately except as provided by priority inversion avoidance algorithms.
    2. Changes to release parameters depend on the parameter being changed, the type of release parameter object and the type of schedulable object:
      1. Changes to the deadline and the deadline miss handler take effect at each release event as follows: if the ith release event occurred at a time ti, then the ith deadline is the time ti+Di, where Di is the value of the deadline stored in the schedulable object's release parameters object at the time ti. If a deadline miss occurs then it is the deadline miss handler that was installed in the schedulable object's release parameters at time ti that is released.
      2. Changes to cost and the cost overrun handler take effect immediately.
      3. Changes to the period and start time values in PeriodicParameters objects are described in "Periodic Release of Real-time Threads" below. (The base scheduler does not manage the release of periodic schedulable objects other than periodic real-time threads.)
      4. Changes to the additional values in AperiodicParameters objects and SporadicParameters are described, respectively, in "Aperiodic Release Control" and "Sporadic Release Control", below. (The base scheduler does not manage the release of aperiodic schedulable objects other than aperiodic asynchronous event handlers.)
      5. Changes to the type of release parameters object generally take effect after completion, except as documented in the following sections.
    3. Changes to memory parameters take effect immediately.
    4. Changes to processing group parameters take effect as described in "Processing Groups" below.
    5. Changes to the scheduler responsible for a schedulable object take effect at completion.
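
As a non-normative sketch of item 6.1 above, both documented routes for changing a schedulable object's base priority are shown below; the priority offsets are arbitrary.

import javax.realtime.*;

public class PriorityChangeExample {
    public static void main(String[] args) {
        PriorityScheduler sched = PriorityScheduler.instance();
        PriorityParameters shared =
            new PriorityParameters(sched.getMinPriority());
        RealtimeThread rt = new RealtimeThread(shared);
        rt.start();
        // Route 1: alter the bound parameter object in place. This takes
        // effect immediately and affects every SO sharing 'shared'.
        shared.setPriority(sched.getMinPriority() + 3);
        // Route 2: bind a different SchedulingParameters object to this
        // thread only; also immediate (subject to priority inversion
        // avoidance algorithms).
        rt.setSchedulingParameters(
            new PriorityParameters(sched.getMinPriority() + 5));
    }
}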

Cost Monitoring

Cost monitoring is an optional facility in the implementation of the RTSJ, but when supported it must conform to the requirements and definitions as presented in this section.

  1. The cost of an SO is defined by the value returned by invoking the getCost method of the SO's release parameters object.
  2. When an SO is initially released, its current CPU consumption is zero; as the SO executes, the current CPU consumption increases. The current CPU consumption is set to zero in response to certain actions as described below.
  3. If at any time, due to either execution of the SO or a change in the SO's cost, the current CPU consumption becomes greater than, or equal to, the current cost of the SO, then a cost overrun is triggered. The implementation is required to document the granularity at which the current CPU consumption is updated.
  4. When a cost overrun is triggered, the cost overrun handler associated with the SO, if any, is released. If the most recent release of the SO is the ith release, and the i+1 release event has not yet occurred, then:
    1. If the state of the SO is either executing or eligible-for-execution, then the SO is placed into the state blocked-by-cost-overrun.
    2. Otherwise, the SO must have been blocked for a reason other than blocked-by-cost-overrun. In this case, the state change to blocked-by-cost-overrun is left pending: if the blocking condition for the SO is removed, then its state changes to blocked-by-cost-overrun.
    Otherwise, if the i+1 release event has occurred, the current CPU consumption is set to zero, the SO remains in its current state and the cost monitoring system considers the most recent release to now be the i+1 release.
  5. When the ith release event occurs for an SO, the action taken depends on the state of the SO:
    1. If the SO is blocked-by-cost-overrun then the cost monitoring system considers the most recent release to be the ith release, the current CPU consumption is set to zero and the SO is made eligible for execution;
    2. Otherwise, if the SO is blocked for a reason other than blocked-by-cost-overrun then:
      1. If there is a pending state change to blocked-by-cost-overrun then: the pending state change is removed, the cost monitoring system considers the most recent release to be the ith release, the current CPU consumption is set to zero and the SO remains in its current blocked state;
      2. Otherwise, no cost monitoring action occurs.
    3. Otherwise no cost monitoring action occurs.
  6. When the ith release of an SO completes, and the cost monitoring system considers the most recent release to be the ith release, then the current CPU consumption is set to zero and the cost monitoring system considers the most recent release to be the i+1 release. Otherwise, no cost monitoring action occurs.
  7. Changes to the cost parameter take effect immediately:
    1. If the new cost is less than or equal to the current CPU consumption, and the old cost was greater than the current CPU consumption, then a cost overrun is triggered.
    2. If the new cost is greater than the current CPU consumption:
      1. If the SO is blocked-by-cost-overrun, then the SO is made eligible for execution;
      2. Otherwise, if the SO is blocked for a reason other than blocked-by-cost-overrun, and there is a pending state change to blocked-by-cost-overrun, then the pending state change is removed;
      3. Otherwise, no cost monitoring action occurs.
  8. The state of the cost monitoring system for an SO can be reset by the scheduler (see item 5.3 in the Periodic Release of Real-time Threads section, below). If the most recent release of the SO is considered to be the mth release, and the most recent release event for the SO was the nth release event (where n > m), then a reset causes the cost monitoring system to consider the most recent release to be the nth release, and to zero the current CPU consumption.
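
The sketch below, which is illustrative rather than normative, installs a cost overrun handler through the cost and overrun-handler arguments of PeriodicParameters; the 2 ms budget and 10 ms period are arbitrary, and the handler is only released where the implementation supports cost monitoring.

import javax.realtime.*;

public class CostMonitoringExample {
    public static void main(String[] args) {
        // Released if the SO's CPU consumption reaches its cost within
        // a single release.
        AsyncEventHandler overrunHandler = new AsyncEventHandler() {
            public void handleAsyncEvent() {
                // e.g., record the overrun or raise the cost via setCost()
            }
        };
        PeriodicParameters release = new PeriodicParameters(
            null,                     // start time: default
            new RelativeTime(10, 0),  // period: 10 ms
            new RelativeTime(2, 0),   // cost: 2 ms CPU budget per release
            null,                     // deadline: defaults to the period
            overrunHandler,           // released on cost overrun
            null);                    // no deadline miss handler
        RealtimeThread rt = new RealtimeThread(null, release);
        rt.start();
    }
}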

Periodic Release of Real-time Threads

A schedulable object with release parameters of type PeriodicParameters is expected to be released periodically. For asynchronous event handlers this would occur if the associated asynchronous event fired periodically. For real-time threads periodic release behavior is achieved by executing in a loop and invoking the RealtimeThread.waitForNextPeriod method, or its interruptible equivalent RealtimeThread.waitForNextPeriodInterruptible within that loop. For simplicity, unless otherwise stated, the semantics in this section apply to both forms of that method.

  1. A periodic real-time thread's release characteristics are determined by the following:
    1. The invocation of the real-time thread's start method;
    2. The action of the RealtimeThread methods waitForNextPeriod, waitForNextPeriodInterruptible, schedulePeriodic, and deschedulePeriodic;
    3. The occurrence of deadline misses and whether or not a miss handler is installed; and
    4. The passing of time that generates periodic release events.
  2. The initial release event of a periodic real-time thread occurs in response to the invocation of its start method, in accordance with the start time specified in its release parameters - see PeriodicParameters.
  3. Changes to the start time in a real-time thread's PeriodicParameters object only have an effect on its initial release time. Consequently, if a PeriodicParameters object is bound to multiple real-time threads, a change in the start time may affect all, some or none, of those threads, depending on whether or not start has been invoked on them.
  4. Subsequent release events occur as each period falls due, except as described in item 5.5 below, at times determined as follows: if the ith release event occurred at a time ti, then the i+1 release event occurs at the time ti+Ti, where Ti is the value of the period stored in the real-time thread's PeriodicParameters object at the time ti.
  5. The implementation should behave effectively as if the following state variables were added to a real-time thread's state, and manipulated by the actions in 1 as described below: boolean descheduled, integer pendingReleases, integer missCount, and boolean lastReturn.
    1. Initially: descheduled = false, pendingReleases = 0, missCount = 0, and lastReturn = true.
    2. When the real-time thread's deschedulePeriodic method is invoked: set the value of descheduled to true.
    3. When the real-time thread's schedulePeriodic method is invoked: set the value of descheduled to false; then if the thread is blocked-for-release-event, set the value of pendingReleases to zero, and tell the cost monitoring system to reset for this thread.
    4. When descheduled is true, the real-time thread is said to be descheduled.
    5. A real-time thread that has been descheduled and is blocked-for-release-event will not receive any further release events until after it has been rescheduled by a call to schedulePeriodic; this means that no deadline misses can occur until the thread has been rescheduled. The descheduling of a real-time thread has no effect on its initial release.
    6. When each period is due:
      1. If the state of the real-time thread is blocked-for-release-event (that is, it is waiting in waitForNextPeriod), then if the thread is descheduled then do nothing, else increment the value of pendingReleases, inform cost monitoring that the next release event has occurred, and notify the thread to make it eligible for execution;
      2. Otherwise, increment the value of pendingReleases, and inform cost monitoring that the next release event has occurred.
    7. On each deadline miss:
      1. If the real-time thread has a deadline miss handler: set the value of descheduled to true, atomically release the handler with its fireCount increased by missCount+1, and set missCount to zero;
      2. Otherwise add one to the missCount value.
    8. When the waitForNextPeriod method is invoked by the current real-time thread there are two possible behaviors depending on the value of missCount:
      1. If missCount is greater than zero: decrement the missCount value; then if the lastReturn value is false, completion occurs: apply any pending parameter changes, decrement pendingReleases, inform cost monitoring the real-time thread has completed and return false; otherwise set the lastReturn value to false and return false.
      2. Otherwise, apply any pending parameter changes, inform cost monitoring of completion, and then wait while descheduled is true, or pendingReleases is zero. Then set the lastReturn value to true, decrement pendingReleases, and return true.
  6. An invocation of the waitForNextPeriodInterruptible method behaves as described above with the following additions:
    1. If the invocation commences when an instance of AsynchronouslyInterruptedException (AIE) is pending on the real-time thread, then the invocation immediately completes abruptly by throwing that pending instance as an InterruptedException. If this occurs, the most recent release has not completed. If the pending instance is the generic AIE instance then the interrupt state of the real-time thread is cleared.
    2. If an instance of AIE becomes pending on the real-time thread while it is blocked-for-release-event, and the real-time thread is descheduled, then the AIE remains pending until the real-time thread is no longer descheduled. Execution then continues as in (3) below.
    3. If an instance of AIE becomes pending on the real-time thread while it is blocked-for-release-event, and it is not descheduled, then this acts as a release event:
      1. The real-time thread is made eligible for execution.
      2. Upon execution the invocation completes abruptly by throwing the pending AIE instance as an InterruptedException. If the pending instance is the generic AIE instance then the interrupt state of the real-time thread is cleared.
      3. If the AIE becomes pending at a time tint then:
        • The deadline associated with this release is the time tint+Dint, where Dint is the value of the deadline stored in the real-time thread's release parameters object at the time tint.
        • The next release time for the real-time thread will be tint+Tint, where Tint is the value of the period stored in the real-time thread's release parameters object at the time tint.
      4. Cost monitoring is informed of the release event.
    When the thrown AIE instance is caught, the AIE becomes pending again (as per the usual semantics for AIE) until it is explicitly cleared.
  7. If an aperiodic real-time thread has its release parameters set to periodic parameters and then calls waitForNextPeriod, the change from non-periodic to periodic scheduling effectively takes place between the call to waitForNextPeriod and the first periodic release. The first periodic release is determined by the start time specified in the real-time thread's periodic parameters. If that start time is an absolute time in the future, then that is the first periodic release time; if it is an absolute time in the past, then the time at which waitForNextPeriod was called is the first periodic release time and the release occurs immediately. If the start time is a relative time, then it is relative to the time at which waitForNextPeriod was called; if that time is in the past then the release occurs immediately.
  8. If a periodic real-time thread has its release parameters set to be other than an instance of PeriodicParameters then the change from periodic to non-periodic scheduling effectively takes place immediately, unless the thread is blocked-for-release-event, in which case the change takes place after the next release event. When this change occurs, the deadline for the real-time thread is that which was in effect for the most recent release.

Pseudo-Code for Periodic Thread Actions

The semantics of the previous section can be more clearly understood by viewing them in pseudo-code form for each of the methods and actions involved. In the following no mechanism for blocking and unblocking a thread is prescribed. The use of the wait and notify terminology in places is purely an aid to expressing the desired semantics in familiar terms.

// These values are part of thread state.
boolean descheduled     = false;
int     pendingReleases = 0;
boolean lastReturn      = true;
int     missCount       = 0;

deschedulePeriodic(){
    descheduled = true;
}

schedulePeriodic(){
    descheduled = false;
    if (blocked-for-release-event) {
        pendingReleases = 0;
        costMonitoringReset();
    }
}

onNextPeriodDue(){
    if (blocked-for-release-event) {
        if (descheduled) {
            ; // do nothing
        }
        else {
            pendingReleases++;
            notifyCostMonitoringOfReleaseEvent();
            notify it;  // make eligible for execution
        }
    } 
    else {
        pendingReleases++;
        notifyCostMonitoringOfReleaseEvent();
    }
}

onDeadlineMiss(){
    if (there is a miss handler) {
        descheduled = true;
        release miss handler with fireCount increased by missCount+1
        missCount = 0;
    } 
    else {
        missCount++;
    }
}

waitForNextPeriod(){
    assert(pendingReleases >= 0);
    if (missCount > 0 ) {  
        // Missed a deadline without a miss handler
        missCount--;
        if (lastReturn == false) {
            //  Changes "on completion" take place here
            performParameterChanges();
            pendingReleases--;
            notifyCostMonitoringOfCompletion();
        }
        lastReturn = false;
        return false;
    } 
    else { 
        //  Changes "on completion" take place here
        performParameterChanges();
        notifyCostMonitoringOfCompletion();
        wait while (descheduled || pendingReleases == 0); // blocked-for-release-event
        pendingReleases--;
        lastReturn = true;
        return true;
    }
}

Aperiodic Release Control

Aperiodic schedulable objects are released in response to events occurring, such as the starting of a real-time thread, or the firing of an associated asynchronous event for an asynchronous event handler. The occurrence of these events, each of which is a potential release event, is termed an arrival, and the time that they occur is termed the arrival time.

The base scheduler behaves effectively as if it maintained a queue, called the arrival time queue, for each aperiodic schedulable object. This queue maintains information related to each release event from its "arrival" time until the associated release completes, or another release event occurs - whichever is later. If an arrival is accepted into the arrival time queue, then it is a release event and the time of the release event is the arrival time. The initial size of this queue is an attribute of the schedulable object's aperiodic parameters, and is set when the parameter object is associated with the SO. Over time the queue may become full and its behavior in this situation is determined by the queue overflow policy specified in the SO's aperiodic parameters. There are four overflow policies defined:

Policy    Action on Overflow

IGNORE    Silently ignore the arrival. The arrival is not accepted, no release event occurs, and, if the arrival was caused programmatically (such as by invoking fire on an asynchronous event), the caller is not informed that the arrival has been ignored.

EXCEPT    Throw an ArrivalTimeQueueOverflowException. The arrival is not accepted, and no release event occurs, but if the arrival was caused programmatically, the caller will have ArrivalTimeQueueOverflowException thrown.

REPLACE   The arrival is not accepted and no release event occurs. If the completion associated with the last release event in the queue has not yet occurred, and the deadline has not been missed, then the release event time for that release event is replaced with the arrival time of the new arrival. This will alter the deadline for that release event. If the completion associated with the last release event has occurred, or the deadline has already been missed, then the behavior of the REPLACE policy is equivalent to the IGNORE policy.

SAVE      Behave effectively as if the queue were expanded as necessary to accommodate the new arrival. The arrival is accepted and a release event occurs.

Under the SAVE policy the queue can grow and shrink over time.

Changes to the queue overflow policy take effect immediately. When an arrival occurs and the queue is full, the policy applied is the policy as defined at that time.
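
For illustration only, the sketch below selects an overflow policy. It assumes the RTSJ 1.0.1 AperiodicParameters API, in which the queue length and overflow policy are set through setInitialArrivalTimeQueueLength and setArrivalTimeQueueOverflowBehavior; the initial queue length of 8 is arbitrary.

import javax.realtime.*;

public class OverflowPolicyExample {
    public static void main(String[] args) {
        AperiodicParameters ap =
            new AperiodicParameters(null, null, null, null);
        // Initial size of the arrival time queue for any SO subsequently
        // associated with these parameters.
        ap.setInitialArrivalTimeQueueLength(8);
        // SAVE: on overflow, behave as if the queue were expanded, so no
        // arrival is lost.
        ap.setArrivalTimeQueueOverflowBehavior(
            AperiodicParameters.arrivalTimeQueueOverflowSave);
    }
}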

Aperiodic Real-time Threads

Aperiodic real-time threads executing under the base scheduler have the following characteristics:

  1. The initial release event occurs when start is invoked upon it.
  2. There are no subsequent release events.
  3. Completion occurs only through termination.
  4. When a deadline miss occurs, the deadline miss handler, if any, is released.
  5. If a cost overrun occurs the overrun handler, if any, is released and the real-time thread is placed in the state blocked-by-cost-overrun. It can become eligible for execution again only through a change to its cost parameter.

Sporadic Release Control

Sporadic parameters include a minimum interarrival time, MIT, that characterizes the expected frequency of releases. When an arrival is accepted, the implementation behaves as if it calculates the earliest time at which the next arrival could be accepted, by adding the current MIT to the arrival time of this accepted arrival. The scheduler guarantees that each sporadic schedulable object it manages is released at most once in any MIT. It implements two mechanisms for enforcing this rule: it may refuse to accept an arrival that occurs before this expected next arrival time, according to the MIT violation policy given in the SO's sporadic parameters, and it may delay the release associated with an accepted arrival until its effective release time (defined below).

The effective release time of a release event i is the earliest time that the handler can be released in response to that release event. It is determined for each release event based on the MIT violation policy in force at the release event time: under the EXCEPT, IGNORE, and REPLACE policies the effective release time is the release event time itself; under the SAVE policy it is the later of the release event time and the effective release time of release event i-1 plus the current MIT.

The scheduler will delay the release associated with the release event at the head of the arrival time queue until the current time is greater than or equal to the effective release time of that release event.

Changes to minimum interarrival time and the MIT violation policy take effect immediately, but only affect the next expected arrival time, and effective release time, for release events that occur after the change.
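
A non-normative sketch of these controls, assuming the RTSJ 1.0.1 SporadicParameters API (the 5 ms MIT is arbitrary):

import javax.realtime.*;

public class SporadicExample {
    public static void main(String[] args) {
        SporadicParameters sp = new SporadicParameters(
            new RelativeTime(5, 0),   // minimum interarrival time: 5 ms
            null, null,               // cost, deadline: defaults
            null, null);              // overrun and miss handlers: none
        // SAVE: accept early arrivals and delay each release until its
        // effective release time, preserving at most one release per MIT.
        sp.setMitViolationBehavior(SporadicParameters.mitViolationSave);
    }
}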

Aperiodic and Sporadic Release Control for Asynchronous Event Handlers

Asynchronous event handlers can be associated with one or more asynchronous events. When an asynchronous event is fired, all handlers associated with it are released, according to the semantics below:

  1. Each firing of an associated asynchronous event is an arrival. If the handler has release parameters of type AperiodicParameters, then the arrival may become a release event for the handler, according to the semantics given in "Aperiodic Release Control" above. If the handler has release parameters of type SporadicParameters, then the arrival may become a release event for the handler, according to the semantics given in "Sporadic Release Control" above. If the handler has release parameters of a type other than AperiodicParameters or SporadicParameters, then the arrival is a release event, and the arrival time is the release event time.
  2. For each release event that occurs for a handler, an entry is made in the arrival-time queue and the handler's fireCount is incremented by one.
  3. Initially a handler is considered to be blocked-for-release-event and its fireCount is zero.
  4. Releases of a handler are serialized by having its handleAsyncEvent method invoked repeatedly while its fireCount is greater than zero:
    1. Each invocation of handleAsyncEvent, in this way, is a release.
    2. The return from handleAsyncEvent is the completion of a release: the fireCount is decremented and the front entry (if still present) removed from the arrival-time queue.
    3. Processing of any exceptions thrown by handleAsyncEvent occurs prior to completion.
  5. The deadline for a release is relative to the release event time and determined at the release event time according to the value of the deadline contained in the handler's release parameters. This value does not change, except as described previously for handlers using a REPLACE policy for MIT violation or arrival-time queue overflow.
  6. The application code invoked by handleAsyncEvent can directly modify the fireCount as follows:
    1. The getAndDecrementPendingFireCount method decreases the fireCount by one (if it was greater than zero), and returns the old value. This removes the front entry from the arrival-time queue but does not constitute a completion: the deadline of the most recent release is unchanged and the current CPU consumption is not set to zero.
    2. The getAndClearPendingFireCount method is functionally equivalent to invoking getAndDecrementPendingFireCount until it returns zero, and returning the original fireCount value.
    3. The getAndIncrementPendingFireCount method attempts to increase the fireCount by one, and returns the old value. It effectively generates an arrival for this handler; if that arrival is accepted, it becomes a release event that is added to the arrival-time queue and the fireCount is incremented by one. If the handler is not active (that is, execution is not within the flow of control of handleAsyncEvent) at the time this method is called, then the handler may not be released in response to this new release event until an additional release event is generated by firing the associated asynchronous event.
  7. The scheduler may delay the invocation of handleAsyncEvent to ensure the effective release time honors any restrictions imposed by the MIT violation policy, if applicable, of that release event.
  8. Cost monitoring for an asynchronous event handler interacts with release events and completions as previously defined with the added requirement that at the completion of handleAsyncEvent, if the fireCount is now zero, then the cost monitoring system is told to reset for this handler.
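
The following non-normative sketch ties these semantics together: a handler with sporadic release parameters is bound to an asynchronous event, and each accepted firing becomes a release. The 6-argument AsyncEventHandler constructor is the RTSJ 1.0.1 form; the 5 ms MIT is arbitrary.

import javax.realtime.*;

public class HandlerExample {
    public static void main(String[] args) {
        SporadicParameters sp = new SporadicParameters(
            new RelativeTime(5, 0),   // MIT: 5 ms
            null, null, null, null);
        // Scheduling, memory, memory area, processing group, and logic
        // arguments are left at their defaults (null).
        AsyncEventHandler handler = new AsyncEventHandler(
                null, sp, null, null, null, null) {
            public void handleAsyncEvent() {
                // One invocation per release; returning is the completion.
            }
        };
        AsyncEvent event = new AsyncEvent();
        event.addHandler(handler);
        event.fire();  // an arrival; if accepted, a release event
    }
}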

Processing Groups

A processing group is defined by a processing group parameters object, and each SO that is bound to that parameter object is called a member of that processing group.

Processing groups are only functional in a system that implements processing group enforcement. Although the processing group itself does not consume CPU time, it acts as a proxy for its members.

  1. The deadline of a processing group is defined by the value returned by invoking the getDeadline method of the processing group parameters object.
  2. A deadline miss for the processing group is triggered if any member of the processing group consumes CPU time at a time greater than the deadline for the most recent release of the processing group.
  3. When a processing group misses a deadline:
    1. If the processing group has a miss handler, it is released for execution;
    2. If the processing group has no miss handler, no action is taken.
  4. The cost of a processing group is defined by the value returned by invoking the getCost method of the processing group parameters object.
  5. When a processing group is initially released, its current CPU consumption is zero; as the members of the processing group execute, the current CPU consumption increases. The current CPU consumption is set to zero in response to certain actions as described below.
  6. If at any time, due to either execution of the members of the processing group or a change in the parameter group's cost, the current CPU consumption becomes greater than, or equal to, the current cost of the processing group, then a cost overrun is triggered. The implementation is required to document the granularity at which the current CPU consumption is updated.
  7. When a cost overrun is triggered, the cost overrun handler associated with the processing group, if any, is released, and the processing group enters the enforced state. For each member of the processing group:
    1. If the state of the SO is either executing or eligible-for-execution, then the SO is placed into the state blocked-by-group-cost-overrun.
    2. Otherwise, the SO must have been blocked for a reason other than blocked-by-group-cost-overrun. In this case, the state change to blocked-by-group-cost-overrun is left pending: if the blocking condition for the SO is removed, then its state changes to blocked-by-group-cost-overrun.
  8. When a release event occurs for a processing group, the action taken depends on the state of the processing group:
    1. If the processing group is not in the enforced state then the current CPU consumption for the group is set to zero;
    2. Otherwise the processing group is in the enforced state. It is removed from the enforced state, the current CPU consumption of the group is set to zero, and for each member of the processing group:
      1. If there is a pending state change to blocked-by-group-cost-overrun then: the pending state change is removed, and the SO remains in its current blocked state;
      2. If the SO is in the blocked-by-group-cost-overrun state, it is made eligible for execution;
      3. Otherwise no cost monitoring action is taken for that SO.
  9. Changes to the cost parameter take effect immediately:
    1. If the new cost is less than or equal to the current CPU consumption, and the old cost was greater than the current CPU consumption, then a cost overrun is triggered.
    2. If the new cost is greater than the current CPU consumption:
      1. If the processing group is enforced, then the processing group behaves as defined in semantic 8.
      2. Otherwise, no cost monitoring action occurs.
  10. Changes to other parameters take place as follows:
    1. Start: can only be changed before the processing group is started; i.e., before the start time or before the parameter object is associated with any SO. Changes take effect immediately.
    2. Period: at each release the next period is set based on the current value of the processing group's period.
    3. Deadline: at each release the next deadline is set based on the current value of the processing group's deadline.
    4. OverrunHandler: at each release the overrunHandler is set based on the current value of the processing group's overrunHandler.
    5. MissHandler: at each release the missHandler is set based on the current value of the processing group's missHandler.
  11. Changes to the membership of the processing group take effect immediately.
  12. The start time for the processing group may be relative or absolute.
    1. If the start time is absolute, the processing group behaves effectively as if the initial release time were the start time.
    2. If the start time is relative, the initial release time is computed relative to the time start or fire (as appropriate) is first called for a member of the processing group.

Note: Until a processing group starts, its budget cannot be replenished, but its members will be enforced if they exceed the initial budget. Also, once a processing group is started it behaves effectively as if it continued running continuously until the defining ProcessingGroupParameters object is freed.
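
As a non-normative sketch (the 10 ms period and 3 ms combined budget are arbitrary), a processing group is formed by binding one ProcessingGroupParameters instance to each member:

import javax.realtime.*;

public class ProcessingGroupExample {
    public static void main(String[] args) {
        // One parameter object defines the group: every SO bound to it is
        // a member, and their combined CPU consumption per period must not
        // exceed the group's cost.
        ProcessingGroupParameters group = new ProcessingGroupParameters(
            null,                     // start time: default
            new RelativeTime(10, 0),  // replenishment period: 10 ms
            new RelativeTime(3, 0),   // combined cost budget: 3 ms
            null,                     // deadline: default
            null, null);              // overrun and miss handlers: none
        RealtimeThread a = new RealtimeThread();
        RealtimeThread b = new RealtimeThread();
        a.setProcessingGroupParameters(group);
        b.setProcessingGroupParameters(group);
        a.start();
        b.start();
    }
}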

Rationale

As specified, the semantics and requirements of this section establish a scheduling policy that is very similar to the scheduling policies found on the vast majority of real-time operating systems and kernels in commercial use today. The semantics and requirements for the base scheduler accommodate existing practice, which is a stated goal of the effort.

There is an important division between priority schedulers that force periodic context switching between tasks at the same priority, and those that do not cause these context switches. By not specifying time slicing behavior this specification calls for the latter type of priority scheduler. In POSIX terms, SCHED_FIFO meets the RTSJ requirements for the base scheduler, but SCHED_RR does not meet those requirements.

Although a system may not implement the first release (start) of a schedulable object as unblocking that schedulable object, under the base scheduler those semantics apply; i.e., the schedulable object is added to the tail of the queue for its active priority.

Some research shows that, given a set of reasonable common assumptions, 32 distinct priority levels are a reasonable choice for close-to-optimal scheduling efficiency when using the rate-monotonic priority assignment algorithm (256 priority levels provide better efficiency). This specification requires at least 28 distinct priority levels as a compromise noting that implementations of this specification will exist on systems with logic executing outside of the Java Virtual Machine and may need priorities above, below, or both for system activities.

In order not to undermine any feasibility analysis, the default behavior for implementations that support cost monitoring is that a schedulable object receives no more than cost units of CPU time during each release. The programmer must explicitly change the cost attribute to override the scheduler.