This section describes the classes that control scheduling, including the AsyncEventHandler and RealtimeThread classes.
Schedulable objects have three execution states: executing, blocked, and eligible-for-execution.
Each type of schedulable object defines its own release events, for example, the release events for a periodic SO occur with the passage of time.
Release is the changing of the state of a schedulable object from blocked-for-release-event to eligible-for-execution. If the state of an SO is blocked-for-release-event when a release event occurs then the state of the SO is changed to eligible-for-execution. Otherwise, a state transition from blocked-for-release-event to eligible-for-execution is queued—this is known as a pending release. When the next transition of the SO into state blocked-for-release-event occurs, and there is a pending release, the state of the SO is immediately changed to eligible-for-execution. (Some actions implicitly clear any pending releases.)
Completion is the changing of the state of a schedulable object from executing to blocked-for-release-event. Each completion corresponds to a release. A real-time thread is deemed to complete its most recent release when it terminates.
Deadline refers to a time before which a schedulable object expects to complete. The ith deadline is associated with the ith release event and a deadline miss occurs if the ith completion would occur after the ith deadline.
Deadline monitoring is the process by which the implementation responds to deadline misses. If a deadline miss occurs for a schedulable object, the deadline miss handler, if any, for that SO is released. This behaves as if there were an asynchronous event associated with the SO, to which the miss handler was bound, and which was fired when the deadline miss occurred.
Periodic, sporadic, and aperiodic are adjectives applied to schedulable objects which describe the temporal relationship between consecutive release events. Let Ri denote the time at which an SO has had the ith release event occur. Ignoring the effect of release jitter: for a periodic SO, Ri+1 = Ri + T, where T is the SO's period; for a sporadic SO, Ri+1 >= Ri + MIT, where MIT is the SO's minimum interarrival time; an aperiodic SO is one that is neither periodic nor sporadic, with no constraint relating Ri+1 to Ri.
The cost of a schedulable object is an estimate of the maximum amount of CPU time that the SO requires between a release and its associated completion.
The current CPU consumption of a schedulable object is the amount of CPU time that the SO has consumed since its last release.
A cost overrun occurs when the schedulable object's current CPU consumption becomes greater than, or equal to, its cost.
Cost monitoring is the process by which the implementation tracks CPU consumption and responds to cost overruns. If a cost overrun occurs for a schedulable object, the cost overrun handler, if any, for that SO is released. This behaves as if there were an asynchronous event associated with the SO, to which the overrun handler was bound, and which was fired when the cost overrun occurred. (Cost monitoring is an optional facility in an implementation of the RTSJ.)
The base priority of a schedulable object is the priority given in its associated PriorityParameters object; the base priority of a Java thread is the priority returned by its getPriority method.
When it is not in the enforced state, the active priority of a schedulable object or a Java thread is the maximum of its base priority and any priority it has acquired due to the action of priority inversion avoidance algorithms (see the Synchronization chapter).
A processing group is a collection of schedulable objects whose combined execution has further time constraints which the scheduler uses to govern the group's execution eligibility.
A scheduler manages the execution of schedulable objects: it detects deadline misses, and performs admission control and cost monitoring. It also manages the execution of Java threads.
The base scheduler is an instance of the PriorityScheduler class as defined in this specification. This is the initial default scheduler.
The scheduler required by this specification is fixed-priority preemptive with at least 28 unique priority levels. It is represented by the class PriorityScheduler and is called the base scheduler.
The schedulable objects required by this specification are defined by the classes RealtimeThread, NoHeapRealtimeThread, AsyncEventHandler, and BoundAsyncEventHandler. The base scheduler assigns processor resources according to the schedulable objects' release characteristics, execution eligibility, and processing group values. Subclasses of the schedulable objects are also schedulable objects and behave as these required classes.
An instance of the SchedulingParameters class contains values of execution eligibility. A schedulable object is considered to have the execution eligibility represented by the SchedulingParameters object currently bound to it. For implementations providing only the base scheduler, the scheduling parameters object is an instance of PriorityParameters (a subclass of SchedulingParameters).
An instance of the ReleaseParameters class or its subclasses, PeriodicParameters, AperiodicParameters, and SporadicParameters, contains values that define a particular release characteristic. A schedulable object is considered to have the release characteristics of a single associated instance of the ReleaseParameters class. In all cases the base scheduler uses these values to perform its feasibility analysis over the set of schedulable objects and admission control for the schedulable object.
For a real-time thread, the scheduler defines the behavior of the real-time thread's waitForNextPeriod and waitForNextPeriodInterruptible methods, and monitors cost overrun and deadline miss conditions based on its release parameters. For asynchronous event handlers, the scheduler monitors cost overruns and deadline misses.
Release parameters also govern the treatment of the minimum interarrival time for sporadic schedulable objects.
An instance of the ProcessingGroupParameters class contains values that define a temporal scope for a processing group. If a schedulable object has an associated instance of the ProcessingGroupParameters class, it is said to execute within the temporal scope defined by that instance. A single instance of the ProcessingGroupParameters class can be (and typically is) associated with many SOs. If the implementation supports cost monitoring, the combined processor demand of all of the SOs associated with an instance of the ProcessingGroupParameters class must not exceed the values in that instance (i.e., the defined temporal scope). The processor demand is determined by the scheduler.
This section establishes the semantics and requirements that are applicable across the classes of this chapter, and also defines the required scheduling algorithm. Semantics that apply to particular classes, constructors, methods, and fields will be found in the class description and the constructor, method, and field detail sections.
The semantics for the base scheduler assume a uni-processor execution environment. While implementations of the RTSJ are not precluded from supporting multi-processor execution environments, no explicit consideration for such environments has been given in this specification.
The base scheduler supports the execution of all schedulable objects and Java threads, but it only controls the release of periodic real-time threads and aperiodic asynchronous event handlers.
The execution scheduling semantics described in this section are defined in terms of a conceptual model that contains a set of queues of schedulable objects that are eligible for execution. There is, conceptually, one queue for each priority. No implementation structures are necessarily implied by the use of this conceptual model. It is assumed that no time elapses during operations described using this model, and therefore no simultaneous operations are possible.
The base scheduler provides real-time priority values in addition to the 10 priorities defined for java.lang.Thread. The real-time priority values must be greater than 10, and they must include all integers from the base scheduler's getMinPriority() value to its getMaxPriority() value inclusive. The 10 priorities defined for java.lang.Thread must effectively have lower execution eligibility than the real-time priorities, but beyond this, their behavior is as defined by the specification of java.lang.Thread.
The base scheduler does not use the importance value in the ImportanceParameters subclass of PriorityParameters.
An application can use setSchedulingParameters() to give a schedulable object a different SchedulingParameters value.
Use of Thread.setPriority(), any of the methods defined for schedulable objects, or any of the methods defined for parameter objects must not affect the correctness of the priority inversion avoidance algorithms controlled by PriorityCeilingEmulation and PriorityInheritance - see the Synchronization chapter.
When a schedulable object's base priority is changed (through PriorityParameters' setPriority() method, RealtimeThread's setSchedulingParameters() method, or Thread's setPriority() method), this schedulable object is added to the tail of the queue for its new priority level. Queuing when priorities are adjusted by priority inversion avoidance algorithms is governed by semantics specified in the Synchronization chapter.
Priority restrictions managed by java.lang.ThreadGroup objects are not enforced.
PriorityScheduler.getNormPriority() shall be set to:
((PriorityScheduler.getMaxPriority() - PriorityScheduler.getMinPriority())/3) + PriorityScheduler.getMinPriority()
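As an illustration, the norm priority formula can be evaluated for a hypothetical priority range; the values 11 and 38 below (28 real-time priorities sitting above the 10 java.lang.Thread priorities) are assumptions for the example, not mandated values, and a real program would query PriorityScheduler for them.

```java
// Sketch of the norm-priority computation defined above. The concrete
// min/max values are illustrative assumptions; a real RTSJ program would
// query PriorityScheduler.instance().getMinPriority()/getMaxPriority().
class NormPriorityExample {
    static int normPriority(int minPriority, int maxPriority) {
        // Integer division, matching the specification's formula.
        return ((maxPriority - minPriority) / 3) + minPriority;
    }

    public static void main(String[] args) {
        // Assumed example: 28 real-time priorities in the range 11..38.
        System.out.println(normPriority(11, 38)); // prints 20
    }
}
```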
The scheduler uses the values contained in the different parameter objects associated with a schedulable object to control the behavior of the schedulable object. The scheduler determines what values are valid for the schedulable objects it manages, which defaults apply and how changes to parameter values are acted upon by the scheduler. Invalid parameter values result in exceptions, as documented in the relevant classes and methods.
Where a schedulable object is created without explicit parameter values, scheduler-defined defaults apply (for the base scheduler, the default release parameters are aperiodic - see AperiodicParameters). The following table lists the default parameter values for the base scheduler (PriorityScheduler):
Attribute | Default Value |
---|---|
Priority parameters | |
priority | norm priority |
Importance parameters | |
importance | No default. A value must be supplied. |
Values of RelativeTime attributes in parameter values must be greater than or equal to zero.
Deadline values in ReleaseParameters objects must be less than or equal to their period values (where applicable), but the deadline may be greater than the minimum interarrival time in a SporadicParameters object.
The release semantics for PeriodicParameters objects are described in "Periodic Release of Real-time Threads" below. (The base scheduler does not manage the release of periodic schedulable objects other than periodic real-time threads.) The release semantics for AperiodicParameters and SporadicParameters objects are described, respectively, in "Aperiodic Release Control" and "Sporadic Release Control", below. (The base scheduler does not manage the release of aperiodic schedulable objects other than aperiodic asynchronous event handlers.)
Cost monitoring is an optional facility in the implementation of the RTSJ, but when supported it must conform to the requirements and definitions as presented in this section.
For cost monitoring, the cost of an SO is the value returned by the getCost method of the SO's release parameters object.
A schedulable object with release parameters of type PeriodicParameters is expected to be released periodically. For asynchronous event handlers this would occur if the associated asynchronous event fired periodically. For real-time threads, periodic release behavior is achieved by executing in a loop and invoking the RealtimeThread.waitForNextPeriod method, or its interruptible equivalent RealtimeThread.waitForNextPeriodInterruptible, within that loop. For simplicity, unless otherwise stated, the semantics in this section apply to both forms of that method.
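The canonical periodic loop has the shape sketched below. To keep the example self-contained, RealtimeThread is replaced by a local stub whose waitForNextPeriod() pretends that exactly three releases occur; in a real RTSJ program the class would extend javax.realtime.RealtimeThread and the scheduler would block the thread between releases.

```java
// Self-contained sketch of the periodic loop pattern described above.
// RealtimeThreadStub is an assumption standing in for
// javax.realtime.RealtimeThread: it delivers three pretend releases.
class RealtimeThreadStub {
    private int remainingReleases = 3;
    boolean waitForNextPeriod() { return --remainingReleases > 0; }
}

class PeriodicWorker extends RealtimeThreadStub implements Runnable {
    int completedReleases = 0;

    public void run() {
        do {
            completedReleases++;        // the periodic work for this release
        } while (waitForNextPeriod());  // completes the release; blocks until the next
    }
}
```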
The periodic release semantics below apply to a real-time thread only after it has been started via its start method.
The periodic release of a real-time thread is controlled through the RealtimeThread methods waitForNextPeriod, waitForNextPeriodInterruptible, schedulePeriodic, and deschedulePeriodic.
The initial release of a periodic real-time thread occurs in response to the invocation of its start method, in accordance with the start time specified in its release parameters - see PeriodicParameters.
Changes to the start time in a PeriodicParameters object only have an effect on its initial release time. Consequently, if a PeriodicParameters object is bound to multiple real-time threads, a change in the start time may affect all, some, or none, of those threads, depending on whether or not start has been invoked on them.
The deadline associated with the ith release event, occurring at time ti, is ti + Di, where Di is the value of the deadline attribute of the thread's PeriodicParameters object at the time ti.
The periodic release semantics are expressed in terms of four conceptual per-thread state variables: boolean descheduled, integer pendingReleases, integer missCount, and boolean lastReturn.
These variables are initialized as follows: descheduled = false, pendingReleases = 0, missCount = 0, and lastReturn = true.
When the deschedulePeriodic method is invoked: set the value of descheduled to true.
When the schedulePeriodic method is invoked: set the value of descheduled to false; then if the thread is blocked-for-release-event, set the value of pendingReleases to zero, and tell the cost monitoring system to reset for this thread.
While the value of descheduled is true, the real-time thread is said to be descheduled.
A descheduled real-time thread that is blocked-for-release-event receives no further releases until it is rescheduled via schedulePeriodic; this means that no deadline misses can occur until the thread has been rescheduled. The descheduling of a real-time thread has no effect on its initial release.
When the next period is due: if the thread is blocked-for-release-event (due to a call to waitForNextPeriod), then if the thread is descheduled do nothing, else increment the value of pendingReleases, inform cost monitoring that the next release event has occurred, and notify the thread to make it eligible for execution; if the thread is not blocked-for-release-event, increment the value of pendingReleases, and inform cost monitoring that the next release event has occurred.
When a deadline miss occurs: if the thread has a deadline miss handler, set the value of descheduled to true, atomically release the handler with its fireCount increased by the value of missCount+1, and zero missCount; otherwise, increment the missCount value.
When the waitForNextPeriod method is invoked by the current real-time thread there are two possible behaviors depending on the value of missCount:
If missCount is greater than zero: decrement the missCount value; then if the lastReturn value is false, completion occurs: apply any pending parameter changes, decrement pendingReleases, inform cost monitoring that the real-time thread has completed, and return false; otherwise set the lastReturn value to false and return false.
If missCount is zero, completion occurs: apply any pending parameter changes, inform cost monitoring that the real-time thread has completed, then wait while descheduled is true, or pendingReleases is zero. Then set the lastReturn value to true, decrement pendingReleases, and return true.
The waitForNextPeriodInterruptible method behaves as described above with the following additions:
If an AsynchronouslyInterruptedException (AIE) is pending on the real-time thread when the method is invoked, then the invocation immediately completes abruptly by throwing that pending instance as an InterruptedException. If this occurs, the most recent release has not completed. If the pending instance is the generic AIE instance then the interrupt state of the real-time thread is cleared.
If an AIE becomes pending while the real-time thread is blocked-for-release-event, the invocation similarly completes abruptly by throwing the pending instance as an InterruptedException. If the pending instance is the generic AIE instance then the interrupt state of the real-time thread is cleared.
When a real-time thread that is not scheduled periodically is given PeriodicParameters and then calls waitForNextPeriod, the change from non-periodic to periodic scheduling effectively takes place between the call to waitForNextPeriod and the first periodic release. The first periodic release is determined by the start time specified in the real-time thread's periodic parameters. If that start time is an absolute time in the future, then that is the first periodic release time; if it is an absolute time in the past then the time at which waitForNextPeriod was called is the first periodic release time and the release occurs immediately. If the start time is a relative time, then it is relative to the time at which waitForNextPeriod was called; if that time is in the past then the release occurs immediately.
When the release parameters of a periodic real-time thread are replaced with parameters of a type other than PeriodicParameters, the change from periodic to non-periodic scheduling effectively takes place immediately, unless the thread is blocked-for-release-event, in which case the change takes place after the next release event. When this change occurs, the deadline for the real-time thread is that which was in effect for the most recent release.
The semantics of the previous section can be more clearly understood by viewing them in pseudo-code form for each of the methods and actions involved. In the following no mechanism for blocking and unblocking a thread is prescribed. The use of the wait and notify terminology in places is purely an aid to expressing the desired semantics in familiar terms.
// These values are part of thread state.
boolean descheduled = false;
int pendingReleases = 0;
boolean lastReturn = true;
int missCount = 0;
deschedulePeriodic(){
descheduled = true;
}
schedulePeriodic(){
descheduled = false;
if (blocked-for-release-event) {
pendingReleases = 0;
costMonitoringReset();
}
}
onNextPeriodDue(){
if (blocked-for-release-event) {
if (descheduled) {
; // do nothing
}
else {
pendingReleases++;
notifyCostMonitoringOfReleaseEvent();
notify it; // make eligible for execution
}
}
else {
pendingReleases++;
notifyCostMonitoringOfReleaseEvent();
}
}
onDeadlineMiss(){
if (there is a miss handler) {
descheduled = true;
release miss handler with fireCount increased by missCount+1
missCount = 0;
}
else {
missCount++;
}
}
waitForNextPeriod(){
assert(pendingReleases >= 0);
if (missCount > 0 ) {
// Missed a deadline without a miss handler
missCount--;
if (lastReturn == false) {
// Changes "on completion" take place here
performParameterChanges();
pendingReleases--;
notifyCostMonitoringOfCompletion();
}
lastReturn = false;
return false;
}
else {
// Changes "on completion" take place here
performParameterChanges();
notifyCostMonitoringOfCompletion();
wait while (descheduled || pendingReleases == 0); // blocked-for-release-event
pendingReleases--;
lastReturn = true;
return true;
}
}
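The pseudo-code above can be exercised with a small single-threaded model. Blocking is modeled by returning null from waitForNextPeriod when the thread would be blocked-for-release-event, and cost monitoring and parameter changes are omitted; this is a sketch of the conceptual model, not the javax.realtime API.

```java
// Single-threaded model of the state variables and transitions in the
// pseudo-code above. A null return from waitForNextPeriod stands for the
// thread becoming blocked-for-release-event.
class PeriodicStateModel {
    boolean descheduled = false;
    int pendingReleases = 0;
    boolean lastReturn = true;
    int missCount = 0;
    final boolean hasMissHandler;
    int missHandlerFireCount = 0;   // stands in for releasing a bound miss handler

    PeriodicStateModel(boolean hasMissHandler) { this.hasMissHandler = hasMissHandler; }

    void deschedulePeriodic() { descheduled = true; }

    void schedulePeriodic(boolean blockedForReleaseEvent) {
        descheduled = false;
        if (blockedForReleaseEvent) pendingReleases = 0; // plus a cost-monitoring reset
    }

    void onNextPeriodDue(boolean blockedForReleaseEvent) {
        if (blockedForReleaseEvent && descheduled) return; // release withheld
        pendingReleases++;
    }

    void onDeadlineMiss() {
        if (hasMissHandler) {
            descheduled = true;
            missHandlerFireCount += missCount + 1;
            missCount = 0;
        } else {
            missCount++;
        }
    }

    // Returns TRUE/FALSE as waitForNextPeriod would, or null when the call would block.
    Boolean waitForNextPeriod() {
        if (missCount > 0) {
            missCount--;
            if (!lastReturn) pendingReleases--;  // completion of the missed release
            lastReturn = false;
            return Boolean.FALSE;
        }
        if (descheduled || pendingReleases == 0) return null; // blocked-for-release-event
        pendingReleases--;
        lastReturn = true;
        return Boolean.TRUE;
    }
}
```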
Aperiodic schedulable objects are released in response to events occurring, such as the starting of a real-time thread, or the firing of an associated asynchronous event for an asynchronous event handler. The occurrence of these events, each of which is a potential release event, is termed an arrival, and the time that they occur is termed the arrival time.
The base scheduler behaves effectively as if it maintained a queue, called the arrival time queue, for each aperiodic schedulable object. This queue maintains information related to each release event from its "arrival" time until the associated release completes, or another release event occurs - whichever is later. If an arrival is accepted into the arrival time queue, then it is a release event and the time of the release event is the arrival time. The initial size of this queue is an attribute of the schedulable object's aperiodic parameters, and is set when the parameter object is associated with the SO. Over time the queue may become full and its behavior in this situation is determined by the queue overflow policy specified in the SO's aperiodic parameters. There are four overflow policies defined:
Policy | Action on Overflow |
---|---|
IGNORE | Silently ignore the arrival. The arrival is not accepted, no release event occurs, and, if the arrival was caused programmatically (such as by invoking fire on an asynchronous event), the caller is not informed that the arrival has been ignored. |
EXCEPT | Throw an ArrivalTimeQueueOverflowException. The arrival is not accepted, and no release event occurs, but if the arrival was caused programmatically, the caller will have ArrivalTimeQueueOverflowException thrown. |
REPLACE | The arrival is not accepted and no release event occurs. If the completion associated with the last release event in the queue has not yet occurred, and the deadline has not been missed, then the release event time for that release event is replaced with the arrival time of the new arrival. This will alter the deadline for that release event. If the completion associated with the last release event has occurred, or the deadline has already been missed, then the behavior of the REPLACE policy is equivalent to the IGNORE policy. |
SAVE | Behave effectively as if the queue were expanded as necessary to accommodate the new arrival. The arrival is accepted and a release event occurs. |
Under the SAVE policy the queue can grow and shrink over time.
Changes to the queue overflow policy take effect immediately. When an arrival occurs and the queue is full, the policy applied is the policy as defined at that time.
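The four policies can be modeled with a bounded queue of release event times. The sketch below simplifies REPLACE by always rewriting the last pending entry (it ignores completion and deadline state), and it is a model of the conceptual queue, not the javax.realtime API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Model of the four overflow policies for the conceptual arrival-time queue.
class ArrivalQueueModel {
    enum Policy { IGNORE, EXCEPT, REPLACE, SAVE }

    final Deque<long[]> queue = new ArrayDeque<>(); // each entry: {releaseEventTime}
    final int capacity;
    Policy policy;

    ArrivalQueueModel(int capacity, Policy policy) {
        this.capacity = capacity;
        this.policy = policy;
    }

    /** Returns true when the arrival is accepted, i.e. a release event occurs. */
    boolean arrive(long arrivalTime) {
        if (queue.size() < capacity || policy == Policy.SAVE) {
            queue.addLast(new long[]{arrivalTime}); // accepted: release event occurs
            return true;
        }
        switch (policy) {
            case IGNORE:
                return false; // silently dropped
            case EXCEPT:
                throw new IllegalStateException("ArrivalTimeQueueOverflowException");
            case REPLACE:
                queue.peekLast()[0] = arrivalTime; // alters that release's deadline
                return false;
            default:
                throw new AssertionError();
        }
    }
}
```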
Aperiodic real-time threads executing under the base scheduler have the following characteristics:
Their only release event occurs when start is invoked upon them.
Sporadic parameters include a minimum interarrival time, MIT, that characterizes the expected frequency of releases. When an arrival is accepted, the implementation behaves as if it calculates the earliest time at which the next arrival could be accepted, by adding the current MIT to the arrival time of the accepted arrival. The scheduler guarantees that each sporadic schedulable object it manages is released at most once in any MIT. It implements two mechanisms for enforcing this rule:
Policy | Action on Violation |
---|---|
IGNORE | Silently ignore the violating arrival. The arrival is not accepted, no release event occurs, and, if the arrival was caused programmatically (such as by invoking fire on an asynchronous event), the caller is not informed that the arrival has been ignored. |
EXCEPT | Throw a MITViolationException. The arrival is not accepted, and no release event occurs, but if the arrival was caused programmatically, the caller will have MITViolationException thrown. |
REPLACE | The arrival is not accepted and no release event occurs. If the completion associated with the last release event in the queue has not yet occurred, and the deadline has not been missed, then the release event time for that release event is replaced with the arrival time of the new arrival. This will alter the deadline for that release event. If the completion associated with the last release event has occurred, or the deadline has already been missed, then the behavior of the REPLACE policy is equivalent to the IGNORE policy. |
The effective release time of a release event i is the earliest time that the handler can be released in response to that release event. It is determined for each release event based on the MIT policy in force at the release event time:
The scheduler will delay the release associated with the release event at the head of the arrival time queue until the current time is greater than or equal to the effective release time of that release event.
Changes to minimum interarrival time and the MIT violation policy take effect immediately, but only affect the next expected arrival time, and effective release time, for release events that occur after the change.
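The effective release time rule can be sketched as a running computation: each release occurs no earlier than one MIT after the previous effective release. The model below assumes all arrivals are accepted (as under a SAVE-style queue) and is not the javax.realtime API.

```java
// Sketch of the effective-release-time rule for sporadic release control:
// a sporadic SO is released at most once per MIT, so a queued release is
// delayed until one MIT after the previous effective release.
class SporadicReleaseModel {
    final long mit;                                  // minimum interarrival time
    long lastEffectiveRelease = Long.MIN_VALUE / 2;  // "long ago", avoids overflow

    SporadicReleaseModel(long mit) { this.mit = mit; }

    /** Returns the effective release time for a release event at the given time. */
    long effectiveReleaseTime(long releaseEventTime) {
        lastEffectiveRelease = Math.max(releaseEventTime, lastEffectiveRelease + mit);
        return lastEffectiveRelease;
    }
}
```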
Asynchronous event handlers can be associated with one or more asynchronous events. When an asynchronous event is fired, all handlers associated with it are released, according to the semantics below:
If the handler has release parameters of type AperiodicParameters, then the arrival may become a release event for the handler, according to the semantics given in "Aperiodic Release Control" above. If the handler has release parameters of type SporadicParameters, then the arrival may become a release event for the handler, according to the semantics given in "Sporadic Release Control" above. If the handler has release parameters of any other type, then the arrival is a release event, and the arrival-time is the release event time.
For each release event, the handler's fireCount is incremented by one.
A handler is considered to be blocked-for-release-event when its fireCount is zero.
The handler has its handleAsyncEvent method invoked repeatedly while its fireCount is greater than zero:
Before each invocation, the fireCount is decremented and the front entry (if still present) removed from the arrival-time queue.
Each invocation of handleAsyncEvent, in this way, is a release.
The return from handleAsyncEvent is the completion of a release.
A deadline miss occurs if the return from handleAsyncEvent, i.e. the completion, does not occur prior to the associated deadline.
The application can manipulate the fireCount as follows:
The getAndDecrementPendingFireCount method decreases the fireCount by one (if it was greater than zero), and returns the old value. This removes the front entry from the arrival-time queue but otherwise has no effect on the scheduling of the current schedulable object, nor the handler itself.
The getAndClearPendingFireCount method is functionally equivalent to invoking getAndDecrementPendingFireCount until it returns zero, and returning the original fireCount value.
The getAndIncrementPendingFireCount method attempts to increase the fireCount by one, and returns the old value. It behaves effectively as if a private event, associated only with this handler, were fired, in accordance with semantic (1) above. This pseudo-firing is treated as a normal firing with respect to the other semantics in this section.
For a handler with sporadic parameters, the system may delay the invocation of handleAsyncEvent to ensure the effective release time honors any restrictions imposed by the MIT violation policy, if applicable, of that release event.
On return from handleAsyncEvent, if the fireCount is now zero, then the cost monitoring system is told to reset for this handler.
A processing group is defined by a processing group parameters object, and each SO that is bound to that parameter object is called a member of that processing group.
Processing groups are only functional in a system that implements processing group enforcement. Although the processing group itself does not consume CPU time, it acts as a proxy for its members.
The enforced priority of a schedulable object is a priority with no execution eligibility.
The deadline of a processing group is the value returned by the getDeadline method of the processing group parameters object. The cost (budget) of a processing group is the value returned by the getCost method of the processing group parameters object.
A processing group starts when start or fire (as appropriate) is first called for a member of the processing group.
Note: Until a processing group starts, its budget cannot be replenished, but its members will be enforced if they exceed the initial budget. Also, once a processing group is started it behaves effectively as if it continued running continuously until the defining ProcessingGroupParameters object is freed.
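Under the stated assumption that the implementation supports cost enforcement, the group budget mechanism can be sketched as a running consumption counter; the class below is a simplified model, not the javax.realtime API.

```java
// Sketch of processing-group cost enforcement: the combined CPU consumption
// of the members is charged against the group's budget, and when the budget
// is exhausted the members are moved to the (ineligible) enforced priority
// until the next replenishment.
class ProcessingGroupModel {
    final long budget;      // the group's cost, per period of the group
    long consumed = 0;
    boolean enforced = false;

    ProcessingGroupModel(long budget) { this.budget = budget; }

    /** Charge CPU time used by any member against the group budget. */
    void consume(long cpuTime) {
        consumed += cpuTime;
        if (consumed >= budget) enforced = true; // members lose execution eligibility
    }

    /** Called at the start of each period of the group. */
    void replenish() {
        consumed = 0;
        enforced = false;
    }
}
```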
As specified, the required semantics and requirements of this section establish a scheduling policy that is very similar to the scheduling policies found on the vast majority of real-time operating systems and kernels in commercial use today. The semantics and requirements for the base scheduler accommodate existing practice, which is a stated goal of the effort.
There is an important division between priority schedulers that force periodic context switching between tasks at the same priority, and those that do not cause these context switches. By not specifying time slicing behavior this specification calls for the latter type of priority scheduler. In POSIX terms, SCHED_FIFO meets the RTSJ requirements for the base scheduler, but SCHED_RR does not meet those requirements.
Although a system may not implement the first release (start) of a schedulable object as unblocking that schedulable object, under the base scheduler those semantics apply; i.e., the schedulable object is added to the tail of the queue for its active priority.
Some research shows that, given a set of reasonable common assumptions, 32 distinct priority levels are a reasonable choice for close-to-optimal scheduling efficiency when using the rate-monotonic priority assignment algorithm (256 priority levels provide better efficiency). This specification requires at least 28 distinct priority levels as a compromise noting that implementations of this specification will exist on systems with logic executing outside of the Java Virtual Machine and may need priorities above, below, or both for system activities.
In order not to undermine any feasibility analysis, the default behavior for implementations that support cost monitoring is that a schedulable object receives no more than cost units of CPU time during each release. The programmer must explicitly change the cost attribute to override the scheduler.
Cost enforcement may be deferred while the overrunning schedulable object holds locks that are out of application control, such as locks used to protect garbage collection. Applications should include the resulting jitter in any analysis that depends on cost enforcement.
When a schedulable object is enforced because of cost overrun in a processing group the enforced priority is used for scheduling instead of the schedulable object's base priority. The enforced priority's application is limited. The enforced priority is not returned as the schedulable object's priority from methods such as getPriority(), and the semantics of the active priority continue to operate when a schedulable object is enforced.