This section defines classes directly related to memory and memory management. These classes:
- Allow the definition of regions of memory outside of the traditional Java heap.
- Allow the definition of regions of scoped memory, that is, memory regions with a limited lifetime.
- Allow the definition of regions of memory containing objects whose lifetime matches that of the application.
- Allow the definition of regions of memory mapped to specific physical addresses.
- Allow the specification of maximum memory area consumption and maximum allocation rates for individual schedulable objects.
- Allow the programmer to query information characterizing the behavior of the garbage collection algorithm, and to some limited ability, alter the behavior of that algorithm.
Schedulable objects that use the enter method of MemoryArea behave effectively as if they kept the memory areas they enter in a scope stack, which enter pushes and pops.
This chapter defines memory area classes. Two memory areas may be associated with each MemoryArea instance: the memory area containing the instance, and the backing memory that contains the memory managed by the instance.
Some memory area classes implement portals. A portal is a tool that associates a reference value with a memory area. It is normally used to give code that holds a reference to the memory area a way to go from that to a reference to an object stored in that memory area.
Semantics and Requirements
The following list establishes the semantics and requirements that are applicable across the classes of this section. Semantics that apply to particular classes, constructors, methods, and fields will be found in the class description and the constructor, method, and field detail sections.
- Some MemoryArea classes are required to have linear (in object size) allocation time. The linear time attribute requires that, ignoring performance variations due to hardware caches or similar optimizations and ignoring execution time of any static initializers, the execution time of new must be bounded by a polynomial, f(n), where n is the size of the object and for all n>0, f(n) <= Cn for some constant C.
- Execution time of object constructors, and time spent in class loading and static initialization are not governed by bounds on object allocation in this specification, but setting default initial values for fields in the instance (as specified in The Java Virtual Machine Specification, Second Edition, section 2.5.1, "Each class variable, instance variable, and array component is initialized with a default value when it is created.") is considered part of object allocation and included in the time bound.
The allocation context
- A memory area is represented by an instance of a subclass of the MemoryArea class. When a memory area, m, is entered by calling m.enter (or another method from the family of enter-like methods in ScopedMemory), m becomes the allocation context of the current schedulable object. When control returns from the enter method, the allocation context is restored to the value it had immediately before enter was called.
- When a memory area, m, is entered by calling m's executeInArea method, m becomes the current allocation context of the current schedulable object. When control returns from the executeInArea method, the allocation context is restored to the value it had before executeInArea was called.
- The initial allocation context for a schedulable object when it is first released, is the memory area that was designated the initial memory area when the schedulable object was constructed. This initial allocation context becomes the current allocation context for that schedulable object when the schedulable object first becomes eligible for execution. For async event handlers, the initial allocation context is the same on each release; for real-time threads, in releases subsequent to the first, the allocation context is the same as it was when the real-time thread became blocked-for-release-event.
- All object allocation through the new keyword will use the current allocation context, but note that allocation can be performed in a specific memory area using the newInstance or newArray methods of MemoryArea.
- Schedulable objects behave as if they stored their memory area context in a structure called the scope stack. This structure is manipulated by creation of schedulable objects, and by the following methods from the MemoryArea and ScopedMemory classes: all the enter methods, executeInArea, and both newInstance methods. See the semantics in Maintaining the Scope Stack for details.
- The scope stack is accessible through a set of static methods on RealtimeThread. These methods allow outer allocation contexts to be accessed by their index number. Memory areas on a scope stack may be referred to as inner or outer relative to other entries in that scope stack. An "outer scope" is further from the current allocation context on the current scope stack and has a lower index.
- The executeInArea, newInstance, and newArray methods, when invoked on an instance of ScopedMemory, require that instance to be an outer allocation context on the current schedulable object's current scope stack.
- An instance of ScopedMemory is said to be in use if it has a non-zero reference count as defined by semantic (17) below.
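The scope-stack indexing described above can be sketched in plain Java. This is a hypothetical model, not the javax.realtime API: area names stand in for MemoryArea instances, and `getOuterMemoryArea` mirrors the indexing convention in which outer scopes have lower indexes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a schedulable object's scope stack, indexed so that
// index 0 is the outermost entry (outer scopes have lower index numbers).
public class ScopeStackIndexing {
    private final List<String> stack = new ArrayList<>(); // index 0 = outermost

    public void push(String area) {
        stack.add(area);
    }

    // Models the static accessor that returns the memory area at a given
    // index on the current scope stack, or null if the index is out of range.
    public String getOuterMemoryArea(int index) {
        return (index >= 0 && index < stack.size()) ? stack.get(index) : null;
    }

    // executeInArea/newInstance/newArray on a ScopedMemory instance require
    // that instance to be the current or an outer allocation context.
    public boolean isOuterOrCurrent(String area) {
        return stack.contains(area);
    }
}
```

A usage sketch: after pushing heap, scopeA, and scopeB, index 0 is "heap", and scopeA qualifies as an outer allocation context while an unrelated scope does not.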
The Parent Scope
- Instances of ScopedMemory have special semantics, including the definition of a parent. If a ScopedMemory object is not in use, it has no parent scope. When a ScopedMemory object becomes in use, its parent is the nearest ScopedMemory object outside it on the current scope stack. If there is no outside ScopedMemory object in the current scope stack, the parent is the primordial scope, which is not actually a memory area but only a marker that constrains the parentage of scoped memory areas.
- Instances of ScopedMemory that become in use must satisfy the single parent rule, which requires that each scoped memory area have a unique parent as defined in semantic (11).
Memory areas and schedulable objects
- Pushing a scoped memory onto a scope stack is always subject to the single parent rule.
- Each schedulable object has an initial memory area which is that object's initial allocation context. The default initial memory area is the current allocation context in effect during execution of the schedulable object's constructor, but schedulable objects may supply constructors that override the default.
- A Java thread cannot have a scope stack; consequently it can only be created and execute within heap or immortal memory. An attempt to create a Java thread in a scoped memory area throws an exception.
- A Java thread may use the executeInArea, newInstance, and newArray methods from the ImmortalMemory and HeapMemory classes. These methods allow it to execute with an immortal current allocation context, but semantic (15) applies even during execution of these methods.
Scoped memory reference counting
- Each instance of the class ScopedMemory or its subclasses must maintain a reference count which is greater than zero if and only if either:
  - the scoped memory area is the current allocation context or an outer allocation context for one or more execution contexts; or else
  - the scoped memory area is the initial memory area for a schedulable object. In this context, a schedulable object ceases to be a source of a non-zero reference count on its initial memory area when either:
    - the schedulable object is a RealtimeThread and it terminates; or else
    - the schedulable object is de-allocated from its memory area.
- For purposes of this semantic the following are treated as execution contexts:
  - RealtimeThread objects that have been started and have not terminated,
  - AsyncEventHandler objects that are currently in a released state,
  - AsyncEvent objects that are bound to happenings,
  - Timer objects that have been started and have not been destroyed,
  - other schedulable objects that control an execution engine.
- When the reference count for an instance of the class ScopedMemory is ready to be decremented from one to zero, all unfinalized objects within that area are considered ready for finalization. If after the finalizers for all unfinalized objects in the scoped memory area run to completion, the reference count for the memory area is still ready to be decremented to zero, then it is decremented to zero and the memory scope is emptied of all objects. The RTSJ implementation must complete finalization of objects in the scope and, if the reference count is zero after finalizers run, deletion of the objects in the scope before that memory scope can again become the current allocation context for any schedulable object. (This is a special case of the finalization implementation specified in The Java Language Specification, second edition, section 12.6.1)
- Finalization of objects in scoped memory shall take place in a schedulable entity that can reference each object with assignment and reference rules no more restrictive than those in place when the object was created. Finalization may start when all the unfinalized objects in the scope are ready for finalization. The current allocation context for the finalizers is equal to the finalizing scope with a scope stack that is valid under the single parent rule. The scope stack need not be minimal; it may contain heap and immortal memory areas. The finalizing schedulable entity may be heap or no-heap, and it may run at any real-time priority, but it is subject to boosting to avoid priority inversion, for instance when a higher-priority thread is waiting to enter the memory area.
- From the time objects in a scope are deleted until the portal on the scope is successfully set to a non-null value with setPortal, the value returned by getPortal on that scoped memory object must be null.
- Objects created in any immortal memory area are unexceptionally referenceable from all Java threads, and all schedulable objects, and the allocation and use of objects in immortal memory is never subject to garbage collection delays.
- An implementation may execute finalizers for immortal objects when it determines that the application has terminated. Finalizers will be executed by a thread or schedulable object whose current allocation context is not scoped memory. Regardless of any call to runFinalizersOnExit, the system need not execute finalizers for immortal objects that remain unfinalized when the JVM begins termination.
- Class objects, the associated static memory, and interned Strings behave effectively as if they were allocated in immortal memory with respect to reference rules, assignment rules, and preemption delays by no-heap schedulable objects. Static initializers are executed effectively as if the current thread performed ImmortalMemory.instance().executeInArea(r), where r is a Runnable that executes the <clinit> method of the class being initialized.
Maintaining referential integrity
- Assignment rules placed on reference assignments prevent the creation of dangling references, and thus maintain the referential integrity of the Java runtime. The restrictions are listed in the following table:

| Stored in | Reference to Heap | Reference to Immortal | Reference to Scoped |
| --- | --- | --- | --- |
| Heap | Permit | Permit | Forbid |
| Immortal | Permit | Permit | Forbid |
| Scoped | Permit | Permit | Permit, if the reference is from the same scope, or an outer scope |
| Local variable | Permit | Permit | Permit, if the reference is from the same scope, or an outer scope |

For this table, ImmortalMemory and ImmortalPhysicalMemory are equivalent, and all subclasses of ScopedMemory are equivalent.
- An implementation must ensure that the above checks are performed on every assignment statement before the statement is executed. (This includes the possibility of static analysis of the application logic). Checks for operations on local variables are not required because a potentially invalid reference would be captured by the other checks before it reached a local variable.
- Static initializers run with the immortal memory area as their allocation context.
- The current allocation context in a constructor for an object is the memory area in which the object is allocated. For new, this is the current allocation context when new was called. For members of the m.newInstance family, the current allocation context is memory area m.
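The assignment-rule table above can be encoded as a small check. This is a hypothetical sketch, not an implementation mechanism: areas are modeled by an outer-to-inner nesting depth, with heap and immortal collapsed into a single "unscoped" value since the table treats them identically.

```java
// Hypothetical encoding of the assignment-rule table. A scoped area's
// "depth" is its position on the scope stack (larger = more deeply nested);
// heap and immortal memory are both modeled as UNSCOPED.
public class AssignmentRules {
    public static final int UNSCOPED = -1; // heap and immortal alike

    // May a reference field in an object allocated in `from` hold a
    // reference to an object allocated in `to`?
    public static boolean canAssign(int from, int to) {
        if (to == UNSCOPED) {
            return true;   // references to heap/immortal: always permitted
        }
        // Scoped target: permitted only from the same scope or an inner
        // scope (i.e. the target must be the same or an outer scope).
        return from != UNSCOPED && to <= from;
    }
}
```

For example, an object two scopes deep may reference its own scope or an outer one, but heap objects may never reference scoped objects.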
Maintaining the Scope Stack
This section describes maintenance of a data structure that is called the scope stack. Implementations are not required to use a stack or implement the algorithms given here. It is only required that an implementation behave with respect to the ordering and accessibility of memory scopes effectively as if it implemented these algorithms.
The scope stack is implicitly visible through the assignment rules, and the stack is explicitly visible through the static getOuterMemoryArea(int index) method on RealtimeThread.
Four operations affect the scope stack: the enter methods in MemoryArea and ScopedMemory, construction of a new schedulable object, the executeInArea method in MemoryArea, and the newInstance methods in MemoryArea.
- The memory area at the top of a schedulable object's scope stack is the schedulable object's current allocation context.
- When a schedulable object, t, creates a schedulable object, nt, in a ScopedMemory object's allocation area, nt acquires a copy of the scope stack associated with t at the time nt is constructed, including all entries from the bottom of the scope stack up to and including the memory area containing nt. If nt is created in heap, immortal, or immortal physical memory, nt is created with a scope stack containing only heap, immortal, or immortal physical memory respectively.
- When a memory area, ma, is entered by calling one of the enter methods, ma is pushed on the scope stack and becomes the allocation context of the current schedulable object. When control returns from the enter method, the allocation context is popped from the scope stack.
- When a memory area, ma, is entered by calling its executeInArea method or one of the ma.newInstance methods, the scope stack in effect before the method call is preserved and replaced with a scope stack constructed as follows: if ma is a scoped memory area, the new scope stack is a copy of the schedulable object's previous scope stack up to and including ma; if ma is not a scoped memory area, the new scope stack includes only ma. When control returns from the executeInArea method, the scope stack is restored to the value it had before executeInArea or ma.newInstance was called.
- For the purposes of these algorithms, stacks grow up.
- The representative algorithms ignore important issues like freeing objects in scopes.
- In every case, objects in a scoped memory area are eligible to be freed when the reference count for the area is zero after finalizers for that scope are run.
- Informally, any objects in a scoped memory area must be freed and their finalizers run before the reference count for the memory area is incremented from zero to one.
enter
For ma.enter(logic):
    push ma on the scope stack belonging to the current
        schedulable object -- which may throw ScopedCycleException
    execute the logic.run method
    pop ma from the scope stack
executeInArea or newInstance
For ma.executeInArea(logic), ma.newInstance(), or ma.newArray():
    if ma is an instance of heap, immortal, or
        ImmortalPhysicalMemory
        start a new scope stack containing only ma
        make the new scope stack the scope stack for
            the current schedulable object
    else -- ma is scoped
        if ma is in the scope stack for the
            current schedulable object
            start a new scope stack containing ma and all
                scopes below ma on the scope stack
            make the new scope stack the scope stack for
                the current schedulable object
    execute logic.run or construct the object
    restore the previous scope stack for the
        current schedulable object
    discard the new scope stack
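The executeInArea/newInstance pseudocode above can be transcribed into plain Java. This is a hypothetical sketch: memory areas are modeled as strings, and the error case of a scoped ma that is not on the current scope stack (which the pseudocode omits) is modeled here with an IllegalStateException.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Transcription of the executeInArea pseudocode over a string-named
// scope stack; the head of the deque is the top (current allocation context).
public class ExecuteInAreaSketch {
    Deque<String> scopeStack = new ArrayDeque<>();

    static boolean isScoped(String ma) {
        return !ma.equals("heap") && !ma.equals("immortal")
            && !ma.equals("immortalPhysical");
    }

    void executeInArea(String ma, Runnable logic) {
        Deque<String> previous = scopeStack;        // preserved for restore
        Deque<String> fresh = new ArrayDeque<>();
        if (!isScoped(ma)) {
            fresh.push(ma);                         // new stack contains only ma
        } else {
            if (!previous.contains(ma)) {           // assumed error handling
                throw new IllegalStateException(
                    "ma must be an outer allocation context");
            }
            // Copy the previous stack from the bottom up to and including ma.
            String[] topToBottom = previous.toArray(new String[0]);
            for (int i = topToBottom.length - 1; i >= 0; i--) {
                fresh.push(topToBottom[i]);
                if (topToBottom[i].equals(ma)) break; // ma ends up on top
            }
        }
        scopeStack = fresh;
        try {
            logic.run();                            // run with the new stack
        } finally {
            scopeStack = previous;                  // restore on return
        }
    }
}
```

Running logic in an outer scope thus hides the inner scopes for the duration of the call, after which the previous stack is restored intact.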
Construct a Schedulable Object
For construction of a schedulable object in memory area cma with initial memory area ima:
    if cma is heap, immortal or ImmortalPhysicalMemory
        create a new scope stack containing cma
    else
        start a new scope stack containing the
            entire current scope stack
    if ima != cma
        push ima on the new scope stack --
            which may throw ScopedCycleException
The above pseudocode illustrates a straightforward implementation of this specification's semantics, but any implementation that behaves effectively like this one with respect to reference count values of zero and one is permissible. An implementation may be eager or lazy in maintenance of its reference count provided that it correctly implements the semantics for reference counts of zero and one.
The Single Parent Rule
Every push of a scoped memory area onto a scope stack requires reference to the single parent rule, which enforces the invariant that every scoped memory area has no more than one parent.
The parent of a scoped memory area is identified by the following rules (for a stack that grows up):
Except for the primordial scope, which represents heap, immortal and immortal physical memory, only scoped memory areas are visible to the single parent rule.
- If the memory area is not currently on any scope stack, it has no parent.
- If the memory area is the outermost (lowest) scoped memory area on any scope stack, its parent is the primordial scope.
- For all other scoped memory areas, the parent is the first scoped memory area outside it on the scope stack.
The operational effect of the single parent rule is that when a scoped memory area has a parent, the only legal change to that value is to "no parent." The ordering imposed by the first assignment of parents to a series of nested scoped memory areas is therefore the only nesting order allowed until control leaves those scopes; then a new nesting order is possible. A schedulable object attempting to enter a scope can only do so in the established nesting order.
Scope Tree Maintenance
The single parent rule is enforced effectively as if there were a tree with the primordial scope (representing heap, immortal, and immortal physical memory) at its root, and other nodes corresponding to every scoped memory area that is currently on any schedulable object's scope stack.
Each scoped memory has a reference to its parent memory area, ma.parent. The parent reference may indicate a specific scoped memory area, no parent, or the primordial parent.
On Scope Stack Push of ma
The following procedure could be used to maintain the scope tree and ensure that push operations on a schedulable object's scope stack do not violate the single parent rule.
precondition: ma.parent is set to the correct parent
    (either a scoped memory area or the primordial scope)
    or to noParent
findFirstScope is a convenience function that looks down the
    scope stack for the next entry that is a reference to an
    instance of ScopedMemory
t.scopeStack is the scope stack of
    the current schedulable object

if ma is scoped
    parent = findFirstScope(t.scopeStack)
    if ma.parent == noParent
        ma.parent = parent
    else if ma.parent != parent
        throw ScopedCycleException
push ma on t.scopeStack

findFirstScope:
    for s = top of scope stack to
            bottom of scope stack
        if s is an instance of ScopedMemory
            return s
    return primordial scope
On Scope Stack Pop of ma
ma = t.scopeStack.pop()
if ma is scoped
ma.parent = noParent
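The scope-tree maintenance above can be transcribed into plain Java. This is a hypothetical sketch: areas are named by strings, parent links live in a map, and ScopedCycleException is modeled as an unchecked exception.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Transcription of the single-parent-rule push/pop pseudocode for one
// schedulable object's scope stack (head of the deque is the top).
public class ScopeTreeSketch {
    static final String NO_PARENT = "<noParent>";
    static final String PRIMORDIAL = "<primordial>";

    static class ScopedCycleException extends RuntimeException {}

    final Deque<String> scopeStack = new ArrayDeque<>();
    final Map<String, String> parentOf = new HashMap<>();
    final Set<String> scopedAreas = new HashSet<>(); // which names are scoped

    // Looks down the stack for the first scoped entry; the primordial
    // scope stands in for heap/immortal/immortal physical memory.
    String findFirstScope() {
        for (String s : scopeStack) {          // iterates top to bottom
            if (scopedAreas.contains(s)) return s;
        }
        return PRIMORDIAL;
    }

    void push(String ma) {
        if (scopedAreas.contains(ma)) {
            String parent = findFirstScope();
            String current = parentOf.getOrDefault(ma, NO_PARENT);
            if (current.equals(NO_PARENT)) {
                parentOf.put(ma, parent);      // first use: parent assigned
            } else if (!current.equals(parent)) {
                throw new ScopedCycleException(); // single parent rule violated
            }
        }
        scopeStack.push(ma);
    }

    void pop() {
        String ma = scopeStack.pop();
        if (scopedAreas.contains(ma) && !scopeStack.contains(ma)) {
            parentOf.put(ma, NO_PARENT);       // no longer in use on this stack
        }
    }
}
```

Pushing scopes in a different order than the established parentage (for example entering scope a with scope b as the nearest scope after a's parent was already the primordial scope) raises the modeled ScopedCycleException, matching the nesting-order discussion above.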
Rationale
Languages that employ automatic reclamation of blocks of memory allocated in what is traditionally called the heap by program logic also typically use an algorithm called a garbage collector. Garbage collection algorithms and implementations vary in the amount of non-determinacy they add to the execution of program logic. Rather than require a garbage collector, and require it to meet real-time constraints that would necessarily be a compromise, this specification constructs alternative systems for "safe" management of memory. The scoped and immortal memory areas allow program logic to allocate objects in a Java-like style, ignore the reclamation of those objects, and not incur the latency of the implemented garbage collection algorithm.
The term scope stack might mislead a reader to infer that it contains only scoped memory areas. This is incorrect. Although the scope stack may contain scoped memory references, it may also contain heap and immortal memory areas. Also, although the scope stack's behavior is specified as a stack, an implementation is free to use any data structure that preserves the stack semantics.
This specification does not specifically address the lifetime of objects allocated in immortal memory areas. If they were reclaimed while they were still referenced, the referential integrity of the JVM would be compromised, which is not permissible. Recovering immortal objects only at the termination of the application, or never recovering them under any circumstances, is consistent with this specification.