



Memory Management

This section defines classes directly related to memory and memory management.


Schedulable objects that use the enter method of MemoryArea behave effectively as if they kept the memory areas they enter on a scope stack, which enter pushes and pops.

This chapter defines memory area classes. Two memory areas may be associated with each MemoryArea instance: the memory area containing the instance itself, and the backing memory that holds the memory managed by the MemoryArea instance.

Some memory area classes implement portals. A portal associates a reference value with a memory area; it is normally used to give code that holds a reference to a memory area a way to reach an object stored in that memory area.
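The portal concept can be sketched in ordinary Java. This is an illustrative model, not the javax.realtime API (the real facility is setPortal/getPortal on ScopedMemory); the class and method names here are assumptions. It also models the rule that once a scope's objects are deleted, getPortal returns null until setPortal succeeds again.

```java
// Illustrative sketch of a portal: one published reference per memory area.
// Not javax.realtime; names are assumptions for this example.
final class PortalSketch {
    private Object portal; // the single reference the area publishes

    // Publish an object so any holder of this area can reach it.
    void setPortal(Object o) { portal = o; }

    // Retrieve the published reference, or null if none is set.
    Object getPortal() { return portal; }

    // When the area's objects are deleted, the portal reverts to null.
    void clearOnScopeExit() { portal = null; }
}
```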

For purposes of scoped memory reference counting, the following are treated as execution contexts:

The initial memory area for a schedulable object is non-default if it is not the memory area where the schedulable object was created.

An AsyncEventHandler is fireable whenever there is an agent that can release it. This includes cases when the AsyncEventHandler is:

Semantics and Requirements

The following list establishes the semantics and requirements that are applicable across the classes of this section. Semantics that apply to particular classes, constructors, methods, and fields will be found in the class description and the constructor, method, and field detail sections.

Allocation time

  1. Some MemoryArea classes are required to have linear (in object size) allocation time. The linear time attribute requires that, ignoring performance variations due to hardware caches or similar optimizations and ignoring execution time of any static initializers, the execution time of new must be bounded by a polynomial, f(n), where n is the size of the object and for all n>0, f(n) <= Cn for some constant C.
  2. Execution time of object constructors, and time spent in class loading and static initialization are not governed by bounds on object allocation in this specification, but setting default initial values for fields in the instance (as specified in The Java Virtual Machine Specification, Second Edition, section 2.5.1, "Each class variable, instance variable, and array component is initialized with a default value when it is created.") is considered part of object allocation and included in the time bound.

    The allocation context

  3. A memory area is represented by an instance of a subclass of the MemoryArea class. When a memory area, m, is entered by calling m.enter (or another method from the family of enter-like methods in MemoryArea or ScopedMemory) m becomes the allocation context of the current schedulable object. When control returns from the enter method, the allocation context is restored to the value it had immediately before enter was called.
  4. When a memory area, m, is entered by calling m's executeInArea method, m becomes the current allocation context of the current schedulable object. When control returns from the executeInArea method, the allocation context is restored to the value it had before executeInArea was called.
  5. The initial allocation context for a schedulable object when it is first released, is the memory area that was designated the initial memory area when the schedulable object was constructed. This initial allocation context becomes the current allocation context for that schedulable object when the schedulable object first becomes eligible for execution. For async event handlers, the initial allocation context is the same on each release; for real-time threads, in releases subsequent to the first, the allocation context is the same as it was when the real-time thread became blocked-for-release-event.
  6. All object allocation through the new keyword will use the current allocation context, but note that allocation can be performed in a specific memory area using the newInstance and newArray methods.
  7. Schedulable objects behave as if they stored their memory area context in a structure called the scope stack. This structure is manipulated by creation of schedulable objects, and the following methods from the MemoryArea and ScopedMemory classes: all the enter and joinAndEnter methods, executeInArea, and both newInstance methods. See the semantics in Maintaining the Scope Stack for details.
  8. The scope stack is accessible through a set of static methods on RealtimeThread. These methods allow outer allocation contexts to be accessed by their index number. Memory areas on a scope stack may be referred to as inner or outer relative to other entries in that scope stack. An "outer scope" is further from the current allocation context on the current scope stack and has a lower index.
  9. The executeInArea, newInstance and newArray methods, when invoked on an instance of  ScopedMemory require that instance to be an outer allocation context on the current schedulable object's current scope stack.
  10. An instance of ScopedMemory is said to be in use if it has a non-zero reference count as defined by semantic (17) below.

    The parent scope

  11. Instances of ScopedMemory have special semantics, including the definition of a parent. If a ScopedMemory object is neither in use nor the initial memory area for a schedulable object, it has no parent scope.
  12. Instances of ScopedMemory must satisfy the single parent rule, which requires that each scoped memory area have a unique parent as defined in semantic (11).

    Memory areas and schedulable objects

  13. Pushing a scoped memory onto a scope stack is always subject to the single parent rule.
  14. Each schedulable object has an initial memory area, which is that object's initial allocation context. The default initial memory area is the current allocation context in effect during execution of the schedulable object's constructor, but schedulable objects may supply constructors that override the default.
  15. A Java thread cannot have a scope stack; consequently it can only be created and execute within heap or immortal memory. An attempt to create a Java thread in a scoped memory area throws IllegalAssignmentError.
  16. A Java thread may use executeInArea, and the newInstance and newArray methods from the ImmortalMemory and HeapMemory classes. These methods allow it to execute with an immortal current allocation context, but semantic (15) applies even during execution of these methods.

    Scoped memory reference counting

  17. Each instance of the class ScopedMemory or its subclasses must maintain a reference count which is greater than zero if and only if either:
  18. When the reference count for an instance of the class ScopedMemory is ready to be decremented from one to zero, all unfinalized objects within that area are considered ready for finalization. If, after the finalizers for all such unfinalized objects in the scoped memory area run to completion, the reference count for the memory area is still ready to be decremented to zero, any newly created unfinalized objects are considered ready for finalization, and the process is repeated until either no new objects are created or the scoped memory's reference count is no longer ready to be decremented from one to zero. When the scope contains no unfinalized objects and its reference count is ready to be decremented from one to zero, the reference count is decremented to zero and the memory scope is emptied of all objects. The RTSJ implementation must complete finalization of objects in the scope and, if the reference count is zero after finalizers run, deletion of the objects in the scope before that memory scope can again become the current allocation context for any schedulable object. (This is a special case of the finalization implementation specified in The Java Language Specification, Second Edition, section 12.6.1.)
  19. Finalization may start when all unfinalized objects in the scope are ready for finalization. Finalizers are executed with the current allocation context set to the finalizing scope and are executed by the schedulable object in control of the scope when its reference count is ready to be decremented from one to zero. If finalizers are executed because a real-time thread terminates or an AsyncEventHandler becomes non-fireable, that real-time thread or AsyncEventHandler is considered in control of the scope and must execute the finalizers.
  20. From the time objects in a scope are deleted until the portal on the scope is successfully set to a non-null value with setPortal, the value returned by getPortal on that scoped memory object must be null.

    Immortal memory

  21. Objects created in any immortal memory area are unexceptionally referenceable from all Java threads and all schedulable objects, and the allocation and use of objects in immortal memory is never subject to garbage collection delays.
  22. An implementation may execute finalizers for immortal objects when it determines that the application has terminated. Finalizers will be executed by a thread or schedulable object whose current allocation context is not scoped memory. Regardless of any call to runFinalizersOnExit, the system need not execute finalizers for immortal objects that remain unfinalized when the JVM begins termination.
  23. Class objects, the associated static memory, and interned Strings behave effectively as if they were allocated in immortal memory with respect to reference rules, assignment rules, and preemption delays by no-heap schedulable objects. Static initializers are executed effectively as if the current thread performed ImmortalMemory.instance().executeInArea(r), where r is a Runnable that executes the <clinit> method of the class being initialized.

    Maintaining referential integrity

  24. Assignment rules placed on reference assignments prevent the creation of dangling references, and thus maintain the referential integrity of the Java runtime. The restrictions are listed in the following table:

    Stored In        Reference to Heap   Reference to Immortal   Reference to Scoped
    Heap             Permit              Permit                  Forbid
    Immortal         Permit              Permit                  Forbid
    Scoped           Permit              Permit                  Permit, if the reference is from the same scope or an outer scope
    Local Variable   Permit              Permit                  Permit

    For this table, ImmortalMemory and ImmortalPhysicalMemory are equivalent, and all subclasses of ScopedMemory are equivalent.

  25. An implementation must ensure that the above checks are performed on every assignment statement before the statement is executed. (This includes the possibility of static analysis of the application logic.) Checks for operations on local variables are not required because a potentially invalid reference would be captured by the other checks before it reached a local variable.

    Object initialization

  26. Static initializers run with the immortal memory area as their allocation context.
  27. The current allocation context in a constructor for an object is the memory area in which the object is allocated. For new, this is the current allocation context when new was called. For members of the m.newInstance family, the current allocation context is memory area m.
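The assignment-rules table above can be modeled as a small predicate. This is an illustrative sketch, not the runtime's checking mechanism; the class, enum, and parameter names are assumptions for this example.

```java
// Sketch of the reference-assignment checks: scoped references may not be
// stored in heap or immortal memory, and a scoped-to-scoped store is legal
// only when the referenced scope is the same scope or an outer one.
// Illustrative only; not javax.realtime.
final class AssignmentRules {
    enum Area { HEAP, IMMORTAL, SCOPED, LOCAL_VARIABLE }

    // storedIn:    the area holding the field (or a local variable)
    // referenceTo: the area holding the referenced object
    // sameOrOuter: for scoped-to-scoped stores, whether the referenced
    //              scope is the same scope or an outer scope
    static boolean permits(Area storedIn, Area referenceTo, boolean sameOrOuter) {
        if (referenceTo != Area.SCOPED)
            return true;                       // heap/immortal targets: always permitted
        switch (storedIn) {
            case HEAP:
            case IMMORTAL:       return false; // would outlive the scope: forbidden
            case SCOPED:         return sameOrOuter;
            case LOCAL_VARIABLE: return true;  // caught earlier by the other checks
        }
        return false; // unreachable
    }
}
```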

Maintaining the Scope Stack

This section describes maintenance of a data structure that is called the scope stack. Implementations are not required to use a stack or implement the algorithms given here. It is only required that an implementation behave with respect to the ordering and accessibility of memory scopes effectively as if it implemented these algorithms.

The scope stack is implicitly visible through the assignment rules, and the stack is explicitly visible through the static getOuterMemoryArea(int index) method on RealtimeThread.

Four operations affect the scope stack: the enter methods in MemoryArea and ScopedMemory, construction of a new schedulable object, the executeInArea method in MemoryArea, and the newInstance methods in MemoryArea.

  1. The memory area at the top of a schedulable object's scope stack is the schedulable object's current allocation context.
  2. When a schedulable object, t, creates a schedulable object, nt, in a ScopedMemory object's allocation area, nt acquires a copy of the scope stack associated with t at the time nt is constructed, including all entries from the bottom of the stack up to and including the memory area containing nt. If nt is created in heap, immortal, or immortal physical memory, nt is created with a scope stack containing only heap, immortal, or immortal physical memory respectively. If nt has a non-default initial memory area, ima, then ima is pushed on nt's newly-created scope stack.
  3. When a memory area, ma, is entered by calling a ma.enter method, ma is pushed on the scope stack and becomes the allocation context of the current schedulable object. When control returns from the enter method, the allocation context is popped from the scope stack.
  4. When a memory area, ma, is entered by calling ma's executeInArea method or one of the ma.newInstance methods, the scope stack in effect before the method call is preserved and replaced with a new scope stack, constructed as shown in the pseudocode below. When control returns from the method, the scope stack is restored to the value it had before ma.executeInArea or ma.newInstance was called.



enter

For ma.enter(logic):
push ma on the scope stack belonging to the current
    schedulable object -- which may throw
execute method
pop ma from the scope stack

executeInArea or newInstance

For ma.executeInArea(logic), ma.newInstance(), or ma.newArray():
if ma is an instance of heap, immortal, or
        immortal physical memory
    start a new scope stack containing only ma
    make the new scope stack the scope stack for
        the current schedulable object
else ma is scoped
    if ma is in the scope stack for the
            current schedulable object
        start a new scope stack containing ma and all
            scopes below ma on the scope stack
        make the new scope stack the scope stack for
            the current schedulable object
    else
        throw InaccessibleAreaException
execute or construct the object
restore the previous scope stack for the
        current schedulable object
discard the new scope stack

Construct a Schedulable Object

For construction of a schedulable object in memory area cma with initial memory area of ima:
if cma is heap, immortal or ImmortalPhysicalMemory
    start a new scope stack containing only cma
else cma is scoped
    start a new scope stack containing the creator's
        scope stack up to and including cma
if ima != cma
    push ima on the new scope stack --
        which may throw ScopedCycleException
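The enter and executeInArea pseudocode above can be sketched in ordinary Java. This is an illustrative model, not the javax.realtime API; the class name, the use of strings for memory areas, and the exception choices are assumptions. Outer areas sit at lower indexes, matching the indexing described for getOuterMemoryArea.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Model of a schedulable object's scope stack. The head of the deque is the
// current allocation context. Illustrative only; not javax.realtime.
final class ScopeStackModel {
    static final String HEAP = "heap"; // stands in for heap/immortal areas
    private Deque<String> stack = new ArrayDeque<>();

    ScopeStackModel(String initial) { stack.push(initial); }

    String currentAllocationContext() { return stack.peek(); }

    // ma.enter(logic): push ma, run the logic, pop ma.
    void enter(String ma, Runnable logic) {
        stack.push(ma);
        try { logic.run(); } finally { stack.pop(); }
    }

    // ma.executeInArea(logic): save the stack, install a new one that ends at
    // ma (or contains only ma for heap), run the logic, restore the old stack.
    void executeInArea(String ma, Runnable logic) {
        Deque<String> saved = stack;
        Deque<String> fresh = new ArrayDeque<>();
        if (ma.equals(HEAP)) {
            fresh.push(ma);
        } else if (saved.contains(ma)) {
            // copy from the bottom of the stack up to and including ma
            for (Iterator<String> it = saved.descendingIterator(); it.hasNext(); ) {
                String s = it.next();
                fresh.push(s);
                if (s.equals(ma)) break;
            }
        } else {
            throw new IllegalStateException("InaccessibleAreaException: " + ma);
        }
        stack = fresh;
        try { logic.run(); } finally { stack = saved; }
    }

    // Outer areas have lower indexes; index 0 is the outermost entry.
    String getOuterMemoryArea(int index) {
        int i = 0;
        for (Iterator<String> it = stack.descendingIterator(); it.hasNext(); i++) {
            String s = it.next();
            if (i == index) return s;
        }
        throw new IndexOutOfBoundsException("no area at index " + index);
    }
}
```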

The above pseudocode illustrates a straightforward implementation of this specification's semantics, but any implementation that behaves effectively like this one with respect to reference count values of zero and one is permissible. An implementation may be eager or lazy in maintenance of its reference count provided that it correctly implements the semantics for reference counts of zero and one.

The Single Parent Rule

Every push of a scoped memory area onto a scope stack requires reference to the single parent rule, which enforces the invariant that every scoped memory area has no more than one parent.

The parent of a scoped memory area is identified by the following rules (for a stack that grows up):

Except for the primordial scope, which represents heap, immortal and immortal physical memory, only scoped memory areas are visible to the single parent rule.

The operational effect of the single parent rule is that when a scoped memory area has a parent, the only legal change to that value is to "no parent." Thus an ordering imposed by the first assignments of parents of a series of nested scoped memory areas is the only nesting order allowed until control leaves the scopes; then a new nesting order is possible. Thus a schedulable object attempting to enter a scope can only do so by entering in the established nesting order.

Scope Tree Maintenance

The single parent rule is enforced effectively as if there were a tree with the primordial scope (representing heap, immortal, and immortal physical memory) at its root, and other nodes corresponding to every scoped memory area that is currently on any schedulable object's scope stack.

Each scoped memory has a reference to its parent memory area, ma.parent. The parent reference may indicate a specific scoped memory area, no parent, or the primordial parent.

If a scoped memory area is the non-default initial memory area of an async event handler, or the non-default initial memory area of a real-time thread that has not terminated, it is referred to as pinned.

On Scope Stack Push of ma
The following procedure could be used to maintain the scope tree and ensure that push operations on a schedulable object's scope stack do not violate the single parent rule.
precondition: ma.parent is set to the correct parent 
    (either a scoped memory area or the primordial scope) 
    or to noParent      
t.scopeStack is the scope stack of 
    the current schedulable object

if ma is scoped
   parent = findFirstScope(t.scopeStack)
   if ma.parent == noParent
       ma.parent = parent
   else if ma.parent != parent
       throw ScopedCycleException
findFirstScope is a convenience function that looks down the scope stack for the next entry that is a reference to an instance of ScopedMemory.
findFirstScope(scopeStack) {
   for s = top of scope stack to
         bottom of scope stack
       if s is an instance of ScopedMemory
           return s
   return primordial scope
}
On Scope Stack Pop of ma
ma = t.scopeStack.pop()
if ma is scoped
    if !(ma.in_use || ma.pinned)
        ma.parent = noParent
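The push and pop procedures above can be sketched in ordinary Java. This is an illustrative model, not the javax.realtime implementation; the class names and the use of an unchecked exception in place of ScopedCycleException are assumptions. It shows a scope adopting a parent on first push, a conflicting push being rejected, and the parent reverting to "no parent" once the scope is neither in use nor pinned.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Model of scope-tree maintenance under the single parent rule.
// Illustrative only; not javax.realtime.
final class ScopeTreeModel {
    static final Object PRIMORDIAL = "primordial"; // heap/immortal/immortal physical
    static final Object NO_PARENT = null;

    static final class Scope {
        Object parent = NO_PARENT;
        int refCount = 0;       // "in use" when greater than zero
        boolean pinned = false; // non-default initial area of a live SO
    }

    // findFirstScope: nearest Scope below the top of the stack, else primordial.
    static Object findFirstScope(Deque<Object> scopeStack) {
        for (Object s : scopeStack)            // iterates top -> bottom
            if (s instanceof Scope) return s;
        return PRIMORDIAL;
    }

    // Push ma, enforcing the single parent rule.
    static void push(Deque<Object> scopeStack, Object ma) {
        if (ma instanceof Scope) {
            Scope scope = (Scope) ma;
            Object parent = findFirstScope(scopeStack);
            if (scope.parent == NO_PARENT) scope.parent = parent;
            else if (scope.parent != parent)
                throw new IllegalStateException("ScopedCycleException");
            scope.refCount++;
        }
        scopeStack.push(ma);
    }

    // Pop the top entry; a scope with no users and no pin loses its parent.
    static void pop(Deque<Object> scopeStack) {
        Object ma = scopeStack.pop();
        if (ma instanceof Scope) {
            Scope scope = (Scope) ma;
            scope.refCount--;
            if (scope.refCount == 0 && !scope.pinned) scope.parent = NO_PARENT;
        }
    }
}
```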

The Rationale

Languages that automatically reclaim blocks of memory allocated by program logic in what is traditionally called the heap typically use an algorithm called a garbage collector. Garbage collection algorithms and implementations vary in the amount of non-determinacy they add to the execution of program logic. Rather than require a garbage collector, and require it to meet real-time constraints that would necessarily be a compromise, this specification constructs alternative systems for "safe" management of memory. The scoped and immortal memory areas allow program logic to allocate objects in a Java-like style, ignore the reclamation of those objects, and avoid the latency of the implemented garbage collection algorithm.

The term scope stack might mislead a reader to infer that it contains only scoped memory areas. This is incorrect. Although the scope stack may contain scoped memory references, it may also contain heap and immortal memory areas. Also, although the scope stack's behavior is specified as a stack, an implementation is free to use any data structure that preserves the stack semantics.

This specification does not specifically address the lifetime of objects allocated in immortal memory areas. If they were reclaimed while they were still referenced, the referential integrity of the JVM would be compromised, which is not permissible. Recovering immortal objects only at the termination of the application, or never recovering them under any circumstances, is consistent with this specification.

If a scoped memory area is used by both heap-using and non-heap schedulable objects, there could be cases where a finalizer executed in a non-heap context attempts to use a heap reference left by a heap-using schedulable object. The code in the finalizer would throw a memory access error. If that error is not caught in the finalizer, it will be handled by the implementation so finalization can continue undisturbed, but the problem in the finalizer that caused the illegal memory access could be hard to locate. Catch clauses in finalizers for objects allocated in scoped memory are therefore even more useful than they are for normal finalizers.
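A defensive finalizer along these lines can be sketched in ordinary Java. The real fault would be a MemoryAccessError from javax.realtime; since that class is not assumed here, the sketch models it with a RuntimeException, and the class and field names are illustrative.

```java
// Sketch of a finalizer that records a fault instead of letting the
// finalization machinery swallow it silently. Illustrative only; the
// RuntimeException stands in for javax.realtime's MemoryAccessError.
class GuardedFinalizable {
    static String lastFault; // records the most recent finalizer fault

    // Cleanup work that may touch references illegal in the finalizing
    // context; here it always fails, to exercise the catch clause.
    void cleanup() { throw new RuntimeException("illegal memory access"); }

    @Override
    public void finalize() {
        try {
            cleanup();
        } catch (Throwable t) {
            lastFault = t.getMessage(); // make the fault visible for debugging
        }
    }
}
```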