Connection Lifecycle at the Transport Stack

The Transport Stack is the framework which ties together the various pieces of the Etch architecture within a binding. It is also the key to binding implementation consistency and to deploying cross-platform services with the features we want.

Before going any further, please study the Transport Stack Architecture. Here are some proposals which affect the model of the transport stack.

Auto Start, Reconnect, and Idle

When a client connects to a service it may use one of two models: the temporary (acute) need model or the continuous (chronic) need model.

Temporary Need Model

The temporary need model is based on the idea of occasionally needing one service or another to satisfy an immediate concern, such as a database query. Processing cannot continue until the need is satisfied:

... oops, need a service ...
server = BlahHelper.newServer( ... );
server._startAndWaitUp( 4000 );
answer = server.doSomethingForMe( ... );
server._stopAndWaitDown( 4000 );
server = null;
... further work.

When the service is not in active use it is stopped and does not consume any resources. There is no particular dependence upon the state of a continuously existing connection.

While straightforward, this model has a few warts. Once the service is started it must be stopped to correctly release its resources. Any exception thrown that might prevent _stopAndWaitDown from being called must be neutralized, or a dangling server object is left connected. Let's fix the code to account for this:

... need some service ...
server = BlahHelper.newServer( ... );
try
{
    server._startAndWaitUp( 4000 );
    answer = server.doSomethingForMe( ... );
}
finally
{
    server._stopAndWaitDown( 4000 );
    server = null;
}
... further work.

Another wart occurs when there are closely spaced, back-to-back needs for the service. The service is started and stopped only to be started again shortly afterward. This is wasteful of resources on both ends of the connection, both in the creation of the object that manages the session and in the network resources required to establish the connection.

There are a few things we can do. Instead of creating and destroying the service stack on demand, we could create one instance and only start and stop it as needed. This removes the need to keep any of the parameters to newServer handy; one only needs to pass around server (or put it in a global):

... need some service ...
try
{
    server._startAndWaitUp( 4000 );
    answer = server.doSomethingForMe( ... );
}
finally
{
    server._stopAndWaitDown( 4000 );
}
... further work.

This introduces a problem, though, which is one of shared access. If any other thread might also desire access via server, we have to block it until we are done:

... need some service ...
synchronized (server)
{
    try
    {
        server._startAndWaitUp( 4000 );
        answer = server.doSomethingForMe( ... );
    }
    finally
    {
        server._stopAndWaitDown( 4000 );
    }
}
... further work.

This is better, but we're still starting and stopping a connection, perhaps only to start and stop it again soon. We are also blocking other uses of the service when we might not need to. Wouldn't it be cool if the connection started automatically when it was down and we made a request, and stopped automatically after a period of inactivity? Then we could just write this:

... need some service ...
answer = server.doSomethingForMe( ... );
...further work.

Now there is no need for the synchronized block unless I'm going to make two back-to-back calls which must not be interrupted by any intermediate state changes (and all such calls must be similarly protected). Because the connection may go down between calls, there cannot be any dependence upon long-term server statefulness. This applies even while we might have what we think of as a transaction going on:

... need some service ...
answer = server.doSomethingForMe( ... );
... dialog with the user ...
server.doSomethingElseForMe( ... );
... further work.

During the dialog with the user, the connection may automatically shut down because it has been idle too long. The API doSomethingElseForMe cannot depend upon any state established by the API doSomethingForMe unless we somehow force the connection to stay up and block other simultaneous state-changing requests:

... need some service ...
synchronized (server)
{
    try
    {
        server.transportControl( INCREMENT_IDLE_BLOCK );
        answer = server.doSomethingForMe( ... );
        ... dialog with the user ...
        server.doSomethingElseForMe( ... );
    }
    finally
    {
        server.transportControl( DECREMENT_IDLE_BLOCK );
    }
}
... further work.

While the idle block count is non-zero, any automatic idle shutdown of the connection is prevented. As you can see, we're almost back where we started. It is better to use a stateless API here.
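
For illustration, here is one way the pair of calls could be made stateless: have the first call return whatever the second call needs, and pass that value back explicitly. The reworked signature of doSomethingElseForMe below is hypothetical, not part of any generated API:

... need some service ...
answer = server.doSomethingForMe( ... );
... dialog with the user ...
// hypothetical signature: the state is passed back explicitly, so it does
// not matter if the connection was idle-stopped in between the calls
server.doSomethingElseForMe( answer, ... );
... further work.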

In summary, two concepts were mentioned here which may be interesting: AutoStart and IdleStop. They are primarily interesting when used in combination with stateless APIs.
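
If AutoStart and IdleStop were adopted, one natural way to expose them would be as query terms on the connection URI, alongside the existing transport parameters. The query term names below are purely illustrative and do not exist in any current binding:

// hypothetical query terms: start the connection on first use,
// stop it after 30 seconds of inactivity
server = BlahHelper.newServer(
    "tcp://somehost:4001?Transport.autoStart=true&Transport.idleStop=30000", ... );
answer = server.doSomethingForMe( ... );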

Initialization of Otherwise Stateless APIs

A small note which might be helpful: while many APIs are easily rendered stateless, some nonetheless depend upon a bit of initialization. An example is opening a connection to the Configuration service and then loading our assigned config resource. After that we'd be fine, as the rest of the API is stateless.

We can achieve nirvana here if we realize that the session UP and DOWN messages can get us past this issue. When our session comes UP, we can immediately set up our initial state before any other requests are processed. Implementing this to a useful level of refinement may require some changes, but they are easy.
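
As a sketch, assuming the Java binding's generated client classes and its Session.UP / Session.DOWN notifications, the per-connection initialization might look like this; loadMyConfig is a hypothetical helper that loads our assigned config resource:

// assumes the generated BaseBlahClient and the Session constants
// from org.apache.etch.util.core.io.Session
public class ImplBlahClient extends BaseBlahClient
{
    public void _sessionNotify( Object event ) throws Exception
    {
        if (event == Session.UP)
        {
            // the connection just came up: establish our initial state
            // before any other requests are processed
            loadMyConfig();
        }
        else if (event == Session.DOWN)
        {
            // the connection went down (perhaps idle-stopped):
            // forget any state tied to it
        }
    }
}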

Continuous Need Model

This is the conventional connection model, which keeps the service connection up for a long time. The connection is started once, used as needed, and stopped only when the client is finished with the service.
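
In this model the stack is brought up once, typically at application start, and taken down only at shutdown; between those points the connection is simply assumed to be up (with reconnect covering transient failures). A minimal sketch of that lifecycle, using the same helper calls as the temporary need model above:

... application startup ...
server = BlahHelper.newServer( ... );
server._startAndWaitUp( 4000 );

... anywhere in the application, without per-call start/stop ...
answer = server.doSomethingForMe( ... );

... application shutdown ...
server._stopAndWaitDown( 4000 );
server = null;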