2016/12/13 - Apache Etch has been retired.

For more information, please explore the Attic.

Transfer of Big Messages

During the development of Etch, it was thought useful that the generated interfaces should be able to support more than one outstanding request at a time. Some requests might be very quick, some might take a long time. Normally requests are processed sequentially by the message reader thread. Etch offers options to manage long-running requests while maintaining reactivity for quick requests made from another thread. This involves marking some requests to be dispatched to a thread pool, allowing the message reader thread to go back and process another request.
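The dispatch idea can be sketched generically (this is an illustrative model, not Etch's actual dispatcher API): the reader thread runs quick handlers inline but hands long-running ones to an `ExecutorService` so it can immediately read the next message.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not Etch's actual dispatcher API): the single
// message reader thread hands long-running requests to a pool and
// returns to reading, while quick requests run inline.
public class DispatchSketch {
    // Called by the message reader thread for each decoded message.
    static void onMessage(ExecutorService pool, Runnable handler, boolean longRunning) {
        if (longRunning)
            pool.submit(handler);  // reader thread is free to read the next message
        else
            handler.run();         // quick request handled inline
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        onMessage(pool, () -> System.out.println("slow request runs on the pool"), true);
        onMessage(pool, () -> System.out.println("quick request runs inline"), false);
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The trade-off is ordering: requests dispatched to the pool may complete out of order relative to requests handled inline.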

Both requests and responses are messages with identical structure, so the transport layers are not about requests and responses; they are about messages. The name of the game is moving messages from here to there. Each direction (towards the client, and towards the server) is more or less independent. Messages sent are generally transmitted using the thread which originated the message; while a message is being transmitted, another thread also wanting to transmit a message must wait. On the receiving end there is a dedicated message receiver thread which reads one message at a time and dispatches it to a handler. So you can see, the wire, the medium of message transmission, can only be used by one thread at a time.

Now suppose there is a request which returns a big response. Other messages will be blocked waiting for the big message to pass over the wire. So big messages reduce our reactivity.

For example, a 10 KB message over a 100 megabit link takes about 1 ms to transit the link (assuming no hops). So another message behind that one will experience up to a 1 ms delay just for transit. But if the first message is 10 MB, then the second message will experience about a 1 s delay for transit. These numbers can be much worse when you consider multiple hops, congestion, etc.
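The transit-time arithmetic above can be checked with a small helper (bytes × 8 bits, divided by the link rate; hops and congestion ignored):

```java
// Sketch of the transit-time arithmetic above: serialization delay of
// one message over a link, ignoring hops and congestion.
public class TransitTime {
    // Milliseconds to push `bytes` through a link of `megabitsPerSec`.
    static double transitMillis(long bytes, double megabitsPerSec) {
        double bits = bytes * 8.0;
        return bits / (megabitsPerSec * 1_000_000) * 1000.0;
    }

    public static void main(String[] args) {
        System.out.println(transitMillis(10_000, 100));     // ~0.8 ms for a 10 KB message
        System.out.println(transitMillis(10_000_000, 100)); // ~800 ms for a 10 MB message
    }
}
```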

Another issue with big messages is that they consume a lot of memory while being processed, because the entire message must be buffered before it can be parsed and delivered. This causes issues within the heap, and also within a server which has perhaps thousands of clients. A common denial of service attack is to open a connection to the server and send all the data of a request except the last byte; then open another connection and repeat. Soon the server will be out of memory and all processing will stop.

If a server allows up to 1 MB messages, then if I open 1,000 connections and send 999,999 bytes of a 1,000,000 byte message to each one, I've soaked up 1 GB of memory on the server.

Possible Solutions

Don't allow big messages

Etch currently enforces a limit on the size of messages. The default value is around 16k. This limit can be adjusted. By keeping this number as small as is reasonable, you limit the impact of a denial of service attack.

Don't send big messages

This might seem easy, and it is a good idea to try, but it isn't always possible. The idea is to not request large blobs, rather to incrementally request smaller pieces. I'll give a couple of examples.

Reading a file

You're trying to read a file over the network. Read 8k bytes at a time instead of the whole file. Because etch allows multiple requests to be outstanding at once, you can even use a double buffering scheme to make it nearly as fast as one single request. Here is an example of single buffering:

String id = server.openFileForRead( "blah.jpg" );
byte[] buf;
while ((buf = server.readFile( id, 8192 )).length > 0)
    processData( buf );
server.closeFileForRead( id );
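The double-buffering scheme mentioned above can be sketched by keeping the next readFile request in flight while the previous buffer is being processed, so network transit overlaps with processing. This assumes the same hypothetical openFileForRead/readFile/closeFileForRead API; `FileServer` is a stand-in interface, not a real Etch-generated one.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for the generated remote interface used above.
interface FileServer {
    String openFileForRead(String name);
    byte[] readFile(String id, int max);
    void closeFileForRead(String id);
}

public class DoubleBufferedRead {
    // Keep one readFile request outstanding while processing the
    // current buffer, overlapping network transit with processing.
    static void copyAll(FileServer server, String name,
                        java.util.function.Consumer<byte[]> process)
            throws Exception {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        String id = server.openFileForRead(name);
        try {
            Future<byte[]> next = ex.submit(() -> server.readFile(id, 8192));
            byte[] buf;
            while ((buf = next.get()).length > 0) {
                next = ex.submit(() -> server.readFile(id, 8192)); // request ahead
                process.accept(buf);                               // process while next read is in flight
            }
        } finally {
            server.closeFileForRead(id);
            ex.shutdown();
        }
    }
}
```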

Database query

Instead of reading hundreds or thousands of rows from a table, read a few rows at a time. Most databases support the notion of indexed result sets, so this can be pretty efficient. The server also has the option of caching the query result vs. rerunning the query. Beware of issues with concurrent updates, though.

int index = 0;
List rows;
while ((rows = server.query( "select * from foo", index, 20 )).size() > 0)
{
    processRows( rows );
    index += rows.size();
}

The second and third parameters to query are the offset into the result set and the count of items to return.

Break up big messages

Sometimes it isn't possible at the API level to break up a big message. It might have deep structure which would be difficult to handle incrementally.

We could automatically break a big message up into smaller chunks, then send each chunk as a separate sub-message, then reassemble them on the other end. Other messages could slip in between and reactivity would be preserved.
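The split/reassemble idea can be sketched as two pure functions (an illustration of the technique, not Etch's wire format, which would also need message ids and chunk sequence numbers):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch (not Etch's wire format): split a serialized
// message into fixed-size chunks and reassemble them on the far side.
public class Chunker {
    static List<byte[]> split(byte[] msg, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < msg.length; off += chunkSize)
            chunks.add(Arrays.copyOfRange(msg, off, Math.min(off + chunkSize, msg.length)));
        return chunks;
    }

    static byte[] join(List<byte[]> chunks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] c : chunks)
            out.write(c, 0, c.length);
        return out.toByteArray();
    }
}
```

Between any two chunks of one message, the transport is free to interleave chunks of other messages, which is what preserves reactivity.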

We still have the denial of service problem whereby n-1 of our chunks have arrived. It is compounded because now we could also have many partial messages being buffered. How long do we hold a partial message before giving up?

Incrementally parse messages

Etch currently buffers up all the bytes of a message in a single large buffer before de-serializing it. The resulting single large buffer can constipate the heap. It also requires twice the storage, or more, to de-serialize a message, as we must de-serialize the entire message before we can free the buffer. If messages were buffered in chunks and parsed incrementally, buffers which have already been parsed may be discarded back to the heap sooner. Less constipation, and nearly half the storage requirement.
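A minimal sketch of chunked buffering (my illustration, not Etch's FlexBuffer): message bytes are held as a queue of small buffers, and each buffer is dropped as soon as the parser has consumed it, so its memory can be reclaimed before the whole message has been de-serialized.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of chunked buffering: bytes are held as a queue of small
// buffers; each buffer is discarded as soon as it has been fully
// consumed, instead of holding one big buffer for the whole message.
public class ChunkedBuffer {
    private final Deque<byte[]> chunks = new ArrayDeque<>();
    private int offset; // read position within the head chunk

    void append(byte[] chunk) { chunks.addLast(chunk); }

    // Read one byte; returns -1 when empty.
    int read() {
        byte[] head = chunks.peekFirst();
        while (head != null && offset >= head.length) { // skip consumed/empty chunks
            chunks.pollFirst();
            offset = 0;
            head = chunks.peekFirst();
        }
        if (head == null) return -1;
        int b = head[offset++] & 0xff;
        if (offset == head.length) { chunks.pollFirst(); offset = 0; } // free this chunk now
        return b;
    }
}
```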

Timeout partial messages

A timeout mechanism on a connection should always be used; Etch's KeepAlive filter works for this. If the connection fails to make progress and the pipes become jammed, the connection should be closed and any partial buffers discarded. Where partial messages are allowed to exist, a similar mechanism needs to test for their presence and shut down the connection if they get too old, because that likely indicates a denial of service attack.
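One way to sketch that check (a hypothetical helper, not an Etch class): record when a partial message last made progress, and let a periodic sweep close connections whose partial messages have gone stale.

```java
// Sketch: track when a partially received message last made progress
// and flag it as a suspected denial of service when it is too old.
public class PartialMessageWatchdog {
    private long lastProgressNanos;
    private final long maxAgeNanos;

    PartialMessageWatchdog(long maxAgeMillis) {
        this.maxAgeNanos = maxAgeMillis * 1_000_000L;
        this.lastProgressNanos = System.nanoTime();
    }

    // Called whenever more bytes of the partial message arrive.
    void onBytesReceived() { lastProgressNanos = System.nanoTime(); }

    // A periodic sweep (e.g. a ScheduledExecutorService task) would call
    // this and close the connection when it returns true.
    boolean isStale(long nowNanos) {
        return nowNanos - lastProgressNanos > maxAgeNanos;
    }
}
```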