SYNOPSIS

Potential \s-1XPA\s0 race conditions and how to avoid them.

DESCRIPTION

Currently, there is only one known circumstance in which \s-1XPA\s0 can become (temporarily) deadlocked in a race condition: if two or more \s-1XPA\s0 servers send messages to one another using an \s-1XPA\s0 client routine such as XPASet(), they can deadlock while each waits for the other server to respond. (This can happen if the servers call XPAPoll() with a time limit and send messages between polling calls.) The deadlock arises because each client routine sends a string to the other server to establish the handshake and then waits for that server's response. Since each client is waiting for a response, neither is able to enter its event-handling loop and respond to the other's request. The deadlock persists until one of the timeout periods expires, at which point an error condition is triggered and the timed-out server returns to its event loop.

Starting with version 2.1.6, this rare race condition can be avoided by setting the \s-1XPA_IOCALLSXPA\s0 environment variable for servers that will make client calls. Setting this variable causes all \s-1XPA\s0 socket \s-1IO\s0 calls to process outstanding \s-1XPA\s0 requests whenever the primary socket is not ready for \s-1IO\s0. This means that a server making a client call will (recursively) process incoming server requests while waiting for the client call to complete. It also means that a server callback routine can handle incoming \s-1XPA\s0 messages if it makes its own \s-1XPA\s0 call. The semi-public routine oldvalue=XPAIOCallsXPA(newvalue) can be used to turn this behavior off and on temporarily: passing 0 turns off \s-1IO\s0 processing, passing 1 turns it back on, and the previous value is returned by the call.

By default, the \s-1XPA_IOCALLSXPA\s0 option is turned off, because we judge that the added code complexity and overhead are not justified by how rarely the option is needed. Moreover, processing \s-1XPA\s0 requests within socket \s-1IO\s0 can lead to non-intuitive results, since incoming server requests will not necessarily be processed to completion in the order in which they are received.

Aside from setting \s-1XPA_IOCALLSXPA\s0, the simplest way to avoid this race condition is to multi-process: when you want to send a client message, start a separate process to make the client call, so that the server is not blocked. It is probably fastest and easiest to use fork() and then have the child call the client routine and exit. You can also use either the system() or popen() routine to start one of the command-line programs to the same effect. Alternatively, you can use \s-1XPA\s0's internal launch() routine instead of system(). Based on fork() and exec(), this routine is more secure than system() because it does not call /bin/sh.

Starting with version 2.1.5, you can also send an XPAInfo() message with the mode string \*(L"ack=false\*(R". This causes the client to send a message to the server and then exit without waiting for any return message from the server. This UDP-like, fire-and-forget behavior avoids the server deadlock when sending short XPAInfo messages.

SEE ALSO

See xpa(7) for a list of \s-1XPA\s0 help pages.