[MUD-Dev] Sockets

Mark Gritter mark at erdos.Stanford.EDU
Wed May 12 02:55:51 CEST 1999


Jon A. Lambert writes:
> 
> The common select/poll loop for non-blocking sockets that processes 
> incoming, exceptions, reads, writes, and other mud processes is not
> very efficient.  Firstly setting up the parameters to select and checking 
> the FD_SETs involved alone is probably higher overhead than threading.
> Secondly if the poll time is too short you end up burning processor 
> cycles in the poll loop for no good reason.  And if the time is too long
> you end up sleeping while your mud could be processing ripe events that
> are issued from other functions in the mud (assuming that not all your 
> events are initiated immediately for user input).   Yes, some good 
> implementations attempt to dynamically adjust the polling time depending 
> on how active the server is/was.  Still there is enough overhead here
> to more than compensate for the overhead incurred through thread locks.
> 

This is beginning to sound like one of those OS qualifying exam questions 
I've been studying for...  :)  (And before I get any further, I feel
obligated to point out that the "right answer" is _highly_ dependent
on what sort of architecture and operating system you're using... and,
of course, on the exact nature of the workload.)
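
For concreteness, the loop Jon describes is roughly the following---
just a minimal sketch, where sockets[], num_sockets, handle_input(),
and run_events() are placeholder names rather than anything from a
real codebase:

    /* Classic select() loop with a fixed poll time. */
    #include <sys/select.h>
    #include <sys/time.h>

    void handle_input(int fd);   /* placeholder */
    void run_events(void);       /* placeholder: internal mud events */

    void server_loop(int *sockets, int num_sockets)
    {
        for (;;) {
            fd_set readfds;
            struct timeval tv;
            int i, maxfd = -1;

            /* Rebuilt every pass--- the per-cycle setup cost. */
            FD_ZERO(&readfds);
            for (i = 0; i < num_sockets; i++) {
                FD_SET(sockets[i], &readfds);
                if (sockets[i] > maxfd)
                    maxfd = sockets[i];
            }

            /* The "poll time": too short burns cycles, too long
               delays internal events. */
            tv.tv_sec = 0;
            tv.tv_usec = 10000;                    /* 10ms */

            if (select(maxfd + 1, &readfds, NULL, NULL, &tv) > 0) {
                /* Another O(N) scan to find the ready descriptors. */
                for (i = 0; i < num_sockets; i++)
                    if (FD_ISSET(sockets[i], &readfds))
                        handle_input(sockets[i]);
            }

            run_events();
        }
    }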

The above analysis of poll time only makes sense if there is always
extra work to be done.  If I only have 5ms of "internal" event
processing for  every 10ms of  clock time, then getting it all done
right away (not waiting in select()) isn't going to buy me anything.
(If I _have_ I/O waiting, then select() will return right away and I'm
not wasting any time.  If I _don't_ have I/O waiting, then it doesn't
matter whether I do extra processing at the beginning or end of my
window--- assuming the system's not overloaded, of course.)
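
In practice, the "dynamically adjusted" poll time Jon mentions usually
just means sleeping in select() only until the next internal event is
due.  A sketch, where next_event_ms() is a hypothetical helper
returning milliseconds until the next internal event, or -1 if none is
scheduled:

    #include <stddef.h>
    #include <sys/select.h>
    #include <sys/time.h>

    long next_event_ms(void);    /* hypothetical helper */

    int wait_for_io(int maxfd, fd_set *readfds)
    {
        struct timeval tv, *tvp = NULL;   /* NULL: block until I/O */
        long wait = next_event_ms();

        if (wait >= 0) {
            tv.tv_sec  = wait / 1000;          /* 0 if an event is ripe */
            tv.tv_usec = (wait % 1000) * 1000;
            tvp = &tv;
        }
        return select(maxfd + 1, readfds, NULL, NULL, tvp);
    }

That keeps the loop from either oversleeping or spinning.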

The only way threads "win" in efficiency is if the overhead of
handling K threading events (with no "fixed", per-timeslice overhead)
is less than the  overhead of handling K I/O events in a polling
mechanism.

(Apologies if the below seems hopelessly academic.)

Here's how I'd model it.  There are two options for threading--- I/O
could take precedence over "internal" events (using thread priority),
or we  could put them all at the same level.
  Let K = average number of I/O events per "time slice" (= whatever
          time scale internal events happen at)
      N = number of I/O sources
      L = number of lock operations in the internal thread
      L'= average number of lock operations in an I/O handler

---Threading, equal priorities:--- In the best (for efficiency) case,
I/O is "queued" to run after the internal thread, something like
this:
           AAAAAAAAAABBCCDDIIII
 (A = internal thread, B/C/D = event threads, I = idle)
Overhead is K context switches and L + KL' lock operations.

---Threading, unequal priorities:--- I/O events may interrupt the
internal thread at any time, but not each other.
           AABBAAAACCAAAAIIDDII  (for example)
Overhead is K to 2K context switches and L + KL' lock operations.
(I'm ignoring the problem of contention... this could add at most
2L extra context switches, I think, since in this simple model, only events
and the internal thread can wait on the same lock.)
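
Concretely, the threaded arrangement being modeled here is something
like the sketch below--- POSIX threads, one thread per connection, and
a single shared lock; game_lock, read_line(), apply_command(), and
run_internal_events() are all placeholder names:

    #include <pthread.h>

    pthread_mutex_t game_lock = PTHREAD_MUTEX_INITIALIZER;

    void run_internal_events(void);            /* placeholder */
    int  read_line(int fd, char *buf, int n);  /* placeholder; blocks */
    void apply_command(int fd, const char *cmd);

    /* One of these per connection ("B", "C", "D" above); the lock
       traffic here is where the KL' term comes from. */
    void *io_thread(void *arg)
    {
        int fd = *(int *)arg;
        char buf[512];

        while (read_line(fd, buf, sizeof buf) > 0) {
            pthread_mutex_lock(&game_lock);
            apply_command(fd, buf);
            pthread_mutex_unlock(&game_lock);
        }
        return NULL;
    }

    /* The internal thread ("A" above); its lock traffic is the L term. */
    void *internal_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&game_lock);
            run_internal_events();
            pthread_mutex_unlock(&game_lock);
            /* ...sleep until the next time slice... */
        }
        return NULL;
    }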

---Polling---
I/O events are delayed until the next "select" call.  This doesn't affect
the average number handled per cycle, though, since essentially we get all
events from the last cycle delayed until the start of the next cycle.
           BBCCDDAAAAAAAAAAIIII
Overhead is 1 system call and three loops (through the FD_SETs) of size N
(one of those loops happens in the kernel).

Leaving aside the implementation-specific question of what the fixed
overhead is... polling overhead would seem to scale with the number of
_possible_ I/O events, while threading overhead increases with the
number of _actual_ I/O events.  (Mapping an event to an fd is not 
significantly easier than mapping an event to an fd and then looking at 
which thread is waiting on it, so both types pay the same "per-connection" 
overhead, if any, in the kernel.  And scheduling typically only needs to
consider the _active_ threads.)
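
Very roughly, then, threading comes out ahead on a given time slice when

    K*c_switch + (L + KL')*c_lock  <  c_select + 3*N*c_scan

where the c_* terms stand for the per-operation costs (context switch,
lock operation, select() call, and per-fd scan)--- my own shorthand,
not measured numbers.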

For short time scales, I'd certainly prefer the latter (threading)--- *IF*
the constants on all the variable terms are small enough.  So I guess I
talked myself into agreeing with Jon.  :)

The natural question is whether select() could be made more efficient.
I think the answer is yes--- have fd's registered to a particular "ready 
queue", have the kernel put them on that queue when they become readable,
and then have system calls to pull fd's off the ready queue one by one.
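
For what it's worth, that register-then-pull pattern is essentially what
Linux's epoll interface provides; a minimal sketch, with handle_input()
again a placeholder:

    #include <sys/epoll.h>

    void handle_input(int fd);   /* placeholder */

    void ready_queue_loop(int *sockets, int num_sockets, int timeout_ms)
    {
        struct epoll_event ev, events[64];
        int epfd = epoll_create(1);          /* the "ready queue" */
        int i, n;

        /* Register each fd once, instead of rebuilding FD_SETs every pass. */
        for (i = 0; i < num_sockets; i++) {
            ev.events = EPOLLIN;
            ev.data.fd = sockets[i];
            epoll_ctl(epfd, EPOLL_CTL_ADD, sockets[i], &ev);
        }

        for (;;) {
            /* Returns only descriptors with data waiting, so the cost
               scales with actual events rather than with N. */
            n = epoll_wait(epfd, events, 64, timeout_ms);
            for (i = 0; i < n; i++)
                handle_input(events[i].data.fd);
            /* ...run internal events here... */
        }
    }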

Mark Gritter
mark at erdos.stanford.edu


_______________________________________________
MUD-Dev maillist  -  MUD-Dev at kanga.nu
http://www.kanga.nu/lists/listinfo/mud-dev