[MUD-Dev] Multi-threading ( was: TECH DGN: Re: a few mud server design questions (long))

Jon Lambert tychomud at ix.netcom.com
Mon Jul 30 00:25:42 CEST 2001


Robert Zubek wrote:
> Sean Kelly writes:
 
>> Be careful with threads.  Programmers often consider them the
>> best thing since Swiss cheese without considering the cost
>> involved.  It takes a measurable amount of time for the operating
>> system to switch processing between threads.  In a pervasively
>> multithreaded program, it's quite possible that you could end up
>> spending as much time switching between threads as you spend
>> processing the threads themselves.  Potential design needs aside,
>> the ideal is still one thread per CPU.

> Good point.  I'm beginning to lean strongly towards a single
> thread of execution for the main engine, and handling temporally
> extended events via some sort of cooperative yielding.  (I have a
> reply to the calendar idea in a separate post.)  But I think I'll
> still need separate threads for each of the player network
> interfaces, and probably one for each of the smarter NPCs (the
> ones that actually need to communicate with players - the dumb
> background NPCs can be coded up as just objects with an extra bit
> of autonomy).

I simply must leap in here, meep, gibber, and wave my arms about
wildly.  :-P

The ideal is NOT "one thread per CPU", any more than the ideal is
"one process per CPU".  The ideal is that a CPU should always have
exactly one unit of work available and ready to run, ripe for the
plucking at the exact moment the CPU becomes available.  Naturally
we're always going to fall short of that ideal.  :-)

It should be obvious, but the fundamental insight that moved the
world from single-tasking operating systems to multi-tasking ones
still holds true today: multi-tasking OSs are indeed much more
efficient than their single-tasking ancestors.  Throughput, not CPU
usage, is the right measure of an interactive server application
like a mud.  Multi-tasking OSs were not designed merely for the
convenience of the shell user.  There is no reason at all to
believe that the finer granularity of application threading is any
less important than the coarser grain of processes, or that it does
not further maximize efficiency and throughput by ensuring there is
always something the CPU _could_ be doing instead of waiting around
picking its virtual belly button.  What you want to maximize is
utilization, and that occurs when there are enough threads to hide
latency, not cause it.  That's more than 1, and it's not like you
have a real choice in the matter of context switches anyway if
you're running NT, *nix, VMS, or OS/390.  You probably run them
because they context switch. :-P
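The latency-hiding point is easy to demonstrate.  Here's a minimal
sketch (Python for brevity; the sleeps are stand-ins for blocking
I/O, which is my assumption, not anything from the posts above) -
four blocking waits overlap instead of running back to back:

```python
import threading
import time

def fake_io(duration):
    # Stand-in for a blocking read: while this thread sleeps in the
    # kernel, the OS is free to run the other threads.
    time.sleep(duration)

start = time.monotonic()
threads = [threading.Thread(target=fake_io, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# The four 0.2s waits overlap, so total wall time stays near 0.2s
# rather than the 0.8s a single-threaded loop would take.
print(f"elapsed: {elapsed:.2f}s")
```

Run the same waits sequentially and the wall time quadruples; that
difference is exactly the latency the extra threads are hiding.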

The measure of how overloaded a system is is the length of the run
queue, not the number of threads or processes in the system.  Most
of our desktops will rarely hit a run queue length of over 2.
Right now, as I'm typing this, there are 119 threads running on my
system with an average run queue length of slightly less than 1.
CPU usage is trivial, around 5% - and 4% of that is the mud, BTW.
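On a Unix-like box you can check those run-queue numbers yourself;
a minimal sketch using only the Python standard library (note
os.getloadavg is POSIX-only - it raises OSError elsewhere):

```python
import os

# The load averages are the kernel's 1-, 5-, and 15-minute averages
# of run queue length (runnable processes/threads).
one, five, fifteen = os.getloadavg()
print(f"load average: {one:.2f} {five:.2f} {fifteen:.2f}")
```

A sustained 1-minute figure well above your CPU count is the sign
of an overloaded system; a big thread count by itself is not.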

Now if one designs an application in such a way as to try to
guarantee that all threads are always busy, that design will likely
guarantee the worst possible performance on a single-processor
machine.  That's the embarrassingly parallel case.  Breaking a
CPU-bound rendering process into multiple threads is the classic
example of where multi-threading on a single CPU is, ummm, less
than intelligent.  Reasonable context switching actually hides
latency; that's really the ideal.  Excessive context switching is
the price one pays for designing an application in such a way as to
have too many CPU-bound threads.

IMO a server application requires just as many threads as are
necessary to maximize its average throughput, regardless of whether
it runs in a single-processor or multiprocessor environment.  That
balance is of course different for each environment, so it should
also be flexible and configurable.  Yes, I'm certainly itching to
burn 5% more CPU time if it drops average user response time by
half a second.

I use thread pooling for ALL operations that do I/O, whether it be
database or network I/O.  It's not one thread per user or one
thread per I/O request; it's a fixed pool of N pre-created threads
serving all the requests for a particular class of I/O.  N is of
course something that can be tuned depending on where and what I
eventually end up running it on.  :-)
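The pattern looks roughly like this - a sketch in Python, where the
worker count, the sentinel shutdown, and the handle_request stand-in
are my illustrative assumptions, not the actual TychoMUD code:

```python
import queue
import threading

N_WORKERS = 4              # the tunable N, per class of I/O
requests = queue.Queue()   # all requests for this I/O class go here
results = queue.Queue()

def handle_request(item):
    # Hypothetical stand-in for a blocking database or network call.
    return item * 2

def worker():
    # Each pre-created thread loops forever, pulling the next request.
    while True:
        item = requests.get()
        if item is None:   # sentinel: shut this worker down
            break
        results.put(handle_request(item))

# Create the fixed pool once, up front - not one thread per request.
pool = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for t in pool:
    t.start()

# Any number of requests share the same N threads.
for i in range(10):
    requests.put(i)
for _ in pool:
    requests.put(None)     # one sentinel per worker
for t in pool:
    t.join()
```

The queue is what decouples request volume from thread count: ten
requests, a hundred, or a thousand still ride on the same N threads,
and tuning N is a one-line config change rather than a redesign.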

  "Threads are for people who can't program a state machine" - Alan
  Cox

For some reason I always think of Aber.  :-P

  "Threads are for people who understand how their apps should work
  better than the brain dead OS does." - me

--
--* Jon A. Lambert - TychoMUD        Email:jlsysinc at ix.netcom.com *--
--* Mud Server Developer's Page <http://tychomud.home.netcom.com> *--
--* If I had known it was harmless, I would have killed it myself.*--

_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


