[MUD-Dev] Re: Ethernet NICS, maximum connections..mud testing.

Chris Gray cg at ami-cg.GraySage.Edmonton.AB.CA
Sat Aug 8 23:56:30 CEST 1998


[Ben Greear:]

 >Would anyone be interested in my Java tester program?  I think it could
 >be modified easily to work with most any MUD, providing you know something
 >about programming and/or Java.

Yes! Although no hurry - I won't be ready for straight telnet-style
testing for a while.

Which brings up a related technical question I've bumped into...

On the old Amiga version of my system, I used Amiga Exec messages for
communicating on the local machine between a client and the server.
Since the system has no VM, that was a pretty efficient mechanism.

On Linux, I'm using sockets. I have my simple client mostly working,
using the same old binary protocol as before (I'd like to stay
compatible). So, a big test is to use that client to build the scenario
from the 30,000 or so lines of scenario source. That works, but the
kicker is that it's *slow*! It's not much faster on this 300 MHz P-II
than it was on a 25 MHz 68040, yet the CPU is mostly idle while this
is going on. My perfmeter shows nothing, and if I run 'top' or 'ps'
while the load is in progress, they show the server and the client
together using under 2% of the CPU time, only occasionally spiking to
nearly 20%. Now, I'm using a SCSI-II disk on an Ultra-SCSI controller,
and I can cat the sources to /dev/null in under a second (presumably
from disk buffers), so the slowdown isn't disk I/O (the disk light
only very occasionally flickers during the process). Using 'time' on
the server and the client shows similar results: very low CPU usage,
both user time and system time.

I've added a bit of instrumentation to the server, by measuring the
time for the 'select' call, and comparing that against the requested
delay. It's never more than 1/100 second greater (that's the
resolution I use for queued events). So, what is happening? Does the
Linux kernel (I'm running 2.0.30) impose an artificial delay between
writing a message into a socket in one process and reading it from the
other end in another process? If so, is there anything I can do about it?
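
For reference, the instrumentation is nothing more elaborate than
something like the following sketch - the real code is tangled up with
the event queue, and the names here are made up for illustration:

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Wait on 'readfds' for at most 'delay_ticks' (1/100 second units)
       and complain if select() takes noticeably longer than asked. */
    static int timed_select(int maxfd, fd_set *readfds, long delay_ticks)
    {
        struct timeval tv, before, after;
        long took;
        int n;

        tv.tv_sec = delay_ticks / 100;
        tv.tv_usec = (delay_ticks % 100) * 10000;

        gettimeofday(&before, NULL);
        n = select(maxfd + 1, readfds, NULL, NULL, &tv);
        gettimeofday(&after, NULL);

        took = (after.tv_sec - before.tv_sec) * 100
             + (after.tv_usec - before.tv_usec) / 10000;
        if (took > delay_ticks + 1)
            fprintf(stderr, "select: asked %ld, took %ld (1/100 sec)\n",
                    delay_ticks, took);
        return n;
    }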

The only other measurement I can think of is to have the client send a
simple 'ping' message to the server and time the round trip. If that is
excessive, it would seem to confirm the above. If something
like this is indeed the problem, what does it imply for packets coming
in from and going out to remote systems? Will they also have that
delay imposed on them? I was hoping that my server would be efficient
enough (it runs everything via an interpreted language) that network
stuff would be the bottleneck, but I wasn't expecting this kind of
bottleneck!
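
The round-trip check could be as simple as this sort of thing on the
client side (a sketch only - the real protocol is binary, so the "PING"
message and the echo behaviour here are just assumptions for
illustration):

    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Send a tiny message on the already-connected socket 'fd' and time
       how long the reply takes, in milliseconds.  Assumes the server
       sends something back for this message. */
    static double ping_ms(int fd)
    {
        char buf[64];
        struct timeval t0, t1;

        gettimeofday(&t0, NULL);
        write(fd, "PING\n", 5);
        if (read(fd, buf, sizeof(buf)) <= 0)
            return -1.0;
        gettimeofday(&t1, NULL);

        return (t1.tv_sec - t0.tv_sec) * 1000.0
             + (t1.tv_usec - t0.tv_usec) / 1000.0;
    }

If that comes back as a sizable fraction of a second for a purely local
connection, that would pretty much confirm the delay is in the kernel's
socket path rather than in either program.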

--
Chris Gray     cg at ami-cg.GraySage.Edmonton.AB.CA



