[MUD-Dev] Re: Ethernet NICS, maximum connections..mud testing.

Ben Greear greear at cyberhighway.net
Sun Aug 9 12:09:09 CEST 1998


On Sat, 8 Aug 1998, Chris Gray wrote:

> [Ben Greear:]
> 
>  >Would anyone be interested in my java tester program?  I think it could
>  >be modified easily to work with most any MUD, provided you know something
>  >about programming and/or Java.
> 
> Yes! Although no hurry - I won't be ready for straight telnet-style
> testing for a while.

Check my web page (see .sig at bottom).  It's under the Downloads link.

> 
> Which brings up a related technical question I've bumped into...
> 
> On the old Amiga version of my system, I used Amiga Exec messages for
> communicating on the local machine between a client and the server.
> Since the system has no VM, that was a pretty efficient mechanism.
> 
> On Linux, I'm using sockets. I have my simple client mostly working,
> using the same old binary protocol as before (I'd like to stay
> compatible). So, a big test is to use that client to build the scenario
> from the 30,000 or so lines of scenario source. That works, but the
> kicker is that it's *slow*! It's not that much faster on this 300 MHz
> P-II than it was on a 25 MHz 68040. But, the CPU is mostly idle while
> this is going on. My perfmeter shows nothing, and if I run 'top' or
> 'ps' while the activity is going on, they show that the server and
> the client together use under 2% of the CPU time, only very occasionally
> going to nearly 20%. Now, I'm using a SCSI-II disk on an Ultra-SCSI
> controller, and I can cat the sources to /dev/null in under a second
> (presumably from disk buffers), so the slowdown isn't disk I/O (the
> disk light only very occasionally lights during the process). Using
> 'time' on the server and client shows similar results - a very low
> CPU usage, both for user time and system time.
> 
> I've added a bit of instrumentation to the server, by measuring the
> time for the 'select' call, and comparing that against the requested
> delay. It's never more than 1/100 second greater (that's the
> resolution I use for queued events). So, what is happening? Does the
> Linux kernel (I'm running 2.0.30) impose an artificial delay between
> putting a message into one socket and reading it from the other end in
> another process? If so, is there anything I can do about it?

Not that I know of.  You do know that select() sleeps if there is
nothing available, right?  Other than that, I don't know enough
about your code to make a better guess.
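For what it's worth, here's a rough sketch in C (not your code, just the
general shape, with a made-up wait_for_data() helper) of a select() call
with a timeout, wrapped in the kind of gettimeofday() instrumentation you
describe, so you can see how long each wait really took versus what you
asked for:

#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Wait up to 'centisecs' hundredths of a second for data on 'fd'.
 * Prints how long select() actually slept versus the requested delay. */
static int wait_for_data(int fd, long centisecs)
{
    fd_set readfds;
    struct timeval timeout, before, after;
    long elapsed_ms;
    int ready;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    timeout.tv_sec  = centisecs / 100;
    timeout.tv_usec = (centisecs % 100) * 10000;

    gettimeofday(&before, NULL);
    ready = select(fd + 1, &readfds, NULL, NULL, &timeout);
    gettimeofday(&after, NULL);

    elapsed_ms = (after.tv_sec - before.tv_sec) * 1000
               + (after.tv_usec - before.tv_usec) / 1000;

    /* ready == 0 means we slept the full timeout with nothing to read */
    printf("select: ready=%d, waited %ld ms (asked for %ld ms)\n",
           ready, elapsed_ms, centisecs * 10);
    return ready;
}

If the waited figure tracks the requested delay and select() keeps
returning 0, the server is simply sleeping because no data has arrived
yet, which would point back toward the client side rather than the kernel.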

Some questions:

How big are the packets you send between client and server?

Are you running between two machines?  If so, try to benchmark it
against bulk transfer (ftp).

Do check ping.

Try some other server/client interaction and see how it performs.
(ftp might work here as well, depending on your setup.) 
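Along those lines, here's about the simplest loopback round-trip timer I
can think of (plain C, a throwaway one-byte echo child, hypothetical port
5555, nothing to do with your binary protocol); it also covers the 'ping'
measurement you mention below:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PORT   5555
#define ROUNDS 1000

int main(void)
{
    struct sockaddr_in addr;
    struct timeval start, end;
    int listener, sock, i;
    double total_ms;
    char c = 'p';

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    listener = socket(AF_INET, SOCK_STREAM, 0);
    bind(listener, (struct sockaddr *) &addr, sizeof(addr));
    listen(listener, 1);

    if (fork() == 0) {                  /* child: one-byte echo server */
        int conn = accept(listener, NULL, NULL);
        char b;
        while (read(conn, &b, 1) == 1)
            write(conn, &b, 1);
        _exit(0);
    }

    sock = socket(AF_INET, SOCK_STREAM, 0);
    connect(sock, (struct sockaddr *) &addr, sizeof(addr));

    gettimeofday(&start, NULL);
    for (i = 0; i < ROUNDS; i++) {      /* send one byte, wait for echo */
        write(sock, &c, 1);
        read(sock, &c, 1);
    }
    gettimeofday(&end, NULL);

    total_ms = (end.tv_sec - start.tv_sec) * 1000.0
             + (end.tv_usec - start.tv_usec) / 1000.0;
    printf("%d round trips, %.3f ms average\n", ROUNDS, total_ms / ROUNDS);

    close(sock);
    return 0;
}

If the average here comes out well under your 1/100 second tick while your
real client/server exchange stays slow, the time is probably going into how
the two programs take turns, not into the kernel's socket path.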

> 
> The only other measurement I can think of doing is to have a simple
> 'ping' message from client to server, and time the round-trip. If that
> is excessive, that would seemingly confirm the above. If something
> like this is indeed the problem, what does it imply for packets coming
> in from and going out to remote systems? Will they also have that
> delay imposed on them? I was hoping that my server would be efficient
> enough (it runs everything via an interpreted language) that network
> stuff would be the bottleneck, but I wasn't expecting this kind of
> bottleneck!
> 
> -- 
> Chris Gray     cg at ami-cg.GraySage.Edmonton.AB.CA
> 
> -- 
> MUD-Dev: Advancing an unrealised future.
> 


Ben Greear (greear at cyberhighway.net)  http://www.primenet.com/~greear 
Author of ScryMUD:  mud.primenet.com 4444
http://www.primenet.com/~greear/ScryMUD/scry.html