[MUD-Dev] Re: Disk v. Mem

Nathan Yospe yospe at hawaii.edu
Wed May 14 12:04:17 CEST 1997


On Tue, 13 May 1997, Cynbe ru Taren wrote:

:| what are the advantages of swapping out objects
:| to disk as opposed to keeping them in memory?
                                               
:Excellent question -- and the follow-on excellent question
:is "Ok, so why not just let the host virtual memory hardware
:do it for you?" :)

Which you will address later in this message...

:| do you really need to save that much memory?
:| do you need it for something else?

:At today's ram prices, depending on your budget, you may well not need
:to.  A large mud can run 100Meg + of ram, though, much of it
:infrequently used in many cases, and you might not want to devote ~$500
:worth of ram to the db if you can avoid it with a simple software
:fix...

I've been fighting to keep my database size down without restricting the
world (in other words, lazy copies all over the place, a memory manager that
reclaims dead objects _immediately_, and objects kept as small as possible),
but I've done very little toward actual swapping. As for the design of my
system, the active _process_ (or thread, more accurately) swapping is another
matter. I would like to initiate object swapping, but this strays outside my
area of expertise, at least on the target systems. I've done this sort of
work on Macintoshes and on BeOS boxes, but that's it.
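
For anyone curious what the lazy copies amount to: something along the lines
of a reference-counted copy-on-write handle, where reads share one underlying
object, the first write takes a private copy, and the last release deletes
the object on the spot. The C++ below is an illustrative sketch with invented
names, not code lifted from my server:

// Reference-counted copy-on-write handle, sketched for illustration only.
// Reads share one underlying object; the first write takes a private copy;
// the moment the last handle lets go, the object is deleted immediately.
#include <cstddef>

template <class T>
class LazyHandle {
public:
    explicit LazyHandle(T* obj) : rep_(new Rep(obj)) {}
    LazyHandle(const LazyHandle& other) : rep_(other.rep_) { ++rep_->count; }
    ~LazyHandle() { release(); }

    LazyHandle& operator=(const LazyHandle& other) {
        if (rep_ != other.rep_) {
            release();
            rep_ = other.rep_;
            ++rep_->count;
        }
        return *this;
    }

    const T& read() const { return *rep_->obj; }   // shared access, no copy

    T& write() {                                   // first writer gets a private copy
        if (rep_->count > 1) {
            Rep* fresh = new Rep(new T(*rep_->obj));
            --rep_->count;
            rep_ = fresh;
        }
        return *rep_->obj;
    }

private:
    struct Rep {
        explicit Rep(T* o) : obj(o), count(1) {}
        T*          obj;
        std::size_t count;
    };

    void release() {
        if (--rep_->count == 0) {   // dead object reclaimed on the spot
            delete rep_->obj;
            delete rep_;
        }
    }

    Rep* rep_;
};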

:| are objects so large that 40,000 of them fill mem quickly?

:On some servers, at least, objects take a minimum of 100 bytes
:each, and the sky is the limit after that as properties are
:added:  There may be hundreds of thousands of strings of about
:sixteen bytes each average size.

Yeerk! Suddenly glad I'm doing string compression, at least rudimentarily.
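
For anyone in the same boat: even something as crude as interning -- keeping
a single shared copy of each distinct string and handing out pointers into a
shared table -- buys a lot when the db holds hundreds of thousands of
sixteen-byte near-duplicates. A toy sketch in C++, not my actual scheme:

// Toy string interning: each distinct string is stored once and callers
// hold pointers to the shared copy.  Illustration only.
#include <set>
#include <string>

class StringPool {
public:
    // Return a pointer to the single shared copy of s, inserting it if new.
    const std::string* intern(const std::string& s) {
        return &*pool_.insert(s).first;
    }
private:
    std::set<std::string> pool_;
};

(The payoff obviously depends on how much duplication the strings really
have.)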

:| does too full a mem size inhibit the driver speed?

:Not for most practical purposes, as long as it all fits in
:physical memory.  If it gets big enough to start paging out
:to disk via the host virtual memory system, performance can
:drop off -dramatically-.  (I speak as someone who has on
:occasion worked with hundreds of megabytes of volume MRI
:datasets on machines with only dozens of megabytes of
:physical ram *wrygrin*...)

Try a 2 gig subset of conditions for a digital filter, and only 16 megs of
ram. Doing Fourier transforms of the matrix of conditions. Swapping
_crawls_.

:| isn't it more costly in speed to write to disk?

:Absolutely -- a factor of a million or so.  (Milliseconds
:to access disk vs nanoseconds to access ram.)  Meaning that
:if your server suddenly starts running at disk speeds
:instead of ram speeds, it may suddenly look a million times
:slower.

:Humans are amazingly sensitive to a slowdown of just
:a constant factor of a million. :)

*chuckle*

:| what kind of savings do you get versus this extra cost?
                                                        
:Depending on your situation, you may buy nothing at all,
:you may be able to support a bigger db than you could
:otherwise afford, you may be able to do a better job of
:virtual memory than the host OS/hardware would otherwise
:do, thus reducing the time spent waiting for disk, or
:you may be able to save money on hardware by not having
:to buy an extra gigabyte of ram.

A gigabyte. Shoot me if my database ever gets that big, would you?

:If you really need to access all of your db every second
:or so, then diskbasing just ain't gonna work for you. This
:may be true of some combat-style muds with simulation going
:on in every room on every cycle, say.

This is why, as mentioned in J C's reply a few messages back, I swap out
inactive areas from the update threads.

:If much of your db goes untouched for minutes, hours or days
:at a time, you may be able to save lots of ram by keeping the
:unused parts on disk, and only the frequently touched stuff in
:ram. (Which still isn't free, no?  Else I'd have a lot more of
:it floating around the house than I do. :)

True. On the other hand, not keeping _anything_ in ram is bad. I use a
disk-based system with the GURU project, but I keep the locally active
sectors in a ram cache.
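
The cache itself is nothing exotic -- a bounded table of sectors sitting in
front of the disk store, writing back and dropping the least recently touched
sector when it fills. A rough sketch, with all names (Sector, loadFromDisk,
writeToDisk, SectorCache) invented for the example rather than taken from
GURU:

// Rough sketch of a bounded sector cache in front of a disk store.
#include <cstddef>
#include <list>
#include <map>

struct Sector { /* world data for one region */ };

// Placeholder stubs standing in for the real disk store.
Sector* loadFromDisk(int /*id*/)         { return new Sector; }
void    writeToDisk(int /*id*/, Sector*) {}

class SectorCache {
public:
    explicit SectorCache(std::size_t capacity) : capacity_(capacity) {}

    Sector* fetch(int id) {
        std::map<int, Entry>::iterator it = cached_.find(id);
        if (it != cached_.end()) {              // hit: mark it most recently used
            lru_.erase(it->second.pos);
            lru_.push_front(id);
            it->second.pos = lru_.begin();
            return it->second.sector;
        }
        if (cached_.size() >= capacity_)        // full: push the coldest sector out
            evictOldest();
        Sector* s = loadFromDisk(id);           // miss: pay the disk price once
        lru_.push_front(id);
        Entry e;
        e.sector = s;
        e.pos = lru_.begin();
        cached_[id] = e;
        return s;
    }

private:
    struct Entry {
        Sector* sector;
        std::list<int>::iterator pos;
    };

    void evictOldest() {
        int victim = lru_.back();
        writeToDisk(victim, cached_[victim].sector);
        delete cached_[victim].sector;
        cached_.erase(victim);
        lru_.pop_back();
    }

    std::size_t capacity_;
    std::list<int>       lru_;     // ids, most recently used at the front
    std::map<int, Entry> cached_;  // id -> cached sector and its lru position
};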

:"Why not just let the host OS page the unused stuff out to disk?"

:The basic problem is that modern hardware swaps out units of 4K,
:whereas objects in muds tend to be 16-32 bytes long (given a
:reasonable server design - 100 bytes or so if it's spendthrift).

Mmm. There's also the problem that on some systems (like my own), an object
may actually be composed largely of pointers to other sectors of memory...
which is how I keep my objects small, but it does present some problems here.
I might overload new to allocate from local sectors and compress used
memory... worth a shot.
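
Concretely, the overloaded new I have in mind would be something like a
class-level operator new/delete that carves small objects out of per-class
pool chunks, so objects that reference one another tend to end up in the same
few pages. A sketch only; the sizes and names are made up, and a real version
would need a pool per object size:

// Class-level operator new/delete drawing small objects from pool chunks,
// so objects allocated together share pages.
#include <cstddef>
#include <new>
#include <vector>

class SmallObj {
public:
    void* operator new(std::size_t size);
    void  operator delete(void* p, std::size_t size);
private:
    int data_[4];                            // keep the object itself tiny
};

namespace {
    const std::size_t kChunkObjs = 256;      // objects carved per pool chunk
    std::vector<char*> chunks;               // raw chunks, never returned here
    void*              freeList = 0;         // singly linked list of free slots
}

void* SmallObj::operator new(std::size_t size) {
    // Note: a derived class of a different size would need its own pool.
    if (!freeList) {                         // pool exhausted: carve a fresh chunk
        char* chunk = new char[size * kChunkObjs];
        chunks.push_back(chunk);
        for (std::size_t i = 0; i < kChunkObjs; ++i) {
            void* slot = chunk + i * size;
            *static_cast<void**>(slot) = freeList;
            freeList = slot;
        }
    }
    void* slot = freeList;
    freeList = *static_cast<void**>(slot);
    return slot;
}

void SmallObj::operator delete(void* p, std::size_t) {
    *static_cast<void**>(p) = freeList;      // hand the slot back to the free list
    freeList = p;
}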

:So if a 4K page contains only one 40-byte object in use,
:it still has to stay in ram, even though 99% of it is not
:in use.  If you're willing to buy 100x more ram than is
:logically needed, this isn't a problem:  Otherwise, a software
:solution that swaps out smaller units of ram can be a big win.

:A viable alternative is to have your server move all the objects
:in frequent use into one spot in ram:  This leaves lots of pages
:which are 100% unused instead of 99% unused, which the host 
:OS/hardware can then swap to disk for you.  This is a very viable
:approach which for some reason I don't seem to see anyone using...

This is suddenly sounding _very_ appealing. Could you explain in more
detail how you would approach this? My first instinct, something I have
done before, is to allocate memory in chunks, then run a sort of copyover
to clean and compact chunks as new ones are called for, using chunks large
enough to encourage the host to swap the emptied ones out.
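
To make that instinct concrete: bump-allocate out of big chunks, and every so
often copy the surviving objects into a fresh chunk so the emptied chunks
become whole pages the host can page out, or that we can simply free.
Entirely a sketch with invented names; as written, the memcpy-style copy is
only safe for flat records, and the caller has to patch its own pointers to
the returned new addresses afterward:

// Bump allocation out of large chunks, with an explicit copyover that
// compacts the survivors into one fresh chunk and releases the rest.
#include <cassert>
#include <cstddef>
#include <cstring>
#include <utility>
#include <vector>

class CompactingArena {
public:
    explicit CompactingArena(std::size_t chunkBytes)
        : chunkBytes_(chunkBytes) { addChunk(); }

    // Carve `bytes` out of the current chunk, starting a new chunk if needed.
    void* allocate(std::size_t bytes) {
        assert(bytes <= chunkBytes_);
        if (used_ + bytes > chunkBytes_) addChunk();
        void* p = current_ + used_;
        used_ += bytes;
        return p;
    }

    // Copy the surviving objects into one fresh chunk, free the old chunks,
    // and return each survivor's new address in the same order as `live`.
    std::vector<void*> compact(const std::vector<std::pair<void*, std::size_t> >& live) {
        char* fresh = new char[chunkBytes_];
        std::size_t offset = 0;
        std::vector<void*> newAddrs;
        for (std::size_t i = 0; i < live.size(); ++i) {
            assert(offset + live[i].second <= chunkBytes_);
            std::memcpy(fresh + offset, live[i].first, live[i].second);
            newAddrs.push_back(fresh + offset);
            offset += live[i].second;
        }
        for (std::size_t i = 0; i < chunks_.size(); ++i)
            delete[] chunks_[i];               // emptied chunks go back to the host
        chunks_.clear();
        chunks_.push_back(fresh);
        current_ = fresh;
        used_ = offset;
        return newAddrs;
    }

private:
    void addChunk() {
        current_ = new char[chunkBytes_];
        chunks_.push_back(current_);
        used_ = 0;
    }

    std::size_t chunkBytes_;
    std::size_t used_;                         // bytes used in the current chunk
    char*       current_;                      // chunk being allocated from
    std::vector<char*> chunks_;                // every chunk still held
};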

   __    _   __  _   _   ,  ,  , ,  
  /_  / / ) /_  /_) / ) /| /| / /\            First Light of a Nova Dawn
 /   / / \ /_  /_) / \ /-|/ |/ /_/            Final Night of a World Gone
Nathan F. Yospe - University of Hawaii Dept of Physics - yospe at hawaii.edu



