[MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence

Daniel.Harman at barclayscapital.com
Fri May 11 13:35:16 CEST 2001


-----Original Message-----
From: brian price [mailto:brianleeprice at hotmail.com]
Sent: 11 May 2001 00:27
To: mud-dev at kanga.nu
Subject: [MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence

> For purposes of fault tolerance, we need a datastore that can
> periodically be backed up in a fast and efficient manner, preferably
> without stalling the server.  Note that transaction capability (in
> db terms) is *not* a requirement, the capability of generating
> checkpoints by writing out the 'dirtied' (changed) portions of the
> database periodically will satisfy the datastore backup requirement.
> Checkpoints can be restored simply by starting with the last full
> backup and applying (in order) the saved changes since that backup
> occurred (can be done offline).

I disagree with you on a lot of points here, but I'll start with
this one: I think transactions are important in a MUD. They're the
best way to prevent duplicates caused by synchronisation problems
(which is how most of the EQ dupes I heard about worked). If giving
an item to someone else can be done with a single call to a
transactional 'switchObjOwnership()' type method, then you aren't
going to get either dupes or item loss when passing items.
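
To illustrate (a rough sketch only - 'Db' here is a hypothetical thin
wrapper over whatever RDBMS you pick, not any vendor's real API, and
exec() is assumed to run one statement and return the rows affected):

  // RAII guard: rolls back unless explicitly committed.
  class Txn {
  public:
      explicit Txn(Db& db) : db(db), committed(false) { db.exec("BEGIN"); }
      ~Txn() { if (!committed) db.exec("ROLLBACK"); }
      void commit() { db.exec("COMMIT"); committed = true; }
  private:
      Db&  db;
      bool committed;
  };

  // Move an item between owners atomically: the row changes once or
  // not at all, so the item can neither duplicate nor vanish.
  bool switchObjOwnership(Db& db, long itemId, long fromId, long toId)
  {
      Txn txn(db);
      // The WHERE clause guards against the item having already moved,
      // which is exactly the race the dupers exploit.
      if (db.exec("UPDATE item SET owner=? WHERE id=? AND owner=?",
                  toId, itemId, fromId) != 1)
          return false;             // Txn destructor rolls back
      txn.commit();
      return true;
  }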

> Thus our datastore requirements are:

>  1) frequent writes of dirty objects
>  2) infrequent reads of collections of objects
>  3) large number of classes
>  4) deep inheritance trees
>  5) fast and efficient backups thru use of checkpoints or equivalent

> An OTS or OS RDBMS is not overkill given these requirements, in
> fact, the entire class of available RDBMS solutions are
> underpowered, inefficient, slow, and expensive.  An oft touted
> feature of most RDBMS - SQL - is, in this case, completely
> unnecessary *and* imposes a significant performance hit for zero
> gain.  Worse, the use of efficient objects and resultant class bloat
> is practically impossible to represent in RDB terms without
> investing an insane amount of development time.

You previously said that infrequent reads were required, so I don't
see how the performance of an RDBMS is going to hurt your proposed
solution. Writes are generally fairly fast; it's the queries that are
slow. By not going for an RDBMS you have made any kind of reporting
functionality many times more difficult to implement. If you have a
large game, then I would imagine that measuring how many warriors
have weapons of greater than 'x' effectiveness is something you'd
want to do infrequently enough that writing a bespoke tool is a pain,
but frequently enough that having SQL is a feature. The same goes for
economy reports and the like. With a bespoke object store, any kind
of data-mining is just hideous. Anyway, a well tuned and designed
database can be remarkably fast.
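
For instance, with a hypothetical schema (the player and item tables
and their columns are made up - the point is the query is ad hoc):

  SELECT COUNT(*)
    FROM item i, player p
   WHERE i.owner_id = p.id
     AND p.class = 'warrior'
     AND i.type = 'weapon'
     AND i.effectiveness > 50;    -- 50 standing in for 'x'

No compiler, no bespoke report writer; you type it at a prompt.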

> IMO, the best solution is a persistent object store.  Not a full
> fledged OODB (whatever that is), but a collection based storage and
> retrieval system for serializable objects.  In C++, such a system is
> fairly easy to develop: combine a RTTI based object persistence
> layer with the idea of 'data objects' using proxy/accessor pattern
> (to hide object memory presence and control object memory lifetime)
> with an object cache and a simple db store consisting of one index
> (object ids) and one table with records of the form: <object id>,
> <class id>, <serialized object data>.

RTTI is plain slow and seldom justified. If you work with an
interface/implementation design pattern, then it's better to have a
persistence interface imho. Personally I'd have a couple of methods
that could stream and unstream an object (for transport between
servers - after all, we are talking about distributed large scale
muds here, right?), and a persist method to get it to write itself to
the db. None of these are a great deal of work. If you were to go
towards Java or C#, you could make this even more trivial with
reflection.
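
Something along these lines (a sketch - Stream and Db are placeholder
types, and Sword is just an example class):

  class Persistable {
  public:
      virtual ~Persistable() {}
      virtual void stream(Stream& out) const = 0;  // wire format, server to server
      virtual void unstream(Stream& in)  = 0;      // rebuild from the wire
      virtual void persist(Db& db) const = 0;      // write self to the db
  };

  class Sword : public Persistable {
  public:
      virtual void stream(Stream& out) const { out << id << damage; }
      virtual void unstream(Stream& in)      { in  >> id >> damage; }
      virtual void persist(Db& db) const     { db.save("sword", id, damage); }
  private:
      long id;
      int  damage;
  };

The virtual call dispatches straight to the right serialisation code,
so there's no RTTI lookup anywhere.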

> I've heard all the arguments against OODBMS over the years and all
> the arguments for RDBMS, and in this case at least, *none* of them
> hold any water.

I disagree. I think an RDBMS with a bespoke in-memory cache would be
the optimal solution.
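
By that I mean something like a write-through cache in front of the
database (again just a sketch, reusing the Persistable/Db placeholders
from above):

  #include <map>
  #include <set>

  // Reads hit memory; writes mark the object dirty, and a checkpoint
  // timer flushes the dirty set to the RDBMS periodically.
  class ObjectCache {
  public:
      Persistable* get(long id) {
          std::map<long, Persistable*>::iterator it = objects.find(id);
          return it == objects.end() ? load(id) : it->second;
      }
      void markDirty(long id) { dirty.insert(id); }
      void checkpoint(Db& db) {
          for (std::set<long>::iterator it = dirty.begin();
               it != dirty.end(); ++it)
              objects[*it]->persist(db);
          dirty.clear();
      }
  private:
      Persistable* load(long id);   // fault in from the db on a miss
      std::map<long, Persistable*> objects;
      std::set<long> dirty;
  };

You get the raw speed of the bespoke store for the common case, and
the RDBMS underneath for the reporting, failover and integrity.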

What about failover? A proper RDBMS will facilitate this. I get ill
thinking about having to write it for some kind of bespoke flat-file
object store.

It's interesting, because I have worked on two versions of a
large(ish) scale distributed fat-client system: one where we used
Sybase, and another where we used a bespoke flat-file system with an
in-memory cache, for 'performance' reasons. The flat-file system,
whilst initially fast, was in fact more trouble than it was worth for
the following reasons:

  A) No failover.

  B) Half-arsed transactions (i.e. they weren't always actually
  atomic...).

  C) Being non-relational, integrity was constantly being broken by
  people editing the data by hand.

  D) Writing simple reports meant pulling out the compiler.

  E) It didn't scale well. Whilst fast for a few users, it wasn't
  scalable. With an RDBMS, whilst you take an initial performance
  hit, it scales breadthwise.

  F) Start-up of the application was SLOW, as all the caches were
  loaded to the fat client.

  G) Couldn't have multiple instances of the DB that were
  synchronised, they tried to implement it, but it didn't work.

  H) It wasn't as reliable as a proper DB - it was a complex piece of
  code.

  I) You had to learn a whole new API to use it.

  J) Locking issues. If you have concurrent processes, this is another
  problem.

Now whilst you could write your system to avoid some (but I doubt
all) of these problems, I would need a lot of convincing that it
would be wise.

Dan

_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


