[MUD-Dev] Quake II has gone GPL

J C Lawrence claw at kanga.nu
Thu Jan 17 03:15:49 CET 2002


On Wed, 16 Jan 2002 23:06:00 +0000 
Nicholas E Walker <new at gnu.org> wrote:
> On Wed, Jan 16, 2002 at 03:18:41PM -0800, J C Lawrence wrote:

> I am a free software bigot.  

<nod>

> I am also a good software bigot.  If you keep the hit-points or
> item list of my character stored in my client, then I can and may
> change it, with or without the source code.  

There are ways to detect and (largely) prevent this.
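One way to do it (a minimal sketch in Python; every name here is invented for illustration, not from any real MUD codebase): keep the authoritative copy of hit points on the server, apply state changes only through server-side rules, and treat anything the client reports purely as a value to cross-check.

```python
# Sketch: server-authoritative state.  The client may *display* hit
# points, but the server never trusts a client-reported value.

class ServerCharacter:
    def __init__(self, name, hit_points):
        self.name = name
        self.hit_points = hit_points  # authoritative copy lives server-side

    def apply_damage(self, amount):
        # State changes happen only through server-side rules.
        self.hit_points = max(0, self.hit_points - amount)

    def check_client_report(self, reported_hp):
        # Detect tampering: a client claiming different state is flagged.
        return reported_hp == self.hit_points

char = ServerCharacter("claw", 100)
char.apply_damage(30)
print(char.check_client_report(70))    # honest client
print(char.check_client_report(9999))  # tampered client
```

The point is not the class itself but the placement of trust: the client copy is a cache for display, never an input to game rules.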

> What comes to mind as the best way to secure a system is to trust
> only the code that will be under your control, and to design your
> interfaces properly.  If some person can modify their client, or
> write a new one, and come out with an advantage or a way to crash
> a remote process (maybe one of your game servers), then something
> is obviously wrong.

Quite true, but it's also very simplistic.

  Designing secure compartmentalised complex protocols is not easy.

  Designing internally secure compartmentalised logical systems that
  make no cross-compartment exposures is hard.

  Proving that a design is correct for a non-trivial (which
  basically means unbounded) input space is also hard.

  Maintaining the correctness of such a design across growth,
  requirements changes, marketing demands, versionitis, etc. is very
  hard.

  Demonstrating that an implementation is a faithful version of the
  design, and exposes nothing beyond what the design does, is very
  hard.

Been there, done that.  There are very good reasons that MoD/DoD
projects in this sort of space (usually SEI level 4 or 5 projects)
spend multiple years in just the design stage, then further multiple
years in the implementation design stage (not a byte of code has
been written in this time), then yet further years doing a
pseudo-code implementation, and then a couple more years after that
doing the real-code implementation.

  Real life example: 

    1.3 million LOC project, 36 competent software engineers (no
    junior engineers), total time for project from inception to
    delivery: 7.5 years.

    That's an average of roughly 4,800 lines of code per engineer
    per year.

    FWIW they spent the first 3 years in design, a year in review,
    the next two and a half years in pseudo code and pseudo code
    review, and then two years translating the pseudo code to real
    code and reviewing that.

FWVLIW the case cited also came in on schedule and under budget.

Remember: The exploiting client has no effective runtime CPU,
resource, or cost limits -- but you do have those limits, and quite
hard ones.
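One concrete way to enforce your own hard limits (a sketch only; the class and numbers are invented for illustration) is a per-client token bucket, so a flooding or exploiting client burns through a bounded server-side budget instead of unbounded CPU:

```python
import time

# Sketch: a per-client token bucket.  The attacking client can send as
# fast as it likes; the server spends a bounded amount of work per client.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate              # tokens replenished per second
        self.tokens = burst           # current budget
        self.burst = burst            # maximum budget
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # drop or throttle the request

bucket = TokenBucket(rate=5, burst=10)
accepted = sum(1 for _ in range(100) if bucket.allow())
print(accepted)  # roughly the burst size when 100 requests arrive at once
```

In a real server you would keep one bucket per connection (or per account) and tune rate and burst against your measured per-request cost.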
    
> If I am showing my ignorance of the design of MMORPG systems,
> please forgive me.  

It's not that; it's that the problem looks simple from the outside
but rapidly becomes a tarbaby once you try to work with it --
especially for something with as softly defined boundaries as a MUD.

> In my experience with developing distributed systems (I mean
> systems where multiple processes are participating, not "peer to
> peer", just to clear that up), designing with secure interfaces
> and appropriately located logic is standard stuff that happens
> before any code is written.

True, but that's a much more constrained problem set, where
inter-node and link latency is not an issue, link reliability is not
an issue, user performance perception is less of an issue, and you
have reasonable control over the RPC/IPC space and rates.

> For instance, can you imagine an on-line ordering system where the
> total cost of your order was stored on the client, and that is
> what got billed?  Even if the cost is stored on the client so that
> it can be conveniently viewed, the server must (i hope!) do some
> accounting and recalculate the cost before the billing is done.
> If a gaming system (any system) bases critical decisions on
> untrusted data, something is wrong.

Which is the reason Raph has this as one of his laws.

> So, as a free software bigot, I respond that the only way to write
> secure software is to write good code with secure interfaces.
> Even bad code with good interfaces shouldn't cause you any
> problems.  Security through obscurity is an excuse for poorly
> designed software.

Security is all about risk management, and in particular risk versus
cost/benefit management -- or as I like to put it, it's all about
intelligent assessment.

  Security thru obscurity is not the best tool, but it is a tool and
  can be a valuable and effective tool.  

The problem is that it is rarely used well or with correct analysis
of its values.

Security thru obscurity works, and works more often than not if
carefully applied.  The problem is that it's unreliable, and the
metrics for predicting when it will fail are also unreliable.
However, that doesn't mean that SWAGs aren't made and weighed
against the cost/benefits of "Doing It Right".

Stupid example:

  For several years I ran SSH 1.2.26 on a public server.  That
  version of SSH was well known (by me) to have security holes (root
  compromises) large enough to drive a truck thru.  I did nothing to
  defend myself against an SSHd exploit.  The box was regularly
  probed over the years, with several scores of thousands of
  attempted exploits run against it.

  This went on for several years.  The system was never cracked.

    Reason: It was an AlphaStation.  The exploits for SSHd which
    were in wide distribution were x86 specific.  They didn't work
    on non-32bit systems, and especially didn't work on non-x86
    instruction set systems.

  I knew this.  It was also why I didn't bother upgrading.  I knew
  the exploit was possible, but for various RL reasons upgrading the
  box at the time was difficult and inconvenient, so I didn't.  The
  probability of risk was very low (it would require a near-custom
  exploit), my exposure was small (I run very tight and twitchy
  HIDS), and the implications of a crack were serious but not
  catastrophic.  So, assessing the risks, the costs, and the
  benefits, leaving the hole in place and not upgrading the box came
  out significantly cheaper.

  Security thru Obscurity won, and rightly so.

Of course Diablo also learned that you do have to really think the
obscurity process thru and not just dismiss it as, "nobody will
bother".  Then again, Diablo was a much more attractive target than
I was (which fact was part of my calculation).

This also explains why for otherwise well administered secure
systems I prefer running non-x86 non-SPARC non-32bit CPUs.  They
cheaply increase the size of my breathing window given latency
between exploit creation and patch creation/installation.

> I am afraid that one day a software developer may be able to trust
> personal computers to keep information hidden from their users.

Actually not a terribly difficult thing to do, but that's another
matter.

--
J C Lawrence                
---------(*)                Satan, oscillate my metallic sonatas. 
claw at kanga.nu               He lived as a devil, eh?		  
http://www.kanga.nu/~claw/  Evil is a name of a foeman, as I live.
_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


