[MUD-Dev] AI not worth doing in our games?

Sasha Hart hart.s at attbi.com
Mon Dec 9 20:46:56 CET 2002


[Jeff Freeman]

> I want to say: In most cases we don't need smarter AI.  We can
> achieve the same or better results with more diverse AI and/or
> some sleight-of-hand.

It depends on the problem you are trying to solve. A screwdriver
makes a very poor hammer, even if it is a very good screwdriver.
Sometimes you can fake it, sometimes you can't.

> e.g. A "real" AI for pigeons in the park would be big, complex,
> and result in flocks of pigeons wandering around the park.

Not at all.

First, it need not be big or complex. Many approaches fall under
the heading of 'AI,' and even more under that label as applied to
games - although the latter, which is also the context of the
current conversation, usually excludes giants like ACT-R or even,
for that matter, neural networks.

Second, leaving that aside, if you wrote something even remotely
like a pigeon, you would get a lot more possibilities out of it than
a random walk of completely identical pigeons, stuck in the corners,
bobbing their heads in unison, and allowing players to step on them.
(I am highlighting the stupidities because it is not obvious to me
that we need 'smart' so much as we need 'not horribly stupid.'
Horribly stupid is almost the state of the art right now, which is
why I appreciate attempts to solve such problems).

> We can make pigeons do that without a big, complex AI, though.  So
> if that's the desired result, we should just do that.

That's usually not how it works. "If we want to get the input from
the user, we should just do that. Why worry about handling these
wacky ASCII codes?" Because handling wacky ASCII codes is part of
the solution. You either take input from the user or you do not;
you do not 'fake it.' You might decide to represent only the 13 most
common letters of the alphabet, but you wouldn't be solving the
whole problem unless no one ever cared about having half the
alphabet missing.

This may be an argument that we don't need to attack ambitious goals
like "pigeons which learn to avoid people on the basis of the
appearance of their pants fabric and their past mistreatment of
individual pigeons." We really don't NEED to (although if someone
did a nice job with that, it might be pretty cool, and more
importantly it might raise the bar or provide techniques that are
generally usable; so I won't complain if they do). But certainly
that sounds unreasonably expensive to me, and not all that
desirable. These are valid considerations in my making the decision.

Certainly here we run into a problem that occurs in programming
generally: how much of a task we want performed automatically (in
other words, how many levels of abstraction to use, and how abstract
to get). It is always a trade-off, and the trade-off is exactly
between the initial effort of thinking about the problem and
implementing its solution, versus the size of the set of problems
the program solves and/or its future extensibility - and therefore
the time that will have to be spent adding essential future
features, refactoring, or just rewriting code.
It's tricky too in that it varies - sometimes a little thinking gets
a MASSIVE set of problems solved (y = x + 5 covers an infinite set
of cases; it doesn't profit you to think only about the numbers
between 1 and 100), whereas sometimes you need to do a lot of work
to get even something simple done.
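The y = x + 5 point can be made concrete in a few lines of Python (a
toy illustration of mine, not from the original discussion):

```python
# Two ways to 'solve' y = x + 5.

# Enumerated: a table of only the cases someone thought to write down.
lookup = {x: x + 5 for x in range(1, 101)}  # useless outside 1..100

# Abstracted: one line of thinking covers every case at once.
def f(x):
    return x + 5

f(42)    # 47, same as lookup[42]
f(1000)  # 1005; lookup[1000] would raise KeyError
```

The general formula costs no more effort to write than a handful of
table entries, yet solves infinitely many cases - which is the lucky
end of the trade-off.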

Many (even relatively simple) goals need constructs more abstract
than a determinate script, however. The sad thing is that we are
willing to abstract like crazy when it comes to technical features
(and I'm glad of it - look at the OS realm), but by comparison most
of what ends up even in the NPC AI that is *supposed* to be good is
hard-wired scripting.

> I think in most cases, the desired behavior we want to see winds
> up being just like that: Easily accomplished from a top-down
> approach (where you just make the mob do what you want it to),

There's not really a difference except in degree of abstraction.

For example, you can consider backprop a method of function
approximation. If you think about it that way rather than "first we
make a neuron simulation, then magic happens and the AI talks"
(which is nonsense anyway), then it becomes apparent that AI often
*does* attack well-defined goals; they are just more abstract goals
than the ones hard-wired scripts solve. Now, function approximation
is only useful if it is fairly general with regard to what functions
can be approximated, and that is really the point of it. But what it
does can be understood comparatively simply - given some kind of
interaction with a function, produce your own function which is as
similar as possible (minimizes error, etc.). This is top-down - it's
just abstract, so the description of the solution has lots of "this
thing we found out, whatever it is" rather than a certain part of
the contents of a certain register.
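As a sketch of that view - reduced here to a single linear unit, so
there is no real backprop, just the same error-minimization idea on
the smallest possible 'network' (everything below is my illustration,
with a made-up target function):

```python
import random

random.seed(0)

def target(x):
    # The 'unknown' function: we only ever see its input/output pairs.
    return 2.0 * x + 1.0

samples = [(x, target(x)) for x in (random.uniform(-1, 1) for _ in range(50))]

# Our candidate function approx(x) = w*x + b, improved by gradient
# descent on squared error - a top-down goal ('be as similar as
# possible'), stated abstractly, without scripting the answer.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    for x, y in samples:
        err = (w * x + b) - y  # prediction error on this sample
        w -= lr * err * x      # gradient step for w
        b -= lr * err          # gradient step for b

# w and b end up close to 2.0 and 1.0 - 'this thing we found out,
# whatever it is,' rather than a value someone typed in.
```

Nothing here knows what the target is; the goal is only "minimize
error," which is exactly the abstract-but-top-down character being
described.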

Abstraction is utterly useful and essential. Once you've written
fairly general code, you get a lot of outcomes that are kind of
compressed in one general formula. The program is kind of an
approximation of a function itself, a concise version of an enormous
set of mappings between input and output. The point is that you can
easily approach the same problems top-down, and to some extent you
have to if you aren't just writing something and then waiting for
stuff to emerge.

> and much more difficult to accomplish from a bottom-up approach
> (where you make the mob 'smart' and then try to convince it that
> it wants to do whatever it is you wanted it to do).

> It doesn't matter if the pigeons aren't really looking for food,
> as long as it looks like they're looking for food.

The usefulness of this observation varies as a function of
interactivity. The more freedom you give your players, the more it
matters.

If you spilled seed in front of the pigeon who was "looking for
seed" (wandering around and pecking) and he didn't even notice, or
if he pecked the seed which just stayed there, that would be a lousy
fake. Getting the pigeon to 'really look for food' is not much more
complex than having him move around and then hang out when he found
food and eat it (e.g., he makes pecking motions and it disappears,
even better if food disappears where he pecked rather than somewhere
else).

If you let a player get a pigeon and lock him in a cage and the
pigeon never dies, that's OK but it would be less weird if the
pigeon died (which really just means that his graphic fell over,
maybe made a plop sound, and he stopped doing things like eating and
moving. Of course I am not demanding that we simulate changes in
brain chemistry). It would be less weird if the pigeon got full or
sick of eating seed than if you could literally feed him seed all
day (simple solution: keep a bit of state about how much has been
eaten, and if above some amount don't eat).
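That "simple solution" is genuinely small. A sketch of the whole
pigeon as described - wander, peck seed that then disappears where he
pecked, and stop when full (all names hypothetical, not from any real
codebase):

```python
import random

FULL = 5  # how much seed before this pigeon loses interest

class Pigeon:
    def __init__(self):
        self.pos = 0    # position in a one-dimensional 'park'
        self.eaten = 0  # the bit of state about how much has been eaten

    def update(self, seed_at):
        # seed_at: set of positions currently holding spilled seed.
        if self.pos in seed_at and self.eaten < FULL:
            seed_at.discard(self.pos)  # seed disappears where he pecked
            self.eaten += 1
            return "peck"
        self.pos += random.choice((-1, 1))  # otherwise, wander
        return "wander"

random.seed(1)
park = {1, 2, 3, 4, 5, 6}  # spilled seed
bird = Pigeon()
for _ in range(300):
    bird.update(park)
# bird stops eating at FULL even if seed remains; what he ate is gone.
```

A player spilling more seed is just adding a position to `park`; the
same few lines handle it, which is the point about solving many cases
at once rather than scripting each one.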

We probably don't need to model seed going down the gullet.  But the
only way you can really prevent people from noticing the above kind
of everyday stuff (pigeons don't die, don't look for food, don't
eat, don't move, can't fly, get stuck in corners) is if you
straitjacket them. If you build a city out of cardboard cutouts,
it's fine as long as people can't move behind them; but if they can,
you're lost - any kind of 'faking it' that could handle that case
would effectively be building up the whole thing anyway.

Again, consider the programming to be writing compressions of long
lists of possibilities and it becomes obvious. You can sometimes
reduce the complexity of your code by building the city out of
cardboard cutouts, as it were, but the problem here is that we are
often writing very interactive programs in which it often DOES pay
to handle lots of cases at once. Clearly there's a balance to be
struck, and clearly some things are better ignored than others (none
of my horses really need modeling of the femoral artery,
thanks). But without getting specific it's far too easy to take this
too far and begin concluding that everything should be built out of
cardboard cutouts. The problems with that are all too apparent in
implementation.

> The areas in which faking it just won't do are areas of
> competition with players.  But even in those areas, the players
> don't want the AI to be competitive anyway.

People beat each other and don't necessarily assume that the people
they beat are less intelligent in proportion to how often or how
severely they are beaten. I have to repeat that beating the player
is completely orthogonal to apparent intelligence (which is itself a
different matter than making improvements like ability to
discriminate, remember, and adapt in even basic, simple,
computationally cheap, easy-to-code ways - which themselves need not
be oriented toward beating the player nor toward directly
entertaining the player).

> Nothing wrong with making smarter AI just for the sake of making
> smarter AI, though.

> Well, depending on your deadlines, I suppose.

Many soluble problems go unsolved because people aren't interested
in them, are intimidated by them, or were discouraged by others from
even trying by the claim that they weren't worth doing. I'd argue
that the problems we want to fix in
game agents are not only worth attacking, but many of them are quite
tractable anyway. Even in a commercial context it is valuable to do
a better job and to at least reap the profits of other people's
research.

There really isn't any good argument for why no one should try. I
will easily concede that Everquest & co. probably shouldn't waste
their time (though I would be pleased if there were more
behavioral/causal detail in such games), or that an employee writing
a MUD shouldn't waste his company's money if what they really want
is rocks with voice acting and hitpoints with legs.

Sasha


_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


