[MUD-Dev] Quality Testing

Dave Rickey daver at mythicentertainment.com
Sat Oct 20 16:10:55 CEST 2001


-----Original Message-----
From: Koster, Raph <rkoster at soe.sony.com>
>> -----Original Message-----
>> From: Michael Tresca

>> Okay, THAT's disturbing.  If I understand this issue correctly:
>> 1) an effective QA methodology doesn't apply to a game with a
>>    bazillion variables in it.
>> 2) if you were to use said QA methodology, it would unacceptably
>>    delay the game beyond its release date -- such that it might
>>    no longer have a competitive edge.

> That sounds like ALL games to me. Remember, the commercial games
> market is driven by graphics. It takes six months to a year for
> most games to go from "decent graphics" to "crap graphics" in the
> eyes of the hardcore market.  You've got a window--hit it. You
> also have a "drop dead" date in the fall.  If you don't get your
> game onto the shelf by late September, it might as well not be
> there until January, because it'll miss the Xmas crush--retailers
> often won't even take it because they will have filled their
> retail shelf space with something else. This makes the importance
> of deadlines all the greater.

But who really wants to try to predict the state of game hardware
technology 4 years in advance?  Two years seems to be the limit:
you can safely say that hardware that represents the bleeding edge
now will be standard, or just below it, in 2 years.  In 4, the rules can
change completely.  If you can't produce the game within two years
of committing to a client engine, you're in trouble.

>>  This leaves us with the "let the players test it" approach,
>>  which I'm all for with Beta tests.  But how effective are betas
>>  in cleaning the game up enough for release?

> A lot depends on how you run said beta. They're great for stress
> testing, and really, you can't test some stuff any other
> way. For many other things, they are useless since the effort to
> actually track and verify all the input you get can be greater
> than the efforts of an in-house QA staff.  Basically, a lot
> depends on what you are looking to get out of the test.

Which brings us back to my original point: we got a *lot* more out
of our beta than stress testing or scaling information.  Our beta
testers *were* our QA.

Don't track, don't verify, just fix.  Quest developer A reads on the
Quest bug board that NPC B sends you to NPC C with item D, but NPC C
won't accept it.  Using the search facilities built into the content
development tools, developer A sees that developer D forgot to link
the quest trigger for NPC C to the right object number.  Developer A
fixes the problem, puts up a short "fixed, thanks" message in the
thread, and that is the end of the process.  No overhead, no lag, no
over-worked QA department slowing down the reaction time.  Directly
from beta tester to developer with no intermediaries.  The key to
the whole system is search functions that can take the information
likely to be provided by the player (in this case the NPC names) and
from that allow you to pull up all the data that comprised the
quest.
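The workflow above hinges on being able to index quest content by the identifiers a player is likely to report, such as NPC names. A minimal sketch of such a search facility (all class and field names here are hypothetical, not taken from any actual content tool):

```python
# Sketch of a quest-content index keyed by NPC name, so a single lookup
# from a bug report ("NPC B sends you to NPC C with item D...") pulls up
# every record that comprises the quest. All names are illustrative.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class QuestStep:
    giver_npc: str       # NPC who hands out this step
    target_npc: str      # NPC who should accept the item
    item: str
    trigger_object: int  # object number the quest trigger is linked to

@dataclass
class Quest:
    name: str
    steps: list = field(default_factory=list)

class QuestIndex:
    """Index every quest under each NPC name it mentions."""

    def __init__(self):
        self._by_npc = defaultdict(list)

    def add(self, quest):
        for step in quest.steps:
            self._by_npc[step.giver_npc].append(quest)
            self._by_npc[step.target_npc].append(quest)

    def lookup(self, npc_name):
        # Return every quest involving this NPC, de-duplicated,
        # preserving insertion order.
        seen, result = set(), []
        for q in self._by_npc.get(npc_name, []):
            if id(q) not in seen:
                seen.add(id(q))
                result.append(q)
        return result

# A tester posts: "NPC B sends you to NPC C with item D, but C won't
# accept it."  The developer searches on either NPC name:
quest = Quest("delivery", [QuestStep("NPC B", "NPC C", "item D",
                                     trigger_object=0)])
index = QuestIndex()
index.add(quest)
print([q.name for q in index.lookup("NPC C")])
```

With an index like this, the developer goes straight from the names in the bug post to the full quest data, including the trigger-object link that turns out to be wrong.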

If your problem is more complex, then tell the testers what kind of
data you need to diagnose it.  You can even get a great deal of
analysis done for you by telling them the nature of the system and
what the useful parameters of the data are.

Here's the thing: The testers have a great deal more knowledge about
how your system actually works than you do.  Given an adequate
context, they can tell you what's wrong, and even give you
suggestions on how to fix it.  It takes far less time for you to
provide that context and digest their feedback than it would to
simply gather the raw data from them, never mind come up with a
reasonable analysis.  And they'll ask the questions you didn't think
of.

The testers do not want to be treated as mindless information
probes; if you try to use them as nothing more than that, you'll get
little use out of them.  But if you allow them to do your first
stage analysis, and give them enough information and background to
make that analysis useful, they'll be overjoyed at the chance.

The most fundamental parts of the game (combat algorithm and the XP
curve) were built and rebuilt based on tester feedback.  On the XP
curve, they were literally provided *complete* information on how it
worked.  After they analyzed it, they found several issues with it,
*and* provided the solutions.  We're not talking about a peripheral
detail here, but the foundation that the rest of the game was built
on, and the system is as much the product of our testers as of
anyone drawing a paycheck.

--Dave Rickey (It's their world, I just work here)

_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


