[MUD-Dev] The impact of the web on muds

Miroslav Silovic silovic at zesoi.fer.hr
Tue Jan 27 13:34:16 CET 1998


cg at ami-cg.GraySage.Edmonton.AB.CA (Chris Gray) writes:

> :Heh, maybe we should work on optimizing a ray-tracer. :)
> 
> Hasn't that already happened in some games? I'm pretty out of touch in
> the rendering field, but I thought most polygon engines essentially did
> ray-tracing of one ray for each polygon they draw, so as to get the
> proper colour/texture for that polygon.

In fact, none of them does raytracing.

The Doom engine traces a ray through a 2d grid (basically the walls) to
find the point where it hits a vertical wall, then draws that entire
screen column. So at 320 pixels of horizontal resolution it traces 320
rays per frame.
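A rough sketch of that column casting in Python (the map, constants, and function names here are all made up for illustration, not Doom's actual code): one ray per screen column is stepped through a 2d grid until it enters a wall cell, and the hit distance sets the height of the vertical strip to draw.

```python
import math

# Hypothetical tiny map: '#' cells are walls, '.' cells are open floor.
GRID = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def cast_column(px, py, angle, step=0.01, max_dist=20.0):
    """Step a ray from (px, py) until it enters a '#' cell; return the distance."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == '#':
            return dist
        dist += step
    return max_dist

def render_heights(px, py, facing, fov=math.pi / 3, columns=320, screen_h=200):
    """One ray per column (320 for a 320-wide screen, as described above).

    Wall-strip height falls off as 1/distance, clipped to the screen.
    """
    heights = []
    for c in range(columns):
        a = facing - fov / 2 + fov * c / (columns - 1)
        d = cast_column(px, py, a)
        heights.append(min(screen_h, int(screen_h / max(d, 1e-6))))
    return heights
```

A real engine steps cell-to-cell (DDA) instead of by a fixed small increment, but the structure is the same.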

Depth buffer engines (Mesa is one, I think) don't trace rays at all:
they simply clip and fill the polygons, interpolating the Z coordinate,
and use an array with one cell per pixel to store Z coordinates. For
each pixel they compare the interpolated Z against the stored one, and
if it's closer to the eye, they plot the pixel. You can find the code
for this in my raytracer (ftp://ftp.roguetrader.com/pub/miro/sart)
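The depth test itself fits in a few lines. A minimal sketch in Python (hypothetical buffers and plot helper, not code from the raytracer above): one depth cell per pixel, and a fragment is plotted only if it is closer than what is already stored there.

```python
WIDTH, HEIGHT = 4, 3

# One cell per pixel: stored depth starts at infinity, color starts empty.
depth = [[float('inf')] * WIDTH for _ in range(HEIGHT)]
frame = [[None] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Plot (x, y) with interpolated depth z only if it wins the depth test."""
    if z < depth[y][x]:        # closer to the eye than what is stored
        depth[y][x] = z
        frame[y][x] = color

plot(1, 1, 5.0, 'far')
plot(1, 1, 2.0, 'near')    # overwrites: 2.0 < 5.0
plot(1, 1, 9.0, 'behind')  # rejected: 9.0 > 2.0
```

The `z` fed to `plot` is the value interpolated across the polygon during the fill, which is why no rays are needed.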

Finally, painter's algorithm engines draw polygons in sorted order,
from the furthest to the nearest. This is nice because if you're using
untextured polygons, you can let the X server fill the polygons with a
flat color (otherwise you still need texturing code, because the X
server's textures aren't perspective-correct). The sorting itself is
done with a BSP tree. A BSP tree is a binary tree with the following
properties:

	1) each node contains a single polygon

	2) all polygons in the left subtree of a node lie on one side
	   of that node's polygon's plane; all polygons in the right
	   subtree lie on the other side
You can construct BSP tree like this:

Pick a random polygon. Now split all the other polygons into left and
right classes. Some polygons will have to be split by the plane of the
chosen polygon (you can find polygon-splitting code in my raytracer;
it's used for clipping in the zbuffering code). Now recursively process
the left and right subtrees, if nonempty. The process terminates
because each subtree contains at least one polygon fewer than its
parent: the chosen polygon goes into the node, not into either subtree.

To render a BSP tree, start at the root. Find out which side of that
node's plane contains the viewpoint. First render (recursively) the
subtree on the opposite side of the plane, then draw the polygon
within the node itself, then finally render the near side.
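Both steps can be sketched in a small 2d toy (hypothetical code, not from my raytracer): "polygons" are line segments, the splitting plane is the infinite line through the chosen segment, and straddling segments are split at the crossing point.

```python
def side(seg, p, eps=1e-9):
    """Which side of seg's line is p on? 1 = front, -1 = back, 0 = on the line."""
    (ax, ay), (bx, by) = seg
    d = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
    return 1 if d > eps else (-1 if d < -eps else 0)

def intersect(seg, splitter):
    """Point where seg crosses the infinite line through splitter."""
    (ax, ay), (bx, by) = seg
    (px, py), (qx, qy) = splitter
    nx, ny = -(qy - py), qx - px                  # normal of splitter's line
    t = (nx * (px - ax) + ny * (py - ay)) / (nx * (bx - ax) + ny * (by - ay))
    return (ax + t * (bx - ax), ay + t * (by - ay))

def build(segs):
    """Pick the first segment as splitter, classify the rest, recurse."""
    if not segs:
        return None
    splitter, front, back = segs[0], [], []
    for seg in segs[1:]:
        sa, sb = side(splitter, seg[0]), side(splitter, seg[1])
        if sa >= 0 and sb >= 0:                   # wholly in front (or on the line)
            front.append(seg)
        elif sa <= 0 and sb <= 0:                 # wholly behind
            back.append(seg)
        else:                                     # straddles the plane: split it
            m = intersect(seg, splitter)
            a, b = seg
            (front if sa > 0 else back).append((a, m))
            (back if sa > 0 else front).append((m, b))
    return {'seg': splitter, 'front': build(front), 'back': build(back)}

def draw_order(node, eye, out):
    """Far subtree first, then the node's own segment, then the near subtree."""
    if node is None:
        return
    near, far = (('front', 'back') if side(node['seg'], eye) >= 0
                 else ('back', 'front'))
    draw_order(node[far], eye, out)
    out.append(node['seg'])
    draw_order(node[near], eye, out)

# Three parallel "walls"; the eye sits above all of them.
segs = [((0, 0), (1, 0)), ((0, 1), (1, 1)), ((0, -1), (1, -1))]
order = []
draw_order(build(segs), (0.5, 2.0), order)
# order now runs back-to-front: the y = -1 wall comes out first
```

Real polygon splitting in 3d works the same way, just with plane equations instead of the 2d cross product.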

You can reduce the number of subdivided polygons (or perform other
optimizations) by selecting /several/ random polygons, subdividing by
each, and keeping whichever works best. However, constructing the
absolutely optimal BSP tree is an NP-complete problem.

Rendering complexity is linear in the number of nodes, while
construction complexity is linear on average and quadratic in the
worst case.

There is another sorting algorithm, by Newell, Newell, and Sancha
(if memory serves me), but it's completely obsoleted by BSP trees.

You will find that while my raytracer uses BSP trees to accelerate
raytracing, it uses a completely different kind (pretty useless for
polygon sorting). Proper raytracing (i.e. one ray per pixel, the way
POV or Radiance or my own raytracer (with zbuffering off) do it) is
much too slow for any sort of game graphics except precalculated
images or animations.


As for texturing: when you're doing realtime animation, perspective
transformation of the texture is generally a rather slow operation
(mostly because most CPUs need a LARGE number of cycles for a
division). The usual answer is to calculate the perspective
transformation exactly only every 8-16 pixels and linearly interpolate
in between.

Finally, lighting, the way Quake does it, is another issue. You can't
store all the lighting as extra textures, because that would a) kill
all the available memory (since lighting isn't repetitive the way
textures usually are), and b) be too big for hardware rendering (as
most cheap rendering cards can store only about 4 MB of textures).
Instead, you can triangulate the scene into sufficiently small
triangles (search the web for adaptive radiosity algorithms and read
some articles on the topic) and store the lighting only at the
vertices. Then use bilinear interpolation to get the lighting info on
the rest of the triangle and overlay the result over the textures.
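A minimal sketch of that last step (hypothetical names and values; barycentric weights stand in here for the interpolation across the triangle): one light value is stored per vertex, interpolated at each point, and used to modulate the texel.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p in triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    wc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1 - wb - wc, wb, wc

def lit_texel(p, tri, vertex_light, texel):
    """Interpolate the per-vertex lighting at p and modulate the texel by it."""
    wa, wb, wc = barycentric(p, *tri)
    light = wa * vertex_light[0] + wb * vertex_light[1] + wc * vertex_light[2]
    return texel * light
```

Only three light values per triangle are stored, so memory stays small no matter how little the lighting repeats.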

--
I refuse to use .sig


