Info from John Carmack (5)

Bernd Kreimeier (Bernd.Kreimeier@NeRo.Uni-Bonn.DE)
Mon, 3 Jun 1996 19:38:32 +0200 (MET DST)

Date: Mon, 3 Jun 1996 19:38:32 +0200 (MET DST)
From: Bernd Kreimeier <Bernd.Kreimeier@NeRo.Uni-Bonn.DE>
Message-Id: <199606031738.TAA05046@marvin.nero.uni-bonn.de>
To: quake-dev@gamers.org, bernd@marvin.nero.uni-bonn.de
Subject: Info from John Carmack (5)

----- Begin Included Message -----

From johnc@idcanon1.idsoftware.com Sat May 18 04:38 MET 1996
Mime-Version: 1.0 (NeXT Mail 3.3 v118.2)
Content-Transfer-Encoding: quoted-printable
X-Nextstep-Mailer: Mail 3.3 (Enhance 1.0)
From: John Carmack <johnc@idcanon1.idsoftware.com>
Date: Fri, 17 May 96 21:38:15 -0500
To: Bernd Kreimeier <Bernd.Kreimeier@NeRo.Uni-Bonn.DE>
Subject: Re: q&a (3)
Content-Type: text/plain; charset="us-ascii"
Content-Length: 7321

You wrote:
> Two shorts, one int. Two longs you might want to ignore.
>
> Q: are you using the "portals" mentioned in the worklog during PVS
> computation, as described in Seth Teller's PhD?

Similar idea, different implementation. Someone referred me to Teller's
papers after I had a non-robust version of Quake PVS running, and it
clarified some things in my mind, but it was definitely a different tack
from what I wound up doing. I certainly don't retain any of the portal
chain information for run time culling -- the map files are too huge
as it is. I use a gross leaf-to-leaf PVS bit table, then the hierarchical
BSP bounding volumes for frustum culling.
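
A minimal sketch of how a coarse leaf-to-leaf PVS bit table plus
hierarchical BSP box culling could fit together (all names and the
uncompressed table layout here are illustrative assumptions, not the
actual Quake source):

    /* Illustrative sketch only: coarse leaf-to-leaf PVS lookup followed
       by frustum culling against hierarchical BSP bounding boxes.  A
       real table would be run-length compressed per row. */
    #define MAX_LEAFS 8192

    typedef struct {
        float mins[3], maxs[3];     /* bounding volume of this node */
        int   children[2];          /* negative values index leaves */
    } node_t;

    typedef struct {
        float mins[3], maxs[3];
        int   first_surface, num_surfaces;
    } leaf_t;

    /* one bit per potentially visible leaf, one row per viewing leaf */
    static unsigned char pvs[MAX_LEAFS][MAX_LEAFS / 8];

    static int leaf_visible(int from_leaf, int test_leaf)
    {
        return pvs[from_leaf][test_leaf >> 3] & (1 << (test_leaf & 7));
    }

    /* walk the BSP, skipping subtrees whose boxes miss the frustum and
       leaves not marked in the viewer's PVS row */
    void mark_visible(const node_t *nodes, const leaf_t *leafs,
                      int node_index, int view_leaf,
                      int (*box_in_frustum)(const float *mins, const float *maxs))
    {
        if (node_index < 0) {                       /* reached a leaf */
            int leaf = -node_index - 1;
            if (leaf_visible(view_leaf, leaf) &&
                box_in_frustum(leafs[leaf].mins, leafs[leaf].maxs)) {
                /* queue leafs[leaf] surfaces for drawing */
            }
            return;
        }
        if (!box_in_frustum(nodes[node_index].mins, nodes[node_index].maxs))
            return;                                 /* whole subtree culled */
        mark_visible(nodes, leafs, nodes[node_index].children[0], view_leaf, box_in_frustum);
        mark_visible(nodes, leafs, nodes[node_index].children[1], view_leaf, box_in_frustum);
    }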

> Q: I understand the short-circuiting of view control on the client.
> Is it correct that only yaw/pitch (determining movement along the
> current LOS and aiming) are transferred to the server? Is this
> "hardwired" in the net packet structure?

The server only rarely sends view angle information to the clients
(entering a level, teleporters, etc.). The client has full control over
the view angles (all of them) most of the time, and sends all three
view angles to the server each frame. That control can be taken away
for things like tracking the monster that killed you when you are on
the ground.
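
A rough sketch of the per-frame message this implies; the field names
and layout below are guesses for illustration, not the actual Quake
network format:

    /* Hypothetical client->server move message, sent every frame.
       The point is only that all three view angles travel with it;
       the real wire format is not shown here. */
    typedef struct {
        unsigned char msg_type;          /* e.g. CLIENT_MOVE */
        float         pitch, yaw, roll;  /* client-owned view angles */
        short         forward_move;      /* movement intentions */
        short         side_move;
        short         up_move;
        unsigned char buttons;           /* attack, jump, ... */
    } client_move_t;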

>
> Q: if I understood correctly - mipmaps are created anew by qbsp
> each time, using a WAD file (old style? TEXTURE1/2, PNAMES?). There
> is no mipmap WAD lump.
>
> Suggestion: how about including the DOOM
> shareware/registered/commercial patches, PNAMES and textures on the
> Quake CD, too? Removing all the monster frame rotations will
> protect DOOM sales (as far as this is possible anyway). Permitting
> redistribution will ease conversions a lot. It increases the pool
> to draw [sic!] from. It looks good in the shops (added value). The
> artwork was a major part of DOOM's success, it deserves a mipmapped
> afterlife ;-).

The mipmaps are created by qlumpy into new-format wad files (just
about the only remaining use of them -- they were intended to be used
heavily, but I wound up going with the much superior search path /
directory hierarchy approach), but those are only referenced by QuakeEd
and qbsp, which copies the needed ones into the map .bsp file directly.
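
Assuming four mip levels per texture, the lump that qbsp copies into
the .bsp could look roughly like this (the struct below is an
illustrative guess, not the shipped format):

    /* Sketch of a mipmapped texture lump as it might sit in a WAD2
       file or in the .bsp: four 8-bit indexed images, each half the
       width and height of the previous one. */
    #define MIP_LEVELS 4

    typedef struct {
        char     name[16];
        unsigned width, height;          /* size of mip level 0 */
        unsigned offsets[MIP_LEVELS];    /* byte offsets from this header
                                            to each indexed image */
    } miptex_t;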

We used some DOOM graphics early on, but the palettes are different
enough that the artwork is damaged in the conversion. Artists tend
to be picky about showing degraded versions of their work :-)

Don't worry, there are lots of really good Quake graphics.

>
>
> Q: have you at some point thought about entities that are not
> occluded by surfaces, i.e. always rendered on top of everything
> else if within the view frustum (no PVS)?
>
> Purpose: there is no automap for good reasons. I have tried a
> look-through 2D automap with a DOOM-style renderer and found it
> confusing; a 3D one is probably even more useless.
>
> However, the concept of an (annotated) marker entity seems worth
> considering: an object that can be dropped, or shot at/attached to
> another entity, that is always visible even through walls (in a
> certain view mode, at least). It gives the player a sense of
> direction to move along, but he still has to remember the structure
> of the map. The advantages for hunting games are obvious: a tagged
> target will always be visible, but is still difficult to hunt down
> and trap.
>

I don't think that is a good idea. When an object shows through
its natural occluders, it gives a pretty serious optical illusion
that the things it is showing through are translucent, damaging the
rock-solid feel of the world.

>
> Q: this is a rather unspecific remark (thus lengthy). If I
> understood correctly, RGBA support is supposed to be a major
> benefit of upcoming hardware (i.e. might be available in 1997 :).
> It seems to me that the current engine design is biased towards
> indexed color mode. Examples:
>
> + the WAD2 palette now contains 256 RGB values; it might as well
> contain a NoOfColors field (256, 4096) and RGBA entries (32-bit or
> 48-bit) at virtually no cost (ignored in indexed color mode)
>
> + mipmaps and billboards implicitly reference one palette; they
> might as well explicitly reference a palette (ignored in indexed
> color mode)
>
> + billboards do not have mipmapping (no blending in indexed color
> mode), but they might as well have it (even without blending it is
> still LOD, and sprites will show more variation)
>
> Low-overhead suggestion: bright objects (light sources) might
> (implicitly) use a second, different palette in 24-bit modes.
>
> Followup: how is the lightmap+mipmap -> surface cache processing
> done in 24-bit mode? The same LUT with the same 256 colors? Or
> NoOfLightLevels*256 true-color values?

Some comments: originally, we maintained 8, 16, and 32 bit code
for all of our drawing, and we worked in every conceivable video
mode. This was a major pain in the ass to maintain.

16 bit color looked noticeably WORSE than 8 bit color, because while
we have smooth 16-color gradients of some non-primary colors that
look fine in 256 color mode (18 bits of color precision), in 15
or 16 bit mode there is a noticeable hue change every time you are
forced to drift a bit off of your ideal color.
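
One way to see the hue problem: an 8-bit palette entry can be placed
with the DAC's six bits per channel (the 18 bits of precision above),
while a 15 or 16 bit frame buffer snaps every pixel to a 5- or 6-bit
grid, so adjacent steps of a gradient round differently per channel.
A small illustration (not engine code):

    /* Quantize an 8-bit-per-channel color to RGB 5:6:5 and back.
       Neighboring shades of a non-primary color can pick up different
       per-channel rounding errors, which reads as a hue shift. */
    unsigned short pack565(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    void unpack565(unsigned short c,
                   unsigned char *r, unsigned char *g, unsigned char *b)
    {
        *r = (unsigned char)(((c >> 11) & 0x1f) << 3);
        *g = (unsigned char)(((c >>  5) & 0x3f) << 2);
        *b = (unsigned char)(( c        & 0x1f) << 3);
    }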

24 bit color looked better than 8 bit, but not a whole lot.

This is mostly due to good source images. For an arbitrary image
processing solution where you can chuck any old set of pixels at
it, any direct color mode is going to be a ton better, but when
you have control over your source data you can pick and choose to
make 8 bit outperform 16 bit, and save all the extra space and
processing time.

We do occasionally take a loss in the mip map creation process
for lack of good colors in the palette, but I do an error diffusion
during the resampling, which helps a lot.
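
The diffusion step might look roughly like the following: when a
resampled texel has no exact palette match, the residual error is
carried into the next texel instead of being thrown away (the palette
search and the simple one-dimensional diffusion here are assumptions):

    typedef struct { int r, g, b; } rgb_t;

    extern rgb_t palette[256];
    int nearest_palette_index(rgb_t c);    /* closest entry by squared distance */

    /* Reduce one row of box-filtered true-color texels to palette
       indices, diffusing the quantization error along the row. */
    void mip_row_to_indexed(const rgb_t *src, unsigned char *dst, int width)
    {
        rgb_t err = {0, 0, 0};
        for (int x = 0; x < width; x++) {
            rgb_t want = { src[x].r + err.r, src[x].g + err.g, src[x].b + err.b };
            int idx = nearest_palette_index(want);
            dst[x] = (unsigned char)idx;
            err.r = want.r - palette[idx].r;   /* carry the miss forward */
            err.g = want.g - palette[idx].g;
            err.b = want.b - palette[idx].b;
        }
    }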

Lighting direct color images hurts a fair amount, too. MMX is
really nice for that, but without it you either need huge tables
or lots of bit twiddling.
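
For comparison, a hedged sketch of both paths with made-up names: in
8-bit a lit texel is a single lookup into a precomputed light-level x
palette-index table, while direct color has to scale each channel per
pixel (or precompute a much larger table):

    /* 8-bit path: NUM_LIGHT_LEVELS * 256 precomputed remaps. */
    #define NUM_LIGHT_LEVELS 64
    extern unsigned char colormap[NUM_LIGHT_LEVELS][256];

    unsigned char light_indexed(unsigned char texel, int light)
    {
        return colormap[light][texel];            /* one lookup per pixel */
    }

    /* Direct color path without MMX: shift, mask and scale each
       channel separately (light is 0..255). */
    unsigned short light_565(unsigned short texel, int light)
    {
        unsigned r = (((texel >> 11) & 0x1f) * light) >> 8;
        unsigned g = (((texel >>  5) & 0x3f) * light) >> 8;
        unsigned b = (( texel        & 0x1f) * light) >> 8;
        return (unsigned short)((r << 11) | (g << 5) | b);
    }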

All that said, Quake is still my last indexed color engine. I still
think it is the right call for this moment in time, but the future
is definitely direct color.

> Followup: is the current 3D hardware worth taking into account in
> renderer design? I.e. are there ways to use RGBA, e.g. for
> processing lightmaps (as an alpha map implicitly blending with a
> solid black)? How does the lightmap+mipmap processing fit with
> on-board texture memory anyway?

A max-throughput engine is still going to use small, tiled textures
in hardware, just as in software. I think the image quality tradeoffs
of surface caches are worth the speed cost, though.

3DFX has done some light map building demos with the hardware, and
Rendition could certainly be programmed to do it. I suspect that
all of the Quake drivers will still build the surface caches in
software, because many of the current 3D boards have high setup
overhead per primitive, and drawing tons of little boxes would suck.
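
A sketch of what building one such surface in software might amount
to: modulate the texture with its light map and hand the whole block
to the card as a single texture (the 16-texel light sample spacing,
nearest-sample lighting, and names are assumptions for illustration):

    /* Combine an 8-bit texture with a coarse light map into one lit
       surface, which is then uploaded/drawn as a single primitive. */
    void build_surface(unsigned char *out,
                       const unsigned char *texture,
                       const unsigned char *lightmap,  /* one sample per 16 texels */
                       int width, int height,
                       const unsigned char colormap[64][256])
    {
        int samples_per_row = (width >> 4) + 1;
        for (int t = 0; t < height; t++)
            for (int s = 0; s < width; s++) {
                int light = lightmap[(t >> 4) * samples_per_row + (s >> 4)];
                out[t * width + s] =
                    colormap[light >> 2][texture[t * width + s]];
            }
    }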

Rasterization hardware is definitely the future. Soon you will not
be able to buy a computer without it, because it can be implemented
in the same single-chip video controllers currently used, and it will
become a checkbox item.

John Carmack

----- End Included Message -----