I will start to collect random thoughts and comments on details of the Quake renderer and engine here. This will not discuss details of GUI or gameplay. Instead, it will discuss the advantages and disadvantages of the data structures within the file format used by Quake, to identify limitations and suggest changes. This is a Request For Comment in a sense, as I would like to have my reasoning checked on these issues. Feedback appreciated.
I sorted the current remarks into three categories:
Quoting John Carmack: "Levels carry their own textures with them. Quake never references the original wadfiles." This is bound to be a problem because of WWW/FTP bandwidth and disk space. Remember the situation with DOOM in 1994, prior to the release of DMADDS (and, later on, DeuSF and NWT): because separate distribution of sprite replacements was not possible, some people were already planning to upload PWADs that would have had to include a lot of IWAD sprites.
The DOOM engines supported a mechanism to define textures by separate TEXTURE1/2 lookups based on multiple patches, i.e. overlapping pixmaps mapped into the texture space. This mechanism had a couple of potential advantages:
The Quake qtest1 release seems to create the textures of a given surface as soon as that surface is potentially visible, by creating a lighted/shaded texture from the mipmap (selected appropriately for the distance) and the lighting/shadow map (given by the surface itself). These preprocessed (shaded) textures are kept in memory, or, using hardware acceleration, would be transferred to on-board texture memory.
It seems that it would be possible to introduce an additional step to this preprocessing, enabling the engine to add one or more, possibly semi-transparent, smaller mipmaps to a larger, rectangular and opaque base mipmap. Efficiency is scalable by the number of surfaces actually using this feature. A pointer to a patch list (list of mipmaps) would be required for each surface.
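The additional preprocessing step might look like the following sketch. All type and function names here are assumptions, not actual Quake code, and color index zero is taken as the transparent index (following the "color index zero interpretation" mentioned later in this document):

```c
/* Sketch: composing patch mipmaps onto an opaque base mipmap during
 * surface-cache preprocessing. Names and the transparent index (0)
 * are assumptions, not Quake's actual data structures. */
#include <stddef.h>

typedef unsigned char pixel_t;   /* indexed color */

typedef struct {
    int width, height;
    pixel_t *pixels;             /* width*height color indices */
} mipmap_t;

/* Blit 'patch' onto 'base' at offset (x,y); index 0 is transparent. */
void apply_patch(mipmap_t *base, const mipmap_t *patch, int x, int y)
{
    for (int py = 0; py < patch->height; py++) {
        int ty = y + py;
        if (ty < 0 || ty >= base->height) continue;
        for (int px = 0; px < patch->width; px++) {
            int tx = x + px;
            if (tx < 0 || tx >= base->width) continue;
            pixel_t c = patch->pixels[py * patch->width + px];
            if (c != 0)          /* skip transparent texels */
                base->pixels[ty * base->width + tx] = c;
        }
    }
}
```

The cost is one such blit per patch, paid once per surface-cache build rather than per frame, which is why efficiency scales with the number of surfaces actually using the feature.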
Example of use: Without significantly increasing memory requirements, easy implementation of navigational aids and hints would be possible; e.g. letters indicating an area, digits indicating level/storey, colored strips on walls, warning signs. All this is found in man-made environments for good reasons, but would be very memory-intensive if done by a multitude of separate mipmaps instead of patches.
There is a description of NATs on the Editing Info page. Quoting John Carmack: "There are a couple restrictions on texturing. In hindsight, there didn't need to be, but I don't have the time to fix it right now. Textures are "naturally aligned", rather than tied to a specific brush."
There are no real advantages to using NATs. Within any sequence of adjacent surfaces, alignment holds only as long as the classification of the surface does not change. Adjacent surfaces that flip from one orientation class to the other, while only slightly differently oriented with respect to each other, will have completely unrelated texture space coordinates. E.g. a change of a few degrees around 45, 135, 225, or 315 degrees in the xy plane will do the trick. Imagine two adjacent walls, both approximately parallel to the x=y vertical plane (angle nearly 45 degrees). One will be classified as facing X, the other as facing Y. Texture space coordinates depend on the values of X, Y, and Z, so in most cases there will be no alignment. Thus any decent editor will have to provide a preview and manual alignment anyway.
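The classification step can be made concrete. The function name and exact tie-breaking rule below are assumptions, but any dominant-axis rule produces the same discontinuity at 45 degrees:

```c
/* Sketch of how "natural alignment" picks a texture axis: the
 * dominant component of the face normal decides whether the wall
 * "faces" X, Y, or Z. Hypothetical helper, not Quake code. */
#include <math.h>

/* Returns 0, 1, or 2 for X-, Y-, or Z-facing surfaces. */
int nat_axis(double nx, double ny, double nz)
{
    double ax = fabs(nx), ay = fabs(ny), az = fabs(nz);
    if (az >= ax && az >= ay) return 2;
    return (ax >= ay) ? 0 : 1;
}
```

Two walls rotated 44 and 46 degrees from the x axis (normals roughly (0.72, 0.70, 0) and (0.70, 0.72, 0)) land in different classes, so their texture coordinates are derived from different world axes and cannot line up.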
Consider non-tiling textures: imagine a room shaped like a beehive in the xy plane, with six doors. The door texture is BIGDOOR, width 128. In the worst case, only 90 pixels will be used (at an angle of 45 degrees). There is no way to use identical door textures on all six doors and have the doors equally sized.
Yet another example: BIGDOOR is a DOOM texture, and NATs are a problem for the conversion of WAD maps. The texture scaling and tiling width depend on the wall's orientation within the world. A wall aligned with the x or y axis will be mapped 1:1 (all walls are aligned with z). Depending on the angle alpha, the scale increases as 1/cos(alpha), up to 1.41 for walls oriented at 45 degrees. Any texture alignment carefully done within the WAD file will inevitably be trashed.
Consider decals, i.e. faces, switches, symbols on walls. Their width will vary by a factor of up to 1.41 in the worst case. This is barely visible with small details of a texture, but will be visible for larger ones.
Usually, any rotation, translation or shearing of a texture can be expressed by assigning a pair of texture space coordinates to each of at least three vertices. With NATs, only a few permutations are possible (90 degree rotations, two inversions). NATs allow no real rotation of texture maps. Imagine being inside a slightly tilted shipwreck thrown onto a beach. Imagine a real earthquake: a small level might rock'n'roll on a fast machine in realtime. None of this works with NATs.
In summary, "naturally aligned textures" are a serious restriction. NATs mean that level design has to take this particular detail of Quake into account when editing the world geometry. In other words, the structure of a world is no longer separable from the textures used. This will make Quake maps both backward (DOOM WADs) and upward incompatible: maps will be Quake specific. In this regard, paradoxically, Quake is more restricted than DOOM.
Do not underestimate this. In 1998, with "the next game after Quake", this might come back to haunt us. We should be aware that well-done levels will be around for years to come, used with different engines, once we are editing 3D scenes. This restriction will lead to a lot of manual re-alignment of textures.
I wish there was an easy fix that would make it into the shipping release. Quoting John Carmack once again:
"It was originally a speed issue for building the surface caches, but a later insight rendered the problem moot. I would like to fix it, but it probably won't show until Quake 2.
The current limitation that annoys the map guys most is that texture shifting is by multiples of 16. If I put in The Fix, the primary benefit would be pixel level shifting, but it would also allow arbitrary rotation and scaling. That is an easily abused feature, though..."
Texture animation in DOOM was defined in the engine by built-in mechanisms and selected texture names. In the Quake qtest1 release, animation is reported to be controlled by a special character in the texture's name, namely "*____".
I thought about this in 1995, and ended up with a solution based on entities. The basic idea is:
Example of use: All texture animation could be controlled by entities (and, therefore, QuakeC), thus global, local and distributed changes would be possible, even synchronized with or triggered by an object's (thing's, monster's) state. Light sources could turn red throughout the level at once, digital counter clocks would be synchronized, warning signs could start flashing as soon as a certain monster is awake.
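The entity-based idea can be sketched as follows. All structure and field names are hypothetical, not Quake data structures: an animator entity owns a frame sequence and a trigger state, and every surface referencing it switches frames in lockstep.

```c
/* Sketch of entity-driven texture animation. One animator entity is
 * shared by many surfaces; triggering it (e.g. when a monster wakes)
 * animates all of them at once. Hypothetical names throughout. */
typedef struct {
    int num_frames;
    int frames[8];      /* mipmap indices, one per animation frame */
    int active;         /* set by a trigger or QuakeC code */
    int current;        /* current frame position */
} tex_animator_t;

/* Advance the animation one tick and return the mipmap to display;
 * inactive animators keep showing their current frame. */
int animator_tick(tex_animator_t *a)
{
    if (a->active)
        a->current = (a->current + 1) % a->num_frames;
    return a->frames[a->current];
}
```

Because the state lives in one entity rather than in each texture name, global, local, and synchronized animation all fall out of the same mechanism.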
"Sky" (position independend texture mapping) and "partly transparent" (color index zero interpretation or alpha support) are surface's or entities' appearances (see entities determining visual appearance) attributes, as are "morph", "ripple", or frame animation.
It makes perfect sense to use indexed colors and reference palettes with the mipmaps. However, the reference palette should be chosen by the object determining visual appearance (that is, the surface, or an entity determining visual appearance). In addition, the palette should contain an alpha value, too.
The palette references would again be used during shading preprocessing of potentially visible textures (see comment on lack of patches support). In indexed color display mode, the renderer could safely ignore the palette references (a color index mapping lookup might be used). The alpha value will then only be needed to decide whether or not a given color index is mapped to the transparent color index during preprocessing (again, see patches support). Hardware support for RGBA will be in widespread use by the end of this year, and could be utilized, in combination with patch/lightmap preprocessing, in true color display modes.
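In indexed-color mode, the alpha decision described above reduces to a per-texel remap. The names and the cutoff threshold below are assumptions, not part of any Quake format:

```c
/* Sketch of the proposed per-object palette with alpha: during
 * surface preprocessing in indexed-color mode, entries below an
 * alpha cutoff are remapped to the transparent index. Hypothetical
 * names and threshold. */
typedef struct { unsigned char r, g, b, a; } rgba_t;

enum { TRANSPARENT_INDEX = 0, ALPHA_CUTOFF = 128 };

/* Remap one texel index according to its palette entry's alpha. */
unsigned char remap_index(const rgba_t palette[256], unsigned char idx)
{
    return (palette[idx].a < ALPHA_CUTOFF) ? TRANSPARENT_INDEX : idx;
}
```

A true-color RGBA renderer would instead pass the full alpha through to the hardware, using the same palette data; only the indexed-color path needs this binary cutoff.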