Quake Editing Digest Volume 1 : Number 5

quake-editing-digest       Monday, 11 March 1996       Volume 01 : Number 005 

Distributing Quake maps (long-ish, wordy and rambling :)
Re: Distributing Quake maps (long-ish, wordy and rambling :)
Re: Quake tools in C++
Re: Distributing Quake maps (long-ish, wordy and rambling :)
Distributing Quake maps (long-ish, wordy and rambling :)
Distributing Quake maps (long-ish, wordy and rambling :)
Re: Distributing Quake maps

----------------------------------------------------------------------

From: Tom Wheeley <tomw@tsys.demon.co.uk>
Date: Sun, 10 Mar 96 01:40:28 GMT
Subject: Distributing Quake maps (long-ish, wordy and rambling :)

Reading both Raphael's recent article in rgcqe and the Quake specs, it seems
to me that we do not want hundreds of textures cluttering people's maps.

Now, I don't know:

a) How close people are to anything resembling level editors. (Not close, I
should think, seeing how many `lookups' there are to be calculated in the
maps/*.bsp files.)

b) How much Quake's handling of textures (`surfaces') will change between this
test version (qkspec30) and the wondrous mythical `Final Version'.

However, my thoughts on these are:

a) If we agree on something now, perhaps with a bit of source, then a
standard for distributing levels can be set before we get rogue levels
making the rounds of the ftp/www sites...

b) I am assuming that storing the textures inside each map will not change
to a scheme of external references.

There is also the issue of including id's textures in distributed levels.

At the minimum, what I am proposing is a tool not dissimilar to DeuSF or
DeuTex, run locally by the end user. It will process a list of textures
which the program `statically links' into a final BSP file.

This control file could be designed in 3 ways.

. structure based, and thus virtually unmaintainable by humans, but easy
for programs to fiddle with.
. text file, for the most part using the `texture numbers' (for more info
on how the textures are referenced, see Quake Specs). This would be
created by the map editor, which would present the texture names to
the user.
. text file, but uses the texture names when specifying sources. This
would be the slowest (but not that slow) to process. It would be easily
maintainable by humans though :)

Using the 3rd format, it would look something like this: (The names are
complete fabrications!).

# Quake `MakeTex.cfg' file.

#StoredName    Source file                 Entry name
RedBrickWall   maps/test1.bsp              IDREDWALL     # Search a BSP file
Stuccoish      maps/test1.dir/entry02.lmp  IDSTUCCOISH   # Search multi MIP lump
SecretSrc      maps/test3.bsp              =12           # Entry 12 in a BSP

# End sample MakeTex.cfg file
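
For what it's worth, parsing that third format is near-trivial. A rough
sketch in C (the struct and function names are complete fabrications again,
just to show the tokenizing):

/* Sketch: parse one MakeTex.cfg line -- up to three whitespace-separated
   tokens, '#' starts a comment, and "=NN" means "entry number NN". */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

struct mtline {
    char stored[64];          /* StoredName */
    char source[256];         /* source .bsp / .lmp / PAK entry name */
    char entry[64];           /* entry name, empty for the 2-token form */
    long entrynum;            /* -1 unless the "=NN" form was used */
};

static int parse_mtline(char *line, struct mtline *m)
{
    char *hash = strchr(line, '#');
    if (hash) *hash = '\0';                       /* strip trailing comment */
    m->entrynum = -1;
    m->entry[0] = '\0';
    if (sscanf(line, "%63s %255s %63s", m->stored, m->source, m->entry) < 2)
        return 0;                                 /* blank or junk line */
    if (m->entry[0] == '=')
        m->entrynum = atol(m->entry + 1);         /* "=12" -> entry 12 */
    return 1;
}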

Unfortunately, the only magic number which can be tested for is the one in
BSP files, so there is no way to check that a miptex lump _is_ a miptex
lump (short of checking the differences between the offsets, possibly). Thus
raw .MIP (or whatever) files would have to be outlawed; they would have to
start with the very simple multiple-MIP header (as per the beginning of the
miptex lump) with a count of 1.
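
To make that concrete: a raw texture file would just carry the same little
directory that sits at the front of the miptex lump -- a count followed by
offsets -- with the count forced to 1. A quick sanity check could look like
this (layout is my reading of the current specs, so treat it as a sketch;
longs assumed 32-bit, as on the DOS compilers):

/* Sketch: does this file start with a plausible multi-MIP directory of
   count 1?  There is no magic number, hence the "count must be 1" rule. */
#include <stdio.h>

static int looks_like_single_mip(FILE *f, long filelen)
{
    long count, offset;
    if (fread(&count, sizeof count, 1, f) != 1) return 0;
    if (count != 1) return 0;                 /* outlawed: must be exactly 1 */
    if (fread(&offset, sizeof offset, 1, f) != 1) return 0;
    /* first texture must start after the 8-byte directory, inside the file */
    return offset >= 2 * (long)sizeof(long) && offset < filelen;
}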

This idea of processing at the user's locale could possibly be extended to the
creation of the various abstract structures which comprise most of the rest of
the .BSP file.

Finally, I think the _easiest_ way to distribute the rest of the data would
be within a PAK file, possibly renamed LEV or QL (or .TOM ;-)). Then each
BSP lump would have an entry in the .PAK file, along with new mip textures
(referenced in the MakeTex file as ``NewName PAKentryname'' -- 2 tokens).
The reason each mip texture would have its own PAK entry, rather than a
single MultiMIP entry, is that it would make processing simpler (extracting
and searching one archive rather than two).

The MakeTex file itself would be in the PAK as the entry 'MakeTex'. Other (possibly
standardized) entries could be made such as `Author', 'Description', 'Licence'
etc. These could be displayed by the MakeTex program.
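
Finding those entries is just a walk of the PAK directory; roughly like this
(the entry layout -- 56-byte name plus offset and size -- is from the specs
as I read them, so double-check it; longs again assumed 32-bit):

/* Sketch: find a named entry ("MakeTex", "Author", ...) in a PAK file.
   Header: "PACK" magic, directory offset, directory length. */
#include <stdio.h>
#include <string.h>

struct pakentry { char name[56]; long offset, size; };

static long find_pak_entry(FILE *f, const char *wanted, struct pakentry *out)
{
    char magic[4];
    long dirofs, dirlen, i, n;

    rewind(f);
    if (fread(magic, 1, 4, f) != 4 || memcmp(magic, "PACK", 4) != 0) return -1;
    if (fread(&dirofs, sizeof dirofs, 1, f) != 1) return -1;
    if (fread(&dirlen, sizeof dirlen, 1, f) != 1) return -1;
    n = dirlen / (long)sizeof(struct pakentry);
    fseek(f, dirofs, SEEK_SET);
    for (i = 0; i < n; i++) {
        if (fread(out, sizeof *out, 1, f) != 1) return -1;
        if (strncmp(out->name, wanted, sizeof out->name) == 0)
            return out->offset;               /* caller can fseek() to it */
    }
    return -1;
}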

There is then the small dilemma of whether to have MakeTex work on the
distribution file directly (save disk space) or create a new file completely.
If there is sufficient reason for the first option, then perhaps the
distributable file would need to be structured almost exactly like a BSP file,
and the new textures simply appended. This, imho, is not as neat.

I would like to thank both of you for reading this far ;-) I would like
others' opinions on this. (I can tell I've missed something obvious... :)
I have done a bit of preliminary coding for MakeTex, but there's no point
going too far with it if the idea is flawed. I am using, abusing and
extending the QEU files to do this. (mainly q_*, f_bsp, f_pack and my own
skeletal f_miptex)
- --
* TQ 1.0 * The 'Just So Quotes'.
I know you're supposed to take life one day at a time -- but lately several
days have attacked me at once.

------------------------------

From: Tom Lees <tom@lpsg.demon.co.uk>
Date: Sun, 10 Mar 1996 08:06:26 +0000 (GMT)
Subject: Re: Distributing Quake maps (long-ish, wordy and rambling :)

On Sun, 10 Mar 1996, Tom Wheeley wrote:

> At the minimum I am proposing is a tool not dis-similar to DeuSF or DeuTex
> which is run locally by the end user. This will process a list of textures
> which the program `statically links' into a final BSP file.

This is a good idea, but it shouldn't be too slow, otherwise we will get
'binary' distributed versions of levels as well as 'source' versions.

>
> This control file could be designed in 3 ways.
>
> . structure based, and thus virtually unmaintainable by humans, but easy
> for programs to fiddle with.
> . text file, for the most part using the `texture numbers' (for more info
> on how the textures are referenced, see Quake Specs). This would be
> created by the map editor, which would interface with the user with
> the texture names
> . text file, but uses the texture names when specifying sources. This
> would be the slowest (but not that slow) to process. It would be easily
> maintainable by humans though :)
>
Or, it could be structure based, but created from a file using the 3rd
format, for the best of both worlds.

> Unfortunately, the only magic which can be tested for is for the BSP files and
> so no-way could be established for checking that a miptex lump _is_ a miptex
> lump (without checking the size difference of the offsets, possibly). Thus
> raw .MIP (or whatever) files would have to be outlawed, they would have to
> start with the very simple multiple MIP header (as per the beginning of the
> miptex lump) of size 1.

Why not use a different format completely (like BMP or GIF or something)?
This would probably result in minimal work for level designers and level
editor programmers, and make the distributed level format more portable
in the case of the Quake structures changing format.

> This idea of processing at the users locale could possibly be extended to the
> creation of the various abstract stuctures which comprise most of the rest of
> the .BSP file.

However, I think that this would slow down the local processing time too
much, leading to separate binary and source distributions.

> The MakeTex itself would be in the PAK as Entry 'MakeTex'. Other (possibly
> standardized) entries could be made such as `Author', 'Description', 'Licence'
> etc. These could be displayed by the MakeTex program.

I'm not sure if this is completely necessary. If the format of the levels
within the distributed PAK file is going to be different than in standard
Quake BSP files, why not simply put the data as part of the new
distribution level format?

> There is then the small dilemma of whether to have MakeTex work on the
> distribution file directly (save disk space) or create a new file completely.
> If there is sufficient reason for the first option, then perhaps the
> distributable file would need to be structured almost exactly like a BSP file,
> and the new textures simply appended. This, imho, is not as neat.

If the full version of Quake supports more than a single PAK file, a nice
option would be for the user to put all his or her favourite levels in one
PAK file, which could then be loaded every time Quake starts; access to
all the user-created levels would then be simple. However, I don't think
there is any call to directly modify the distribution files, other than
that it keeps the directory structure and files in the Quake directory
cleaner.

- --
Tom Lees (tom@lpsg.demon.co.uk)


------------------------------

From: Bernd Kreimeier <Bernd.Kreimeier@NeRo.Uni-Bonn.DE>
Date: Sun, 10 Mar 1996 10:15:51 +0100 (MET)
Subject: Re: Quake tools in C++

> From: Steve Simpson <ssimpson@world.std.com>

> I'm interested in writing some Quake tools in C++, preferably using
> the C++ Standard Template Library (STL). I'll be developing using
> Win95 with memory mapped files. Are there others out there with
> similar interests (i.e. using C++ rather than C)?

All my tools (which are not primarily intended for Quake) are/will be
written in C++. I do not think, however, that I will use the STL.
From what I have seen, gcc-2.7.2 still has problems with STL, and
I consider UNIX (read: linux) and GCC (hopefully including DJGPP) my
base platform. Is there any particular reason for your suggesting
STL? Haven


b.

------------------------------

From: Bernd Kreimeier <Bernd.Kreimeier@NeRo.Uni-Bonn.DE>
Date: Sun, 10 Mar 1996 10:32:33 +0100 (MET)
Subject: Re: Distributing Quake maps (long-ish, wordy and rambling :)

> misc. discussion and proposals about distribution files deleted

One remark: as long as we do not know the final PAK/WAD2/whatever
format, it makes no sense to invent another file format with a
huge bias towards Quake.

In 1995, I wrote a proposal for a generalized WAD file structure.
I remember a WAD API proposal by somebody from the DEU team. In
addition, there are descriptions of Dark Forces and Descent
files. Now take these and PAK and WAD2, and look for what all
these have in common.

Here is how I handled WAD files: I wrote a separate conversion
function that, reading the raw data lump of the directory,
assigned lump Type information based on some heuristics. In addition,
I treated the magic "IWAD" or "PWAD" as a Type indicator, too.
This means that my own DWAD files could include PWAD or IWAD files as
lumps, because I made no distinction between file header and directory
entry: both had type, offset, length, and name (and checksum...); the
data was just interpreted differently.
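
Roughly, from memory (exact field sizes may differ -- I will dig up the
specs), such a unified record looks like this:

/* Sketch of a unified header / directory entry: the file header is simply
   an entry whose type says "directory of more entries", which is what
   makes the whole scheme recursive.  Sizes are illustrative only. */
struct dwad_entry {
    long type;              /* heuristic lump type, or "IWAD"/"PWAD" magic */
    long offset;            /* where the raw data lives */
    long length;            /* how big it is */
    char name[16];          /* how it should be named in the directory */
    unsigned long checksum; /* verify the data is not corrupted */
};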

The main advantage is that this is a) recursive, b) allows for
including differently written files as lumps, and therefore c)
allows for using GIF, WAV etc. raw data lumps as well.

If you want me to elaborate, I will dig up the specs.

A good idea used by the PAK is combining tar-like structures
with direct/seek access capability: the subdirectory handling. But
this simply means changing the way "/" is treated in the names
of generalized WAD files.

Finally: think in terms of checksums, copyright information, identifiers,
and resource (textures, sounds) repositories. It might be a good idea
to use PNG for pictures (see Quake Developers support pages, overview of
resources). GIF is not a good idea.

In principle, each lump that might be used and re-distributed in
different ways should provide all the info about how it should be named
in the directory, a checksum to verify the data is not corrupted,
the copyright/version/revision information, and the Type of the lump.


b.


------------------------------

From: Tom Wheeley <tomw@tsys.demon.co.uk>
Date: Sun, 10 Mar 96 13:16:25 GMT
Subject: Distributing Quake maps (long-ish, wordy and rambling :)

In article <Pine.LNX.3.91.960310075200.979D-100000@portal.atak> you write:

> On Sun, 10 Mar 1996, Tom Wheeley wrote:
>
> > At the minimum I am proposing is a tool not dis-similar to DeuSF or DeuTex
> > which is run locally by the end user. This will process a list of textures
> > which the program `statically links' into a final BSP file.
>
> This is a good idea, but it shouldn't be too slow, otherwise we will get
> 'binaries' distributed versions of levels as well as 'source' versions.

The test maps are only a meg, so my guess is a max of about 10s on my 386sx
(I'm in a similar position to Raphael -- can't play the damn game ! :)
This time would be larger for more textures; there is currently a max of
256 textures. I don't think it is a significant step, time-wise. Slower
than PKUNZIPping the file, probably, but not meaningfully so.

>
> >
> > This control file could be designed in 3 ways.
> >
> > . structure based, and thus virtually unmaintainable by humans, but easy
> > for programs to fiddle with.
> > . text file, for the most part using the `texture numbers' (for more info
> > on how the textures are referenced, see Quake Specs). This would be
> > created by the map editor, which would interface with the user with
> > the texture names
> > . text file, but uses the texture names when specifying sources. This
> > would be the slowest (but not that slow) to process. It would be easily
> > maintainable by humans though :)
> >
> Or, it could be structure based, but created from a file using the 3rd
> format, for the best of both worlds.
>
> > Unfortunately, the only magic which can be tested for is for the BSP files and
> > so no-way could be established for checking that a miptex lump _is_ a miptex
> > lump (without checking the size difference of the offsets, possibly). Thus
> > raw .MIP (or whatever) files would have to be outlawed, they would have to
> > start with the very simple multiple MIP header (as per the beginning of the
> > miptex lump) of size 1.
>
> Why not use a different format completely (like BMP or GIF or something)?
> This would probably result in minimal work for level designers and level
> editor programmers, and make the distributed level format more portable
> in the case of the Quake structures changing format.

Well, the texture designers have to create the MIP textures, which are scaled
by 1/2/4/8 respectively. It is up to the texture editors to convert from BMP
or GIF. I don't want to have to use multi-image GIFs here.

> > This idea of processing at the users locale could possibly be extended to the
> > creation of the various abstract stuctures which comprise most of the rest of
> > the .BSP file.
>
> However, I think that this would slow down the local processing time too
> much, leading to separate binary and source distributions.

It depends on how fast it is to create some of them. E.g. if a
particularly large lump was quick to process, it would be preferable to
build it at the user's end rather than distribute it. Remember that
`binary' distributions are illegal, and are currently blocked from
ftp.cdrom.com. This is just a _possible_ extension, anyway.

> > The MakeTex itself would be in the PAK as Entry 'MakeTex'. Other (possibly
> > standardized) entries could be made such as `Author', 'Description', 'Licence'
> > etc. These could be displayed by the MakeTex program.
>
> I'm not sure if this is completely necessary. If the format of the levels
> within the distributed PAK file is going to be different than in standard
> Quake BSP files, why not simply put the data as part of the new
> distribution level format?

Umm, I'm not entirely sure what you mean. MakeTex will build a `standard'
BSP file from the more general information in the PAK file. There is no
good place to put `unnecessary' information in the BSP file, as there is in
the PAK files, which have a more complicated directory. You can only put
things in gaps (like QEU does with version info).

>
> > There is then the small dilemma of whether to have MakeTex work on the
> > distribution file directly (save disk space) or create a new file completely.
> > If there is sufficient reason for the first option, then perhaps the
> > distributable file would need to be structured almost exactly like a BSP file,
> > and the new textures simply appended. This, imho, is not as neat.
>
> If the full version of Quake supports more than a single PAK file, a nice
> option would be for the user to be all his or her favourite levels in one
> PAK file, which could then be loaded every time Quake starts, then access
> to all the user-created levels would be simple. However, I don't think
> there is any call to directly modify the distribution files, other than
> that it keeps the directory structure and files in the Quake directory
> cleaner.

Yes, I think I would prefer the separate file method. I must say, being
able to specify PPACKs ;) would be very nice. At the moment, you can just put
things in the directory tree, although it doesn't quite have the desired
`priority'.

.splitbung
- --
* TQ 1.0 * 101 Slogans for Quake
57. alt.sex.Quake: Please hit me again....

------------------------------

From: Tom Wheeley <tomw@tsys.demon.co.uk>
Date: Sun, 10 Mar 96 13:31:01 GMT
Subject: Distributing Quake maps (long-ish, wordy and rambling :)

In article <199603100932.KAA08721@colossus.nero.uni-bonn.de> you write:

>
> > misc. discussion and proposals about distribution files deleted
>
> One remark: as long as we do not know the final PAK/WAD2/whatever
> format, it makes no sense to invent another file format with a
> huge bias towards Quake.
>
> In 1995, I wrote a proposal for a generalized WAD file structure.
> I remember a WAD API proposal by somebody from the DEU team. In
> addition, there are descriptions of Dark Forces and Descent
> files. Now take therse and PAK and WAD2, and look for what all
> these have in common.
>
> Here is how I handled WAD files: I wrote a separate conversion
> function that, reading the raw data lump of the directory,
> assigned lump Type information based on some heuristics. In addition,

This is very interesting. I definitely think that this format would be
more appropriate for the distribution file than the PAK. I was not really
proposing a new format, other than the `MakeTex.cfg' file.

Could you send me (or post to the list if appropriate) the format of this
DWAD? Also, where are the quake-development pages? Apologies if they were
in the info file on the list, but I must have missed it.

.splitbung
- --
* TQ 1.0 * The 'Just So Quotes'.
Recursive, adj.; see Recursive

------------------------------

From: amoon@odin.mdn.com (Alex R. Moon)
Date: Sun, 10 Mar 1996 13:07:44 -0500 (EST)
Subject: Re: Distributing Quake maps

Tom Wheeley wrote:
>
> In article <Pine.LNX.3.91.960310075200.979D-100000@portal.atak> you write:
>
> > On Sun, 10 Mar 1996, Tom Wheeley wrote:
> >
> > > At the minimum I am proposing is a tool not dis-similar to DeuSF or DeuTex
> > > which is run locally by the end user. This will process a list of textures
> > > which the program `statically links' into a final BSP file.
> >
> > This is a good idea, but it shouldn't be too slow, otherwise we will get
> > 'binaries' distributed versions of levels as well as 'source' versions.
>
> The test maps are only a meg, so my guess is a max of about 10s on my 386sx
> (I'm in a similar position to Raphael -- can't play the damn game ! :)
> This time would be larger for more textures. There is (currently) a max of
> 256 textures now. I don't think it is a significant step, timewise. Slower
> than PKUNZIPping the file, probably, but not meaningfully so.

This depends on how compiled you expect this format to be. Do you expect it
to contain only geometry data, or do you expect it to contain all the BSP
trees and visilists as well? If the former, be aware that id has said that
visilist calculation can take over an hour on a 4-processor 200MHz Alpha
machine. If the latter, why not just use a .BSP format with the textures
replaced with placeholders (null textures with just a texture name)? Then
some tool could go through and rifle the id .BSP files (or any other .BSP
files) looking for the appropriate textures and chuck them in. Personally
I'd like to see both: a textureless .BSP file AND a human-friendly, BSPless
and visilistless geometry format.

> > > This idea of processing at the users locale could possibly be extended to the
> > > creation of the various abstract stuctures which comprise most of the rest of
> > > the .BSP file.
> >
> > However, I think that this would slow down the local processing time too
> > much, leading to separate binary and source distributions.
>
> It depends on the how fast it is to create some of them. f.e. if a
> particularly large lump was quick to process, it would be prefereable to
> process this at the end, rather than distribute a large lump. Remember that
> `binary' distributions are illegal, and are currently blocked from ftp.cdrom.com
> This is just a _possible_ extension, anyway.

Compiling the BSP trees, light maps, and visilists sounds like it'll take
forever. Definitely not something an end user is going to want to mess with.
On the other hand, Doom showed us that dedicated BSP builders can get better
results than many editors' built-in BSP builders. I hate to think how much
more that will be true with visilist builders. So, how about a generic
text format, usable by all editors and compilers, which simply gives the
geometry for the level and can then be compiled into a .BSP file (with or
without actual textures; see above)?

> > > The MakeTex itself would be in the PAK as Entry 'MakeTex'. Other (possibly
> > > standardized) entries could be made such as `Author', 'Description', 'Licence'
> > > etc. These could be displayed by the MakeTex program.
> >
> > I'm not sure if this is completely necessary. If the format of the levels
> > within the distributed PAK file is going to be different than in standard
> > Quake BSP files, why not simply put the data as part of the new
> > distribution level format?
>
> Umm, I'm not entirely sure what you mean. The MakeTex will build a `standard'
> BSP file from the more general information in the PAK file. There is no
> good place to put `unneccesary' information in the BSP file, as there is in
> the PAK files, which have a more complicated directory. You can only put
> things in gaps, (like QEU does with version info)

If this is all you want it to do, just use a .BSP format with a weird version
and placeholders for the texture data. Searching other .BSP files or .TEX
files for the appropriate textures should be simple.
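
A placeholder could be as dumb as a miptex entry with the name filled in
and everything else zeroed. I haven't checked the exact miptex layout
against the specs, but roughly:

/* Sketch: a "null texture" placeholder -- just the name, no pixel data.
   A texture tool would later match on name and drop in the real thing. */
struct miptex_placeholder {
    char name[16];               /* texture name to search other .BSPs for */
    unsigned long width, height; /* 0 until the real texture is linked in */
    unsigned long offsets[4];    /* 0: no mip levels present yet */
};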

Here are my thoughts on how an uncompiled level format should be constructed.
There are essentially three elements which make up a Quake map:
1) Its geometry (the surfaces),
2) Its inhabitants (the entities), and
3) Its textures.

Everything else can be calculated given those three. Therefore, the
obvious solution (to me) is to have three file types, one for each
element of a level. Each level would have three files (optionally
PACKed) which could be compiled into one .BSP file. The only question
is what format each file should adhere to.

The entities are stored in the .BSP file in a nice simple text format, so why
not use that for the .ENT file? We could make it conform to DOS CR-LF format
instead of its UNIX-style LFs and just strip the CRs when it's compiled into
a BSP file.
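
Stripping them is trivial; something like this (names are just for
illustration):

/* Sketch: strip CRs from a DOS-style .ENT buffer before writing it into
   the BSP's entities lump (drop every '\r', keep everything else). */
static long strip_crs(char *buf, long len)
{
    long i, j = 0;
    for (i = 0; i < len; i++)
        if (buf[i] != '\r')
            buf[j++] = buf[i];
    return j;   /* new length */
}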

The textures also have their own format in the .BSP file which we can use
for the .TEX (or .MIP, or whatever) files, perhaps with an additional header
giving a magic and any other info we want to add.

The only question then is what format we should use for storing the geometry
data. In my opinion, there's no question that we should use as human-readable
a text format as possible. The files probably won't be huge in the first
place and compression will seriously shrink the text files such that file size
shouldn't be a deciding factor in favoring a binary format. Other than
size, text files have all the advantages: they're easy to read, easy
to manipulate, easy to understand, etc. Some time ago, id
was using a format like the following for this purpose (snarfed from
an old ddt IRC log; everything has, no doubt, changed):

:
4 WALL14_5 0 0 0 0 0 0
(0,912,176,144) (0,928,176,144) (16,928,176,144) (16,912,176,144)
4 WALL14_5 0 0 0 0 0 0
(192,928,144,0) (192,1040,144,0) (208,1040,144,0) (208,928,144,0)
:

In the generic case (using the variables from qkspec30), this format is:

numedge texname sofs(?) tofs(?) flips(?) light(?) unknown0(?) unknown1(?)
(v1_x,v1_y,v1_z1,v1_z2) (v2_x,v2_y,v2_z1,v2_z2) ... (vn_x,vn_y,vn_z1,vn_z2)

Note that they're using a simple solid geometry approach instead of giving a
separate entry for each surface. Each entry instead represents a sort of
extruded polygon which includes numedge+2 surfaces. This has advantages for
editors in that they can use the solid geometry information in their user
interface and for CSM instead of forcing the user to deal with each surface
separately. And for editors or files which have no solid information,
you can simply set vn_z1 = vn_z2 for every vertex.

Basically, if we want a text and human-readable geometry format, we need to
decide whether that format should contain only surface information, or if
it should contain some sort of additional solid geometry information. If the
former, a format such as:

numedge texname sofs tofs flips light unknown0 unknown1
(v1_x,v1_y,v1_z) (v2_x,v2_y,v2_z) ... (vn_x,vn_y,vn_z)

should be sufficient. In the latter case we need something more complex,
either id's format above (which is awfully limited), or perhaps something
like this:

# Surfaces
# --------
numsurfaces
:
numedge texname sofs tofs flips light unknown0 unknown1
(v1_x,v1_y,v1_z) (v2_x,v2_y,v2_z) ... (vn_x,vn_y,vn_z)
:
# Solid Objects
# -------------
numobjects
:
numsurfaces surface1 surface2 ... surfacen
:

This format is just the surfaces format from above plus a number of surface
lists (similar to the surface lists used for BSP leaves) which represent
solid objects, for use by editors in CSM and user-level manipulation.
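
Reading the surface-only flavour back is straightforward; a sketch (struct
and function names are mine, the field names are the qkspec30 variables):

/* Sketch: read one surface record from the proposed text format --
   a header line "numedge texname sofs tofs flips light unknown0 unknown1"
   followed by numedge "(x,y,z)" vertices. */
#include <stdio.h>

struct vert { float x, y, z; };

static int read_surface(FILE *f, char *texname, struct vert *v, int maxv)
{
    int numedge, sofs, tofs, flips, light, u0, u1, i;

    if (fscanf(f, "%d %15s %d %d %d %d %d %d",
               &numedge, texname, &sofs, &tofs, &flips, &light, &u0, &u1) != 8)
        return -1;
    if (numedge > maxv) return -1;
    for (i = 0; i < numedge; i++)
        if (fscanf(f, " (%f,%f,%f)", &v[i].x, &v[i].y, &v[i].z) != 3)
            return -1;
    return numedge;    /* number of vertices read */
}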

In summary, we've got an entities format. We've got a texture format (though
we may wish to add a header). The only decision we've left to make is for
a geometry format, and deciding on that should be relatively simple.

- --
Alex R. Moon               | "A university is what a college becomes when the
amoon@odin.mdn.com         | faculty loses interest in students."
71513.1453@compuserve.com  |                              -- John Ciardi

------------------------------

End of quake-editing-digest V1 #5
*********************************
