---
created_at: '2017-09-27T17:50:53.000Z'
title: The Video Game Software Wizardry of Id (2002)
url: https://spectrum.ieee.org/consumer-electronics/gaming/the-video-game-software-wizardry-of-id
author: shawndumas
points: 113
story_text:
comment_text:
num_comments: 32
story_id:
story_title:
story_url:
parent_id:
created_at_i: 1506534653
_tags:
- story
- author_shawndumas
- story_15350393
objectID: '15350393'
year: 2002
---
[![assorted game graphics screens, video game
software](/image/1903521)](/image/1903521 "© 2002 IEEE Spectrum magazine")
All Images: ID Software
Over the last 12 years, the evolving realism of Id Software's graphics
has set the bar for the industry. Among the games \[bottom to top,
left\]: Commander Keen (1990); Hovertank (1991); Wolfenstein 3D (1992);
Doom (1993); Quake (1996); and Return to Castle Wolfenstein (2001).
Click on the image for a larger view.
It's after midnight when the carnage begins. Inside a castle, soldiers
chase Nazis through the halls. A flame-thrower unfurls a hideous tongue
of fire. This is Return to Castle Wolfenstein, a computer game that's as
much a scientific marvel as it is a visceral adventure. It's also the
latest product of Id Software (Mesquite, Texas). Through its
technologically innovative games, Id has had a huge influence on
everyday computing, from the high-speed, high-color, and high-resolution
graphics cards common in today's PCs to the marshalling of an army of
on-line game programmers and players who have helped shape popular
culture.
Id shot to prominence 10 years ago with the release of its original
kill-the-Nazis-and-escape game, Wolfenstein 3D. It and its successors,
Doom and Quake, cast players as endangered foot soldiers, racing through
mazes while fighting monsters or, if they so chose, each other. To bring
these games to the consumer PC and establish Id as the market leader
required skill at simplifying difficult graphics problems and cunning in
exploiting on-going improvements in computer graphics cards, processing
power, and memory size \[see illustration, Driven\]. To date, its games
have earned over US $150 million in sales, according to The NPD
Group, a New York City market research firm.
**It all began with a guy named Mario**
The company owes much of its success to advances made by John Carmack,
its 31-year-old lead programmer and cofounder who has been programming
games since he was a teenager.
Back in the late 1980s, the electronic gaming industry was dominated by
dedicated video game consoles. Most game software was distributed in
cartridges, which slotted into the consoles, and as a consequence,
writing games required expensive development systems and corporate
backing.
The only alternative was home computer game programming, an underworld
in which amateurs could develop and distribute software. Writing games
for the low-powered machines required only programming skill and a love
of gaming.
Four guys with that passion were artist Adrian Carmack; programmer John
Carmack (no relation); game designer Tom Hall; and programmer John
Romero. While working together at Softdisk (Shreveport, La.), a small
software publisher, these inveterate gamers began moonlighting on their
own titles.
At the time, the PC was still largely viewed as being for business only.
It had, after all, only a handful of screen colors and squeaked out
sounds through a tiny tinny speaker. Nonetheless, the Softdisk gamers
figured this was enough to start using the PC as a games platform.
First, they decided to see if they could recreate on a PC the gaming
industry's biggest hit at the time, Super Mario Brothers 3. This
two-dimensional game ran on the Nintendo Entertainment System,
which drove a regular television screen. The object was to make a
mustached plumber, named Mario, leap over platforms and dodge hazards
while running across a landscape below a blue sky strewn with puffy
clouds. As Mario ran, the terrain scrolled from side to side to keep him
more or less in the middle of the screen. To get the graphics
performance required, the Nintendo console resorted to dedicated
hardware. "We had clear examples of console games \[like Mario\] that
did smooth scrolling," John Carmack says, "but \[in 1990\] no one had
done it on an IBM PC."
After a few nights of experimentation, Carmack figured out how to
emulate the side-scrolling action on a PC. In the game, the screen image
was drawn, or rendered, by assembling an array of 16-by-16-pixel tiles.
Usually the on-screen background took over 200 of these square tiles, a
blue sky tile here, a cloud tile there, and so on. Graphics for active
elements, such as Mario, were then drawn on top of the background.
Any attempt to redraw the entire background every frame resulted in a
game that ran too slowly, so Carmack figured out how to redraw only a
handful of tiles each frame, speeding the game up immensely. His
technique relied on a new type of graphics card that had become
available, and the observation that the player's movement occurred
incrementally, so most of the next frame's scenery had already been
drawn.
The new graphics cards were known as Enhanced Graphics Adapter (EGA)
cards. They had more on-board video memory than the earlier Color
Graphics Adapter (CGA) cards and could display 16 colors at once,
instead of four. For Carmack, the extra memory had two important
consequences. First, while intended for a single relatively
high-resolution screen image, the card's memory could hold several video
screens' worth of low-resolution images, typically 320 by 200 pixels,
simultaneously, good enough for video games. By pointing to different
video memory addresses, the card could switch which image was being sent
to the screen at around 60 times a second, allowing smooth animation
without annoying flicker. Second, the card could move data around in its
video memory much faster than image data could be copied from the PC's
main memory to the card, eliminating a major graphics performance
bottleneck.
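
That first property, page flipping, is simple to picture in code. Below
is a minimal sketch, assuming a DOS-era compiler (Borland or Watcom C)
and the standard EGA/VGA CRT controller registers, of how a program
repoints the card at a different region of video memory:

```c
/* A minimal sketch of hardware page flipping on an EGA/VGA-class card,
 * assuming a DOS-era compiler (Borland or Watcom C) that provides outp().
 * The CRT controller's start-address registers (indexes 0Ch and 0Dh)
 * tell the card where in video memory to begin scanning out the screen. */
#include <conio.h>             /* outp() on DOS compilers */

#define CRTC_INDEX 0x3D4       /* CRT controller index port (color modes) */
#define CRTC_DATA  0x3D5       /* CRT controller data port */

/* Repoint the display at a new offset in video memory: the on-screen
 * image changes instantly, without copying a single pixel. */
void set_display_start(unsigned int offset)
{
    outp(CRTC_INDEX, 0x0C);                 /* start-address high byte */
    outp(CRTC_DATA, (offset >> 8) & 0xFF);
    outp(CRTC_INDEX, 0x0D);                 /* start-address low byte  */
    outp(CRTC_DATA, offset & 0xFF);
}
```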
Carmack wrote a so-called graphics display engine that exploited both
properties to the full by using a technique that had been originally
developed in the 1970s for scrolling over large images, such as
satellite photographs. First, he assembled a complete screen in video
memory, tile by tile--plus a border one tile wide \[see illustration,
"Scrolling With the Action" \]. If the player moved one pixel in any
direction, the display engine moved the origin of the image it sent to
the screen by one pixel in the corresponding direction. No new tiles had
to be drawn. When the player's movements finally pushed the screen image
to the outer edge of a border, the engine still did not redraw most of
the screen. Instead, it copied most of the existing image--the part that
would remain constant--into another portion of video memory. Then it
added the new tiles and moved the origin of the screen display so that
it pointed to the new image.
![graphic of scrolling, video game
software](/images/archive/images/idf2.gif)
**Scrolling With the Action:** For two-dimensional scrolling in his PC
game, programmer John Carmack cheated a little by not always redrawing
the background. He built the background of graphical tiles stored in
video memory \[left\] but only sent part of the image to the screen
\[top left, inside orange border\]. As the play character \[yellow
circle\] moved, the background sent to the screen was adjusted to
include tiles outside the border \[see top right\]. New background
elements would be needed only after a shift of one tile width. Then,
most of the background was copied to another region of video memory
\[see bottom right\], and the screen image centered in the new
background.
In short, rather than having the PC redraw tens of thousands of pixels
every time the player moved, the engine usually had to change only a
single memory address--the one that indicated the origin of the screen
image--or, at worst, draw a relatively thin strip of pixels for the new
tiles. So the PC's CPU was left with plenty of time for other tasks,
such as drawing and animating the game's moving platforms, hostile
characters, and the other active elements with which the player
interacted.
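
In code, the bookkeeping is modest. The sketch below captures the idea
with hypothetical helper routines--`copy_video_block` and
`draw_new_tile_strip` stand in for the card's fast on-board copies and
the tile renderer--and it glosses over the pel-panning register needed
for sub-byte horizontal moves:

```c
/* Conceptual sketch of the adaptive scrolling scheme (hypothetical names;
 * sub-byte horizontal panning via the pel-panning register is omitted).
 * The visible window pans over a tile buffer one tile wider than the
 * screen; nothing is redrawn until the window runs out of border. */
#define TILE     16      /* tile width and height, in pixels */
#define SCREEN_W 320

static unsigned int origin;  /* video-memory offset of the screen's top-left */

extern void set_display_start(unsigned int offset);  /* as sketched earlier */
extern void copy_video_block(unsigned int src, unsigned int dst);
extern void draw_new_tile_strip(int world_x, int world_y);

void scroll_right_one_pixel(int *world_x, int world_y)
{
    (*world_x)++;
    if (*world_x % TILE != 0) {
        /* Common case: slide the display origin one pixel; no drawing. */
        set_display_start(++origin);
    } else {
        /* Border exhausted: copy the still-valid image to a spare region
         * of video memory, draw only the one-tile-wide strip of new
         * scenery at the right edge, then repoint the display. */
        unsigned int spare = (origin < 0x4000) ? 0x8000 : 0x0000; /* illustrative page offsets */
        copy_video_block(origin, spare);
        origin = spare;
        draw_new_tile_strip(*world_x + SCREEN_W, world_y);
        set_display_start(origin);
    }
}
```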
Hall and Carmack knocked together a Mario clone for the PC, which they dubbed
Dangerous Dave in Copyright Infringement. But Softdisk, their employer,
had no interest in publishing what were then high-end EGA games,
preferring to stick with the market for CGA applications. So the nascent
Id Software company went into moonlight overdrive, using the technology
to create its own side-scrolling PC game called Commander Keen. When it
came time to release the game, they hooked up with game publisher Scott
Miller, who urged them to go with a distribution plan that was as novel
as their technology: shareware.
In the 1980s, hackers started making their programs available through
shareware, which relied on an honor code: try it and if you like it, pay
me. But it had been used only for utilitarian programs like file tools
or word processors. The next frontier, Miller suggested, was games.
Instead of giving away the entire game, he said, why not give out only
the first portion, then make the player buy the rest? Id agreed to let
Miller's company, Apogee, release the game. Prior to Commander Keen,
Apogee's most popular shareware game had sold a few thousand copies.
Within months of Keen's release in December 1990, the game had sold 30
000 copies. For the burgeoning world of PC games, Miller recalls, "it
was a little atom bomb."
**Going for depth**
Meanwhile programmer Carmack was again pushing the graphics envelope. He
had been experimenting with 3-D graphics ever since junior high school,
when he produced wire-frame MTV logos on his Apple II. Since then,
several game creators had experimented with first-person 3-D points of
view, where the flat tiles of 2-D games are replaced by polygons forming
the surfaces of the player's surrounding environment. The player no
longer felt outside, looking in on the game's world, but saw it as if
from the inside.
The results had been mixed, though. The PC was simply too slow to redraw
detailed 3-D scenes as the player's position shifted. It had to draw
lots of surfaces for each and every frame sent to the screen, including
many that would be obscured by other surfaces closer to the player.
Carmack had an idea that would let the computer draw only those surfaces
that were seen by the player. "If you're willing to restrict the
flexibility of your approach," he says, "you can almost always do
something better."
So he chose not to address the general problem of drawing arbitrary
polygons that could be positioned anywhere in space, but designed a
program that would draw only trapezoids. His concern at this time was
with walls (which are shaped like trapezoids in 3-D), not ceilings or
floors.
For his program, Carmack simplified a technique for rendering realistic
images on then-high-end systems. In raycasting, as it is called, the
computer draws scenes by extending lines from the player's position in
the direction he or she is facing. When a line strikes a surface, the
pixel corresponding to that line on the player's screen is painted the
appropriate color. None of the computer's time is wasted on drawing
surfaces that would never be seen anyway. By only drawing walls, Carmack
could raycast scenes very quickly.
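
The heart of the idea fits in a few lines. Here is a self-contained
sketch of raycasting over a 2-D tile map; the map and the fixed ray step
are illustrative, since a production engine would step exactly from grid
line to grid line:

```c
/* Self-contained raycasting sketch over a 2-D tile map. Nonzero cells
 * are walls; the border of walls guarantees every ray terminates. */
#include <math.h>

#define MAP_W 8
#define MAP_H 8

static const int map[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,1,0,0,1},
    {1,0,0,0,1,0,0,1},
    {1,0,1,1,1,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

/* March a ray from (px, py) along angle `a` until it enters a wall cell,
 * and return the distance travelled. A small fixed step keeps the sketch
 * short; real engines advance exactly from grid boundary to grid boundary. */
double cast_ray(double px, double py, double a)
{
    const double step = 0.01;
    double dx = cos(a) * step, dy = sin(a) * step;
    double dist = 0.0;

    while (map[(int)py][(int)px] == 0) {
        px += dx;
        py += dy;
        dist += step;
    }
    return dist;
}
```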
Carmack's final challenge was to furnish his 3-D world with treasure
chests, hostile characters, and other objects. Once again, he simplified
the task, this time by using 2-D graphical icons, known as sprites. He
got the computer to scale the size of the sprite, depending on the
player's location, so that he did not have to model the objects as 3-D
figures, a task that would have slowed the game painfully. By combining
sprites with raycasting, Carmack was able to place players in a
fast-moving 3-D world. The upshot was Hovertank, released in April 1991.
It was the first fast-action, first-person 3-D shooter for the PC.
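
The sprite trick itself is a single piece of arithmetic: a flat image is
scaled in inverse proportion to its distance. A minimal sketch, with an
illustrative projection constant:

```c
/* Billboarded sprites: rather than modeling an object in 3-D, scale its
 * flat 2-D image by distance. PROJ_DIST, the eye-to-screen distance in
 * pixels, is illustrative (half the screen width / tan(half the FOV)). */
#define PROJ_DIST 277.0

int sprite_screen_height(double world_height, double distance)
{
    return (int)(world_height * PROJ_DIST / distance);
}
```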
Around this time, fellow programmer Romero heard about a new graphics
technique called texture mapping. In this technique, realistic textures
are applied to surfaces in place of their formerly flat, solid colors.
Id promptly put the technique to use, coating its walls in green slime
in its next game, Catacombs 3D. While running through a
maze, the player shot fireballs at enemy figures using another
novelty--a hand drawn in the lower center of the screen. It was as if
the player were looking down on his or her own hand, reaching into the
computer screen. By including the hand in Catacombs 3D, Id Software was
making a subtle, but strong, psychological point to its audience: you
are not just playing the game--you're part of it.
**Instant sensation**
For Id's next game, Wolfenstein 3D, Carmack refined his code. A key
decision ensured the graphics engine had as little work to do as
possible: to make the walls even easier to draw, they would all be the
same height.
This sped up raycasting immensely. In normal raycasting, one line is
projected through space for every pixel displayed. A 320-by-200-pixel
screen image of the type common at the time required 64 000 lines. But
because Carmack's walls were uniform from top to bottom, he had to
raycast along only one horizontal plane, just 320 lines \[see diagram,
Raycasting 3-D Rooms\].
[![illustration of raycasting 3-d
rooms](/image/1903601)](/image/1903601 "© 2002 IEEE Spectrum magazine")
**Raycasting 3-D Rooms:** To quickly draw three-dimensional rooms
without drawing obscured and thus unnecessary surfaces, Carmack used a
simplified form of raycasting, a technique used to create realistic 3-D
images. In raycasting, the computer draws scenes by extending lines from
the player's viewpoint \[top\], through an imaginary grid, so that they
strike the surfaces the player sees; only these surfaces get drawn.
Carmack simplified things by keeping all the walls the same height. This
allowed him to extend the rays from the player in just a single
horizontal 2-D plane \[middle\] and scale the apparent height of the wall
according to its distance from the player, instead of determining every
point on the wall individually. The result is the final 3-D image of the
walls \[bottom\]. Click on image for larger view.
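
Put together with the earlier `cast_ray()` sketch, the whole wall
renderer reduces to one ray per screen column (the helper names are
illustrative):

```c
/* One ray per screen column -- 320 rays instead of 64 000. Cast in the
 * 2-D plan, then scale a wall slice of constant world height by its
 * distance. Builds on the cast_ray() sketch above; names are illustrative. */
#include <math.h>

#define SCREEN_W 320
#define SCREEN_H 200
#define FOV      1.0472     /* about 60 degrees, in radians */

extern double cast_ray(double px, double py, double a);
extern void draw_wall_column(int column, int top, int height);

void render_walls(double px, double py, double facing)
{
    int col;
    for (col = 0; col < SCREEN_W; col++) {
        double a = facing - FOV / 2 + FOV * col / SCREEN_W;
        /* cos(a - facing) projects the distance onto the view direction,
         * avoiding the fisheye bowing a raw distance would produce. */
        double dist = cast_ray(px, py, a) * cos(a - facing);
        int h = (int)(SCREEN_H / dist);   /* apparent height shrinks with distance */
        if (h > SCREEN_H) h = SCREEN_H;   /* clamp when very close to a wall */
        draw_wall_column(col, (SCREEN_H - h) / 2, h);
    }
}
```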
With Carmack's graphics engine now blazingly fast, Romero, Adrian
Carmack, and Hall set about creating a brutal game in which an American
G.I. had to mow down Nazis while negotiating a series of maze-based
levels. Upon its release in May 1992, Wolfenstein 3D was an instant
sensation and became something of a benchmark for PCs. When Intel wanted
to demonstrate the performance of its new Pentium chip to reporters, it
showed them a system running Wolfenstein.
Wolfenstein also empowered gamers in unexpected ways--they could modify
the game with their own levels and graphics. Instead of a Nazi officer,
players could, for example, substitute Barney, the purple dinosaur star
of U.S. children's television. Carmack and Romero made no attempt to sue
the creators of these mutated versions of Wolfenstein, for, as hackers
themselves, they couldn't have been more pleased.
Their next game, Doom, incorporated two important effects Carmack had
experimented with in working on another game, Shadowcaster, for a
company called Raven in 1992. One was to apply texture mapping to floors
and ceilings, as well as to walls. Another was to add diminished
lighting. Diminished lighting meant that, as in real life, distant
vistas would recede into shadows, whereas in Wolfenstein, every room was
brightly lit, with no variation in hue.
By this time, Carmack was programming for the Video Graphics Adapter
(VGA) cards that had supplanted the EGA cards. VGA allowed 256 colors--a
big step up from EGA's 16, but still a limited range that made it a
challenge to incorporate all the shading needed for diminished lighting
effects.
The solution was to restrict the palette used for the game's graphics,
so that 16 shades of each of 16 colors could be accommodated. Carmack
then programmed the computer to display different shades based on the
player's location within a room. The darkest hues of a color were
applied to far sections of a room; nearer surfaces would always be
brighter than those farther away. This added to the moody atmosphere of
the game.
Both Carmack and Romero were eager to break away from the simple designs
used in the levels of their earlier games. "My whole thing was--let's
not do anything that Wolfenstein does," Romero says. "Let's not have the
same light levels, let's not have the same ceiling heights, let's not
have walls that are 90 degrees \[to each other\]. Let's show off
Carmack's new technology by making everything look different."
Profiting from improvements in computer speed and memory, Carmack began
working on how to draw polygons with more arbitrary shapes than
Wolfenstein's trapezoids. "It was looking like \[the graphics engine\]
wouldn't be fast enough," he recalls, "so we had to come up with a new
approach....I knew that to be fast, we still had to have strictly
horizontal floors and vertical walls." The answer was a technique known
as binary space partitioning (BSP). Henry Fuchs, Zvi Kedem, and Bruce
Naylor had popularized BSP techniques in 1980 while at Bell Labs as a
way to render 3-D models of objects on screen.
A fundamental problem in converting a 3-D model of an object into an
on-screen image is determining which surfaces are actually visible,
which boils down to calculating: is surface Y in front of, or behind,
surface X? Traditionally, this calculation was done any time the model
changed orientation.
The BSP approach depended on the observation that the model itself is
static, and although different views give rise to different images,
there is no change in the relationships between its surfaces. BSP
allowed the relationships to be determined once and then stored in such
a way that determining which surfaces hid other surfaces from any
arbitrary viewpoint was a matter of looking up the information, not
calculating it anew.
BSP takes the space occupied by the model and partitions it into two
sections. If either section contains more than one surface of the model,
it is divided again, until the space is completely broken up into
sections each containing one surface. The branching hierarchy that
results is called a BSP tree and extends all the way from the initial
partition of the space down to the individual elements. By following a
particular path through the nodes of the stored tree, it is possible to
generate key information about the relationships between surfaces in a
specific view of the model.
What if, Carmack wondered, you could use a BSP to create not just one
3-D model of an object, but an entire virtual world? Again, he made the
problem simpler by imposing a constraint: walls had to be vertical and
floors and ceilings horizontal. BSP could then be used to divide up not
the 3-D space itself, but a much simpler 2-D plan view of that space and
still provide all the important information about which surfaces were in
front of which \[see diagram, Divide and Conquer\].
[![illustration of doom
structure](/images/archive/images/idf4.jpg)](/images/archive/images/idf4.jpg "© 2002 IEEE Spectrum magazine")
Illustration: Armand Veneziano
**Divide and Conquer:** "Doom treated \[the surfaces of the 3-D world\]
all as lines," Carmack says, "cutting lines and sorting lines is so much
easier than sorting polygons....The whole point was taking BSP \[trees\]
and applying them to...a plane, instead of to polygons in a 3-D world,
which let it be drastically simpler." Click on the image for a larger
view.
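
A sketch of what such a tree looks like in code, and of the
back-to-front walk that replaces per-frame sorting, follows. Doom itself
walked the tree front to back with clipping, but the ordering property
being exploited is the same; all names here are illustrative:

```c
/* Sketch of a 2-D BSP tree and its ordered traversal (illustrative names).
 * Each internal node splits the map along a line; which child to visit
 * first depends only on which side of that line the viewer stands. */
#include <stddef.h>

typedef struct BspNode {
    double x, y, dx, dy;            /* splitting line: origin and direction */
    struct BspNode *front, *back;   /* NULL children mark a leaf */
    int sector_id;                  /* leaf payload: the region to draw */
} BspNode;

extern void draw_sector(int sector_id);

/* Sign of the 2-D cross product tells which side of the line the viewer is on. */
static int viewer_in_front(const BspNode *n, double vx, double vy)
{
    return (vx - n->x) * n->dy - (vy - n->y) * n->dx <= 0.0;
}

/* Far child first, near child last: nearer surfaces paint over farther
 * ones, so no per-frame depth sorting is ever needed. */
void render_bsp(const BspNode *n, double vx, double vy)
{
    if (n == NULL)
        return;
    if (n->front == NULL && n->back == NULL) {
        draw_sector(n->sector_id);
        return;
    }
    if (viewer_in_front(n, vx, vy)) {
        render_bsp(n->back, vx, vy);    /* far side */
        render_bsp(n->front, vx, vy);   /* near side */
    } else {
        render_bsp(n->front, vx, vy);
        render_bsp(n->back, vx, vy);
    }
}
```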
Doom was also designed to make it easy for hackers to extend the game by
adding their own graphics and game-level designs. Networking was added
to Doom, allowing multiple players to compete over a local-area network
or head-to-head by modem.
The game was released in December 1993. Between the multiplayer option,
the extensibility, the riveting 3-D graphics, and the cleverly designed
levels, which cast the player as a futuristic space marine fighting
against the legions of hell, it became a phenomenon. Doom II, the
sequel, featured more weapons and new levels but used the same graphics
engine. It was released in October 1994 and eventually sold more than 1
500 000 copies at about $50 each; according to the NPD Group, it remains
the third best-selling computer game in history.
**The finish line**
In the mid-1990s, Carmack felt that PC technology had advanced far
enough for him to finally achieve two specific goals for his next game,
Quake. He wanted to create an arbitrary 3-D world in which true 3-D
objects could be viewed from any angle, unlike the flat sprites in Doom
and Wolfenstein. The solution was to harness the power of the latest
generation of PCs, using BSP to chop up the volume of a true 3-D space,
rather than just areas of a 2-D plan view. He also wanted to make a game
that could be played over the Internet.
For Internet play, a client-server architecture was used. The
server--which could be run on any PC--would handle the game environment
consisting of rooms, the physics of moving objects, player positions,
and so on. Meanwhile, the client PC would be responsible for both the
input, through the player's keyboard and mouse, and the output, in the
form of graphics and sound. Being online, however, the game was liable
to lags and lapses in network packet deliveries--just the thing to screw
up a fast-action game. To reduce the problem, Id limited packets to
only the most necessary information, such as a
player's position.
"The key point was use of an unreliable transport for all
communication," Carmack says, "taking advantage of continuous packet
communication and \[relaxing\] the normal requirements for reliable
delivery," such as handshaking and error correction. A variety of data
compression methods were also used to reduce the bandwidth. The
multiplayer friendliness of the game that emerged--Quake--was rewarded
by the emergence of a huge online community when it was released in June
1996.
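
The flavor of that design is easy to convey: each tick, the client fires
its latest state at the server in a single UDP datagram and never waits
for an acknowledgment. A minimal POSIX sketch follows; the packet
layout, port, and tick rate are illustrative, not Quake's actual
protocol:

```c
/* "Fire and forget" networking: one UDP datagram per game tick, no
 * handshake, no retransmission. A lost packet is simply superseded by
 * the next tick's packet. Layout, port, and rate are illustrative. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

struct state_packet {
    unsigned int sequence;   /* lets the receiver discard stale packets */
    float x, y, z;           /* player position */
    float yaw, pitch;        /* view angles */
};

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* datagram = unreliable */
    struct sockaddr_in server;
    struct state_packet pkt;
    unsigned int seq = 0;

    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port = htons(27960);              /* illustrative port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    while (seq < 1000) {                         /* one iteration per tick */
        pkt.sequence = seq++;
        pkt.x = pkt.y = pkt.z = 0.0f;            /* would come from game state */
        pkt.yaw = pkt.pitch = 0.0f;
        /* No acknowledgment is ever awaited: continuous packets relax
         * the usual requirement for reliable delivery. */
        sendto(sock, &pkt, sizeof pkt, 0,
               (struct sockaddr *)&server, sizeof server);
        usleep(50000);                           /* roughly 20 ticks per second */
    }
    close(sock);
    return 0;
}
```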
**Looking good**
Games in general drove the evolution of video cards. But multiplayer
games in particular created an insatiable demand for better graphics
systems, providing a market for even the most incremental advance.
Business users are not concerned if the graphics card they are using to
view their e-mail updates the screen 8 times a second while their
neighbor's card allows 10 updates a second. But a gamer playing Quake,
in which the difference between killing or being killed is measured in
tenths of a second, very much cares.
Quake soon became the de facto benchmark for the consumer graphics card
industry. Says David Kirk, chief scientist of NVIDIA, a leading graphics
processor manufacturer in Santa Clara, Calif., "Id Software's games
always push the envelope."
Quake II improved on its predecessor by taking advantage of hardware
acceleration that might be present in a PC, allowing much of the work of
rendering 3-D scenes to be moved from the CPU to the video card. Quake
III, released in December 1999, went a step further and became the first
high-profile game to require hardware acceleration, much as Id had been
willing to burn its boats in 1990 by insisting on EGA over CGA with
Commander Keen.
Carmack himself feels that his real innovations peaked with Quake in
1996. Everything since, he says, is essentially refining a theme. Return
to Castle Wolfenstein, in fact, was based on the Quake III engine, with
much of the level and game logic development work being done by an
outside company.
"There were critical points in the evolution of this stuff," Carmack
says, "getting into first person at all, then getting into arbitrary
3-D, and then getting into hardware acceleration....But the critical
goals have been met. There's still infinite refinement that we can do on
all these different things, but...we can build an arbitrary
representational world at some level of fidelity. We can be improving
our fidelity and our special effects and all that. But we have the
fundamental tools necessary to be doing games that are a simulation of
the world."