The 3D Feature Set
A scenario: you're sitting in front of the computer screen, about to
play the latest in computer racing games. The introduction screen gives
way to the starting line, and as the countdown begins you feel slightly
different than usual. After all, you've raced before, and it was a
pretty satisfying experience, but this time the race track and the cars
look as if you could almost touch them. In the bright light of day, you
almost feel it's too warm, but the race is on and you have to push
forward. Immediately, you notice how the lane lines in the road rush
past you, and how the barriers of the race track zoom by in the
periphery of your vision. As you come into a street scene, the
buildings stretch out before you and you can see to the horizon as the
road disappears. This feeling of perspective, of materials, texture,
and color, is achieved by a process called perspective-corrected
texture mapping.
Texture mapping allows images, either created in paint programs or
scanned in from another source, to be applied to 3D objects.
Perspective-corrected texture mapping calculates how the mapped image
would look when viewed from different angles. The buildings you see
racing by are not collections of doors, windows, bricks, and cement
modeled in 3D; that would be far too complex. They are blocks that have
had building facades mapped onto them. Applying perspective correction
means that the textures conform to the viewer's vantage point and
appear to be in perspective from all angles.
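To see what the correction involves, here is a minimal sketch in C of
one common approach (the function and variable names are illustrative,
not taken from any particular hardware). Interpolating the texture
coordinates (u, v) directly across a surface makes textures swim;
instead, the quantities u/z, v/z, and 1/z are interpolated linearly in
screen space, and the true coordinates are recovered with a per-pixel
divide, which keeps the texture in correct perspective:

    /* A sketch of perspective-correct interpolation along one
     * scanline, assuming texture coordinates (u, v) and depth z are
     * known at both ends of the span. */
    #include <stdio.h>

    void perspective_span(float u0, float v0, float z0,
                          float u1, float v1, float z1, int steps)
    {
        for (int i = 0; i <= steps; i++) {
            float t = (float)i / steps;
            /* Interpolate the perspective-safe quantities linearly... */
            float uoz = (u0 / z0) * (1 - t) + (u1 / z1) * t;
            float voz = (v0 / z0) * (1 - t) + (v1 / z1) * t;
            float ooz = (1.0f / z0) * (1 - t) + (1.0f / z1) * t;
            /* ...then divide to recover the true texture coordinates. */
            float u = uoz / ooz;
            float v = voz / ooz;
            printf("pixel %d: u=%.3f v=%.3f\n", i, u, v);
        }
    }

It is that per-pixel divide, repeated for every textured pixel on
screen, that makes hardware support for the technique so valuable.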
The road rushing under you is also a texture-mapped image. It's not
practical to model the road as one object; it's a series of smaller
strips with the image of a road surface repeatedly mapped onto them. In
some computer games with long corridors, the walls and floor seem to
shimmer as you move past them, and sometimes you can see the seams
between the blocks. Filtering techniques can be applied to remove these
distortions.
Bilinear- and trilinear-filtered textures are mathematically assessed
pixel by pixel. The hardware looks at each pixel of the texture map,
looks at the pixels around it, and determines the best pixel to display
on screen. The better the filtering, the more consistent the image, and
to the user the impression is of effortless movement. The less accurate
the filtering, the more likely it is that as you move around the scene
you will get some shimmer and shake in the background, because the
texture display varies every time you change your point of view.
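As an illustration, here is a sketch in C of how a bilinear sample
might be computed in software, assuming a greyscale texture stored as a
simple byte array (real hardware works on full-colour texels). The four
texels surrounding the sample point are weighted by how close the point
lies to each:

    /* A sketch of bilinear filtering for one texture sample, with
     * u and v in the range 0..1 and tex holding a w-by-h texture. */
    float bilinear_sample(const unsigned char *tex, int w, int h,
                          float u, float v)
    {
        float x = u * (w - 1), y = v * (h - 1);
        int x0 = (int)x, y0 = (int)y;
        int x1 = x0 + 1 < w ? x0 + 1 : x0;   /* clamp at the edges */
        int y1 = y0 + 1 < h ? y0 + 1 : y0;
        float fx = x - x0, fy = y - y0;      /* fractional position */

        float top = tex[y0 * w + x0] * (1 - fx) + tex[y0 * w + x1] * fx;
        float bot = tex[y1 * w + x0] * (1 - fx) + tex[y1 * w + x1] * fx;
        return top * (1 - fy) + bot * fy;    /* blend of 4 neighbours */
    }

Trilinear filtering performs this calculation twice, on two adjacent
sizes of the texture, and blends the two results.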
So, what appears normal, or real, is only possible because a great deal
of calculation occurs in the background. Textures are normally stored
in dynamic memory so that they can be accessed quickly. There is a
process called mip mapping that stores a number of sizes of a texture.
By storing various sizes of the same texture map, the 3D engine can
quickly determine which texture to apply to a surface near to or far
from you. Rather than calculating a shrunken or enlarged image of the
texture, and applying it to a surface as it moves backwards and
forwards in the frame, the images are already pre-rendered and ready
for use.
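A rough sketch of how a mip level might be chosen, assuming level 0 is
the full-size texture and each successive level halves its dimensions;
the deciding factor is roughly how many texels of the surface are
squeezed into each screen pixel:

    /* A sketch of mip level selection from the texel-to-pixel ratio.
     * Needs C99 and the math library (link with -lm). */
    #include <math.h>

    int choose_mip_level(float texels_per_pixel, int num_levels)
    {
        if (texels_per_pixel <= 1.0f)
            return 0;                   /* magnified: use the base map */
        /* each mip level halves the texture, so take log base 2 */
        int level = (int)floorf(log2f(texels_per_pixel));
        return level < num_levels - 1 ? level : num_levels - 1;
    }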
No matter at what angle you look at a surface, and no matter how far or
near it is from you, the texture-mapped surface of an object adapts to
your vantage point. These texture mapping functions are essential to
real-time 3D developers. Without them, the processor overhead needed to
create high levels of realism would be so great that advanced texturing
functions could not be used. Take texture mapping out of 3D and you
remove most of its efficacy.
Back in the racing car, you switch into high gear and the edges of
buildings, barriers, and objects become a blur as you speed by. Imagine
what it would be like if those edges were not a blur but jagged lines.
That is exactly what happens under normal circumstances, because of the
way pixels are drawn onto the screen.
[Figure: a jagged edge ("jaggies") compared with the same edge
anti-aliased]
Anti-aliasing removes the jaggies from images. Looking at the diagram
above, it's clear that any anti-aliasing calculation has to be
performed on a pixel-by-pixel basis. It's a feature of hardware
acceleration that may not be noticed if it's not there, but is
certainly noticeable when it is there. The enhanced experience of
anti-aliased images lets users suspend disbelief and become immersed in
the game.
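One simple pixel-by-pixel technique, sketched below in C under the
assumption of a greyscale frame with even dimensions, is supersampling
(just one of several anti-aliasing methods): the scene is rendered at
twice the resolution and each 2x2 block of samples is averaged down to
a single screen pixel, softening the jagged edges:

    /* A sketch of anti-aliasing by 2x2 supersampling: hi is the
     * w-by-h high-resolution render, lo the (w/2)-by-(h/2) screen. */
    void downsample_2x2(const unsigned char *hi, int w, int h,
                        unsigned char *lo)
    {
        for (int y = 0; y < h / 2; y++)
            for (int x = 0; x < w / 2; x++) {
                int sum = hi[(2 * y) * w + 2 * x]
                        + hi[(2 * y) * w + 2 * x + 1]
                        + hi[(2 * y + 1) * w + 2 * x]
                        + hi[(2 * y + 1) * w + 2 * x + 1];
                /* average of the four samples covering this pixel */
                lo[y * (w / 2) + x] = (unsigned char)(sum / 4);
            }
    }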
Imagine how it would feel to have your view obstructed by objects on
and around the racetrack. A 3D program has to determine what is in the
plane of view and what is obscured, and draw only those surfaces that
lie in front of others. However, it is not enough merely to know which
objects are in view and which are hidden behind them; you have to know
how they are placed in relation to each other.
Z-buffering is a means of determining the depth of objects in relation
to others. A z-buffer is a separate portion of graphics memory that
stores the z value of an object's coordinates. The 3D accelerator
calculates, on a per-pixel basis, the z value of objects in relation to
others in the same plane, and draws only those with a lower z value,
that is, those closest to the viewer. That's a big chunk of
mathematics. If you add all these calculations together, bear in mind
that it's a continuous process, and then realize that it changes
drastically every time the point of view changes, you can appreciate
why the CPU alone hasn't the processing time to make it all happen
quickly enough.
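The per-pixel test itself is simple, as this C sketch shows (the buffer
names are illustrative, and the z-buffer is assumed to be cleared to a
"far" value at the start of every frame):

    /* A sketch of the z-buffer test: a pixel is written only if it
     * is nearer than whatever is already drawn at that position. */
    void plot_pixel(float *zbuf, unsigned int *frame, int w,
                    int x, int y, float z, unsigned int color)
    {
        int idx = y * w + x;
        if (z < zbuf[idx]) {      /* lower z = closer to the viewer */
            zbuf[idx] = z;        /* record the new nearest depth */
            frame[idx] = color;   /* and draw the pixel */
        }
    }

The cost comes not from any one test but from performing it for every
pixel of every surface, every frame.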
3D graphics subsystems are going even further for more realistic scene
effects. Imagine you hit a corner badly, skid out of control off the
track and into the dirt, and the flying dust obscures your view.
Fogging can create the illusion of mist or haze clouding your view. It
can also create the illusion of distance by gradually fading objects in
the far distance. The function of placing a transparent set of images
against the background of your 3D world, and controlling that
transparency, is something that can only be achieved effectively with
hardware acceleration. Fogging is an application of a more general
technique called alpha blending, in which transparent images are
smoothly blended with those of other objects in the same plane of view.
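Here is a sketch in C of how depth fog might be computed per pixel,
assuming fog that begins at one distance and fully obscures at another;
the blend factor plays the role of the alpha value:

    /* A sketch of per-pixel depth fog as an alpha blend: the further
     * away a pixel is, the more of the fog colour is mixed in. */
    unsigned char fog_blend(unsigned char color, unsigned char fog_color,
                            float z, float fog_start, float fog_end)
    {
        float f = (z - fog_start) / (fog_end - fog_start);
        if (f < 0.0f) f = 0.0f;   /* nearer than fog_start: no fog */
        if (f > 1.0f) f = 1.0f;   /* beyond fog_end: fully fogged */
        return (unsigned char)(color * (1.0f - f) + fog_color * f);
    }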
How often do these functions take place? In entertainment software, the
most important factor in determining performance is not a single figure
for polygons per second, MIPS, or MOPS. Those values are confined to
single functions of the processor, whereas the entertainment software
experience embraces a number of functions simultaneously. Thus, the
benchmark for entertainment software is frames per second. Each frame
represents the time it takes to react to input, perform 2D and 3D
graphics, play the audio, and run the program logic. The more realistic
the game, the more "real time" if you will, the higher the frame rate
required. The higher the frame rate, the more responsive the game is to
the user and the more realistic the results on screen.
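A sketch of how such a figure might be obtained, with render_frame()
standing in for the whole per-frame workload just described (the
function is a hypothetical stub here, and clock() measures processor
time; a real game would use a wall-clock timer):

    #include <stdio.h>
    #include <time.h>

    static void render_frame(void)
    {
        /* stand-in for one frame: input, logic, audio, 2D and 3D */
    }

    int main(void)
    {
        int frames = 1000;
        clock_t start = clock();
        for (int i = 0; i < frames; i++)
            render_frame();
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (secs > 0.0)   /* guard against a zero-cost stub */
            printf("%.1f frames per second\n", frames / secs);
        return 0;
    }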
One method of increasing the frame rate is to draw one frame into
off-screen display memory while another is being read out to the
screen. This is double buffering, and it ensures that the screen is
updated at a faster rate than if each image were read in sequentially.
The great thing about double buffering as a feature is that it also
allows users to experience stereo vision and further immerse themselves
in their virtual world. Low-cost stereo glasses are becoming widely
available, and are a driving technology in the development of 3D
graphics. After all, the closer the user gets to being inside a frame
of action, the better the experience.
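A sketch of the idea in C, assuming two screen-sized buffers whose
roles are swapped each frame (on real hardware the front buffer is the
one the display chip scans out):

    #include <string.h>

    #define W 640
    #define H 480

    static unsigned char buf_a[W * H], buf_b[W * H];
    static unsigned char *front = buf_a;   /* shown on screen */
    static unsigned char *back  = buf_b;   /* drawn into off screen */

    static void next_frame(void)
    {
        memset(back, 0, sizeof buf_a);     /* clear the back buffer */
        /* ... scene drawing into back[] goes here ... */
        unsigned char *tmp = front;        /* flip: the finished */
        front = back;                      /* frame becomes visible */
        back  = tmp;                       /* in a single swap */
    }

Because the swap is just an exchange of pointers, the viewer never sees
a half-drawn frame.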
The frame rate is also a good way of determining the relative
performance of one 3D accelerator against another, because the various
processes such as audio, 2D graphics, and program logic are consistent;
the only potential bottleneck is the 3D graphics pipeline. The benefit
of integrating 3D acceleration with other multimedia functions is that
it ensures a closely coupled set of functions running optimally
together. However, the greater computational needs of 3D graphics and
the extra memory requirements will no doubt require stand-alone 3D
accelerators for some applications.