Using Direct3D For 2D Tile Rendering

Published February 24, 2000 by Herbert "Bracket" Wolverson, posted by Myopic Rhino
[size="5"]Introduction

Isometric and tile-based games have been around for a very long time. Rendering techniques for this type of game have grown considerably in sophistication since then, but the style itself remains popular. While some have argued (vocally!) that the arrival of 3D graphics will render 2D (and "pseudo-3D") rendering techniques obsolete, there is good reason to believe that if this is the case, it will not be true for some time. For one thing, writing a dynamic 3D engine is difficult - hard enough to be prohibitive for small development groups. Until 3D graphics hardware matures, compatibility remains an issue. Likewise, until the average 3D card can handle significantly higher polygon throughput than today, 3D graphics will not always look significantly better than a well-planned tile-based game. 3D renderers do, however, offer 2D graphics a significant boost in performance. After all, all a "3D card" does is spit out textured polygons at high speed - something that can seriously slow down a non-accelerated rendering routine. Alpha blending, texture mapping, Gouraud shading and other features have all been implemented in 2D routines - but a 3D card offers the potential to make them fast enough for regular use. This article describes the use of Direct3D to enhance 2D graphics - and suggests that nicely lit tile-based engines are within everybody's grasp.

[size="5"]Tile-based and Isometric Views

A significant volume of material has been written on both tile-based and isometric rendering. The "diamond tiles versus square tiles" argument has raged across many forums for years, with both sides having advantages. This article will focus on diamond-shaped tiles, largely because of author preference (they look nicer!). It should be relatively easy to switch to a square-tile rendering engine using these techniques. Jim Adams' guide to isometric engines (see the bibliography) gives a great overview of this kind of isometric rendering.


[size="5"]Traditional Methods

Traditionally, diamond tiles have been something of a pain. Tiles needed to be designed as diamond-shaped bitmaps, usually with a color-keyed background. It could be extremely difficult to make tile boundaries appear seamless. Additionally, the programmer was faced with a fair degree of overhead: the color-keyed area of each bitmap had to be ignored during the blitting process. Many innovative ways of reducing this overhead exist (and many artists have proven innovative in tile design), but the fact remained: isometric rendering was tricky.


[size="5"]How 3D Architectures Can Help

3D cards are in many ways ideally suited to rendering diamond-tile based isometric views. Textured triangles, the staple of 3D engines, require the application of bitmaps to odd-shaped regions. Diamonds break down simply into two triangles (as does any other quad), so there is no difference in raw blitting overhead between square and diamond tiles. The artist's life is made easier, too: textures in 3D engines are typically rectangular, and are mapped to fit polygons. Thus, artists are able to design square textures (which tend to tile much more nicely), and let the 3D hardware worry about making them look good when stretched over a diamond-shaped tile. Additionally, height-mapping and other concerns become significantly easier to implement because a textured quad, unlike a static bitmap, may be readily warped.

Note: Most 3D accelerators are really 2D accelerators. With the exception of some new ones that help with transformation & lighting, all these cards can do is pump out textured triangles - onto a 2D surface. Thus, there is absolutely nothing unnatural about using a 3D card to speed up 2D rendering - in fact, it's just doing what the card always does, without some heavy matrix math!


[size="5"]Some Assumptions

This article assumes a working knowledge of isometric/tile-based graphics. If you are unsure of how these work, you are advised to check out the bibliography for more information. While plenty of information is given on implementing an isometric engine, this article doesn't take the time to introduce the basic concepts. Sorry! This article also assumes a basic knowledge of DirectDraw (not Direct3D) and Windows programming. There are many tutorials on these on the Internet. André LaMothe's book Tricks of the Windows Game Programming Gurus makes an excellent starting point for learning these subjects (and many others).


[size="5"]First Steps: Setting Up DirectX

The first stage in learning to use "Enhanced 2D" requires the creation of a minimal Win32 application. The sources that come with this article include WinMain.cpp (for each example), which is about as minimal as you can get if you still want your program to be called "Win32"! The only significant items to note in this code are the following:
  • The inclusion of [font="Courier New"][color="#000080"]CEngine *Engine[/color][/font] as a global variable. CEngine is my basic tile engine class.
  • Just before the main message loop in WinMain, note the call to Engine->GameInit()
  • The main loop itself repeatedly calls Engine->GameMain()
  • When closing down, Engine->GameDone() is called. (A minimal sketch of this structure follows below.)
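To make that shape concrete, here is a minimal sketch of the kind of WinMain the demos use. This is an illustration only - window class registration and error checking are trimmed, and the WinMain.cpp in the accompanying source differs in detail:

#include <windows.h>
#include "CEngine.h"             // hypothetical header declaring CEngine

CEngine *Engine = NULL;          // global engine object, as in the demos

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow) {
    // ... register the window class and create MainWindow here ...

    Engine = new CEngine();
    Engine->GameInit();              // set up DirectDraw/Direct3D and game data

    MSG msg;
    bool running = true;
    while (running) {
        // drain any pending Windows messages without blocking
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) running = false;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        if (running) Engine->GameMain();   // render one frame
    }

    Engine->GameDone();              // release DirectX and game resources
    delete Engine;
    return 0;
}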
    [size="5"]The DXEngine Framework

I'm a diehard C++ fan. I admit it. Classes, object-oriented programming and similar concepts appeal to me in ways that probably can't be described to minors... or at least intellectually, anyway. ;-) Because of my love for C++, I've designed the framework used for all the demonstrations in an object-oriented manner. DXEngine is the base class for the game itself. It's an abstract class - meaning you can't instantiate it. It includes all of the code necessary to initialize DirectDraw and Direct3D, and hides it away in a single call to InitDirectX(). All a programmer wanting this functionality in his/her program needs to do is inherit this class and write GameInit(), GameMain() and GameDone(). Is that not nifty?
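As a rough illustration only (not the exact header from the accompanying source - the member names below are assumptions based on the snippets later in the article), the shape of DXEngine and a derived game class looks something like this:

class DXEngine {
public:
    virtual ~DXEngine() {}

    // Hides all of the DirectDraw/Direct3D startup described below.
    void InitDirectX();

    // The three methods a game must supply - pure virtual, so DXEngine
    // itself cannot be instantiated.
    virtual void GameInit() = 0;
    virtual void GameMain() = 0;
    virtual void GameDone() = 0;

protected:
    void InitDirectDraw();                   // called by InitDirectX()
    void InitDirect3D();                     // called by InitDirectX()
    void ReportError(const char *Message);   // pops up a message box

    // DirectX interface pointers shared by the demos (names assumed).
    LPDIRECTDRAW4        DirectDraw;
    LPDIRECTDRAWSURFACE4 Primary, BBuffer;
    LPDIRECT3D3          Direct3D;
    LPDIRECT3DDEVICE3    D3DDevice;
    LPDIRECT3DVIEWPORT3  Viewport;
};

// Each demo then derives from it and fills in the game-specific parts:
class CEngine : public DXEngine {
public:
    void GameInit();
    void GameMain();
    void GameDone();
};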


    [size="5"]Setting up DirectDraw

    DXEngine does quite a lot of work to bring DirectDraw to life. Much of this code originally came from LaMothe's work (although the BOB engine itself was trashed very quickly - talk about slow). The function that does all the work for setting up DirectDraw is InitDirectDraw() (no surprises there!). It doesn't differ significantly from the steps that you would need in general - although it does take the time to work in both windowed and fullscreen modes. Basically, it gets an interface to DirectDraw, sets the cooperation level, creates the screen mode, creates a primary and secondary surface and then sets up a clipper. Nothing too revolutionary.

    A few points in it need to be highlighted, however, in the context of Direct3D:
• When setting the cooperative level, I included the flag DDSCL_FPUSETUP. This tells DirectDraw that Direct3D is likely to be using the floating point unit (in single-precision mode). It's not compulsory, but it can improve performance. You should avoid putting doubles in your code if you include this flag.
• When setting the surface description (the ddsCaps member), I included the flag DDSCAPS_3DDEVICE. No prizes for guessing what this does... it tells Windows that the surface you want should be compatible with 3D device rendering. Not including this can result in nothing happening when you start rendering. (A condensed sketch of this setup follows below.)
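For reference, here is a condensed sketch of the fullscreen path through InitDirectDraw(), showing where those two flags go. Error handling, the windowed path and the clipper are omitted, and the real code differs in detail; ScreenWidth, ScreenHeight and MainWindow are the engine's assumed members:

// Get a DirectDraw interface (IDirectDraw4 pairs with the IDirect3D3
// interface used later).
LPDIRECTDRAW DD1;
DirectDrawCreate(NULL, &DD1, NULL);
DD1->QueryInterface(IID_IDirectDraw4, (LPVOID *)&DirectDraw);
DD1->Release();

// Exclusive fullscreen mode; DDSCL_FPUSETUP tells DirectDraw that
// Direct3D will be using the FPU in single precision.
DirectDraw->SetCooperativeLevel(MainWindow,
    DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN | DDSCL_FPUSETUP);
DirectDraw->SetDisplayMode(ScreenWidth, ScreenHeight, 16, 0, 0);

// Primary surface with one attached back buffer. DDSCAPS_3DDEVICE
// marks the chain as a valid target for Direct3D rendering.
DDSURFACEDESC2 ddsd;
memset(&ddsd, 0, sizeof(ddsd));
ddsd.dwSize = sizeof(ddsd);
ddsd.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT;
ddsd.dwBackBufferCount = 1;
ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE | DDSCAPS_FLIP |
                      DDSCAPS_COMPLEX | DDSCAPS_3DDEVICE;
DirectDraw->CreateSurface(&ddsd, &Primary, NULL);

// Fetch the back buffer that rendering (and Direct3D) will draw to.
DDSCAPS2 caps;
memset(&caps, 0, sizeof(caps));
caps.dwCaps = DDSCAPS_BACKBUFFER;
Primary->GetAttachedSurface(&caps, &BBuffer);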
      [size="5"]Setting Up Direct3D

InitDirect3D() handles most of this work. Since this is all new material to most people, I'll go through the initialization process in detail. Fortunately, it's not as complicated as it could be. Direct3D 7 actually includes some helper functions to make this even easier - but we'll learn more doing it the hard way! (See the DirectX 7 SDK for details of the easy way to do this.)

      The first step in initializing Direct3D is to query DirectDraw for an interface (ReportError is part of my DXEngine class - it displays a message box):

LPDIRECT3D3 Direct3D;
if (FAILED(DirectDraw->QueryInterface(IID_IDirect3D3, (LPVOID *)&Direct3D))) {
    ReportError("Direct3D Query Failed");
}
      The second step is to enumerate 3D devices. This is done with the FindDevice command (part of the Direct3D interface). The following code looks for hardware accelerated Direct3D devices:

D3DFINDDEVICESEARCH search;
D3DFINDDEVICERESULT result;

memset(&search, 0, sizeof(search));
search.dwSize = sizeof(search);
search.dwFlags = D3DFDS_HARDWARE;   // tell FindDevice to match on bHardware
search.bHardware = 1;

memset(&result, 0, sizeof(result));
result.dwSize = sizeof(result);

if (FAILED(Direct3D->FindDevice(&search, &result))) {
    ReportError("3D Hardware Not Found!");
    exit(10);
}

Assuming that this found a device, the next stage is to go ahead and create it. This can be as simple as calling CreateDevice (again, part of the Direct3D interface). The following code requests a hardware-accelerated (HAL) device, rendering to the back buffer:

LPDIRECT3DDEVICE3 D3DDevice;
Direct3D->CreateDevice(IID_IDirect3DHALDevice, BBuffer, &D3DDevice, NULL);
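In DXEngine the HAL attempt is wrapped in a fallback chain. A hedged sketch of that chain (the real code also reports which device it ended up with, and handles errors more gracefully):

// Try hardware first, then the MMX software rasterizer, then plain RGB.
if (FAILED(Direct3D->CreateDevice(IID_IDirect3DHALDevice, BBuffer,
                                  &D3DDevice, NULL))) {
    if (FAILED(Direct3D->CreateDevice(IID_IDirect3DMMXDevice, BBuffer,
                                      &D3DDevice, NULL))) {
        if (FAILED(Direct3D->CreateDevice(IID_IDirect3DRGBDevice, BBuffer,
                                          &D3DDevice, NULL))) {
            ReportError("Could not create any Direct3D device");
            exit(10);
        }
    }
}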
As the sketch above shows, my code actually expands on this somewhat: if a HAL device can't be created, DXEngine will try an MMX device, then a regular RGB (software-emulated) device. This isn't strictly necessary, though.

The final stage in starting up Direct3D involves the creation of a viewport. Viewports tell Direct3D how to render the world. Part of this structure sets up 3D clipping - and is only really of use if you are using Direct3D's transformation functions. It's of no use to this article, so we can ignore it. The important part is this:

LPDIRECT3DVIEWPORT3 Viewport;
D3DVIEWPORT2 Viewdata;

memset(&Viewdata, 0, sizeof(Viewdata));
Viewdata.dwSize = sizeof(Viewdata);
Viewdata.dwWidth = ScreenWidth;
Viewdata.dwHeight = ScreenHeight;

// Create the viewport
if (FAILED(Direct3D->CreateViewport(&Viewport, NULL))) {
    ReportError("Failed to create a viewport");
}
if (FAILED(D3DDevice->AddViewport(Viewport))) {
    ReportError("Failed to add a viewport");
}
if (FAILED(Viewport->SetViewport2(&Viewdata))) {
    ReportError("Failed to set Viewport data");
}
D3DDevice->SetCurrentViewport(Viewport);
This fills in a viewport structure with the screen size (if you want to render to a smaller window, you can change these values), creates the viewport, adds it to the device, applies the data to the viewport itself, and tells the device to use it. Overly complicated, if you ask me, but it gets the job done. When you are finished with Direct3D, it's a good idea to release the interfaces you've acquired:

if (Viewport) Viewport->Release();
if (Direct3D) Direct3D->Release();
      If you don't do this, memory leaks are pretty likely. You may also make it impossible to run other Direct3D programs (until you reboot)... not a good idea - especially if you want to keep any fans you might have!


      [size="5"]Rendering a Wireframe Isometric View

DEMO1 (see the accompanying source code) draws a scrolling, wireframe isometric display - little more than a bunch of outlined triangles adjacent to one another. This is a good illustration of how isometric tile engines stagger tiles (and of how quads may be broken down into triangles), and is pretty boring... but it's a great start to learning Direct3D. Demo 1, like all of my other demos, is based around a class CEngine. CEngine inherits all of its low-level DirectX initialisation/destruction code from DXEngine.

GameInit, the basic game initialisation, couldn't be simpler for this example. The entire method consists of just two calls: one to initialise DirectDraw/Direct3D and the other to zero my scrolling position counter:

void CEngine::GameInit() {
    InitDirectX();
    ScrollX = 0;
}

      GameMain, the function that gets called for each frame, is also pretty simple:

void CEngine::GameMain() {
    ScrollX++;
    if (ScrollX > 64) ScrollX = 0;
    FillSurface(BBuffer, 0, NULL);
    Demo1Render(ScrollX, 0);
    Flip();
    // check if the user is trying to exit
    if (KEY_DOWN(VK_ESCAPE)) {
        PostMessage(MainWindow, WM_DESTROY, 0, 0);
    } // end if
}
The scroller location is incremented, and zeroed again if it exceeds the width of a tile. FillSurface is a utility routine I created that simply sets a DirectDraw surface to a solid color; if I didn't take the time to clear the back buffer each frame, things would quickly become pretty ugly. Demo1Render is described in detail below - it's the actual Direct3D rendering routine for this example. Finally, the back buffer is flipped, and the program checks to see if the user has pressed ESC to quit.

      GameDone is really simple... it does nothing in this example! (The underlying class makes sure that Direct3D/DirectDraw are released properly).

      The real meat of this example is the rendering code (I bet you thought I'd never get to it!). The example itself is heavily commented, but here is a step-by-step breakdown of how it works:

      First of all, I declare some variables. ScreenX and ScreenY are used to store screen coordinates for rendering. WhereX and WhereY are used to store world coordinates. (If you are confused by these terms, check out one of the tile rendering tutorials... basically, world coordinates work in terms of whole tiles on a larger map, screen coordinates work in terms of pixel locations). The most important variable, however, is the following:

      D3DTLVERTEX Vertex[4];
      The D3DTLVERTEX structure is central to using Direct3D to improve 2D performance. It is part of Direct3D's "flexible vertex format" system. Other predefined vertex formats include D3DVERTEX and D3DLVERTEX. Each defines a different set of data, and tells the Direct3D pipeline what it needs to do.
      • D3DVERTEX data needs to be both transformed and lit.
      • D3DLVERTEX data already has lighting information, but needs to be transformed.
• D3DTLVERTEX data already has screen coordinates and lighting information included. As such, it's of the most use in an Enhanced 2D context.

The next step is common to most 3D rendering systems. Every frame of 3D graphics has to be preceded by a call to BeginScene():

        D3DDevice->BeginScene();
        Next, I inform Direct3D that I have no intention of using its lighting routines. Even though I'm specifying D3DTLVERTEX structures, Direct3D has a habit of wanting to use its own system... so this tells it not to. This doesn't really need to be called every frame, but I kept it in the rendering routine for clarity.

        D3DDevice->SetLightState(D3DLIGHTSTATE_MATERIAL,NULL);
        Next, and only in this demo, I inform Direct3D that I'd like to render in wireframe mode. This illustrates the SetRenderState command, one of Direct3D's most powerful concepts. D3D maintains a list of variables that may be changed with this command - including fog settings, filtering, perspective correction, and more. Check the SDK for more information.

        D3DDevice->SetRenderState(D3DRENDERSTATE_FILLMODE,D3DFILL_WIREFRAME);
Because we are rendering a wireframe demo, we want the lines to be white. This is nice and easy to achieve. For each of the 4 vertices (the corners of the quad), the color property can be set to white. Direct3D includes a macro, D3DRGB, that takes 3 floats (from 0.0f to 1.0f) and converts them into its own D3DCOLOR format:

        Vertex[0].color = D3DRGB(1.0f,1.0f,1.0f);
        Vertex[1].color = D3DRGB(1.0f,1.0f,1.0f);
        Vertex[2].color = D3DRGB(1.0f,1.0f,1.0f);
        Vertex[3].color = D3DRGB(1.0f,1.0f,1.0f);
Finishing up the initialisation phase of rendering, WhereX and WhereY are set to 0. ScreenY is set to -16, ensuring that even if a large scroll offset is in use it will not leave any gaps.

        The actual isometric rendering loop is pretty much the same as that described in numerous other articles. It may be summarized as (in pseudocode - see the example for actual code) :

While (ScreenY < ScreenHeight) {
    If (WhereY MOD 2) = 1 then ScreenX = -64 else ScreenX = -96
    While (ScreenX < ScreenWidth) {
        Setup Vertex Information
        Render The Triangles
        ScreenX = ScreenX + 64 (tile width)
        WhereX = WhereX + 1
    }
    WhereY = WhereY + 1
    WhereX = 0
    ScreenY = ScreenY + 16 (half the tile height)
}
Why did I leave that in pseudocode? Because it's pretty standard stuff, and to keep this article short I'd rather focus on the actual rendering code. Besides, pseudocode is good practice. ;-) The parts of this loop that concern Direct3D - setting up the vertex information and rendering the triangles - are expanded upon below.

Setting up the vertex information for rendering a square isn't too hard... although it could be easier. Direct3D doesn't support quads as a primitive type (OpenGL does). Fortunately, it's not all that hard to break a diamond into two triangles. The yellow numbers represent the location of each of the 4 vertices:

[Image: tilelayout.gif]


An obvious question at this point is... why is 2 the bottom and not the rightmost vertex? The answer is a little optimization known as the triangle strip (D3DPT_TRIANGLESTRIP). Direct3D performs much better if you can batch triangles and send them through the pipeline together. Unfortunately, the triangles you send together have to be using the same texture. Since we will probably want adjacent tiles to look different, I've just grouped two triangles together. Triangles have to be sent to Direct3D in clockwise order - or they don't draw at all! Because of this, I start with the leftmost triangle:

[Image: tri1.gif]

and then add one more vertex to get the second one:

[Image: tri2.gif]

        The vertex arrangement has allowed me to just add 1 vertex rather than using a triangle list and listing both triangles in detail. Neat!

The SDK includes some nice pictures illustrating how much farther triangle strips may be taken. It's a problem that you can't change textures during the rendering of a triangle strip; sometimes you might want to glue together a big texture covering multiple tiles, but in general the utility of strips is greatly diminished because of this.

        Anyway, in terms of this example, the following code fills up the D3DTLVERTEX structures:

        Vertex[0].sx = ScreenX + OffsetX;
        Vertex[0].sy = ScreenY+16 + OffsetY;
        Vertex[1].sx = ScreenX+32 + OffsetX;
        Vertex[1].sy = ScreenY + OffsetY;
        Vertex[2].sx = ScreenX+32 + OffsetX;
        Vertex[2].sy = ScreenY+32 + OffsetY;
        Vertex[3].sx = ScreenX+64 + OffsetX;
        Vertex[3].sy = ScreenY+16 + OffsetY;

There is plenty of room to optimize these assignments - but I wanted the code to remain clear. OffsetX and OffsetY are the key to smooth pixel scrolling - they are simply a pixel offset by which the entire image is shunted. sx and sy in each vertex define screen positions. Vertex[0] is the left of the diamond, Vertex[1] is the top of the diamond, Vertex[2] is the bottom, and Vertex[3] is the right side. Nothing too revolutionary here!

        Finally for the inner loop, the quad is sent off to be rendered:

        D3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP,D3DFVF_TLVERTEX,Vertex,4,D3DDP_WAIT);
        DrawPrimitive is the basis of modern Direct3D rendering. All this does is:
        1. Tell DrawPrimitive that Vertex contains a triangle strip (as opposed to a triangle list - which would need 6 vertices)
        2. Explains that the vertices are already transformed and lit (D3DFVF_TLVERTEX)
        3. Indicates where D3D may find the vertex information.
        4. Notes that there are 4 vertices [0-3]
        5. Tells DrawPrimitive to wait if it has to before rendering. You can use 0 for this parameter and it should still work.
        Lastly for the render routine, you have to call EndScene: [font="Courier New"][color="#000080"]D3DDevice->EndScene();[/color][/font]

        To recap, this example has shown you how to fire up Direct3D, render wireframe quads, and perform basic isometric scrolling. The tile engine assumes 64x32 tiles, but could be easily adapted for almost any other size. Not bad for a 47 k executable!
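Pulling the pieces together, here is a condensed sketch of what a Demo1Render-style routine looks like when the snippets above are assembled. It is illustrative rather than a copy of the accompanying source: the parameter names, the int casts and the sz/rhw values (rhw is discussed properly in the next section) are my assumptions, and the real Demo1Render is more heavily commented and structured slightly differently.

void CEngine::Demo1Render(int OffsetX, int OffsetY) {
    D3DTLVERTEX Vertex[4];

    D3DDevice->BeginScene();
    D3DDevice->SetLightState(D3DLIGHTSTATE_MATERIAL, NULL);
    D3DDevice->SetRenderState(D3DRENDERSTATE_FILLMODE, D3DFILL_WIREFRAME);

    // Wireframe lines are drawn in white; sz and rhw are set once since
    // we never use perspective in this 2D context.
    for (int i = 0; i < 4; i++) {
        Vertex[i].color = D3DRGB(1.0f, 1.0f, 1.0f);
        Vertex[i].sz    = 0.0f;
        Vertex[i].rhw   = 1.0f;
    }

    int WhereY  = 0;
    int ScreenY = -16;                            // start above the screen edge
    while (ScreenY < (int)ScreenHeight) {
        int WhereX  = 0;
        int ScreenX = (WhereY % 2) ? -64 : -96;   // stagger alternate rows
        while (ScreenX < (int)ScreenWidth) {
            Vertex[0].sx = (float)(ScreenX      + OffsetX);   // left
            Vertex[0].sy = (float)(ScreenY + 16 + OffsetY);
            Vertex[1].sx = (float)(ScreenX + 32 + OffsetX);   // top
            Vertex[1].sy = (float)(ScreenY      + OffsetY);
            Vertex[2].sx = (float)(ScreenX + 32 + OffsetX);   // bottom
            Vertex[2].sy = (float)(ScreenY + 32 + OffsetY);
            Vertex[3].sx = (float)(ScreenX + 64 + OffsetX);   // right
            Vertex[3].sy = (float)(ScreenY + 16 + OffsetY);

            D3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, D3DFVF_TLVERTEX,
                                     Vertex, 4, D3DDP_WAIT);
            ScreenX += 64;                        // tile width
            WhereX++;
        }
        WhereY++;
        ScreenY += 16;                            // half the tile height
    }

    D3DDevice->EndScene();
}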


        [size="5"]Adding Textures to the Equation

        DEMO2 (and all subsequent demos) extends the code described above. Not an awful lot changes - but the code gains the ability to texture the tiles it renders, as opposed to the demonstrative (but painfully retro) wireframe graphics of DEMO1.

        The biggest addition to the mix in DEMO2 is a new class, CTexture. CTexture encapsulates code required to load a bitmap into memory, hand control of it over to Direct3D's texture management interface, and finally destroy the texture when you are done with it. The code itself is surprisingly simple, all things considered... although the bitmap loading routine can be simplified considerably if you so wish. In fact, the code for LoadTexture can be entirely replaced with the DirectX 7 utility function of the same name! The only reason I've included my own version is that I like to be able to tweak bit depths during loading - something that isn't too relevant for this article. So, I suggest that you either copy my code or use the SDK example!
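The heart of CTexture boils down to three steps: create a DirectDraw surface flagged as a (managed) texture, copy the bitmap onto it, and query that surface for its texture interface. Here is a hedged sketch of those steps - 16-bit pixel format negotiation, mipmaps and error checking are omitted, the "tile.bmp" filename and 64x64 size are placeholders, and the class in the accompanying source does more:

// Create a texture-capable surface and let Direct3D manage it.
DDSURFACEDESC2 ddsd;
memset(&ddsd, 0, sizeof(ddsd));
ddsd.dwSize          = sizeof(ddsd);
ddsd.dwFlags         = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT;
ddsd.dwWidth         = 64;                       // must be a power of two
ddsd.dwHeight        = 64;
ddsd.ddsCaps.dwCaps  = DDSCAPS_TEXTURE;
ddsd.ddsCaps.dwCaps2 = DDSCAPS2_TEXTUREMANAGE;   // let D3D manage video memory

LPDIRECTDRAWSURFACE4 TextureSurface;
DirectDraw->CreateSurface(&ddsd, &TextureSurface, NULL);

// Blit the bitmap onto the surface using GDI.
HBITMAP hbm = (HBITMAP)LoadImage(NULL, "tile.bmp", IMAGE_BITMAP,
                                 64, 64, LR_LOADFROMFILE | LR_CREATEDIBSECTION);
HDC hdcImage = CreateCompatibleDC(NULL);
SelectObject(hdcImage, hbm);

HDC hdcSurface;
TextureSurface->GetDC(&hdcSurface);
BitBlt(hdcSurface, 0, 0, 64, 64, hdcImage, 0, 0, SRCCOPY);
TextureSurface->ReleaseDC(hdcSurface);
DeleteDC(hdcImage);
DeleteObject(hbm);

// Finally, get the IDirect3DTexture2 interface that SetTexture() expects.
LPDIRECT3DTEXTURE2 Texture;
TextureSurface->QueryInterface(IID_IDirect3DTexture2, (LPVOID *)&Texture);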

        The changes to the code most relevant to this article are contained within the new version of CEngine:

        GameInit() now includes code to instantiate SampleTexture (a CTexture object).

        GameMain() is unchanged, except that Demo2Render is called instead of Demo1Render.

        GameDone() now deletes SampleTexture.

The meat of the texturing code is found in Demo2Render. Once variables have been declared, the first new section deals with texture alignment. Direct3D (and OpenGL, for that matter) uses floats to indicate texture locations relative to a vertex. A value of 0.0 indicates the beginning of the texture along an axis, and a value of 1.0 indicates the end. Thus, the size of the texture is irrelevant at this point. This lets a programmer/artist have pretty fine control over model skinning without needing to worry about precise pixel coordinates. It also makes it nice and easy to apply any texture you want to a quad. I decided that vertex 0 would be the top left, vertex 1 would be the top right, vertex 2 would be bottom left and vertex 3 would be bottom right. It's traditional when talking about textures to use u and v instead of x and y (presumably for the sake of clarity). Within a D3DTLVERTEX, these coordinates are stored as tu and tv. Thus, texture alignment for DEMO2 is set up as follows:

        Vertex[0].tu = 0.0f;
        Vertex[0].tv = 0.0f;
        Vertex[1].tu = 1.0f;
        Vertex[1].tv = 0.0f;
        Vertex[2].tu = 0.0f;
        Vertex[2].tv = 1.0f;
        Vertex[3].tu = 1.0f;
        Vertex[3].tv = 1.0f;

It's worth noting that whatever coordinates you choose, the texture will be warped to fit your polygon. This gives a very quick and easy way of rotating, zooming and panning textures!
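As a tiny illustration of that, sub-ranges and offsets in tu/tv are all it takes to zoom into or pan across a texture. The 0.5f and 0.25f values below are arbitrary, and what happens when coordinates leave the 0..1 range depends on the texture addressing state:

// Show only the top-left quarter of the texture (a 2x zoom)...
Vertex[0].tu = 0.0f;  Vertex[0].tv = 0.0f;
Vertex[1].tu = 0.5f;  Vertex[1].tv = 0.0f;
Vertex[2].tu = 0.0f;  Vertex[2].tv = 0.5f;
Vertex[3].tu = 0.5f;  Vertex[3].tv = 0.5f;

// ...or slide the whole mapping sideways a little each frame to pan.
float pan = 0.25f;                 // arbitrary offset in texture space
Vertex[0].tu += pan;
Vertex[1].tu += pan;
Vertex[2].tu += pan;
Vertex[3].tu += pan;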

The next consideration when texturing in Direct3D is the rhw component of D3DTLVERTEX. rhw generally isn't meaningful in a 2D context, since it's (to quote the SDK) "often 1 divided by the distance from the origin to the object along the z-axis." We don't have a z-axis, so perspective correction isn't going to happen... so we'll just set this to 1.0f:

        Vertex[0].rhw = 1.0f;
        Vertex[1].rhw = 1.0f;
        Vertex[2].rhw = 1.0f;
        Vertex[3].rhw = 1.0f;

        Finally, since I'm only using one texture, I tell Direct3D to use it to texture all subsequent polygons for the scene:

        D3DDevice->SetTexture(0,SampleTexture->Texture);
SetTexture is an interesting command. The 0 represents the texture stage. It is possible (and a good idea, sometimes!) to set several textures to apply to the same render command. With hardware multitexturing, this can be inexpensive in terms of processor performance, and you can create some really neat effects. This is where you would specify bump maps, lightmaps, overlays, etc. With transparency, it's even possible to perform multilayer tile rendering this way with only one render call! SetTexture may be your best friend in this respect, but be warned: it is a little slow. You really only want to call it when you have to. Once per tile will work, but if you can find a way to make sure that runs of identical tiles don't require a call to it, then you'll get a frame rate boost.
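One cheap way to avoid redundant calls is to remember which texture is currently set and only call SetTexture when the tile's texture actually changes. A minimal sketch (LastTexture is an assumed variable of my own; this only helps when consecutive tiles share a texture):

static LPDIRECT3DTEXTURE2 LastTexture = NULL;
LPDIRECT3DTEXTURE2 ThisTexture = SampleTexture->Texture;   // whatever this tile uses

if (ThisTexture != LastTexture) {
    D3DDevice->SetTexture(0, ThisTexture);   // only pay the cost on a change
    LastTexture = ThisTexture;
}
D3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, D3DFVF_TLVERTEX, Vertex, 4, D3DDP_WAIT);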

        The renderer's inner loop remains unchanged! On the machines I tested the demo on, there wasn't much speed difference between texturing the polygons and just rendering them in wireframe, although this may vary depending upon hardware.

        For me, probably the best feature of this type of rendering is the texturing engine. Textures are rectangular bitmaps, and don't need to be pre-distorted (with wasted space for color keying). This makes life a lot easier for your artist!

Important Note: Texture dimensions must be powers of 2 to render properly in Direct3D. Thus, 64x64, 128x32 and 256x16 (etc.) are all fine... but 15x15 isn't. This generally isn't a problem, though. Many 3D cards, however, choke on larger textures. Anything above 256x256 probably won't work on a large portion of the cards out there (such as everything 3DFX had released when I wrote this article). Larger textures also use up a lot of texture memory, so it's probably a good idea to keep textures small - particularly for a tiled engine.
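If you generate or load textures at run time, a quick sanity check along these lines can save some head-scratching (a small helper of my own, not part of DirectX):

// Returns true if n is a power of two (1, 2, 4, 8, ... 256, ...).
bool IsPowerOfTwo(int n) {
    return (n > 0) && ((n & (n - 1)) == 0);
}

// e.g. reject a 15x15 bitmap before handing it to Direct3D:
// if (!IsPowerOfTwo(width) || !IsPowerOfTwo(height)) ReportError("Bad texture size");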


        [size="5"]Adding Lighting to the Engine

        DEMO3 adds random colored lights to the renderer. A random color light is applied to each vertex of every tile, on every frame. The result is decidedly trippy, and has a nasty habit of making me think of disco. Despite this, it is a good example of the speed at which Direct3D can apply Gouraud shading on most cards. I remember working hard to do this in regular 2D, with just DirectDraw - and arriving with framerates around 5 fps. Direct3D makes it so ridiculously easy to use this form of lighting that I've been kicking myself ever since for not trying it earlier!

        A more advanced form of lighting uses lightmaps. These are basically a grayscale texture, applied as the second texture with SetTexture(1,texture) and a call to ensure that the renderer knows that it should alpha-blend the second texture. Direct3D makes doing this incredibly fast. However, lightmapping is a sufficiently large topic to warrant its own article - so I'm only going to say that it can be done (and quickly, substantially more quickly than any DirectDraw solution I've seen thus far). Vertex lighting should be more than enough to get you started on the Direct3D road.
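To give a flavour of what that looks like with the texture-stage API, here is one common arrangement: modulating the base texture by a grayscale lightmap in stage 1. This is a hedged sketch, not how a full lightmapping engine would necessarily be structured; TileTexture and LightmapTexture are placeholder names:

// Stage 0: the ordinary tile texture, modulated by the vertex color.
D3DDevice->SetTexture(0, TileTexture);
D3DDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
D3DDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
D3DDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

// Stage 1: a grayscale lightmap, multiplied over the result of stage 0.
// D3DTLVERTEX only carries one tu/tv pair, so reuse coordinate set 0.
D3DDevice->SetTexture(1, LightmapTexture);
D3DDevice->SetTextureStageState(1, D3DTSS_TEXCOORDINDEX, 0);
D3DDevice->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
D3DDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
D3DDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);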

        DEMO3's GameInit and GameDone are unchanged from DEMO2. GameMain calls Demo3Render instead of Demo2Render, but that's really all that changes in the non-rendering code.

Demo3Render itself isn't substantially different from Demo2Render, either. The only difference is that inside the inner loop, before DrawPrimitive is called, I set the color value of each vertex to a random float. D3DRGB combines red, green and blue floating point values between 0 and 1 into whatever color format Direct3D happens to be using within D3DCOLOR. The entirety of the random lighting code is this (it's a little inelegant - I just wanted to demonstrate the color property's power):

// dividing by RAND_MAX keeps each channel in the 0..1 range D3DRGB expects
Vertex[0].color = D3DRGB(rand()/(float)RAND_MAX, rand()/(float)RAND_MAX, rand()/(float)RAND_MAX);
Vertex[1].color = D3DRGB(rand()/(float)RAND_MAX, rand()/(float)RAND_MAX, rand()/(float)RAND_MAX);
Vertex[2].color = D3DRGB(rand()/(float)RAND_MAX, rand()/(float)RAND_MAX, rand()/(float)RAND_MAX);
Vertex[3].color = D3DRGB(rand()/(float)RAND_MAX, rand()/(float)RAND_MAX, rand()/(float)RAND_MAX);


        [size="5"]Pulling This Together: A Height-mapped, Lit Tile Engine!

DEMO4 is designed to demonstrate the advantages of having made it this far - it pulls most of the work from the earlier sections into one reasonably coherent engine. A lot would need to be done to make this into a working game, but it's a pretty good start. What this demo does is add a map structure, storing a height and lighting level for each vertex. This is something that would be extremely hard to do in a regular tile engine - the vertical element usually had to be added with clever artist tricks, and ramps have been known to give artists serious headaches.

The first major change incorporated in DEMO4 is the tMapNode class. This is simply a storage class (I'd have used a struct, but in C++ a struct is just a class whose members default to public, so there really isn't a lot of point!). The entire class definition is as follows:

class tMapNode {
public:
    int VertexHeight[4];
    D3DCOLOR VertexColor[4];
    int Texture;
};
        Basically, this class stores VertexHeight (an offset from the normal pixel location), a color for each vertex, and an index number designed to identify that tile's texture. In a real game, there would be plenty more - possibly including a pointer to any tiles that live above this one, giving you the option of infinite layering (something I'm fond of in this age of super-fast rendering!).
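For instance (purely illustrative, not part of the demo code), a layered version might look like:

class tMapNode {
public:
    int       VertexHeight[4];
    D3DCOLOR  VertexColor[4];
    int       Texture;
    tMapNode *Above;      // next tile stacked on top of this one, or NULL
};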

Within CEngine, I've expanded the scrolling code to cope with having a map defined, rather than just static tiles. ScrollX now indicates the upper-left tile to be rendered. ScrollXOffset indicates the pixel offset by which to adjust the rendering pass. Direction has been added so that the scroller will bounce left, then right, then left, etc. I didn't add vertical scrolling, even though it would have been easy to do so - exactly the same principles apply. GameInit has gained code to initialize these variables.

GameMain now also calls a new method, MakeDemoMap. MakeDemoMap is a relatively simple routine that initializes the map with random heights. It includes some tile adjacency logic taken straight from TANSTAAFL's isometric tutorial (q.v.). It also assigns a lighting level based on the height of a vertex - higher equals brighter. This isn't a bad lighting system for a simple landscape renderer; in a real game, you would probably want something a little more sophisticated. Lighting systems are another topic that could easily fill a tutorial of their own, so for now I'll give you the basics of how to apply the results - and let you experiment. Who knows, I may be talked into writing a lighting tutorial someday!
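A hedged sketch of the idea behind MakeDemoMap - random heights, with brightness tied to height. The map dimensions, the height range and the brightness scaling are my assumptions, and the adjacency smoothing from TANSTAAFL's tutorial is omitted entirely:

const int MAP_WIDTH  = 30;     // assumed demo map size
const int MAP_HEIGHT = 30;

void CEngine::MakeDemoMap() {
    for (int y = 0; y < MAP_HEIGHT; y++) {
        for (int x = 0; x < MAP_WIDTH; x++) {
            for (int v = 0; v < 4; v++) {
                // Random height between 0 and 15 pixels.
                int h = rand() % 16;
                Map[x][y].VertexHeight[v] = h;

                // Higher vertices are brighter: map 0..15 onto 0.25..1.0.
                float bright = 0.25f + 0.75f * (h / 15.0f);
                Map[x][y].VertexColor[v] = D3DRGB(bright, bright, bright);
            }
            Map[x][y].Texture = 0;   // every tile uses the sample texture
        }
    }
}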

        GameMain includes some basic scrolling logic, now:

if (Direction == 0) {
    ScrollXOffset--;
    if (ScrollXOffset < 1) {
        ScrollX++;
        ScrollXOffset = 63;
        if (ScrollX > 20) Direction = 1;
    }
} else {
    ScrollXOffset++;
    if (ScrollXOffset > 63) {
        ScrollX--;
        ScrollXOffset = 0;
        if (ScrollX < 1) Direction = 0;
    }
}

This is pretty basic. If Direction is 0, it subtracts 1 from ScrollXOffset. If ScrollXOffset drops below 1, it moves on to the next tile. If it has run out of map, it reverses direction. When Direction is equal to 1, it does the same thing but in the other direction!

        GameMain also includes a call to Demo4Render, which has gained some extra parameters. WorldX and WorldY are now passed to it, and these specify the coordinates (in tilespace) of the top-left tile to be rendered.

        GameDone hasn't been touched. It didn't really need to be!

        Demo4Render is largely the same as previous incarnations, but some changes have been made to incorporate both the map structure and heightmapped rendering. The most trivial (but important) change is that WhereX and WhereY are now reset to World coordinates rather than zero. If I didn't do this, I'd be rendering the same portion of map every frame - which is boring! Almost all of the changes in Demo4 come within the renderer's inner loop:

        The Vertex sx and sy assignment section has changed. The height for each vertex is now subtracted from its sy coordinate; I chose subtraction so that positive height numbers would give the appearance of hills, while negative would give valleys... it just seemed more logical to me, that way around. The vertex position assignments now look like this:

        Vertex[0].sx = ScreenX + OffsetX;
        Vertex[0].sy = ScreenY+16 + OffsetY - Map[WhereX][WhereY].VertexHeight[0];
        Vertex[1].sx = ScreenX+32 + OffsetX;
        Vertex[1].sy = ScreenY + OffsetY - Map[WhereX][WhereY].VertexHeight[1];
        Vertex[2].sx = ScreenX+32 + OffsetX;
        Vertex[2].sy = ScreenY+32 + OffsetY - Map[WhereX][WhereY].VertexHeight[2];
        Vertex[3].sx = ScreenX+64 + OffsetX;
        Vertex[3].sy = ScreenY+16 + OffsetY - Map[WhereX][WhereY].VertexHeight[3];
        That wasn't too difficult! The neat thing is, this tiny change causes all of the textures to warp as appropriate with NO change to the rendering/blitting code! Isn't that wonderful? I like this so much that I should probably go and take a nice, long, cold drink. I remember trying to do this the hard way (2D!)... not pleasant. Enough to give me gray hair, anyway!

        The color code has also changed, so that it uses the stored colors rather than making you wonder if I'm on drugs. ;-) This is another pretty simple change:

        Vertex[0].color = Map[WhereX][WhereY].VertexColor[0];
        Vertex[1].color = Map[WhereX][WhereY].VertexColor[1];
        Vertex[2].color = Map[WhereX][WhereY].VertexColor[2];
        Vertex[3].color = Map[WhereX][WhereY].VertexColor[3];

        Believe it or not, that's all that it took to produce a neatly shaded, height-mapped tiling engine. There are plenty of optimizations that could be applied, lots of room for additional innovation, but that's the basics. With luck, lots of vertex-shaded tile games will appear, now! (If you write something cool, and think this article helped, I'd love to get [email="bracket@unforgettable.com"]email[/email] from you!)


        [size="5"]Where To Go From Here (Or What I didn't cover!)

A number of topics could have been covered in this tutorial, but were omitted so that it didn't turn into a book. The basics of each are here, but a more advanced treatment of any of them would require its own tutorial. Some of these things are:
• Sprites. I didn't produce a general quad-drawing system, although I've given out enough information that it should be really easy to do. Hint: load the sprite as a texture (using color keying if you aren't using any form of texture filter - it works the same way as in 2D); just remember to use [font="Courier New"][color="#000080"]SetRenderState(D3DRENDERSTATE_COLORKEYENABLE, TRUE)[/color][/font] at least once. (A sketch appears after this list.)
        • Complex Lighting. Vertex lighting is cool, and can be made a lot cooler if you apply dynamic lighting routines, more sophisticated light generation systems in the first place, etc. Lightmaps can also be cool, although they can eat up a lot of resources pretty quickly.
• Layering. I deliberately didn't go into this; there are a lot of articles out there covering layering in tile-based engines - and layering in Direct3D is no different from layering in 2D.

I'm open to any other suggestions, of course. This tutorial was intended as a primer in using Direct3D to enhance one's tile rendering experience, and I hope it has helped. You can contact me at [email="bracket@unforgettable.com"]bracket@unforgettable.com[/email] with any questions, comments, queries or complaints!
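To close, here is a hedged sketch of the sprite hint from the first bullet above: mark a color as transparent on the sprite's texture surface, enable color keying on the device, and draw the sprite as a screen-space quad. SpriteSurface, SpriteTexture and the 32x32 size are placeholders, and the key value must match your texture's pixel format:

// Once, when the sprite texture is loaded: mark 0 (black) as transparent...
DDCOLORKEY key;
key.dwColorSpaceLowValue  = 0;
key.dwColorSpaceHighValue = 0;
SpriteSurface->SetColorKey(DDCKEY_SRCBLT, &key);

// ...and tell Direct3D to honour the key when texturing.
D3DDevice->SetRenderState(D3DRENDERSTATE_COLORKEYENABLE, TRUE);

// A sprite is then just a textured screen-space quad at (x, y).
void DrawSprite(LPDIRECT3DDEVICE3 D3DDevice, LPDIRECT3DTEXTURE2 SpriteTexture,
                float x, float y) {
    D3DTLVERTEX V[4];
    const float w = 32.0f, h = 32.0f;            // sprite size in pixels
    float sx[4] = { x,    x + w, x,    x + w };  // 0 TL, 1 TR, 2 BL, 3 BR
    float sy[4] = { y,    y,     y + h, y + h };
    float tu[4] = { 0.0f, 1.0f,  0.0f, 1.0f  };
    float tv[4] = { 0.0f, 0.0f,  1.0f, 1.0f  };
    for (int i = 0; i < 4; i++) {
        V[i].sx = sx[i];  V[i].sy = sy[i];
        V[i].sz = 0.0f;   V[i].rhw = 1.0f;
        V[i].color = D3DRGB(1.0f, 1.0f, 1.0f);
        V[i].tu = tu[i];  V[i].tv = tv[i];
    }
    D3DDevice->SetTexture(0, SpriteTexture);
    D3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, D3DFVF_TLVERTEX, V, 4, D3DDP_WAIT);
}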


          [size="5"]Bibliography

          TANSTAAFL, "Isometric 'n' Hexagonal Maps Part I"

          TANSTAAFL, "Isometric 'n' Hexagonal Maps Part II"

          Jim Adams, "Isometric Views: Explanation and Interpretation"

          Tobias Lensing, "Enhanced 2D"

André LaMothe, Tricks of the Windows Game Programming Gurus.

          Microsoft, DirectX SDK documentation. Usually the first place to check!
