Polygons! They Are Everywhere!
By seapegasus on Sep 22, 2006
Woah. Recently, when I look at buildings and streets, all I see is vectors, polygons and vertices. I had taken the first week of September off and, among other things, had a closer look at some 3D (game) engines and 3D mesh editors. Mesh editors are the tools for creating the 3D objects that you later use in the 3D engine. The engine is the piece of software that renders the 3D objects live and -- if it includes a game physics component with gravity and collision detection -- lets you walk through the 3D scene.
I had tried to write my own 3D engine in Cocoa once, not because I thought it would be better (in fact my demos were quite inefficient), but just to understand how basic 3D rendering algorithms worked. Pretty interesting stuff though!
- First, there's the Bresenham algorithm for approximating lines with stair-stepping pixels. It can be adapted to draw filled polygons too.
- Next you need to implement the projection formula that maps a 3D object in memory to a 2D image on the screen.
- Once you have that part set up, you need to deal with transformation formulas for rotating, scaling and translating (moving) wireframe coordinates.
- Then, when drawing an object, there's a formula to identify and cull backfacing polygons -- since your program will stubbornly render all 6 sides of a cube simultaneously unless you tell it not to.
- The next issue is similar: As soon as you have more than one object in the scene, you also need to deal with overlapping polygons, and your program has no criteria of its own for deciding which of the overlapping polygons to draw and which to skip -- so you need an algorithm for hidden surface removal. There are two classic choices:
- The painter's algorithm simply sorts all polygons in the scene by depth and draws them from back to front, so that front objects are drawn over the background, as they should be. A lot of redundant drawing, but it works.
- A more efficient but more complex solution is the z-buffer algorithm. The z-buffer is named after the z-coordinate (depth). The buffer itself is implemented as a 2D array that stores each pixel's depth value -- because depth is the information that is lost when projecting a 3D coordinate to 2D coordinates on the screen. For every pixel P you draw to the 2D framebuffer, you check the z-buffer at the same coordinate: If there already is a lower (closer) depth value recorded, you don't draw P, since it is behind an existing pixel. If there is no recorded depth yet, or the recorded depth is further away than P's, you draw P to the 2D framebuffer and record P's depth for this coordinate in the z-buffer -- and so on.
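To make the first step above concrete, here is a minimal sketch of Bresenham's line algorithm in Python (the integer error-accumulator form that handles all octants; the function name is my own, not from any library):

```python
def bresenham(x0, y0, x1, y1):
    """Return the list of pixels approximating the line from (x0,y0) to (x1,y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1   # step direction along x
    sy = 1 if y0 < y1 else -1   # step direction along y
    err = dx + dy               # accumulated error term
    pixels = []
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:            # error says: step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:            # error says: step vertically
            err += dx
            y0 += sy
    return pixels
```

The same stepping idea, applied to a polygon's left and right edges, gives you the spans to fill for solid polygons.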
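The projection step boils down to one division: the further away a point is (larger z), the closer it lands to the screen center. A sketch, where the focal length and screen-center values are made-up camera parameters for illustration:

```python
def project(point, focal=256.0, cx=160, cy=100):
    """Perspective-project a 3D point (x, y, z) onto the 2D screen.

    focal, cx, cy are assumed camera parameters: focal length and
    screen center. z is the distance from the viewer (must be > 0).
    """
    x, y, z = point
    sx = cx + focal * x / z   # divide by depth: farther means smaller
    sy = cy - focal * y / z   # minus because screen y grows downward
    return (sx, sy)
```

Run each wireframe vertex through this, then connect the resulting 2D points with Bresenham lines, and you have a wireframe renderer.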
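The transformation formulas are equally compact. Here is a sketch of the three basic ones applied to a single vertex (a full engine would use 4x4 matrices so transforms can be combined, but the per-vertex arithmetic is the same):

```python
import math

def rotate_z(v, angle):
    """Rotate vertex v around the z axis by `angle` radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

def scale(v, factor):
    """Scale vertex v uniformly about the origin."""
    x, y, z = v
    return (x * factor, y * factor, z * factor)

def translate(v, offset):
    """Move vertex v by the given (dx, dy, dz) offset."""
    x, y, z = v
    ox, oy, oz = offset
    return (x + ox, y + oy, z + oz)
```

Rotation about the x or y axis works the same way, with the sine/cosine pair applied to the other two coordinates.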
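Backface culling comes down to a cross product and a dot product: compute the polygon's normal from two of its edges, then check whether that normal points toward or away from the camera. A sketch for triangles, assuming counter-clockwise vertex winding for front faces (the winding convention is my assumption):

```python
def is_backfacing(a, b, c, camera=(0.0, 0.0, 0.0)):
    """True if triangle a-b-c (counter-clockwise winding) faces away from the camera."""
    # Two edge vectors of the triangle.
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # Face normal = cross product of the edges.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    # Vector from the face toward the camera.
    view = (camera[0] - a[0], camera[1] - a[1], camera[2] - a[2])
    dot = n[0] * view[0] + n[1] * view[1] + n[2] * view[2]
    return dot <= 0   # normal points away from the camera
```

Skipping every triangle for which this returns True is what stops the program from rendering the three hidden sides of the cube.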
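The painter's algorithm is essentially one sort. A sketch, using each polygon's average vertex depth as the sort key -- the classic (and imperfect) heuristic, since intersecting or cyclically overlapping polygons can defeat it:

```python
def painter_order(polygons):
    """Sort polygons back-to-front so they can be drawn in painter's order.

    Each polygon is a list of (x, y, z) vertices; larger z means farther away.
    """
    def avg_depth(poly):
        return sum(v[2] for v in poly) / len(poly)
    return sorted(polygons, key=avg_depth, reverse=True)  # farthest first
```

Drawing the returned list in order guarantees that nearer polygons paint over farther ones, at the cost of filling pixels that are later overdrawn.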
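And the z-buffer test described above fits in a few lines per pixel. A sketch with a toy 4x4 "screen" (the buffer sizes and color names are just for illustration; lower z means closer, matching the description above):

```python
def plot(framebuffer, zbuffer, x, y, z, color):
    """Draw pixel (x, y) with depth z only if nothing closer is recorded yet."""
    if z < zbuffer[y][x]:         # new pixel is closer than what's there
        zbuffer[y][x] = z         # record the new depth
        framebuffer[y][x] = color # and draw the pixel

# Set up the two buffers for a tiny 4x4 screen.
W = H = 4
framebuffer = [[None] * W for _ in range(H)]
zbuffer = [[float("inf")] * W for _ in range(H)]  # everything starts infinitely far away

plot(framebuffer, zbuffer, 1, 1, 5.0, "red")    # drawn: buffer was empty
plot(framebuffer, zbuffer, 1, 1, 9.0, "blue")   # rejected: behind the red pixel
plot(framebuffer, zbuffer, 1, 1, 2.0, "green")  # drawn: closer, overwrites red
```

After the three calls, pixel (1, 1) is green with depth 2.0 -- the polygons can arrive in any order and the result is still correct, which is exactly what the painter's algorithm cannot promise.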
Well, my own "3D engine" never reached any degree of usefulness beyond a study project. That's why I eventually started looking into existing 3D engines and mesh editors for Mac OS X:
- The first mesh editor I tried was ArtOfIllusion. It is free, platform-independent, very easy to learn, and it exports 3D objects in Wavefront's widely used .obj and .mtl formats.
- I also tried the infamous Blender. If you have ever used it, you will certainly recall your first impression. Are they serious about the UI?! Yes! Blender is to mesh editing what vi is to text editing. Not exactly intuitive, but as soon as you learn you are meant to use all three mouse buttons in sync with keyboard keys, it slowly starts to make sense... Luckily there is a Blender tutorial. Mac users note: it is possible to use Blender with Apple's Mighty Mouse, but it's not optimal.
- Last but not least, the 3D engine I looked at is Irrlicht, a free, platform-independent C++ library. I wrote down the steps my brother showed me for compiling the library and setting up a Carbon project for Mac OS X: Irrlicht in Apple Xcode. There is still one open question at the end (about textures), but it should be enough to get you up and running.