For the past few months, mostly because of school, I’ve been moving into more “low-level” gaming applications; that is, from UDK down to OpenGL and the like. I even gave a talk at my school about what I learned in the field of video game lighting, and uploaded an adapted version of the talk to YouTube:
As of right now, most of my research is focused on 2D lighting and 2D visibility. Why? Well…it’s simpler than 3D, for one thing, and I want to start simple. Moreover, many aspects of game logic require a good understanding of visibility. Navigation is based in part on what the AI can “see”, and that is determined by visibility culling. Shadow maps are almost entirely based on visibility.
I’ve combed the web extensively for everything from visibility/shadow algorithms to fast 2D shaders, and I’m about to present to you the best of what I’ve found. If you’ve somehow stumbled upon this post looking for the same things, you can stop searching; most of the other resources out there are fairly outdated anyway.
We can split the notion of 2D visibility into two related topics: pre-render and post-render visibility. Pre-render, as its name implies, is done before rendering. Thus the algorithm has no idea what your occluding objects look like, what color your light is, etc. It only has abstract data: points, structs representing segments, maybe Java classes representing polygons. The output of such an algorithm is usually a list of points: the vertices of the final visibility polygon.
The best algorithm I’ve found for this is here: http://simblob.blogspot.com/2012/07/2d-visibility.html. Notice how many links to further resources the author has included! Amazing! I rather like this algorithm, even though it’s written in Haxe and I had to spend a few days porting it to C++ (you’ll never know the horrors I went through…).
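At the heart of that sweep algorithm is one primitive: casting a ray from the light toward each wall endpoint and keeping the nearest intersection. Here is a minimal sketch of that primitive in C++ (the `Vec2` struct and `raySegment` name are my own for illustration, not taken from the Haxe original):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Intersect a ray (origin o, direction d) with a wall segment a-b.
// Solves o + t*d = a + s*(b - a) for t and s; a hit requires t >= 0
// and 0 <= s <= 1. Returns the parametric distance t along the ray,
// or -1 if there is no hit. The sweep keeps the smallest t per ray.
float raySegment(Vec2 o, Vec2 d, Vec2 a, Vec2 b) {
    float ex = b.x - a.x, ey = b.y - a.y;   // segment direction
    float denom = d.x * ey - d.y * ex;
    if (std::fabs(denom) < 1e-6f) return -1.0f;  // parallel, no hit
    float rx = a.x - o.x, ry = a.y - o.y;
    float t = (rx * ey - ry * ex) / denom;  // distance along the ray
    float s = (rx * d.y - ry * d.x) / denom; // position along the wall
    return (t >= 0.0f && s >= 0.0f && s <= 1.0f) ? t : -1.0f;
}
```

Sorting the endpoints by angle and sweeping a ray through them, collecting the nearest hit at each angle, yields the visibility polygon.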
There are apparently faster algorithms out there using something called Constrained Delaunay Triangulation (CDT). If you’re interested you can look into that, but the literature gets very math-y and boring really fast.
Post-render visibility is fun because people have come up with a whole lot of hacks to make the process faster. Post-render visibility and shadow mapping are essentially one and the same, since you would almost never use this method for AI. What post-render visibility spits out is a texture, which you can overlay onto your scene. Or even better, you can combine it with a circular gradient first and then overlay it onto your scene. Voila, lighting!
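The circular gradient is just a per-pixel radial falloff multiplied into the visibility texture. A tiny sketch of one possible falloff (the linear curve and the `falloff` name are my choices; a quadratic or smoothstep curve works just as well):

```cpp
#include <cmath>

// Attenuation that fades linearly from 1 at the light's position to 0
// at the given radius. Multiply this into each pixel of the
// visibility texture to turn a hard mask into a round light.
float falloff(float dx, float dy, float radius) {
    float d = std::sqrt(dx * dx + dy * dy);  // distance from the light
    float a = 1.0f - d / radius;
    return a > 0.0f ? a : 0.0f;              // clamp outside the radius
}
```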
I like the method here: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/. However, this method can easily be improved, provided you have really fast trigonometry routines. The problem with Catalin Zima’s approach is that it has three bottlenecks:
- Splitting the scene into four quadrants and flipping two of them
- Reducing the scene successively by powers of two
- Blur shaders at the end, as an extra step
What we need is a way to handle the entire scene at once and remove all the loops. Consider a scene like the following, which has the light source as a little green dot and occluding objects in black:
What if we applied a polar transform?
As you can see, this causes the green dot to stretch out into a long green line at the top of the scene. I won’t go into the math behind why this happens here; if you don’t understand polar coordinates, I suggest you read up on them. We can now do a “cascade” effect on the rest of the scene: imagine that the light line is illuminating the rest of the scene from the top down.
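A minimal CPU-side sketch of the polar unwrap, assuming a grayscale image stored row-major as floats and nearest-neighbour sampling (in a real implementation this would be a fragment shader, but the mapping is the same):

```cpp
#include <cmath>
#include <vector>

// Polar transform: each output column is an angle, each output row a
// radius from the image centre. Row 0 is radius 0, so whatever sits at
// the centre (the light) is smeared across the entire top row.
std::vector<float> polarTransform(const std::vector<float>& src,
                                  int w, int h, int outW, int outH) {
    std::vector<float> dst(outW * outH, 0.0f);
    const float PI = 3.14159265f;
    const float cx = w * 0.5f, cy = h * 0.5f;
    const float maxR = std::sqrt(cx * cx + cy * cy);
    for (int y = 0; y < outH; ++y) {          // y = radius
        float r = maxR * y / outH;
        for (int x = 0; x < outW; ++x) {      // x = angle
            float a = 2.0f * PI * x / outW;
            int sx = int(cx + r * std::cos(a));
            int sy = int(cy + r * std::sin(a));
            if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                dst[y * outW + x] = src[sy * w + sx];
        }
    }
    return dst;
}
```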
This is a fairly simple step and requires only one loop, iterating per-pixel in the vertical direction. For each pixel, if the pixel above it is darker, make the current pixel that color. Simple as that! Now, just reverse the polar transformation:
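The cascade is just a running minimum down each column of the polar texture; sketched on the CPU, again assuming a grayscale image stored row-major as floats:

```cpp
#include <vector>

// Shadow cascade: walk each column top to bottom and darken any pixel
// that has a darker pixel above it. Once an occluder (dark pixel) is
// hit, everything below it in that column stays dark.
void shadowCascade(std::vector<float>& img, int w, int h) {
    for (int x = 0; x < w; ++x)
        for (int y = 1; y < h; ++y) {
            float above = img[(y - 1) * w + x];
            float& cur = img[y * w + x];
            if (above < cur) cur = above;  // the darker pixel wins
        }
}
```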
What’s great about this method is that if you use a good polar-to-rectangular algorithm, you get the nice “blurring” effect (a.k.a. soft shadows) automatically! A few things to remember if you intend to implement this algorithm: initially render all occluding objects in black, and do not render the light itself (the green dot in the images above was just for show).
Pseudocode for the polar transform in some weird BASIC dialect is available here: http://www.blitzbasic.com/Community/posts.php?topic=97463. We could, of course, improve the algorithm even more. Right now it only handles lights in the exact center of the image; what if we wanted a light that could move around the scene? Or what if we wanted a rectangular scene? This is actually fairly trivial: render the polar transform to a texture that is 360 pixels wide (one pixel per degree) and whose height is the length of the diagonal of your original image. This ensures that even if your light is in the very corner, everything will render alright.
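Sketched under the same assumptions as before (grayscale floats, nearest-neighbour sampling), the generalized unwrap is centred on an arbitrary light position and sized 360 columns by the scene diagonal:

```cpp
#include <cmath>
#include <vector>

// Polar unwrap centred on an arbitrary light position (lx, ly).
// 360 columns (one per degree) and ceil(diagonal) rows guarantee that
// even a light in a corner can reach the opposite corner of the scene.
std::vector<float> polarAroundLight(const std::vector<float>& src,
                                    int w, int h, float lx, float ly) {
    const int outW = 360;
    const int outH = int(std::ceil(std::sqrt(float(w * w + h * h))));
    std::vector<float> dst(outW * outH, 0.0f);
    const float PI = 3.14159265f;
    for (int y = 0; y < outH; ++y)            // y = distance from light
        for (int x = 0; x < outW; ++x) {      // x = angle in degrees
            float a = x * PI / 180.0f;
            int sx = int(lx + y * std::cos(a));
            int sy = int(ly + y * std::sin(a));
            if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                dst[y * outW + x] = src[sy * w + sx];
        }
    return dst;
}
```

The cascade and the inverse transform then work exactly as in the centred case; only the mapping changes.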
I’ll hopefully be back quite soon with more of my research. Until then, happy gaming!