2D Visibility

For the past few months, mostly because of school, I’ve been starting to move into more “low-level” gaming applications; that is, from UDK to OpenGL and the like. I even gave a talk at my school about what I learned in the field of video game lighting, and uploaded an adapted version of the talk to YouTube.

As of right now most of my research is focused on 2D lighting and 2D visibility. Why? Well…it’s simpler than 3D for one thing, and I want to start simple. Moreover, many aspects of game logic require a good understanding of visibility. Navigation is based in part on what the AI can “see”, and that comes down to visibility culling. Shadow maps are almost entirely a visibility problem.

I’ve combed the web extensively for everything from visibility/shadow algorithms to fast 2D shaders, and what follows is the best of what I’ve found. If you have somehow stumbled upon this post looking for the same things, you can stop searching; most of the other resources out there are fairly outdated anyway.

We can split the notion of 2D visibility into two related topics: pre-render and post-render visibility. Pre-render, as its name implies, is done before rendering. Thus the algorithm has no idea what your occluding objects look like, what color your light is, etc. It only has abstract data: points, structs representing segments, maybe Java classes representing polygons. The output of such an algorithm usually consists of a list of points representing the vertices of the final visibility polygon.
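
To make that concrete, here is a minimal sketch of the kind of interface such an algorithm might expose. The struct and function names are placeholders of my own, not from any particular library:

```cpp
#include <vector>

struct Point   { float x, y; };
struct Segment { Point a, b; };   // one edge of an occluding shape

// Input: the light's position plus every occluding edge in the scene.
// Output: the vertices of the visibility polygon in angular order, ready to be
// triangle-fanned around the light for rendering, or queried by AI code.
std::vector<Point> computeVisibilityPolygon(const Point& light,
                                            const std::vector<Segment>& occluders);
```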

The best algorithm I’ve found for this is here: http://simblob.blogspot.com/2012/07/2d-visibility.html. Notice how many links to more resources this guy has included! Amazing! I rather like this algorithm even though it’s in Haxe and I had to spend a few days rewriting it in C++ (you’ll never know the horrors I went through…).

There are apparently faster algorithms out there using something called Constrained Delaunay Triangulation (CDT). If you’re interested you can look into that, but the literature gets very math-y and boring really fast.

Post-render visibility is fun because people have done a whole lot of hacks to make the process faster. Post-render visibility and shadow mapping are essentially one and the same; since it works on pixels rather than abstract geometry, you would almost never use this method for AI. What post-render visibility spits out is a texture, which you can overlay onto your scene. Or even better, you can combine it with a circular gradient and then overlay it onto your scene. Voilà, lighting!
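
As an illustration, that overlay step might look something like this on the CPU; in a real implementation it would be a one-line multiply in a fragment shader, and the buffer layout and names below are assumptions I’m making just for the sketch:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// visibility: 255 where the light reaches, 0 where it is occluded (row-major, 8-bit).
// Returns a light map you can blend over the scene.
std::vector<uint8_t> buildLightMap(const std::vector<uint8_t>& visibility,
                                   int width, int height,
                                   float lightX, float lightY, float radius)
{
    std::vector<uint8_t> lightMap(visibility.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float dx = x - lightX, dy = y - lightY;
            float dist = std::sqrt(dx * dx + dy * dy);
            // Linear radial falloff: 1 at the light, 0 at the edge of its radius.
            float falloff = std::max(0.0f, 1.0f - dist / radius);
            lightMap[y * width + x] =
                static_cast<uint8_t>(visibility[y * width + x] * falloff);
        }
    }
    return lightMap;
}
```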

I like the method here: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/. However, this method can easily be improved given that you have really fast trigonometry routines. The problem with Catalin Zima’s approach is that there are three bottlenecks:

  1. Splitting the scene into four quadrants and flipping two of them
  2. Reducing the scene successively by powers of two
  3. Blur shaders at the end, as an extra step

What we need is a way to handle the entire scene at once and remove all the loops. Consider a scene like the following, which has the light source as a little green dot and occluding objects in black:

What if we applied a polar transform?

As you can see, this causes the green dot to stretch out into a long green line at the top of the scene. I won’t go into the math behind why this happens; if you don’t understand polar coordinates, I suggest you read up on them. We can now do a “cascade” effect on the rest of the scene: imagine that the light line is illuminating the rest of the scene from the top down.
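
Here is a rough CPU-side sketch of that forward polar transform, assuming the occluder scene is a row-major grayscale buffer with occluders in black; on the GPU this would be a fragment shader, and every name is a placeholder of mine:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint8_t> toPolar(const std::vector<uint8_t>& scene,
                             int width, int height,
                             float centerX, float centerY)   // the light's position
{
    const int polarW = 360;                                                     // one column per degree
    const int polarH = static_cast<int>(std::ceil(std::hypot(width, height))); // max radius
    std::vector<uint8_t> polar(polarW * polarH, 255);                           // default: empty / lit

    for (int r = 0; r < polarH; ++r) {            // row = distance from the light
        for (int a = 0; a < polarW; ++a) {        // column = angle in degrees
            float theta = a * 3.14159265f / 180.0f;
            int sx = static_cast<int>(centerX + r * std::cos(theta));
            int sy = static_cast<int>(centerY + r * std::sin(theta));
            if (sx >= 0 && sx < width && sy >= 0 && sy < height)
                polar[r * polarW + a] = scene[sy * width + sx];
        }
    }
    return polar;
}
```

Note that row 0 samples the light’s own position for every angle, which is exactly why the green dot smears into a line across the top.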

This is a fairly simple step and requires only one loop, iterating per-pixel in the vertical direction. For each pixel, if the pixel above it is darker, make the current pixel that color. Simple as that! Now, just reverse the polar transformation:
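
A sketch of that cascade pass, again written on the CPU for clarity (in practice it would be one pass of a fragment or compute shader):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Top-down pass over the polar image: a pixel can never be brighter than the
// pixel above it, so once a column (i.e. a ray) hits a black occluder,
// everything beyond it along that ray stays dark.
void cascade(std::vector<uint8_t>& polar, int polarW, int polarH)
{
    for (int r = 1; r < polarH; ++r)              // start at the second row
        for (int a = 0; a < polarW; ++a)
            polar[r * polarW + a] = std::min(polar[r * polarW + a],
                                             polar[(r - 1) * polarW + a]);
}
```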

What’s great about this method is that if you use a good polar-to-rectangular transform, you’ll get the nice “blurring” effect (a.k.a. soft shadows) automatically! Some things to remember if you intend to implement this algorithm: initially render all occluding objects in black, and do not render the light itself (the green dot in the above images was just for show).
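
For completeness, a sketch of the reverse transform, mapping the cascaded polar image back into the rectangular scene. On the GPU, the bilinear filtering you get for free when sampling the polar texture here is where the soft-shadow blur comes from; the sketch below uses nearest-neighbour just to keep it short:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint8_t> fromPolar(const std::vector<uint8_t>& polar,
                               int polarW, int polarH,
                               int width, int height,
                               float centerX, float centerY)
{
    std::vector<uint8_t> shadowMap(width * height, 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float dx = x - centerX, dy = y - centerY;
            float r = std::sqrt(dx * dx + dy * dy);
            float a = std::atan2(dy, dx) * 180.0f / 3.14159265f;  // angle in degrees
            if (a < 0.0f) a += 360.0f;                            // wrap into [0, 360)
            int ri = static_cast<int>(r);
            int ai = static_cast<int>(a) % polarW;
            if (ri < polarH)
                shadowMap[y * width + x] = polar[ri * polarW + ai];
        }
    }
    return shadowMap;
}
```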

Pseudocode for the polar transform in some weird BASIC dialect is available here: http://www.blitzbasic.com/Community/posts.php?topic=97463. We could, of course, improve the algorithm even more. Right now it only handles lights in the exact center of the image; what if we wanted a light that could move around the scene? Or what if we wanted a rectangular scene? This is actually fairly trivial: render the polar transform, centered on the light, to a texture that is 360 pixels wide (360 degrees in a circle) and whose height is the length of the diagonal of your original image. This ensures that even if your light is in the very corner, everything will render correctly.
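
Chaining the hypothetical helpers from the sketches above, a single light anywhere in a rectangular scene might then be rendered like this:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// occluderScene: the scene with all occluders rendered in black on white.
// Relies on toPolar, cascade, fromPolar and buildLightMap from the earlier sketches.
std::vector<uint8_t> renderLight(const std::vector<uint8_t>& occluderScene,
                                 int width, int height,
                                 float lightX, float lightY, float radius)
{
    const int polarW = 360;
    const int polarH = static_cast<int>(std::ceil(std::hypot(width, height)));

    auto polar = toPolar(occluderScene, width, height, lightX, lightY);
    cascade(polar, polarW, polarH);
    auto visibility = fromPolar(polar, polarW, polarH, width, height, lightX, lightY);
    return buildLightMap(visibility, width, height, lightX, lightY, radius);
}
```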

I’ll hopefully be back quite soon with more of my research. Until then, happy gaming!

7 thoughts on “2D Visibility”

  1. Hi,
    First of all, nice article!
    I have a question, though:
    The “cascade effect” step doesn’t seem feasible using shaders… so how would you do it efficiently?
    Thanks.

    • Actually it’s more efficient than many other methods out there…the whole point of the transform/cascade is to reduce the amount of work being done. The way I do it is just how I said: for each TexCoord, grab the pixel right above it, compare, and set a new value. Not the best way, I’m sure, but it’s still blazingly fast.

      I think you could boost efficiency by keeping track of the pixels somehow, or maybe by using a bitwise AND (color = color & oldcolor), maybe even on entire rows. I haven’t tried this, though.

  2. Do you mean you’re not doing it in a shader?
    If so, what about RAM-to-VRAM transfer performance?
    If not, how would you ensure pixels are processed in a specific order from a shader?

    Not questioning your results ^^ just curious about how you achieve this; this subject really interests me!

    • Yes, it’s in a shader. You seem like you’re a lot more knowledgeable about this subject than I am 😛 I am aware that accessing pixel values other than the one currently being operated on is also costly, but it’s necessary to do it this way.

      Just curious, do you know of any way that might be better?

  3. Hi! Great article! Funny coincidence, I have been using UDK for years and just recently shifted my focus to C++, OpenGL, and SFML! I have even played with the QuadAssault game source like you seem to have (or at least it looked a bit like it in the video). Anyway, I am wondering: did you get the algorithm from redblobgames to work in C++? Have you got that polar approach working with multiple lights?

    • Haha yes QuadAssault was the basis for a lot of my early code…even though I could never get the shaders to work right.

      The grayish animated thing near the end of the video is redblobgames’ algorithm in C++. As for the polar approach, I never went much further than one light and some unoptimized code. I would assume that for multiple lights you’d just render each one separately and overlay them…

