This is Part 1 of a three-part series on working with depth and normal textures in Unity. Here’s Part 2 and Part 3.

I spent the last three days learning to write shaders in Unity. For the most part, this isn’t a terribly difficult task, as there is quite a lot of documentation that covers the basics. However, when it comes to depth buffers, which are useful for post-process special effects, there’s definitely a shortage of information, and the Unity docs are not super helpful. For example, if you’re trying to understand how depth and normal textures are used, the Unity docs’ advice is to “refer to the EdgeDetection image effect in the Shader Replacement example project or SSAO Image Effect.” While this may be sufficient for someone who already has a firm grasp of shaders, it isn’t very helpful for a beginner.

Anyway, after many hours of coding through trial and error, and hunting down the rare blog posts and forum discussions concerning the topic, I eventually did figure out how to work with depth and normal textures in Unity. As the learning process was such a frustrating one, I thought it’d be a good idea to write down what I did while my memory is still fresh, partly so I don’t forget, and partly in the hope that it spares someone else the same frustration.
So, here we go.

Inspiration

I had started dabbling with shaders about six months ago. I remember going through a lot of tutorials explaining the graphics pipeline, the different kinds of shaders, and so on. At the time, I didn’t understand any of it, and the topic of shaders just seemed very intimidating. I did manage to get a few things done by starting with an existing shader and tweaking things until I got roughly what I wanted.

This time around, I wanted to recreate this dimension-shifting effect from the game Quantum Conundrum:

In case you haven’t played Quantum Conundrum yet, I’ll explain what’s going on. Basically, your character has the ability to shift between a number of different dimensions: fluffy dimension, heavy dimension, slow-motion dimension, and reverse-gravity dimension. In each dimension, the shapes of the environment and objects stay constant, but their physical properties change. For example, in the fluffy dimension, everything is very lightweight, so you can pick up couches and other items you normally can’t, while in the heavy dimension, everything becomes really heavy, so a cardboard box that normally wouldn’t weigh down a button becomes heavy enough to do so.

In addition to the changing properties, the look of everything changes. In the fluffy dimension, everything looks like clouds, while in the heavy dimension, everything has a metallic texture to it. In the gif above, the player is shifting from the normal dimension to the heavy dimension, then to fluffy, back to heavy, and then to normal again. Here’s a still frame of the transition:

The key thing I noticed about this effect is that it seems “spatially aware”, as though the transition knows the shape and depth of the scene it’s moving through.
First Step – Ask for Depth Texture

I had no idea how to approach this effect, and wasn’t even sure where to start looking. After posting the question on some forums and on Twitter, I was informed that it’s a post-processing shader that uses the depth buffer to give it that “spatially aware” sense.

I had forgotten most of what I’d learned about shaders at this point, so I started off by going through the basics again. I won’t go into this part too much, except to point you to this explanation of the difference between surface shaders and vertex/fragment shaders, and a list of resources that I found really helpful. This stuff might seem really confusing and intimidating at first, but just read it over a few times and practice writing shaders, and I promise it’ll all make sense eventually. I do encourage you to at least have a look over these links before you continue reading, especially if you’re still new to shaders.

In Unity, to get the depth buffer, you actually have to use a render texture, which is a special type of texture that’s created and updated in realtime. You can use it to create something like a TV screen that shows what’s happening in another area of your game. The depth buffer, or depth texture, is really just a render texture that contains values for how far objects in the scene are from the camera. (I should note that render textures are only available in Unity Pro.)

So how do you get the depth texture? It turns out you just have to ask for it. First, you need to tell the camera to generate the depth texture, which you can do with Camera.depthTextureMode. Then, to pass the rendered image to your shader for processing, you’ll use the OnRenderImage function. Your script, let’s call it PostProcessDepthGrayscale.cs, will therefore look something like this:
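```csharp
using UnityEngine;

public class PostProcessDepthGrayscale : MonoBehaviour {

    // the material holding our depth-displaying shader;
    // we'll create and assign it in a moment
    public Material mat;

    void Start () {
        // tell the camera to generate a depth texture;
        // shaders can then access it as _CameraDepthTexture
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }

    // called after the camera finishes rendering a frame,
    // which lets us process the image before it hits the screen
    void OnRenderImage (RenderTexture source, RenderTexture destination) {
        // run the rendered image through our material's shader
        Graphics.Blit (source, destination, mat);
    }
}
```

The public mat variable (the name is just my choice, call it whatever you like) is the material slot we’ll fill in shortly. Graphics.Blit copies source to destination, running the image through mat’s shader along the way, and that one call is the entire post-processing hook.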
You will then need to attach this script to the camera object.

The Shader

Now we will create a shader to process the depth texture and display it. It will be a simple vertex and fragment shader: it reads the depth texture from the camera, then displays the depth value at each screen coordinate. Let’s call the shader DepthGrayscale.shader; something along these lines will do it:
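```shaderlab
Shader "Custom/DepthGrayscale" {
    SubShader {
        Tags { "RenderType" = "Opaque" }

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // filled in automatically by Unity once the camera's
            // depthTextureMode is set to Depth
            sampler2D _CameraDepthTexture;

            struct v2f {
                float4 pos : SV_POSITION;
                float4 scrPos : TEXCOORD1;
            };

            v2f vert (appdata_base v) {
                v2f o;
                // in newer Unity versions, use UnityObjectToClipPos(v.vertex)
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                // the screen position tells us where to sample
                // the depth texture for this pixel
                o.scrPos = ComputeScreenPos (o.pos);
                // my depth texture kept coming out inverted, so flip y;
                // remove this line if your image ends up upside down
                o.scrPos.y = 1 - o.scrPos.y;
                return o;
            }

            half4 frag (v2f i) : COLOR {
                // sample the raw depth and convert it to a linear 0..1 value
                float depthValue = Linear01Depth (tex2Dproj (_CameraDepthTexture, UNITY_PROJ_COORD (i.scrPos)).r);
                // output the depth value as a grayscale color
                return half4 (depthValue, depthValue, depthValue, 1);
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
}
```

Linear01Depth converts the raw, nonlinear value stored in the depth buffer into a linear range from 0 (at the camera) to 1 (at the far clipping plane), which is what lets us display it directly as a grayscale color.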
So as you can see, it’s a pretty basic vertex and fragment shader. The one thing I want to draw your attention to is the line in the vertex shader that flips the y coordinate:

o.scrPos.y = 1 - o.scrPos.y;

For some reason, my depth texture kept coming out inverted. I couldn’t find anyone else who had the same problem, and could not figure out what was causing it, so I just inverted the y value as a fix. If you’re finding that your image is inverted vertically with the above shader, then you can delete this line.

Now, create a new material, call it DepthGrayscale, and set its shader to the “DepthGrayscale” shader we just created. Then, set DepthGrayscale as the material variable (mat) on the PostProcessDepthGrayscale.cs script that you attached to your camera.

What you should see

Your scene should look something like this (obviously with different objects – my scene is just a bunch of boxes spaced out so that you can see the change in color, which is just the depth value):

Also, if your image is coming out like the image below, try lowering the far clipping plane setting on the camera object. It could be that the value is set too high, so all your objects fall into a small band of the depth spectrum and therefore all appear black. If you lower the far clipping plane value, the depth range gets smaller, and the objects will fall along more of a gradient of depth values. I spent quite a long time thinking my code wasn’t working, when it turned out I just had the far clipping plane set too high.
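You can change the far clipping plane on the Camera component in the Inspector, or, if you’d rather do it from code, add a line like this to the Start() method of the script above (the value 100 here is just an arbitrary starting point to experiment with):

```csharp
// pull the far clipping plane in so depth values spread across
// a visible gradient instead of bunching up near black
GetComponent<Camera>().farClipPlane = 100f;
```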
This post is getting to be quite long, so I’m going to stop for now, and continue in Part 2. Just a quick recap, this is what we’ve done so far:

- Told the camera to generate a depth texture using Camera.depthTextureMode, and used OnRenderImage to pass the rendered image through a material (PostProcessDepthGrayscale.cs)
- Wrote a vertex and fragment shader that reads the depth texture and displays each pixel’s depth value as a grayscale color (DepthGrayscale.shader)
- Created a DepthGrayscale material with that shader and assigned it to the script on the camera