William Chyr | Unity Shaders

This is Part 1 of a 3 part series on working with depth and normal textures in Unity. Here’s Part 2 and Part 3.

I spent the last three days learning to write shaders in Unity. For the most part, this isn't a terribly difficult task, as there is quite a lot of documentation that covers the basics. However, when it comes to depth buffers, which are useful for post-process special effects, there's definitely a shortage of information, and the Unity docs are not super helpful. For example, if you're trying to understand how depth and normal textures are used, the Unity docs' advice is to "refer to the EdgeDetection image effect in the Shader Replacement example project or SSAO Image Effect." While this may be sufficient for someone who already has a firm grasp of shaders, it isn't very helpful for a beginner.

Anyway, after many hours of coding through trial and error, and hunting down rare blog posts and forum discussions concerning the topic, I eventually did figure out how to work with depth and normal textures in Unity. As the learning process was such a frustrating one, I thought it’d be a good idea to write down what I did while my memory is still fresh because:

  1. In a few months, I will have forgotten what I did and won't be able to understand my own code.
  2. In case somebody out there is having the same problem, the information will hopefully be helpful. The few blog posts I found about depth textures were incredibly useful to me, and I was really glad those developers took the time to write things down.

So, here we go.

Inspiration

I had started dabbling with shaders about six months ago. I remember going through a lot of tutorials explaining the graphics pipeline, different kinds of shaders, etc. At the time, I didn’t understand any of it and the topic of shaders just seemed very intimidating. I did manage to get a few things done by starting with an existing shader and tweaking things around until I got kind of what I wanted.

This time around, I wanted to recreate this dimension-shifting effect from the game Quantum Conundrum:

[gif: quantum_conundrum_dimension_shift2]

In case you haven't played Quantum Conundrum, I'll explain what's going on. Basically, your character has the ability to shift between a number of different dimensions: fluffy dimension, heavy dimension, slow-motion dimension, and reverse-gravity dimension. In each dimension, the shapes of the environment and objects stay constant, but their physical properties change. For example, in the fluffy dimension everything is very lightweight, so you can pick up couches and other items you normally can't, while in the heavy dimension everything becomes really heavy, so a cardboard box that normally wouldn't weigh down a button becomes heavy enough to do so.

In addition to changing properties, the look of everything changes. In the fluffy dimension, everything looks like clouds, while in the heavy dimension, everything has a metallic texture to it. In the gif above, the player is shifting from the normal dimension to the heavy dimension, then to fluffy, back to heavy, and then to normal again.

Here's a still frame of the transition:

[image: quantum_conundrum_dimension_shift]

A few key things I noticed about this effect:

  1. The ring of light that passes through the room always starts from whichever object you're looking at and spreads outwards from there. My guess is that it's a sphere expanding in radius in all directions, since you can see a bit of the ring behind the glass as well.
  2. The ring of light is superimposed on the environment as well as on any objects.
  3. The ring splits up the textures of the dimensions, so the textures of the new dimension are not actually put in place until the ring has passed through. This means that at certain points, objects actually have two textures (e.g. the painting – look closely and you'll see that the bottom right part of the painting is the heavy-dimension painting, while the rest is in the normal dimension).

First Step – Ask for Depth Texture

I had no idea how to approach this effect, and wasn't even sure where to start looking. After posting the question on some forums and on Twitter, I was informed that it's a post-processing effect shader that utilizes the depth buffer to give it that "spatially aware" sense.

I had forgotten most of what I'd learned about shaders at this point, so I started off by going through the basics again. I won't go into this part too much, except to point you to this explanation of the difference between surface shaders and vertex/fragment shaders, and a list of resources that I found really helpful. This stuff might seem really confusing and intimidating at first, but just read it over a few times and practice writing shaders, and I promise it'll all make sense eventually. I do encourage you to at least look over these links before you continue reading, especially if you're still new to shaders.

In Unity, to get the depth buffer, you actually have to use a render texture, which is a special type of texture that's created and updated in real time. You can use it to create something like a TV screen that's showing something happening in one area of your game. The depth buffer, or depth texture, is actually just a render texture that contains values of how far objects in the scene are from the camera. (I should note that render textures are only available in Unity Pro.)
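As a quick aside, here's a minimal sketch of that "TV screen" idea in code (the class and field names here are my own invention, and you could just as easily create the render texture as an asset in the editor instead):

using UnityEngine;

public class TVScreen : MonoBehaviour {

   public Camera watchingCamera;   //the camera filming the other area of the game
   public Renderer screenRenderer; //the quad or plane acting as the TV screen

   void Start () {
      //create a render texture and have the camera draw into it every frame
      RenderTexture tv = new RenderTexture(256, 256, 16);
      watchingCamera.targetTexture = tv;

      //display the camera's output on the screen object's material
      screenRenderer.material.mainTexture = tv;
   }
}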

So how do you get the depth texture? It turns out you just have to ask for it. First, you need to tell the camera to generate the depth texture, which you can do with Camera.depthTextureMode. Then, to pass it to your shader for processing, you’ll need to use the OnRenderImage function.

Your script, let's call it PostProcessDepthGrayscale.cs, will therefore look like this:

using UnityEngine;
using System.Collections;

//[ExecuteInEditMode] is so that we can see changes we make without having to run the game
[ExecuteInEditMode]
public class PostProcessDepthGrayscale : MonoBehaviour {

   public Material mat;

   void Start () {
      //tell the camera to generate a depth texture
      GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
   }

   void OnRenderImage (RenderTexture source, RenderTexture destination){
      //mat is the material containing our shader;
      //Blit runs source through it and writes the result to destination
      Graphics.Blit(source, destination, mat);
   }
}

You will then need to attach this script to the camera object.

The Shader

Now, we will create a shader to process the depth texture and display it. It will be a simple vertex and fragment shader. Basically, it will read the depth texture from the camera, then display the depth value at each screen coordinate.

Let’s call the shader DepthGrayscale.shader:

Shader "Custom/DepthGrayscale" {
SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

//Unity fills this with the camera's depth texture automatically
sampler2D _CameraDepthTexture;

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos : TEXCOORD1;
};

//Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos = ComputeScreenPos(o.pos);
   //for some reason, the y position of the depth texture comes out inverted
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

//Fragment Shader
half4 frag (v2f i) : COLOR{
   //sample the depth texture and convert the value to a linear 0..1 range
   float depthValue = Linear01Depth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);
   half4 depth;

   //write the same value into all three color channels to get grayscale
   depth.r = depthValue;
   depth.g = depthValue;
   depth.b = depthValue;

   depth.a = 1;
   return depth;
}
ENDCG
}
}
FallBack "Diffuse"
}

So as you can see, it's a pretty basic vertex and fragment shader. The one thing I want to draw your attention to is this line in the vertex shader:

o.scrPos.y = 1 - o.scrPos.y;

For some reason, my depth texture kept coming out inverted. I couldn't find anyone else who had the same problem and couldn't figure out what was causing it, so I just inverted the y value as a fix. If you're finding that your image comes out vertically inverted with the above shader, you can delete this line.
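For what it's worth, a likely cause (I haven't verified this on every platform) is that Direct3D-style and OpenGL-style platforms disagree about whether texture coordinates start at the top or the bottom, so render textures can come out flipped on one API but not the other. Unity's shader include files define the UNITY_UV_STARTS_AT_TOP macro for exactly this case, so a more portable version of the flip might look like this sketch:

//only flip the y coordinate on platforms where UVs start at the top
#if UNITY_UV_STARTS_AT_TOP
o.scrPos.y = 1 - o.scrPos.y;
#endif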

Now, create a new material, call it DepthGrayscale, and set its shader to the "Custom/DepthGrayscale" shader we just created. Then, set this material as the mat variable on the PostProcessDepthGrayscale.cs script that you attached to your camera.
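By the way, if you'd rather wire the material up from code than through the Inspector, something like this sketch should also work in the script's Start function (Shader.Find looks the shader up by the "Custom/DepthGrayscale" name string declared at the top of the shader file):

   void Start () {
      //build the material directly from our shader instead of assigning it in the Inspector
      mat = new Material(Shader.Find("Custom/DepthGrayscale"));
      GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
   }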

What you should see

Your scene should look something like this (obviously with different objects – my scene is just a bunch of boxes spaced out so that you can see the change in color, which is just the depth value):

[image: depth_texture]

Also, if your image is coming out like the image below, try lowering the far clipping plane setting on the camera object. It could be that the value is set too high, so all your objects fall into a small band of the depth spectrum and therefore all appear black. If you lower the far clipping plane value, the depth range gets smaller, and the objects fall along more of a gradient of depth values. I spent quite a long time thinking my code wasn't working, when it turned out I just had the far clipping plane set too high.

[image: depth_texture_far_clipping]
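If you want to rule this out from code rather than through the Inspector, the far clipping plane is just a camera property. A sketch, to go in the Start function of PostProcessDepthGrayscale.cs (the value of 100 is an arbitrary guess; pick something that roughly matches the size of your scene):

      //shrink the depth range so nearby objects spread across more of the 0..1 gradient
      GetComponent<Camera>().farClipPlane = 100f;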

This post is getting to be quite long, so I’m going to stop for now, and continue in Part 2.

Just a quick recap, this is what we’ve done so far:

  • Learned to use Camera.depthTextureMode to generate a depth texture.
  • Wrote a script to tell the camera to send the rendered image (in this case the depth texture) to a render texture, which is then passed to a shader.
  • Wrote a shader to display the depth values as a grayscale scene.
