Surface Angle Silhouette with Unity Post-Processing

Silhouette Outlining on a Dragon and Bunny Model

Overview

In this ramble I will demonstrate how to use post-processing within Unity’s deferred rendering pipeline by writing a basic surface angle silhouette shader. It is assumed that the reader has some familiarity with Unity and its deferred rendering functionality.

We will see how to set up post-processing within a Unity scene, using both a Post Process Layer and a Post Process Volume, as well as how to access the GBuffer textures and calculate basic scene values such as the camera direction and per-pixel world position. While we will work within the deferred pipeline, it is possible to apply post-processing effects in a purely forward rendered scene, though additional steps are necessary to capture the data that is freely available within the deferred GBuffers.
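
For reference, a minimal sketch of those extra steps in a forward setup (a hypothetical helper, not needed for the deferred path used in this ramble):

using UnityEngine;

// Hypothetical forward-path helper: ask the camera to generate the depth and
// depth+normals textures that the deferred GBuffers would otherwise provide.
[RequireComponent(typeof(Camera))]
public class ForwardDataCapture : MonoBehaviour
{
    private void OnEnable()
    {
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth | DepthTextureMode.DepthNormals;
    }
}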

Setup

Installing the Post Processing Package

Note: This section assumes you are not using the High Definition Render Pipeline (HDRP), which includes its own post-processing implementation.

To begin, we must first install the Post Processing package as it does not come in the default Unity project setup. This can be retrieved through the Window > Package Manager window and searching for the Post Processing package (v2.1.7 at the time of writing).

With the package installed, you will have access to the post-processing runtime assembly (Unity.Postprocessing.Runtime). Note that if you have defined your own .asmdef for your project, you will need to add a reference to this assembly. If you do not, you will receive numerous errors in later steps, such as:

error CS0234: The type or namespace name 'PostProcessing' does not exist in the namespace 'UnityEngine.Rendering' (are you missing an assembly reference?)

Using a Post Processing Effect

Before we write our own post-processing effect, we will try to use one of the post-processing effects that come with the package.

First, select your main camera and add a Post Process Layer component to it. For now, the only field we care about is Layer, which we want to set to Everything. This sets up our camera so that it will render the effect that we are about to add. Additionally, the Post Process Layer gives us access to multiple anti-aliasing implementations, including FXAA, SMAA, and TAA.

With our camera set up to render post-processing effects, we will now add one to the scene. This is done by creating a new object in the scene and adding a Post Process Volume component to it. For this example we want to make sure that the Is Global box is checked, which will apply the effect to the entire scene.

To add the effect itself, click New in the Profile row to create a new effect profile. These profiles store which effects to apply and the values of their customization parameters, if any. With the profile created, you should see an Add effect... button appear at the bottom of the component properties. Clicking it presents a list of built-in effects that you may apply, such as Ambient Occlusion (SSAO) and Bloom.

For demonstration purposes, choose the Grain effect. To enable the effect, check the Intensity box and slide the value to 1.0. You should now see a grainy film effect applied to your camera in both Scene and Game mode.
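
The same setup can also be driven entirely from script. A hedged sketch using the package’s PostProcessManager (the priority value here is arbitrary):

using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public class RuntimeGrainExample : MonoBehaviour
{
    private PostProcessVolume volume;

    private void OnEnable()
    {
        // Create a Grain settings instance and override its intensity, mirroring
        // the checkbox-and-slider steps performed in the Editor above.
        var grain = ScriptableObject.CreateInstance<Grain>();
        grain.enabled.Override(true);
        grain.intensity.Override(1.0f);

        // QuickVolume spawns a temporary global volume on the given layer with the given priority.
        volume = PostProcessManager.instance.QuickVolume(gameObject.layer, 100.0f, grain);
    }

    private void OnDisable()
    {
        RuntimeUtilities.DestroyVolume(volume, true, true);
    }
}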

At this point you can play around with the other settings for the effect, and for the volume as well, such as Weight, to gain an understanding of what they do. You can also try creating a new GameObject layer to store your effect in, which can then be referenced by the Post Process Layer component attached to the main camera. This allows you to selectively render global effects without having to toggle the effect itself on and off.
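
A small sketch of this wiring (assuming a user-created layer named "PostFX" and that both components already exist in the scene):

using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public class VolumeLayerSetup : MonoBehaviour
{
    public PostProcessVolume volume;      // the global volume object
    public PostProcessLayer cameraLayer;  // the Post Process Layer on the main camera

    private void Start()
    {
        // Place the volume's GameObject on the dedicated layer...
        volume.gameObject.layer = LayerMask.NameToLayer("PostFX");

        // ...and have the camera only consider volumes on that layer.
        cameraLayer.volumeLayer = LayerMask.GetMask("PostFX");
    }
}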

Surface Angle Silhouette

It is time to create our own post-processing effect now that we know how to add them to the scene.

The effect we will be creating is called a Surface Angle Silhouette. This is a relatively simple effect which operates on the dot product between an object’s normal and the camera view direction to produce a silhouette or outline on the object. We will expose properties within the effect to control the width, density, and color which will give us enough control to produce results ranging from subtle highlights to thick toon-esque edges.

The effect is split into two different parts: the Unity component and the screen-space shader. Though either one could be done first, it is generally better to write the component before the shader so that we can see our incremental progress in the shader and tweak it as we go.

Creating the Post Process Component

Each post-processing effect consists of two classes:

  • PostProcessEffectSettings, which defines the parameters that control the effect, such as outline thickness and color.
  • PostProcessEffectRenderer, which is responsible for drawing the screen-space triangle on which our effect shader is applied.

PostProcessEffectSettings

To start we will define our effect settings class. We want to expose three parameters for controlling the effect: thickness, density, and color.

using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[PostProcess(typeof(SurfaceAngleSilhouettingRenderer), PostProcessEvent.AfterStack, "Custom/SurfaceAngleSilhouetting")]
public class SurfaceAngleSilhouettingSettings : PostProcessEffectSettings
{
    [Range(0.0f, 1.0f)]
    public FloatParameter thickness = new FloatParameter { value = 0.2f };

    [Range(0.0f, 1.0f)]
    public FloatParameter density = new FloatParameter { value = 0.75f };

    public ColorParameter color = new ColorParameter { value = Color.black };
}

In the attribute above we provide metadata about our effect. It specifies which renderer will be used (which we have not yet defined, a chicken-or-the-egg situation that resolves in the next step), when the effect should be rendered, and the name of the effect shown within the Unity Editor when selecting it from the effect list.

When can a Post Process effect be applied?
There are three different spots within the render pipeline that we can inject our post-process effect. These are exposed with the PostProcessEvent enumeration value that is specified in the settings class decorator, and are as follows:

  • BeforeTransparent: the effect is applied only to opaque objects, before the transparent pass.
  • BeforeStack: the effect is applied before the built-in effects, such as anti-aliasing and depth-of-field.
  • AfterStack: the effect is applied after the built-in effects.

In the body of the class we then define our three control parameters as we detailed before: two clamped floats and a color value.
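
These parameters can later be read or overridden from script as well. A small sketch, assuming the global volume created earlier is referenced:

using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public class SilhouetteTweaker : MonoBehaviour
{
    public PostProcessVolume volume;

    private void Update()
    {
        // Fetch our settings from the volume's profile, if present,
        // and animate the outline thickness over time.
        if (volume.profile.TryGetSettings(out SurfaceAngleSilhouettingSettings s))
        {
            s.thickness.value = Mathf.PingPong(Time.time * 0.1f, 1.0f);
        }
    }
}

With our settings complete we can move on to the renderer class.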

PostProcessEffectRenderer

In the renderer we will retrieve our custom shader (which has yet to be created), apply our settings' control parameters, and then blit a fullscreen triangle to a destination buffer using the current render target as input.

using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public sealed class SurfaceAngleSilhouettingRenderer : PostProcessEffectRenderer<SurfaceAngleSilhouettingSettings>
{
    public override void Render(PostProcessRenderContext context)
    {
        // Fetch a property sheet for our custom shader (which we have not written yet).
        var sheet = context.propertySheets.Get(Shader.Find("PostProcessing/SurfaceAngleSilhouetting"));

        // The inverse view-projection matrix lets the shader unproject from NDC back to world-space.
        sheet.properties.SetMatrix("_ViewProjectInverse", (Camera.current.projectionMatrix * Camera.current.worldToCameraMatrix).inverse);
        sheet.properties.SetFloat("_OutlineThickness", (1.0f - settings.thickness));
        sheet.properties.SetFloat("_OutlineDensity", settings.density);
        sheet.properties.SetColor("_OutlineColor", settings.color);

        // Draw a fullscreen triangle into the destination using the current render target as input.
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}

With both the settings and renderer defined, let’s modify our scene to use them. Select the GameObject to which we previously added a Post Process Volume and remove or disable the Grain effect. Then click Add effect... and select our new SurfaceAngleSilhouetting.

And voilà! Our screen has turned black, and the Unity Editor console is displaying an error message informing us that it cannot find our custom shader.

Creating the Post Process Shader

Now for the fun part: create a new shader in the editor and set its contents to the following:

Shader "PostProcessing/SurfaceAngleSilhouetting"
{
    HLSLINCLUDE
        #include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"
        float4 FragMain(VaryingsDefault i) : SV_Target
        {
            return float4(0.0, 1.0, 0.0, 1.0);
        }
    ENDHLSL

    SubShader
    {
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            HLSLPROGRAM
                #pragma vertex VertDefault
                #pragma fragment FragMain
            ENDHLSL
        }
    }
}

Why do we include StdLib.hlsl?
This is a utility shader include file that comes with the Post Processing package and provides common functionality for HLSL-based shaders. Additionally, it defines several vertex programs and the VaryingsDefault structure.

Our once-black screen should now be green, as the post-processing renderer component has found our new shader. It should be noted that while we are writing our shader in HLSL, it will be cross-compiled to GLSL when we are not using a Direct3D-based renderer.

We will now start incrementally building up our shader. Future code snippets will show only the modified or added portions, but the complete shader is available at the end.

Reconstructing the Scene

From left to right: the scene render target, depth buffer, and surface normals.

Our first step in creating the actual effect is to reconstruct the scene by extracting three components: the current render target, the depth buffer, and the scene normals.

We require the current render target as we will be applying our effect on top of the previously rendered image, the depth to exclude the skybox from our effect, and the normals for each fragment, as the surface angle silhouette is calculated as:

silhouette = V · S

Where,

  • V is the normalized camera view vector, pointing from the surface toward the camera.
  • S is the surface normal for the current fragment.

Fortunately for us, all three of these components are provided as inputs to our shader as we are using the deferred pipeline. They can be retrieved by adding definitions for the relevant textures and then sampling them within our fragment shader.

TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
TEXTURE2D_SAMPLER2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2);

float4 FragMain(VaryingsDefault i) : SV_Target
{
    float3 sceneColor  = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord).rgb;
    float  sceneDepth  = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoord).r;
    float3 sceneNormal = SAMPLE_TEXTURE2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2, i.texcoord).xyz * 2.0 - 1.0;

    return float4(sceneColor, 1.0);
}

Where _MainTex is the current render target, _CameraDepthTexture is the main camera’s depth texture, and _CameraGBufferTexture2 is the GBuffer texture containing the scene normals. Notice that we have to undo the transformation applied to our normal, which fits it to the range [0, 1]. The other sampled values are usable without further modification.

What other GBuffer textures are available?
There are four GBuffer textures which can be sampled. Their contents are described in Unity’s Deferred Shading Rendering Path documentation, but can vary when using a custom deferred shader. By default, they are:

  • _CameraGBufferTexture0: {diffuse.rgb, occlusion}.
  • _CameraGBufferTexture1: {specular.rgb, roughness}.
  • _CameraGBufferTexture2: {normal.rgb, unused}.
  • _CameraGBufferTexture3: the cumulative lighting (HDR or LDR).
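
As a quick sanity check, any of these can be declared and visualized from the fragment shader. A throwaway sketch (not part of the final effect) that outputs the scene’s diffuse albedo:

TEXTURE2D_SAMPLER2D(_CameraGBufferTexture0, sampler_CameraGBufferTexture0);

float4 FragMain(VaryingsDefault i) : SV_Target
{
    // gbuffer0.rgb holds the diffuse albedo, gbuffer0.a the occlusion term.
    float4 gbuffer0 = SAMPLE_TEXTURE2D(_CameraGBufferTexture0, sampler_CameraGBufferTexture0, i.texcoord);
    return float4(gbuffer0.rgb, 1.0);
}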

Calculating the View Direction

Now that we have the surface normal we will need the camera view vector. Once that is retrieved we can finalize our effect.

It is important to keep in mind that there is not a single uniform value for the camera direction, as we are using a perspective projection: the direction vector at UV coordinate (0, 0) will be different from the vector at (1, 1).

In order to interpolate the view vector over the screen we will calculate it in our vertex shader. However, we are currently using the VertDefault program provided by our inclusion of StdLib.hlsl, so we will first create our own vertex program. Additionally, we will use a new structure for the vertex output/fragment input so that we can interpolate our camera view vector.

float4x4 UNITY_MATRIX_MVP;

struct FragInput
{
    float4 vertex    : SV_Position;
    float2 texcoord  : TEXCOORD0;
    float3 cameraDir : TEXCOORD1;
};

FragInput VertMain(AttributesDefault v)
{
    FragInput o;
    
    o.vertex   = mul(UNITY_MATRIX_MVP, float4(v.vertex.xyz, 1.0));
    o.texcoord = TransformTriangleVertexToUV(v.vertex.xy);

#if UNITY_UV_STARTS_AT_TOP
    o.texcoord = o.texcoord * float2(1.0, -1.0) + float2(0.0, 1.0);
#endif

    return o;
}

float4 FragMain(FragInput i) : SV_Target
{
    // ...
}

And in our shader pass we set our new VertMain program as the Vertex shader:

#pragma vertex VertMain

Now we will calculate three vectors: the camera forward vector, the local direction vector, and finally our vector pointing to the camera.

To calculate the first of these vectors, the uniform camera forward vector, we simply unproject the NDC-space coordinate (0.0, 0.0, 0.5) back to world-space. The returned vector is constant regardless of which vertex or fragment is being rendered. Note that the calculation makes use of _ViewProjectInverse, which we provided as input from our renderer component, and of _WorldSpaceCameraPos, which is provided by Unity.

float4 cameraForwardDir = mul(_ViewProjectInverse, float4(0.0, 0.0, 0.5, 1.0));
cameraForwardDir.xyz /= cameraForwardDir.w;
cameraForwardDir.xyz -= _WorldSpaceCameraPos;

We want the non-normalized direction vector as we will use its length in an upcoming step. If all we wanted was the camera direction, we could instead unproject (0.0, 0.0, 1.0) and perform normalization after converting to world-space. However, what we are actually interested in is what we will be referring to as the “local” camera direction.

As we are using a perspective projection, the uniform view direction calculated earlier is only valid at the UV coordinate (0.5, 0.5). As we move away from this position, the direction changes as we approach the edges of the view frustum. To calculate this “local” direction, we perform the following:

float4 cameraLocalDir = mul(_ViewProjectInverse, float4(o.texcoord.x * 2.0 - 1.0, o.texcoord.y * 2.0 - 1.0, 0.5, 1.0));
cameraLocalDir.xyz /= cameraLocalDir.w;
cameraLocalDir.xyz -= _WorldSpaceCameraPos;

Breaking this down, we first convert from our UV screen-space coordinates to NDC-space remembering that the NDC unit cube ranges from [-1, -1, 0] to [1, 1, 1].

float4(o.texcoord.x * 2.0 - 1.0, o.texcoord.y * 2.0 - 1.0, 0.5, 1.0)

Next we take our NDC-space position and multiply it by the inverse view-projection matrix, giving us a homogeneous world-space position:

mul(_ViewProjectInverse, float4(o.texcoord.x * 2.0 - 1.0, o.texcoord.y * 2.0 - 1.0, 0.5, 1.0));

Then we perform the perspective division to obtain the actual world-space position:

cameraLocalDir.xyz /= cameraLocalDir.w;

And finally we subtract the camera position, turning the world-space position into a direction vector from the camera:

cameraLocalDir.xyz -= _WorldSpaceCameraPos;

Whew. One step left and then we will have our “local” view direction vector:

o.cameraDir = cameraLocalDir.xyz / length(cameraForwardDir.xyz);

Dividing by the length of the forward vector scales the interpolated direction so that it advances exactly one unit along the camera’s forward axis per unit of eye depth, which is what will let us reconstruct world positions from linear depth later on.

What does our shader look like at this point?
Shader "PostProcessing/SurfaceAngleSilhouetting"
{
    HLSLINCLUDE
        #include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"

        TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
        TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
        TEXTURE2D_SAMPLER2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2);

        float4x4 UNITY_MATRIX_MVP;
        float4x4 _ViewProjectInverse;
        
        struct FragInput
        {
            float4 vertex    : SV_Position;
            float2 texcoord  : TEXCOORD0;
            float3 cameraDir : TEXCOORD1;
        };

        FragInput VertMain(AttributesDefault v)
        {
            FragInput o;
            
            o.vertex   = mul(UNITY_MATRIX_MVP, float4(v.vertex.xyz, 1.0));
            o.texcoord = TransformTriangleVertexToUV(v.vertex.xy);

#if UNITY_UV_STARTS_AT_TOP
            o.texcoord = o.texcoord * float2(1.0, -1.0) + float2(0.0, 1.0);
#endif

            float4 cameraLocalDir = mul(_ViewProjectInverse, float4(o.texcoord.x * 2.0 - 1.0, o.texcoord.y * 2.0 - 1.0, 0.5, 1.0));
            cameraLocalDir.xyz /= cameraLocalDir.w;
            cameraLocalDir.xyz -= _WorldSpaceCameraPos;

            float4 cameraForwardDir = mul(_ViewProjectInverse, float4(0.0, 0.0, 0.5, 1.0));
            cameraForwardDir.xyz /= cameraForwardDir.w;
            cameraForwardDir.xyz -= _WorldSpaceCameraPos;

            o.cameraDir = cameraLocalDir.xyz / length(cameraForwardDir.xyz);
            
            return o;
        }

        float4 FragMain(FragInput i) : SV_Target
        {
            float3 sceneColor  = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord).rgb;
            float  sceneDepth  = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoord).r;
            float3 sceneNormal = SAMPLE_TEXTURE2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2, i.texcoord).xyz * 2.0 - 1.0;

            return float4(i.cameraDir, 1.0);
        }
    ENDHLSL

    SubShader
    {
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            HLSLPROGRAM
                #pragma vertex VertMain
                #pragma fragment FragMain
            ENDHLSL
        }
    }
}

Applying Surface Angle Silhouettes

Between the inputs to our shader program and the output of our Vertex shader, we now have everything we need to apply the silhouette edges to our image in the Fragment shader. Let’s start by visualizing our dot product:

float4 FragMain(FragInput i) : SV_Target
{
    float3 sceneColor  = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord).rgb;
    float  sceneDepth  = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoord).r;
    float3 sceneNormal = SAMPLE_TEXTURE2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2, i.texcoord).xyz * 2.0 - 1.0;

    if (sceneDepth > 0.0)
    {
        float3 toCameraDir = normalize(-i.cameraDir);
        float silhouette = dot(toCameraDir, normalize(sceneNormal));

        sceneColor = float3(silhouette, silhouette, silhouette);
    }

    return float4(sceneColor, 1.0);
}

Why do we check for depth?
We compare against the depth value to ensure that we do not apply our effect to parts of the scene that contain no rendered object, such as the skybox or clear color. On platforms where Unity uses a reversed depth buffer (most modern targets), such pixels have a depth value of 0.

Next we use our remaining input values from our renderer component to transform our raw dot product values into a proper silhouette/outline:

if (sceneDepth > 0.0)
{
    float3 toCameraDir = normalize(-i.cameraDir);
    float silhouette = dot(toCameraDir, normalize(sceneNormal));

    // Offset by _OutlineThickness (the renderer passes in 1 - thickness);
    // fragments facing the camera saturate to 1, grazing fragments stay below it.
    silhouette = saturate(silhouette + _OutlineThickness);

    // Remap the result to control how sharply the outline fades in.
    silhouette = smoothstep(_OutlineDensity, 1.0, silhouette);

    // Blend the outline color in where the silhouette value is low.
    sceneColor = lerp(_OutlineColor, sceneColor, silhouette);
}
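
Putting the whole fragment transform together on one line, with t and d the thickness and density parameters from our settings component (recall the renderer passes 1 − t as _OutlineThickness):

color = lerp(c_outline, c_scene, smoothstep(d, 1.0, saturate(V · S + (1 − t))))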

And that is it! From the Unity Editor we can now modify our post-processing effect’s control parameters to produce a wide range of silhouettes and outlines, from thick black comic-book lines to smoothly interpolated colored highlights.

Along the way we have also learned how to use the Unity Post Processing package, how to create our own HLSL-based shader, and how to calculate commonly-used values such as the camera direction and the transformation from screen-space back to world-space.

As a final bonus, we can also reconstruct the world position of the current fragment with:

float linearDepth = LinearEyeDepth(sceneDepth);
float3 worldPosition = (i.cameraDir * linearDepth) + _WorldSpaceCameraPos;
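
To visualize this, we can temporarily return the reconstructed position as the fragment color (a throwaway debug sketch):

// Temporary debug output: the reconstructed world position as a color.
float linearDepth = LinearEyeDepth(sceneDepth);
float3 worldPosition = (i.cameraDir * linearDepth) + _WorldSpaceCameraPos;
return float4(worldPosition, 1.0);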

The absolute world position set as the scene color.

Complete Source Code

Source: SurfaceAngleSilhouettingSettings.cs
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[PostProcess(typeof(SurfaceAngleSilhouettingRenderer), PostProcessEvent.AfterStack, "Custom/SurfaceAngleSilhouetting")]
public class SurfaceAngleSilhouettingSettings : PostProcessEffectSettings
{
    [Range(0.0f, 1.0f), Tooltip("Thickness of the Silhouette Outline")]
    public FloatParameter thickness = new FloatParameter { value = 0.2f };

    [Range(0.0f, 1.0f), Tooltip("Density of the Silhouette Outline")]
    public FloatParameter density = new FloatParameter { value = 0.75f };

    public ColorParameter color = new ColorParameter { value = Color.black };
}

Source: SurfaceAngleSilhouettingRenderer.cs
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public sealed class SurfaceAngleSilhouettingRenderer : PostProcessEffectRenderer<SurfaceAngleSilhouettingSettings>
{
    public override void Render(PostProcessRenderContext context)
    {
        var sheet = context.propertySheets.Get(Shader.Find("PostProcessing/SurfaceAngleSilhouetting"));

        sheet.properties.SetMatrix("_ViewProjectInverse", (Camera.current.projectionMatrix * Camera.current.worldToCameraMatrix).inverse);
        sheet.properties.SetFloat("_OutlineThickness", (1.0f - settings.thickness));
        sheet.properties.SetFloat("_OutlineDensity", settings.density);
        sheet.properties.SetColor("_OutlineColor", settings.color);

        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}

Source: SurfaceAngleSilhouetting.shader
Shader "PostProcessing/SurfaceAngleSilhouetting"
{
    HLSLINCLUDE
        #include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"

        TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
        TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
        TEXTURE2D_SAMPLER2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2);

        float4x4 UNITY_MATRIX_MVP;
        float4x4 _ViewProjectInverse;

        float _OutlineThickness;
        float _OutlineDensity;
        float3 _OutlineColor;
        
        struct FragInput
        {
            float4 vertex    : SV_Position;
            float2 texcoord  : TEXCOORD0;
            float3 cameraDir : TEXCOORD1;
        };

        FragInput VertMain(AttributesDefault v)
        {
            FragInput o;
            
            o.vertex   = mul(UNITY_MATRIX_MVP, float4(v.vertex.xyz, 1.0));
            o.texcoord = TransformTriangleVertexToUV(v.vertex.xy);

#if UNITY_UV_STARTS_AT_TOP
            o.texcoord = o.texcoord * float2(1.0, -1.0) + float2(0.0, 1.0);
#endif

            float4 cameraLocalDir = mul(_ViewProjectInverse, float4(o.texcoord.x * 2.0 - 1.0, o.texcoord.y * 2.0 - 1.0, 0.5, 1.0));
            cameraLocalDir.xyz /= cameraLocalDir.w;
            cameraLocalDir.xyz -= _WorldSpaceCameraPos;

            float4 cameraForwardDir = mul(_ViewProjectInverse, float4(0.0, 0.0, 0.5, 1.0));
            cameraForwardDir.xyz /= cameraForwardDir.w;
            cameraForwardDir.xyz -= _WorldSpaceCameraPos;

            o.cameraDir = cameraLocalDir.xyz / length(cameraForwardDir.xyz);
            
            return o;
        }

        float4 FragMain(FragInput i) : SV_Target
        {
            float3 sceneColor  = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord).rgb;
            float  sceneDepth  = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoord).r;
            float3 sceneNormal = SAMPLE_TEXTURE2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2, i.texcoord).xyz * 2.0 - 1.0;

            if (sceneDepth > 0.0)
            {
                float3 toCameraDir = normalize(-i.cameraDir);
                float silhouette = dot(toCameraDir, normalize(sceneNormal));

                silhouette = saturate(silhouette + _OutlineThickness);
                silhouette = smoothstep(_OutlineDensity, 1.0, silhouette);
                
                sceneColor = lerp(_OutlineColor, sceneColor, silhouette);
            }

            return float4(sceneColor, 1.0);
        }
    ENDHLSL

    SubShader
    {
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            HLSLPROGRAM
                #pragma vertex VertMain
                #pragma fragment FragMain
            ENDHLSL
        }
    }
}
