Linearizing depth buffer samples in HLSL
written on 31 March 2017
In Direct3D, we can create a shader resource view (SRV) for our hardware depth buffer to sample it in the shader. Usually you will use something like `DXGI_FORMAT_D32_FLOAT` for the depth-stencil view and `DXGI_FORMAT_R32_FLOAT` for the SRV.
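On the HLSL side, such an SRV is bound like any other single-channel texture. A minimal sketch of the matching shader declarations, with register slots and names chosen only for illustration (they match the snippets below), could look like this:

```
// Depth buffer exposed through the R32_FLOAT SRV
Texture2D<float> depthBuffer : register(t0);
// A point sampler is enough, as we only read single depth texels
SamplerState pointSampler : register(s0);
```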
However, this means that we get non-linear depth when sampling in the shader:
```
// The projection as done in the vertex shader, followed by the perspective divide
float4 clipCoords = mul(mtxProj, mul(mtxView, worldCoords));
clipCoords /= clipCoords.w;
// [-1,1] -> [0,1]; in Direct3D the texture v coordinate runs top-down, so y has to be flipped
float2 normClipCoords = clipCoords.xy * float2(0.5f, -0.5f) + 0.5f;
// Sample the raw depth buffer
float nonLinearDepth = depthBuffer.Sample(pointSampler, normClipCoords).r;
```
The sampled depth is in the range `[0,1]`, where 0 represents the near clip plane and 1 the far clip plane. The distribution is not linear: changes in depth close to the near clip plane are resolved with far more precision than changes far away from the camera.
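To make the non-linearity concrete, assume a standard left-handed D3D perspective projection with near plane distance `n` and far plane distance `f` (an assumption; the exact mapping depends on your projection matrix). The stored depth `d` as a function of the view-space depth `z`, and its inverse, are:

```
d(z) = \frac{f\,(z - n)}{z\,(f - n)},
\qquad
z(d) = \frac{n\,f}{f - d\,(f - n)}
```

For example, with `n = 0.1` and `f = 100`, `d = 0.5` is already reached at `z ≈ 0.2`: half of the `[0,1]` range is spent on a thin slice directly in front of the camera.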
**To linearize the sampled depth-buffer value**, we can multiply the normalized device coordinates (NDC) vector by the inverse projection matrix and divide the result by the w coordinate (as the result is a homogeneous vector).
```
// We are only interested in the depth here
float4 ndcCoords = float4(0, 0, nonLinearDepth, 1.0f);
// Unproject the vector into a (homogeneous) view-space vector
float4 viewCoords = mul(mtxProjInv, ndcCoords);
// Divide by w, which yields the actual view-space z value
float linearDepth = viewCoords.z / viewCoords.w;
```
Note that depending on your projection matrix, you may have to negate the resulting depth: with a right-handed view space the camera looks down the negative z axis, so view-space z values in front of the camera come out negative.
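If the near and far plane distances are known in the shader, the same result can be obtained without the inverse matrix by inverting the depth mapping directly. A minimal sketch, assuming the standard left-handed projection from above (`nearZ` and `farZ` are illustrative parameters you would have to provide, e.g. via a constant buffer):

```
// Reconstruct view-space depth from a non-linear depth-buffer sample.
// Assumes a standard left-handed D3D perspective projection.
float LinearizeDepth(float nonLinearDepth, float nearZ, float farZ)
{
    return nearZ * farZ / (farZ - nonLinearDepth * (farZ - nearZ));
}
```

The matrix-based unprojection shown above is more general, though, since it works with any invertible projection matrix.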