So I was noclipping through d1_canals_08 in HL2 as one normally does on a Saturday night when I saw this:

A nice sharp volumetric light that does a lot for the mood of the scene.

Half-Life 2 was released in 2004, so obviously it’s not “real” volumetric fog but just a picture of a light beam that rotates towards the camera using the Z-billboard technique. It’s only visible when you look at it from the side. When you look at the light straight on, it fades out to hide the fact that it’s a billboard.

For lights like these it gives a much sharper silhouette than what can be achieved with Godot’s real volumetric fog, at almost no performance cost. Of course, it can be used together with Godot’s fog if you want.

So I decided to give it a shot in Godot. Surprisingly, it got more reactions from people than I expected for such a simple shader!

Download

Here’s the shader and a material that has some pre-assigned gradient and noise on it:

All you need to do is add a MeshInstance3D to your scene, give it a PlaneMesh, and assign that material to it.

How it all works

Here’s a short overview of how the shader works. You can open the shader file from the archive and read along, if you want:

Billboarding

The first thing the shader does is make our quad face the camera:

This is done in the vertex part of the shader:

void vertex() {
	vec3 local_z = normalize(MODEL_MATRIX[2].xyz);
	vec3 to_camera = normalize(CAMERA_POSITION_WORLD - NODE_POSITION_WORLD);
	vec3 local_x = normalize(cross(to_camera, local_z));
	vec3 local_y = normalize(cross(local_z, local_x));
	
	MODELVIEW_MATRIX = VIEW_MATRIX * mat4(
		vec4(local_x * length(MODEL_MATRIX[0].xyz), 0.0),
		vec4(local_y * length(MODEL_MATRIX[1].xyz), 0.0),
		MODEL_MATRIX[2],
		MODEL_MATRIX[3]
	);
}

In computer graphics, transformations are represented by matrices; in this 3D case, a 4x4 matrix that can be created with the mat4 constructor. A transformation matrix is basically a table of numbers that defines how to move, rotate, scale and skew vertices from one place to another.

Using mat4 we can create such a table from four vec4 vectors. The first three columns of the table are the basis vectors that define the coordinate system (those allow us to rotate, scale and skew), and the last column is the position.
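As a quick illustration (not part of the beam shader), here’s a hand-written model matrix that scales by 2 along x and moves the object to (3, 0, -1), and how a vertex would be transformed by it:

mat4 example_model = mat4(
	vec4(2.0, 0.0, 0.0, 0.0), // x basis: local "right", scaled by 2
	vec4(0.0, 1.0, 0.0, 0.0), // y basis: local "up"
	vec4(0.0, 0.0, 1.0, 0.0), // z basis: local "forward"
	vec4(3.0, 0.0, -1.0, 1.0) // position (the trailing 1.0 marks it as a point)
);

// Multiplying a vertex by it applies the basis vectors (rotation/scale/skew) and then adds the position:
vec3 transformed = (example_model * vec4(VERTEX, 1.0)).xyz;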

You may be wondering, why all the matrix nerd nonsense for such a simple thing as rotating an object?

That’s a good question! You see, despite gamedev being a thing for decades we’re still stuck with a lot of tools and pipelines that are made for computers rather than humans. Running code on the GPU is one of those things.

Anyways, back to our matrices!

In our shader the position stays the same, and the z basis vector stays the same as well because we’re rotating around it (it’s the direction of the light beam). So we just copy both as-is from the initial model matrix, while the x and y basis vectors get calculated:

vec4(local_x * length(MODEL_MATRIX[0].xyz), 0.0), // New x basis vector
vec4(local_y * length(MODEL_MATRIX[1].xyz), 0.0), // New y basis vector
MODEL_MATRIX[2], // Using the initial z basis vector as-is
MODEL_MATRIX[3] // Using the initial model position as-is

The top part is where we calculate the new x and y basis vectors that give us the desired rotation:

vec3 local_z = normalize(MODEL_MATRIX[2].xyz);
vec3 to_camera = normalize(CAMERA_POSITION_WORLD - NODE_POSITION_WORLD);
vec3 local_x = normalize(cross(to_camera, local_z));
vec3 local_y = normalize(cross(local_z, local_x));

This line calculates the direction vector from object to camera:

vec3 to_camera = normalize(CAMERA_POSITION_WORLD - NODE_POSITION_WORLD);

And this line is where the billboarding magic happens:

vec3 local_x = normalize(cross(to_camera, local_z));

The cross product in linear algebra is an operation that gives a vector perpendicular to two other vectors. To make the mesh face the camera we set its local x (which you can think of as the local “right” direction) to be perpendicular to both the local z and the direction from the object to the camera.

The local y is just perpendicular to the other two basis vectors. It has to be, because if our coordinate system is not orthogonal (if the basis vectors aren’t all perpendicular to each other) our quad will end up weirdly skewed.
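To make that concrete, here’s a tiny worked example with easy numbers plugged in (not part of the shader):

// Beam pointing along +Z, camera sitting off to the +Y side of it
vec3 local_z   = vec3(0.0, 0.0, 1.0);
vec3 to_camera = vec3(0.0, 1.0, 0.0);

vec3 local_x = normalize(cross(to_camera, local_z)); // = (1, 0, 0)
vec3 local_y = normalize(cross(local_z, local_x));   // = (0, 1, 0), pointing straight at the camera

// All three vectors are mutually perpendicular (every dot product between them is 0),
// so the quad turns to face the camera without any skewing.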

As you can see, we normalize the basis vectors (keep their length at 1) in these calculations to make sure we don’t pick up any unwanted stretching or skewing along the way. However, before using them in the final model matrix we multiply them by the length of the initial basis vectors to restore the initial model scale, in case it was set to something other than 1 in Godot:

vec4(local_x * length(MODEL_MATRIX[0].xyz), 0.0),
vec4(local_y * length(MODEL_MATRIX[1].xyz), 0.0),

The MODELVIEW_MATRIX is a combined transformation matrix for both the model transformation (position, rotation and scale of the object in world space) and the view transformation (the conversion from world space to view space, i.e. the camera’s point of view). The modelview matrix exists because it’s faster to multiply each vertex by one combined matrix than to do two separate multiplications.

In our shader we’re only really interested in modifying the model rotation rather than how it’s projected on screen, but we can’t write to MODEL_MATRIX in gdshader; we can only write to MODELVIEW_MATRIX. Luckily, all we need to do to get it is multiply our new model matrix by VIEW_MATRIX (keeping in mind that the order of multiplication matters).

Hopefully this gives you an idea of how billboarding works. Keep in mind that I’m not a math professor but a dirty gamedev; the best I can do is explain things as I understand them myself. If you need a deeper dive into matrix math you’ll be better off watching 3blue1brown’s videos or something.

Drawing the light beam

At the top of the shader we disable shading to prevent the light beam from catching light and shadows from the environment, and switch blending to “add” mode to make it behave more like fog:

render_mode unshaded, blend_add;

Initially I thought I’d just draw the beam in Affinity Photo or something and use it as the texture, but I decided it wouldn’t hurt to have some control over the shape, so I could quickly make any beam shape in-engine and overlay gradients on top of it. So I just wrote some code to draw it in the fragment shader instead of sampling from a texture.

The first thing I did when making this shader was to yoink the proximity fade code from StandardMaterial3D, because I’m a busy man and I don’t have time to write things that have already been written. It helps us avoid sharp intersections between the beam and the level geometry:

I yoinked it by enabling proximity fade on a StandardMaterial3D, then right-clicking the material in the inspector and clicking Convert to ShaderMaterial.

This is the code that it’s using:

uniform float proximity_fade_distance : hint_range(0.0, 4096.0, 0.01) = 0.75;
uniform sampler2D depth_texture : hint_depth_texture, repeat_disable, filter_nearest;

[...]

float proximity_depth_tex = textureLod(depth_texture, SCREEN_UV, 0.0).r;
vec4 proximity_view_pos = INV_PROJECTION_MATRIX * vec4(SCREEN_UV * 2.0 - 1.0, proximity_depth_tex, 1.0);
proximity_view_pos.xyz /= proximity_view_pos.w;
ALPHA *= clamp(1.0 - smoothstep(proximity_view_pos.z + proximity_fade_distance, proximity_view_pos.z, VERTEX.z), 0.0, 1.0);

Detecting geometry intersections with the scene depth buffer is a very common technique, and it’s used for all kinds of effects in shaders, not just fading out alpha.

The beginning of the fragment shader code is pretty straightforward and probably doesn’t need much commentary:

vec2 uv = UV;

vec3 input_color;
float input_alpha;
if (use_second_color) {
  input_color = mix(color.rgb, second_color.rgb, uv.y);
  input_alpha = mix(color.a, second_color.a, uv.y);
} else {
  input_color = color.rgb;
  input_alpha = color.a;
}

Here we either blend between two different colors vertically (along uv.y) or just copy the values from the single input color, depending on what’s selected in the shader parameters.

The reason I added support for the second color is just because it looks cool. I don’t think there’s any physical phenomenon behind it.

Actually when I think about it there isn’t any explanation for anything that I do in this shader, I just kinda do whatever I want 🤔

After the color shenanigans we calculate the width of the light cone at the current distance from the light source (uv.y), using cone_curve as a power to bend the shape, and then use that width for the horizontal mask that will go into ALPHA. The mask gradually fades the cone away at its edges, controlled by the beam_sharpness parameter:

// Calculate cone width at current position along the cone length
float cone_progress_y = pow(uv.y, 1.0 - cone_curve);
float width = mix(cone_start_width, 1.0, cone_progress_y);
float half_width = width * 0.5;

// Create horizontal mask
float distance_from_center_x = abs(uv.x - 0.5);
float beam_edge_start = half_width - (half_width * (1.0 - beam_sharpness));
float horizontal_mask = 1.0 - smoothstep(beam_edge_start, half_width, distance_from_center_x);	

As you can see, we also support a cone_start_width parameter there; it’ll come in handy for spotlights with a non-zero source size, but we leave it at zero for now.

Using that mask in alpha (ALPHA *= horizontal_mask;) we get this with beam sharpness of 1:

And this with beam sharpness of 0:

As you can see there’s a visible 1px line on top; we’ll fix it next when we add a vertical mask.

The code for the vertical mask is pretty straightforward:

// Create vertical mask
float vertical_mask = pow(max(1.0 - uv.y, 0.001), beam_fade);
vertical_mask *= smoothstep(0.0, cone_start_width * 0.5, uv.y);

On the first line we use the beam_fade parameter to control how fast the beam fades away with distance. We also clamp 1.0 - uv.y to a minimum of 0.001 to fix the 1px line at the far end of the beam. With the vertical mask it now looks like this:

If you’re wondering what the second line is for (vertical_mask *= smoothstep(0.0, cone_start_width * 0.5, uv.y);), it both fixes the 1px line at the beginning of the beam and slowly fades the light in at the top, to prevent a sharp edge when using a non-zero cone_start_width:

Drawing the gradient

At this point we’re done with the basic functionality, and we can add support for overlaying textures on top of it, such as a gradient and animated noise. Keep in mind that doing this makes it much harder to keep the light beam believable with the Z-billboard technique, because textures easily give away that it’s just a flat picture, unless you make them very subtle.

Here’s the code that overlays the gradient:

if (use_gradient) {
  float gradient_coord = distance_from_center_x / half_width;
  float gradient_value = texture(gradient, vec2(gradient_coord, 0.5)).r;
  horizontal_mask *= mix(1.0, gradient_value, 1.0 - (vertical_mask * vertical_mask));
}

As you can see I’m mixing the gradient value with the vertical mask to make the gradient affect the top of the cone less. I did that because I found that it looks slightly more believable when you can’t see separate light rays near the light source.

This is what it looks like with a gradient:

And this is the gradient that it samples from:

I find that cubic interpolation works best for it. As you can see the variation in gradient color is very subtle.

Animated noise

Finally we add animated noise to it:

This effect is the hardest to pull off on a billboard. Since we’re drawing it on the surface of the billboard, it can most likely only be used on distant lights that don’t need to rotate much, or in cinematics with a fixed camera angle.

In the video above I exaggerated the intensity and speed to make it easier to see the effect.

This is what the code for it looks like:

// Overlay noise
if (use_noise) {
  vec2 distortion_offset = noise_distortion_scroll_speed * TIME + NODE_POSITION_WORLD.xz;
  vec2 distortion_sample = texture(noise_texture, uv * noise_scale + distortion_offset).rg;
  vec2 distortion_uv = uv + (distortion_sample - 0.5) * 2.0 * noise_distortion_intensity;

  vec2 scroll_offset1 = noise_scroll_speed * TIME + NODE_POSITION_WORLD.xz;
  vec2 scroll_offset2 = vec2(-noise_scroll_speed.x * 0.7, noise_scroll_speed.y * 1.3) * TIME + NODE_POSITION_WORLD.xz;
  
  float noise_sample1 = texture(noise_texture, distortion_uv * noise_scale + scroll_offset1).r;
  float noise_sample2 = texture(noise_texture, distortion_uv * noise_scale + scroll_offset2).r;
  float combined_noise = mix(1.0, noise_sample1 * noise_sample2, 1.0 - vertical_mask);

  horizontal_mask *= mix(1.0, combined_noise, noise_intensity);
}

Here we sample the noise texture three times: twice for the alpha value and once to distort the other two samples. All three scroll in different directions to create a somewhat believable dust/smoke animation. Finally, we mix the resulting noise with the vertical mask to draw less noise near the light source, which makes the result look a little bit more 3D.

Oh, and we offset the texture by the node position with NODE_POSITION_WORLD to prevent different lights from having the exact same noise pattern on them.

The noise texture itself is Simplex noise with a little bit of domain warp:

If you keep the noise subtle you may get away with using it for some distant lights.

Fading out beams for lights facing the camera

Since it’s a billboard that rotates around its local Z axis, we need to fade it out entirely when the view direction gets close to the beam’s local Z direction: the billboard won’t really do much when you look straight down the beam; it only works when you look at it from the side.

This is pretty easy to do because we already have both the local_z vector and the to_camera vector (the direction from the object to the camera), so all that’s really left to do is calculate their dot product to see how much they align and run it through a smoothstep:

z_axis_fade = smoothstep(z_fade_end, z_fade_start, abs(dot(to_camera, local_z)));

To make it accessible in the fragment shader we declare it as a varying:

varying float z_axis_fade;

And then we just multiply the final ALPHA by it:

ALPHA = horizontal_mask * vertical_mask * input_alpha * z_axis_fade;
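For reference, here’s roughly how those pieces sit together in the shader. The hint ranges and default values below are placeholders I picked for illustration; the downloaded shader has the real ones:

uniform float z_fade_start : hint_range(0.0, 1.0) = 0.7;
uniform float z_fade_end : hint_range(0.0, 1.0) = 1.0;

varying float z_axis_fade;

void vertex() {
	vec3 local_z = normalize(MODEL_MATRIX[2].xyz);
	vec3 to_camera = normalize(CAMERA_POSITION_WORLD - NODE_POSITION_WORLD);
	// 1.0 when viewed from the side, falling to 0.0 as the view lines up with the beam
	z_axis_fade = smoothstep(z_fade_end, z_fade_start, abs(dot(to_camera, local_z)));
	// ... billboarding code from earlier ...
}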

This fades out any lights that are facing the camera. Here’s a grid of light beams:

That’s it!

Here’s the final result with some omni lights added:

You can make this part of a light prefab together with a light source model and a SpotLight3D, and maybe put a script on it that syncs the spotlight’s parameters with the shader parameters.

That way it would automatically pick up the color, size, range, etc. from whatever the spotlight is using, so you don’t have to set those things twice by hand.
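Here’s a rough sketch of what such a script could look like. None of this is in the download; the “color” shader parameter comes from the shader above, but the node setup and the property mapping are just one possible way to wire it up:

@tool
extends SpotLight3D
# Sketch only, not part of the download: keeps a child beam quad in sync with this spotlight.
# Assumes the beam is a MeshInstance3D child named "Beam" that uses the beam shader material.

func _process(_delta: float) -> void:
	var beam := get_node_or_null("Beam") as MeshInstance3D
	if beam == null:
		return
	var mat := beam.get_surface_override_material(0) as ShaderMaterial
	if mat == null:
		return
	# Tint the beam with the spotlight's color via the shader's "color" uniform.
	# (That uniform's alpha also controls the beam's opacity, so you may want to handle it separately.)
	mat.set_shader_parameter("color", light_color)
	# Stretch the quad along its local Z (the beam direction) so the beam length follows the range,
	# assuming the plane is 1 unit long along that axis.
	beam.scale.z = spot_range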