A few days ago I saw a video titled Shadow art is better with Legos

Watching this video got me thinking about how something like this could be done, and I started to look for more information on what it was all about.

I was interested in seeing what other possibilities for Shadow Art were available and found some interesting ideas.

That’s only a subset of what can be described as Shadow Art!

The way I understood it, you could arrange shapes in multiple ways to block the light and end up with a shadow that forms a familiar shape.

The great thing about programming is that, usually, you can take an idea and turn that into some sort of demo! Even better when graphics are involved 😉

So, I decided to go ahead and make a demo that would result in some sort of Shadow Art!

Using Unity allowed me to focus on the core of the problem rather than first solving all the ancillary problems the idea depends on.

Here’s the result:

Thanks to Unity’s WebGL export capabilities, I’ve put up a runnable version of this code:

Shadow Art with Unity Demo, check it out!

Here’s a more technical explanation of how this was achieved.

The setup

I went with a setup of 2 spotlights to project the shadows.

Then there are a number of textures that are used to create the geometry.

Since there are 2 spotlights, I wanted to see if 2 separate shadows could be generated from the same mesh, similar to what was being done in the video shown for the Magical Angle Sculptures, except I went for just 2 shadows instead of 3!

The geometry is computed in 3 passes:

First Pass

This is the most important step, since this is where the overlap is calculated, setting up the base for showing 2 shadows at the same time.

To figure out the overlap:

1. Iterate through the pixels of one of the textures
2. As soon as there’s an opaque pixel, do a lookup on the other texture
3. If the lookup results in a pixel that’s also opaque, generate the vertices for a cube at that (x, y) position and center its z position
4. Lastly, store the overlapping pixel so that it is not accessed again
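The four steps above can be sketched in plain JavaScript (the actual project is in Unity C#; the texture representation, the alpha threshold, and the cube-list output here are hypothetical stand-ins):

```javascript
// Sketch of the overlap pass: find pixels that are opaque in BOTH
// textures and spawn a cube for each, centered on the z axis.
// Textures are modeled as { width, height, alpha: Uint8Array }.
function computeOverlap(texA, texB, alphaThreshold = 128) {
  const cubes = [];
  const used = new Set(); // overlapping pixels, so later passes skip them
  for (let y = 0; y < texA.height; y++) {
    for (let x = 0; x < texA.width; x++) {
      const i = y * texA.width + x;
      // steps 1–2: opaque in texture A? then look up the same pixel in B
      if (texA.alpha[i] >= alphaThreshold && texB.alpha[i] >= alphaThreshold) {
        // step 3: both opaque, so create a cube at (x, y), centered in z
        cubes.push({ x, y, z: 0 });
        // step 4: remember the pixel so it isn't used again
        used.add(i);
      }
    }
  }
  return { cubes, used };
}
```

The `used` set is what the later passes consult so they only build geometry for pixels this pass skipped.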

Second & Third Pass

These are more generic, and essentially create geometry for opaque pixels where geometry hasn’t been created before.

To give it a less uniform feel, there’s a random depth value applied to each new piece of geometry.

Image Effects

Unity comes with a number of pre-built Image Effects which helped to make this look more presentable.

I’m using the Vignette and Bloom Image Effects to create the final look for the presentation. You can see how adding them up looks below.

Hope you find this fun to play around with, and if you have some cool ideas let me know!

## Multiple render targets and Stage3D

For some time I’ve been curious about how to do things with the depth buffer using Stage3D.

As far as I could find, there is no real “direct” way to access the depth buffer with Stage3D, so I went ahead and did the next best thing, which was to build my own Depth Buffer in a shader.

I saw that this could be done thanks to Flare3D’s MRT demo and started learning how I could use this to test out some things I’ve been thinking about.

Now that I had a depth buffer in place, the next step was to use this to see what sort of techniques I could combine it with.

I’ve been following Dan Moran on Patreon and decided to try out an intersection highlight shader which he describes in one of his videos. This looked fun to do, so I went ahead and tried to implement a basic form of it using Flare3D’s shading language, FLSL.

Here’s how the shader turned out:

You can also get it from here

The shader works as follows:

1. Provide a texture with depth information
2. Check if the difference between your current position and the value in the depth buffer is within a threshold
   - If it is within the threshold, use the smoothstep function to create a sort of “fall-off” effect: at the maximum value the color is white, and otherwise it fades out into the color of your mesh (or the tint being applied to it)
3. One more thing to keep in mind is that you need to use screen-space coordinates, so that the texture’s UVs line up with the position you’re testing
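The fall-off math in step 2 can be sketched in plain JavaScript (not the actual FLSL; the function names and the `threshold` parameter are my own stand-ins):

```javascript
// GLSL-style smoothstep: 0 at edge0, 1 at edge1, smooth in between.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Intersection highlight factor for one fragment (hypothetical names):
// depthFromBuffer is the scene depth sampled in screen space,
// fragmentDepth is this mesh's own depth, threshold is the band width.
function intersectionHighlight(depthFromBuffer, fragmentDepth, threshold) {
  const diff = Math.abs(depthFromBuffer - fragmentDepth);
  // 1 right at the intersection (difference of 0), fading to 0 at the threshold
  return smoothstep(threshold, 0.0, diff);
}
```

A factor of 1 means the fragment sits right on an intersection (drawn white); as the depth difference approaches the threshold, the factor falls to 0 and the mesh’s own color takes over.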

There are some more things to consider, such as the color format of your depth texture. If you use regular 32-bit RGBA values, you will get some banding, since the data in the depth texture won’t be as precise as you need it, so using an RGBA_HALF_FLOAT format is recommended.
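You can see the scale of the problem by quantizing a depth value to the 256 levels a single 8-bit channel can hold (a simplified model of the precision loss; real depth encodings vary):

```javascript
// An 8-bit channel only represents 256 distinct levels, so nearby
// depth values snap to the same step. That stepping shows up as banding.
function quantize8(depth) {
  return Math.round(depth * 255) / 255;
}
```

Two fragments at depths 0.500 and 0.501 land on the same level, so any effect comparing them sees zero difference; a half-float channel preserves far more precision over the same range.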

The final part comes by composing the 2 buffers together to create the final image. This is achieved by performing additive blending of the 2 render targets using another shader that outputs it to a final 3rd render target, which is then drawn on screen.


But in practice, how is this all achieved?

1. Render all the geometry that you don’t want to use for effects to one render target
   - Also, use the MRT technique to output a 2nd texture, which is the equivalent of your depth buffer
2. Render your effect meshes to another render target, supplying the depth texture as a parameter
3. Finally, take the outputs of steps 1 & 2 and feed them into a 3rd shader that does the additive blending for you and “composes” the final image
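Per pixel, the compositing step in 3 boils down to additive blending, sketched here on plain arrays rather than actual render targets:

```javascript
// Additive blend of two RGBA buffers (Float32Array, channel values 0..1)
// into a third, clamping at 1.0 — a stand-in for the compositing shader
// that merges the two render targets into the final image.
function compositeAdditive(bufA, bufB) {
  const out = new Float32Array(bufA.length);
  for (let i = 0; i < bufA.length; i++) {
    // sum the channels and clamp so bright overlaps don't wrap around
    out[i] = Math.min(bufA[i] + bufB[i], 1.0);
  }
  return out;
}
```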

Using Intel GPA you can see how the Render Targets all look:

In this case, you are only drawing what you need once to a number of buffers, and in the end compositing an image from all the various steps.

I’ve created a repo on GitHub that you can download and check out and hopefully extend for your own needs 🙂

You can also follow me on @jav_dev

## Recreating the PS3 background menu with ShaderToy and ThreeJS

Though I’d heard about ShaderToy for a while, and looked at various examples of what people make with it, I had never really spent much time trying it out.

I recently started looking into Javascript and ThreeJS, and very shortly the idea of making a cool background animation came to mind.

I wanted to make a copy of the PlayStation 3’s background menu, but didn’t find much about it online. I ended up going to ShaderToy and making my own, based on another shader:

I think it came out pretty well. I’ve since managed to port it over to ThreeJS:

How does this work?

Main Function #1

The meat of the work is done in the fragment shader.

[code language=”javascript”]

color += calcSine(uv, 0.20, 0.2, 0.0, 0.5, vec3(0.5, 0.5, 0.5), 0.1, 15.0, false);
color += calcSine(uv, 0.40, 0.15, 0.0, 0.5, vec3(0.5, 0.5, 0.5), 0.1, 17.0, false);
color += calcSine(uv, 0.60, 0.15, 0.0, 0.5, vec3(0.5, 0.5, 0.5), 0.05, 23.0, false);

[/code]

The main function calls into calcSine. Every call to calcSine() creates a new “line” in the shader.

Calc Sine

The calcSine() function simply calculates the value to pass into the sin() function; the result is then scaled by the amplitude and shifted by the offset. Amplitude controls how much the values are stretched along the y-axis, while offset moves the line up by that fraction of the screen (5%, 10%, etc.).

[code language=”javascript”]

float angle = time * speed * frequency + (shift + uv.x) * 3.14;

[/code]

The next trick is to set a sort of exit criterion, which is diffY. It measures how far the value we got from sin() is from our current uv.y value. This is important since it determines whether we’re below or above the uv.y coordinate we’re calculating this for.

dsqr simply figures out how far we are from the uv.y coordinate.

[code language=”javascript”]

float y = sin(angle) * amplitude + offset;
float diffY = y - uv.y;
float dsqr = distance(y, uv.y);

[/code]

The if-statement I put in there determines whether we’re below or above, and how to handle it. Multiplying dsqr by 8.0 makes the cutoff smoother than multiplying by something higher: multiplying by 12 would create a sharper, more cutting effect, while multiplying by something lower makes the image blurrier.

[code language=”javascript”]

if (dir && diffY > 0.0)
{
    dsqr = dsqr * 8.0;
}
else if (!dir && diffY < 0.0)
{
    dsqr = dsqr * 8.0;
}

[/code]

The last step is to apply a power function & smoothstep to make the final effect and cause that “fadeout” effect around the lines.

[code language=”javascript”]

scale = pow(smoothstep(width * widthFactor, 0.0, dsqr), exponent);

[/code]

Main Function #2

What calling calcSine() multiple times allows us to do is shift the returned color closer to white. This creates an almost “additive” blending effect, making the faded-out regions shine brighter where they overlap.

At the end of the main function, a final step tints the entire background with a color, turning it into a gradient.

[code language=”javascript”]

color.x += t1 * (1.0-uv.y);
color.y += t2 * (1.0-uv.y);

gl_FragColor = vec4(color,1.0);

[/code]

Putting all this together, and updating the “time” uniform value, gives us the final effect.
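The snippets above can be assembled into a plain-JavaScript port of the whole calcSine() flow. The parameter names follow the text, but the exact GLSL signature, and which values are uniforms versus arguments, is my assumption:

```javascript
// GLSL-style smoothstep: 0 at edge0, 1 at edge1, smooth in between.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Contribution of one sine "line" to the pixel at uv, as an [r, g, b]
// array. Parameter grouping is a guess based on the calls shown above.
function calcSine(uv, time, { speed, frequency, amplitude, shift, offset,
                              color, width, widthFactor, exponent, dir }) {
  const angle = time * speed * frequency + (shift + uv.x) * 3.14;
  const y = Math.sin(angle) * amplitude + offset;
  const diffY = y - uv.y;          // above or below the curve?
  let dsqr = Math.abs(y - uv.y);   // distance from the curve
  // harden the cutoff on one side of the line depending on `dir`
  if (dir && diffY > 0.0) dsqr *= 8.0;
  else if (!dir && diffY < 0.0) dsqr *= 8.0;
  // fade out around the line: 1 on the curve, 0 beyond width * widthFactor
  const scale = Math.pow(smoothstep(width * widthFactor, 0.0, dsqr), exponent);
  return color.map(c => c * scale);
}
```

Calling it once per line and summing the results into `color`, then tinting by `uv.y`, reproduces the gradient-backed waves of the shader.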

I’ve uploaded the sample to GitHub as well, so have a look there!