CIS*4820 Juicing up your GameDev Skills

Earlier this year, I ended up doing a presentation for the University of Guelph’s CIS*4820 (Game Programming) course. During the talk, I shared a number of tips and tricks on how to make games stand out by adding “Juice” to them. I was inspired by a great talk from 2013 on the subject (Juice it or lose it), where two game developers showed how a simple game of Breakout could be layered with eye-candy effects that make the gameplay feel a lot more exciting, and I wanted to give the students a chance to see how some of these things are achieved in practice. My hope was that sharing that I am an alumnus would drive their interest even further, since some topics I learned back at the University of Guelph are still relevant to my everyday work (e.g. finite state machines).

For the talk, I created a number of examples that would individually showcase how to add “Juice” to your game. These included simple scaling tweaks, jelly effects, slowing things down with time dilation, and a ghosting effect for a character.

The Ghosting effect is visually appealing and naturally got the most interest during the talk, which is why I’d like to break it down further here.

While investigating how to do the effect, I ran into a number of implementations and realized that the most common pattern is to spawn GameObject copies with a certain frame offset. However, one of the things I have become wary of during my time at Unity is that spawning GameObjects in the middle of your game is not the best thing to do. There are various reasons, and some workarounds, but my biggest concern is the unnecessary allocations that spawning a GameObject implies. Other considerations include:

  • Allocating one Sprite Renderer per instanced GameObject
  • GameObject lifecycle callbacks during instantiation (i.e. Awake() and friends)
    • Transform registration 

Naturally, we can get around the allocation penalty by using a memory pool, but again, that’s working around the issue rather than addressing the fundamental question: why should a full GameObject be instantiated for a temporary, non-interactive copy of the character?

Particles to the rescue

After spending some time thinking about rolling my own solution, and implementing a half-baked sprite sheet with quad creation, I realized that the obvious, battle-tested solution would be to just use Unity’s particle system. This technique would also be easy to share with the students, as they would not need to dig through code I wrote; instead they could use an already well-established particle editing workflow.

To achieve this, I took the base sprite, added it to a particle system, and tweaked various settings across its modules to spawn frame-specific versions of the character and let the particles fade out over time.

This is what the end result looked like:

The following modules are enabled in the particle system:

  • Emission
  • Shape
  • Color over Lifetime
  • Renderer

As can be seen below:

First, it’s important to get the scale correct. By default, a particle system will emit particles in a radial fashion, so if you have a material with your texture, you’ll end up with this as a starting point:

To get this under control, you’ll need to change the following properties:

  • Start Size: 2.5
  • Start Speed: 0
  • Start Lifetime: 0.25
  • Emission
    • Rate over Time: 0 -> This shuts off automatic particle emission completely, allowing us to control it via script (see the sketch below)
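
For reference, here is a minimal sketch of the same setup done from script rather than the Inspector, assuming m_GhostParticleSystem is a reference to the ghost Particle System (for the talk I simply configured these values in the Inspector):

// Main module: size, speed and lifetime of each ghost particle.
var main = m_GhostParticleSystem.main;
main.startSize = 2.5f;
main.startSpeed = 0.0f;
main.startLifetime = 0.25f;

// Emission module: a Rate over Time of 0 shuts off automatic emission,
// so particles only appear when we call Emit() from the player script.
var emission = m_GhostParticleSystem.emission;
emission.rateOverTime = 0.0f;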

At this point, all you should see is the Particle System’s radius and your character:

The next change is in the Color over lifetime.

The trick here is to use the automated color change to your advantage and have a start and end color, together with a start/end alpha, so the Color slider looks something like this:

The top represents the alpha value (which starts at 100% with the white color and ends at 0% with the black color) and the bottom represents the start and end colors (i.e. greenish to blue). The blend works quite well, and the Particle System handles it over the particle’s Start Lifetime, which was one of the first things we changed. Having set the Start Lifetime to 0.25s means the system will interpolate the particles’ color values over that amount of time.
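
The same gradient can also be built from script. Below is a minimal sketch, with the greenish and blue colours picked as placeholder values; in practice you would just author this in the editor:

var colorOverLifetime = m_GhostParticleSystem.colorOverLifetime;
colorOverLifetime.enabled = true;

// Blend from a greenish tint at spawn to blue at the end of the lifetime,
// while the alpha fades from 100% down to 0%.
Gradient gradient = new Gradient();
gradient.SetKeys(
    new GradientColorKey[]
    {
        new GradientColorKey(new Color(0.4f, 0.9f, 0.5f), 0.0f), // greenish start
        new GradientColorKey(new Color(0.3f, 0.5f, 1.0f), 1.0f)  // blue end
    },
    new GradientAlphaKey[]
    {
        new GradientAlphaKey(1.0f, 0.0f), // fully visible at spawn
        new GradientAlphaKey(0.0f, 1.0f)  // fully faded out at the end
    });
colorOverLifetime.color = gradient;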

Emitting Particles via script

The script here is kept simple on purpose, to show the bare minimum needed to make the effect start. This is where we begin spawning the particles:


else if (m_State == PlayerState.StateDash)
{
    // Move the player in the direction they are facing for the duration of the dash.
    float directionMultiplier = (m_Direction == Direction.Right) ? 1.0f : -1.0f;
    Vector2 pos = transform.position;
    pos.x += (m_DashSpeed * directionMultiplier) * Time.deltaTime;
    transform.position = pos;

    // Emit a single ghost particle every m_EmitDelay seconds.
    if (Time.realtimeSinceStartup - m_LastEmit > m_EmitDelay)
    {
        var emitParams = new ParticleSystem.EmitParams();
        emitParams.position = transform.position;
        m_GhostParticleSystem.Emit(emitParams, 1);
        m_LastEmit = Time.realtimeSinceStartup;
    }

    // Once the dash has lasted long enough, return to the running state.
    if (Time.realtimeSinceStartup - m_StartDash > m_DashDuration)
    {
        m_State = PlayerState.StateRunning;
    }
}

There are a couple things that need to happen:

  • The player must be in a “Dash” state (explained next)
  • We keep track of when the last particle was emitted, and if enough time has passed we emit one more (and reset the duration).
  • Once we have dashed “long enough” we go back to the running state for the player, so another dash can be potentially triggered.

That’s almost all of it. One last piece is needed to make sure the particles are emitted in the correct direction, and the last bit of setup code for time tracking goes here:


if (Input.GetKeyDown(KeyCode.Space))
{
    m_State = PlayerState.StateDash;
    m_StartDash = Time.realtimeSinceStartup;

    // Point the particle material at the character sprite's texture so the
    // ghosts show the same image as the character.
    var particleRenderer = m_GhostParticleSystem.GetComponent<ParticleSystemRenderer>();
    particleRenderer.material.mainTexture = m_SpriteRenderer.sprite.texture;

    // Flip the particles horizontally when the player is facing left.
    float direction = (m_Direction == Direction.Right) ? 0 : 1.0f;
    particleRenderer.flip = new Vector3(direction, 0, 0);

    // Emit the first ghost and start tracking the emission timer.
    var emitParams = new ParticleSystem.EmitParams();
    emitParams.position = transform.position;
    m_GhostParticleSystem.Emit(emitParams, 1);
    m_LastEmit = Time.realtimeSinceStartup;
}

In this code block, once we press the Space Bar, we’ll signal to our MonoBehaviour that we need to start the dash effect. To do this, we:

  • Change the player’s state to “Dash”
  • Figure out which direction the player is facing, and flip the particle renderer accordingly
  • We emit a single particle, and record the time for when our Dash state begins
  • The particle we emit is a snapshot of wherever the player’s animation currently is in the sprite sheet, and that image gets repeated as many times as our settings allow.

Finally, we can bring it all together: a simple ghosting effect with far less processing overhead than spawning a large number of GameObjects, and less memory pressure too.

Overall, I was quite happy that I was able to present to a group of students, and that we spent a good amount of time in the Q&A part of the talk discussing various aspects of game development as well as the games industry in general.

Last but not least, I’d like to thank Dennis Nikitenko for letting me present in his class, and Chandler Gray for bringing the two of us together so we could make this happen!

Until next time 😀

Shadow Art with Unity

A few days ago I saw a video titled Shadow art is better with Legos.

Watching this video got me thinking about how something like this could be done, and I started to look for more information on what it was all about.

I was interested in seeing what other possibilities for Shadow Art were available and found some interesting ideas.

That’s only a subset of what can be described as Shadow Art!

The way I understood it, you can arrange shapes in multiple ways to block the light and end up with a shadow that forms a familiar shape.

The great thing about programming is that, usually, you can take an idea and turn that into some sort of demo! Even better when graphics are involved 😉

So, I decided to go ahead and make a demo that would result in some sort of Shadow Art!

Using Unity allowed me to focus on the core of the problem rather than going off and solving all the dependencies the idea would otherwise require.

Here’s the result:

shadow_art

Thanks to Unity’s WebGL export capabilities, I’ve put up a runnable version of this code:

Shadow Art with Unity Demo, check it out!

launch_demo
Launch the Demo!

 

Here’s a more technical explanation of how this was achieved:

The setup

I went with two spotlights set up to project the shadows:

two_spotlights

Then there are a number of textures that are used to create the geometry:

textures_shadow_art

Since there are two spotlights, I wanted to see if two separate shadows could be generated from the same mesh, similar to what was being done in the video shown for the Magical Angle Sculptures, except I went for just two shadows instead of three!

combined_mesh
The resulting mesh

The mesh is computed in three passes:

First Pass

This is the most important step, since this is where the overlap is calculated, which sets up the base for showing two shadows at the same time.

To figure out the overlap (a short code sketch follows the list):

  1. Iterate through the pixels on one of the textures
  2. As soon as there’s an opaque pixel do a look up on the other texture
  3. If the lookup results in a pixel that’s also opaque, then generate the vertices for a cube at that position (x,y) and then center the z position
  4. Lastly, the overlapping pixel is stored so that it is not accessed again
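
To make this concrete, here is a minimal sketch of how the first pass could look in code. It assumes two readable, same-sized Texture2D inputs, a used array marking pixels that already produced geometry, and a hypothetical AddCubeAt(x, y, z) helper that appends a cube’s vertices to the mesh being built; none of these names come from the actual project.

void BuildOverlapPass(Texture2D texA, Texture2D texB, bool[,] used)
{
    for (int y = 0; y < texA.height; y++)
    {
        for (int x = 0; x < texA.width; x++)
        {
            // 1. Only continue when the pixel in the first texture is opaque.
            if (texA.GetPixel(x, y).a < 0.5f)
                continue;

            // 2. Look up the same pixel in the second texture.
            if (texB.GetPixel(x, y).a < 0.5f)
                continue;

            // 3. Both are opaque: create a cube at (x, y) with its z position
            //    centered, so it contributes to both shadows at once.
            AddCubeAt(x, y, 0.0f);

            // 4. Remember the overlap so it is not accessed again.
            used[x, y] = true;
        }
    }
}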

Second & Third Pass

These are more generic, and essentially create geometry for opaque pixels where geometry hasn’t been created before.

To give it a less uniform feel, there’s a random depth value applied to each new piece of geometry.
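
Sketched below under the same assumptions as the first pass (the hypothetical AddCubeAt helper and the used array), with maxDepth as an assumed tuning value for the random depth:

void BuildSinglePass(Texture2D tex, bool[,] used, float maxDepth)
{
    for (int y = 0; y < tex.height; y++)
    {
        for (int x = 0; x < tex.width; x++)
        {
            // Skip transparent pixels and pixels already covered by an earlier pass.
            if (used[x, y] || tex.GetPixel(x, y).a < 0.5f)
                continue;

            // Apply a random depth so the result doesn't look like a flat slab.
            float depth = Random.Range(-maxDepth, maxDepth);
            AddCubeAt(x, y, depth);
            used[x, y] = true;
        }
    }
}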

Image Effects

Unity comes with a number of pre-built Image Effects which helped to make this look more presentable.

I’m using the Vignette and Bloom Image Effects to create the final look for the presentation. You can see how adding them up looks below.

shadow_art_image_effects

Hope you find this fun to play around with, and if you have some cool ideas let me know!

And remember, you can always follow me on twitter @JavDev

Multiple render targets and Stage3D

For some time I’ve been curious about how to do things with the depth buffer using Stage3D.

As far as I could find, there is no real “direct” way to access the depth buffer with Stage3D, so I went ahead and did the next best thing, which was to build my own Depth Buffer in a shader.

I saw that this could be done thanks to Flare3D’s MRT demo and started learning how I could use this to test out some things I’ve been thinking about.

depth_01

color_01

Now that I had a depth buffer in place, the next step was to use this to see what sort of techniques I could combine it with.

I’ve been following Dan Moran on Patreon and decided to try out an intersection highlight shader which he describes in one of his videos. This looked fun to do, so I went ahead and implemented a basic form of it using Flare3D’s shading language, FLSL.

intersection_shader

Here’s how the shader turned out:

shader

You can also get it from here

The shader works as follows (a C# sketch of the logic appears after the list):

  1. Provide a texture with depth information
  2. Check if the difference between your current position and the value on the depth buffer is within a threshold
    1. If it is within the threshold, then use the smoothStep function to create a sort of “fall-off” effect, which at the maximum value makes the color white, and if not it fades out into the color of your mesh (or the tint being applied to it)
  3. One more thing to keep in mind is that you need to work in screen-space coordinates, so that the depth-texture lookup lines up with the screen position of the fragment you are shading
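
Since the actual shader lives in the screenshot above, here is the same per-pixel logic expressed as plain C#, purely for illustration; the parameter names and the exact fall-off shape are my own assumptions, and in the real thing this runs in the FLSL fragment shader.

// Returns how strongly a fragment should be highlighted, from 0 (no highlight)
// to 1 (right at the intersection, where the color becomes white).
float IntersectionHighlight(float sceneDepth, float pixelDepth, float threshold)
{
    // Difference between the depth stored in the depth texture (sampled at this
    // fragment's screen-space UV) and the depth of the fragment being shaded.
    float diff = sceneDepth - pixelDepth;

    // Outside the threshold there is no highlight at all.
    if (diff < 0.0f || diff > threshold)
        return 0.0f;

    // Smoothstep-style fall-off: strongest at the intersection, fading back
    // out towards the mesh's own color at the edge of the threshold.
    return Mathf.SmoothStep(0.0f, 1.0f, 1.0f - diff / threshold);
}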

There are some more things to consider, such as the color format of your depth texture. If you use regular 32-bit RGBA values you will get some banding, since the data in the depth texture won’t be as precise as you need it, so using an RGBA_HALF_FLOAT format is recommended.

The final part is composing the two buffers together to create the final image. This is achieved by additively blending the two render targets in another shader, which outputs to a final, third render target that is then drawn on screen.

color_02

+

effect_01

=

composed_01

But how is all of this achieved in practice?

  1. Render all the geometry that you don’t want to use for effects together to a render target
    1. Also, use the MRT technique to be able to export a 2nd texture which is the equivalent of your depth buffer
    2. mrt_output
  2. Render your effect meshes to another render target and supply the depth texture as a parameter
    1. effect_mesh
  3. Finally, take the outputs of steps 1 & 2 and feed them into a third shader that does the additive blending for you and “composes” the final image.
    1. compose_material
  4. Draw a full screen quad with your final composed image!

Using Intel GPA you can see how the Render Targets all look:

In this case, you are only drawing what you need once to a number of buffers, and in the end compositing an image from all the various steps.

I’ve created a repo on GitHub that you can download and check out and hopefully extend for your own needs 🙂

You can also follow me on @jav_dev