It is a widely held axiom among programmers that premature optimization is poison. If you let it get in your head, you can end up spending months optimizing code that isn’t even a problem, wasting countless hours better spent actually writing code. Being the level-headed individual I am, I decided to engage in exactly that behavior over the past couple of weeks. I didn’t arrive at this point by impulse or recklessness though, and the results are pretty incredible so far. Let me explain how I got here.

Tiles, United

I have made my fair share of tile-based engines in the past. I’ve used a variety of different languages and frameworks to do it, and I thought Unity would be no different. In a way I was right, just not in the way I expected. Up to this point I had been using Unity’s sprite renderer component to draw all of my images onto the screen. After learning that I was limited to one such component per object, I immediately took the brute-force approach and just started adding more game objects to the scene, each with a renderer for a single tile. With the lights turned off, this worked fine: Unity batched all of the renders together into a single pass, and everything was fast and efficient. Unfortunately, I’m making heavy use of lighting for this game, and when you turn the lights on with this approach, things start to fall apart.

Initially it wasn’t all bad. I only had one renderer per terrain feature, which was just stretched across the size of the feature. This made everything look terrible, and was a hack to get the scene up and running. I started making more materials, one for each unique size of terrain, and telling the shader to tile the image out. This worked great, but I ended up with an unmanageable amount of materials, and it didn’t seem like a very scalable solution. I opted to switch to a single game object with a child object for each tile, each holding a single sprite renderer. This was a much more viable approach to the problem, but I noticed a peculiar spike in my draw call count and a dip in my frame rate. It wasn’t much, so I ignored it; premature optimization is the devil, after all.

I intend to use about five layers to render the scenes in this game, and up to this point I had simply been using a sprite layer and the terrain layer. A solid black background makes for boring screenshots, and since I am trying to spread the word about this game early and often, I wanted something a bit more interesting to share. As I started to put in background, middle, and parallax layers, the funny little draw call spike turned into a hulking monster. One room, with only a handful of lights and a single character, was spiking to unreasonable levels. The draw call count was well over 300, and the frame rate on my computer was dropping dangerously close to 60. I have a pretty powerful computer; I can play AAA 3D titles with the settings maxed out and never drop below my refresh rate. My 2D game should not be running at 60 fps on this computer, and if it does, it probably won’t run very well on a more standard desktop. Something had to be done.

The Cusp of Trouble

I think it had to be done, anyway. I did no profiling on this, and I didn’t test it on other computers to see if it was really that slow. For all I know it could have been just a fluke. Regardless, I panicked. I had no idea how to approach this problem; I didn’t understand how Unity’s sprite renderers did their batching, or what the potential ramifications of pumping 10k game objects per room into the scene were. I started looking into it, and while it seemed to be a common problem, nobody ever really had a solution that would work with my lighting system. The only thing I could think of was to stop using Unity’s built-in 2D tools and just write a tile renderer myself.

After digging through Unity’s sparse documentation on the matter, and then looking up several examples of people building mesh geometry through code, I figured out how to build the meshes and send them to Unity to be rendered. Essentially what I have now is a series of textured quads that are all built as part of the same mesh, with each quad representing a single tile on the screen. If this sounds like a simple and obvious solution to the problem, it is, but that’s part of why I’m confident in this decision.
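The approach above boils down to generating flat vertex, UV, and triangle index arrays for one quad per tile and handing them all to a single mesh. Here is a minimal sketch of that quad-building loop, written in Python for illustration rather than Unity C#; the names (`build_tile_mesh`, `tile_size`) are my own and not from the actual project:

```python
def build_tile_mesh(width, height, tile_size=1.0):
    """Return (vertices, uvs, triangles) for a width x height grid of quads."""
    vertices, uvs, triangles = [], [], []
    for y in range(height):
        for x in range(width):
            i = len(vertices)  # index of this quad's first vertex
            x0, y0 = x * tile_size, y * tile_size
            x1, y1 = x0 + tile_size, y0 + tile_size
            # four corners of the quad, counter-clockwise from bottom-left
            vertices += [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
            # full-texture UVs here; an atlas would substitute a sub-rectangle
            uvs += [(0, 0), (1, 0), (1, 1), (0, 1)]
            # two triangles per quad, indexing into the shared vertex list
            triangles += [i, i + 2, i + 1, i, i + 3, i + 2]
    return vertices, uvs, triangles
```

In Unity terms, these three lists would map onto a `Mesh`'s vertices, uv, and triangles arrays, so the whole layer renders as one object.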

Worth It

This comes with a few nice benefits on the side; answers to problems I hadn’t even thought about addressing yet. First and foremost, it lets me create an atlas for my tiles. I wasn’t sure how to do this in Unity without creating a separate material for each sprite in the atlas. This would mean that the extra textures for lighting would have to be split apart, which defeats the purpose of using an atlas in the first place. It also gives me a nice, single object to use as the view for each layer of map data. This cuts down on extra game objects in the scene, extra components in the code, and the extra complexity of grouping and managing tiles within a room to help decide when to stop processing that area because it’s too far away.
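To sketch how the atlas fits in: each tile's quad just gets a sub-rectangle of the atlas texture as its UVs, so every tile can share one material. This hypothetical helper assumes a square atlas laid out in uniform rows (Python for illustration; the names are mine, not the project's):

```python
def atlas_uvs(index, tiles_per_row, tile_uv_size):
    """UV corners (bottom-left origin) for tile number `index` in the atlas."""
    col = index % tiles_per_row
    row = index // tiles_per_row
    u0, v0 = col * tile_uv_size, row * tile_uv_size
    u1, v1 = u0 + tile_uv_size, v0 + tile_uv_size
    # same corner order as the quad's vertices
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]
```

Because the lighting textures (normal, AO, specular) can be atlased with the same layout, one UV lookup serves all of them.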

I really wanted to let Unity handle this for me, but it seems their sprite system and 3d lighting just don’t mix. At this point I’m not really using any of Unity’s 2d tools any more, which is a bit frustrating as this was supposed to save me time. If it makes for a better product though, I guess I’m alright with that. I’m hoping this is the last bit of infrastructure I have left, and I should be able to start focusing on making this into an actual game instead of a tech demo with a story.

Quick announcement today. I’ve been working on updating the back-end of the website to reduce spam comments so I could open them up for more general commenting. This was all so I could launch the new website for Project Dunwich. The project now has an official name. Check out the main website below for an updated description of the game. It’s not much yet, but at least I have something to call this game now.

The Stranger in Kilstow

You can also get to the site through http://kilstowgame.com/ or http://strangerinkilstow.com/.

I’ve been pretty quiet on the blog for the past month, and I’d like to apologize for that. I have a tendency to get caught up in my game’s development and forget about pretty much everything else. This whole player-interaction piece has always been hard for me, and I guess I just let it slip for a while. No matter; you’re not here to listen to excuses, so let’s get on to the good stuff.

No Seriously, We Do

It’s tilted the wrong way because try as I might I could not get it to balance on the curve.

My wife has recently been getting into 3D printing, and this was one of the first things she printed. It hasn’t been sanded or finished yet in this picture, but it’s pretty damn cool as-is. While my wife did this all of her own accord, it has spurred in me a desire to make some merchandise available for both the company and the game as it moves closer to the finish line. I’m going to try really hard on the marketing angle of this game once I get the first vertical slice ready, and shirts and other physical objects seem like a good way to aid that. I do eventually want to go to conventions to present, and this has got me thinking about aspects of that I hadn’t even considered before. Plus, 3D printing is just cool.

The Slice

One of the things I want to do differently this time around is the way I approach the promotion of the game. In Vivo was made in a very linear fashion, and I didn’t have a demo ready until just a few weeks before release. This, combined with the fact that I was so set on releasing it that I didn’t really give adequate time to content creation, meant there was very little time for marketing, and by the time I was able to do any, I was so focused on finishing that I dropped the ball on it completely. The fact that it sold any copies at all given my marketing efforts still amazes me to this day. That being said, this time I’m trying to get a polished demo out as soon as possible. We call this the vertical slice.

Essentially this is a cross section of the whole game, a short amount of gameplay with the most important systems in place and all of the polish of the finished game. I’m still quite a ways off from that goal, but I feel like I’ve overcome the final hurdle in regards to things I’ve never done before. I’ve made procedural level systems, I’ve done stealth and platforming and AI, and I’ve done sound effects and music. Now that I have the lighting and animation systems finished, all that’s left is making assets, and doing things that I’ve done before in code. That’s not to say it will be easy or fast, but at least I can better predict how long it will take and what obstacles I’m likely to encounter.

Lights and Spirits

Since the last update I’ve been focused on integrating the Sprite Lamp shader that I wrote with the existing Spine animation system I had been using. This presented some unexpected and maddening challenges. While I’m not 100% satisfied with how it is right now, it’s good enough to move forward for now. The crux of the Spine integration problems was the way Unity handles lights for objects on the same layer. It seems that Unity processes all objects on the same layer in one batch, first rendering the ambient and directional lights, and then adding each dynamic light one at a time on top of that initial render pass. Since the lights don’t know or care which object was drawn first and occluded by the others, the lighting for everything gets blended on top of that first render. Things that were covered up before (like the arm behind the body) have their light values blended onto the image anyway. It creates a really bizarre ghostly image.

While this may be a cool effect for some of the more bizarre Lovecraftian horrors, it seems like a bad fit for any protagonist who isn’t Ghost Dad.

The good news is that after spending almost two weeks trying to fix this issue, I have something that works. It’s not ideal, and it creates some odd graphical artifacts that I haven’t exactly figured out how to remove, but it works. The strategy here was to explode the Spine model on the z axis by a very small amount so that each piece sits on its own plane. This is irrelevant for rendering position, since the orthographic camera used for 2D ignores the z axis, but it lets me implement z-axis sorting for determining which pixels to write. Thankfully it looks like the guys over at Esoteric Software had planned on adding this as a feature at some point, and changing the runtime to make it work is literally a one-line change. Using this, I can test the z values at each pixel to determine which object gets drawn and only draw that object. The problem is that I can only pick one object, and it’s all or nothing. This means I lose the translucent anti-aliasing effect on the edges of each object, since they can’t blend down to the next lower object because it isn’t even processed.
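The z-explosion itself is simple to sketch: each slot in the draw order gets its own tiny z offset so the depth test can pick a single winner per pixel. A minimal illustration in Python (the epsilon value and the sign convention are my assumptions, not Spine's actual numbers):

```python
def explode_z(slot_count, epsilon=0.001):
    """Assign each draw-order slot its own z plane.

    Slots later in the draw order (drawn on top) get z values closer to
    the camera under the assumed convention that more-negative z is nearer.
    """
    return [-i * epsilon for i in range(slot_count)]
```

Since the orthographic camera ignores z for positioning, the offsets only affect depth testing, not where anything appears on screen.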

The hard edges are like knives in my eyes. On the plus side I get rim lighting for free. I have no idea where that effect came from, I sure didn’t do it.

It’s functional, and I definitely prefer it over the bizarre apparition it was producing before, but I would love to lose those hard edges and weird unlit pixels on the fringes. I think I can fix some of them, and with how dark the game will be, the hard edges shouldn’t be too noticeable, but settling for less than ideal always stings. This works for now, and it may be what the game ships with barring a few bug fixes and optimizations. Either way, it’s off the plate for the vertical slice and I can finally move on to other things that aren’t lighting.

Sneaking, Walking, Running

The next step in creating something that looks presentable is creating a character that moves naturally. While I was working on the lighting system, I was continually tweaking the walking animation. It’s in a place now that I’m pretty happy with, but there’s a lot more to this game than just walking. I have laid out a list of animations that I need to create to handle all of the gameplay systems that will be in the slice, and though there are quite a few, creating and tweaking the animations is incredibly easy with Spine. Here are the walking and running animations that I have so far, and over the next week I hope to add jumping, sneaking, and maybe some of the ledge animations.

I should be back next week with more updates, and hopefully a real name for this game. I’ve got something picked out, but I have some paperwork to do before I can announce it. For now, though, I have to fix the tear in those pants.

I mentioned in my last weekly post that I had gotten a working shader for Sprite Lamp up and running. I’m going to explain what that shader does, and attach it here for you all to download and use if you feel so inclined. I will caveat this by saying that I am, by no means, an expert on shaders, Unity, or lighting. I figured all of this out with Google searches on various topics and by looking at both the Unity documentation on writing shaders and the built-in shaders that Unity comes with. There may be mistakes, and some features may not work as you would expect, but it fits my needs and so may fit some of yours as well. On with the show (or scroll to the bottom for the download link).

The Shader, Explained

Shader "Sprite Lamp/Default"
{
    Properties
    {
        _MainTex ("Diffuse Texture", 2D) = "white" {}
        _BumpMap ("Normal Map (no depth)", 2D) = "bump" {}
        _AOMap ("Ambient Occlusion", 2D) = "white" {}
        _AOIntensity ("Occlusion Intensity", Range(0, 1.0)) = 0.5
        _SpecMap ("Specular Map", 2D) = "white" {}
        _SpecIntensity ("Specular Intensity", Range(0, 1.0)) = 0.5
        _Color ("Tint", Color) = (1,1,1,1)
    }

    SubShader
    {
        Tags
        {
            "Queue"="Transparent"
            "IgnoreProjector"="True"
            "RenderType"="Transparent"
        }

        LOD 300
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf BlinnPhong vertex:vert

        sampler2D _MainTex;
        sampler2D _BumpMap;
        sampler2D _AOMap;
        sampler2D _SpecMap;
        fixed4 _Color;
        uniform float _AOIntensity = 0;
        uniform float _SpecIntensity = 0;

        struct Input
        {
            float2 uv_MainTex;
            float2 uv_BumpMap;
            float2 uv_AOMap;
            float2 uv_SpecMap;
            fixed4 color;
        };

        void vert (inout appdata_full v, out Input o)
        {
            v.normal = float3(0,0,-1);

            UNITY_INITIALIZE_OUTPUT(Input, o);
            o.color = v.color * _Color;
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * IN.color;
            fixed4 a = tex2D(_AOMap, IN.uv_AOMap);
            fixed4 s = tex2D(_SpecMap, IN.uv_SpecMap);
            o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
            o.Albedo = c.rgb * c.a;
            o.Albedo *= (1.0 - _AOIntensity) + (a.r * _AOIntensity);
            o.Albedo *= (1.0 - _SpecIntensity) + (s.r * _SpecIntensity);
            o.Alpha = c.a;
        }
        ENDCG
    }

    Fallback "Specular"
}

This shader handles normal mapping, ambient occlusion through calculated maps, and specular mapping. I’m going to start at the top and explain what each piece does.

Shader "Sprite Lamp/Default"
{
    Properties
    {
        _MainTex ("Diffuse Texture", 2D) = "white" {}
        _BumpMap ("Normal Map (no depth)", 2D) = "bump" {}
        _AOMap ("Ambient Occlusion", 2D) = "white" {}
        _AOIntensity ("Occlusion Intensity", Range(0, 1.0)) = 0.5
        _SpecMap ("Specular Map", 2D) = "white" {}
        _SpecIntensity ("Specular Intensity", Range(0, 1.0)) = 0.5
        _Color ("Tint", Color) = (1,1,1,1)
    }

This stuff is all pretty basic. The first line declares the name and folder where the shader will show up in Unity’s shader selection list. The next chunk is the properties block. This is where you define which properties are exposed in the editor interface. I don’t know how these map in code; I built all of my materials through the editor and just access them that way, but in the editor they are listed by name. Each property goes on its own line, and it’s pretty simple to set them up. The first part is the variable name, then in parentheses you define how it works in the editor, with the display name as a string, followed by the type. 2D gives you any texture, Range gives you a slider with specified minimum and maximum values, and Color is the color picker.

After the equals sign you specify the default value for the property. This is important for the textures, since if you omit one, you want to make sure everything still works right. Since both the ambient occlusion and specular mapping are done with multipliers, white means no change, which effectively turns that feature off if you don’t supply a texture. The same thing goes for the tint color.

SubShader
{
    Tags
    {
        "Queue"="Transparent"
        "IgnoreProjector"="True"
        "RenderType"="Transparent"
    }

    LOD 300
    Blend SrcAlpha OneMinusSrcAlpha

The next block starts the actual shader logic. The tags let you specify some settings for the shader. I pulled these from the built-in sprite shader, so I’m a little fuzzy on what they do. Based on what I read in the official documentation, the Queue tag controls when objects are drawn relative to everything else, and the RenderType tag categorizes the shader so Unity knows how to treat it. All three of these tags are in the default shaders, so I felt safe using them. There were two more that I took out that may go back in if this runs into performance problems later on, but these work fine for now.

The LOD tag sets the level of detail this shader runs at. Unity recommends 300 for bumped specular, so that’s what I went with. The blend mode sets standard alpha blending, which supports translucent pixels so you can have nice anti-aliased edges.
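For the curious, Blend SrcAlpha OneMinusSrcAlpha is the standard "over" compositing operator: the incoming pixel is weighted by its alpha and the pixel already in the framebuffer by the remainder. A quick per-channel sketch of what the GPU computes (Python for illustration only):

```python
def alpha_blend(src_rgb, src_a, dst_rgb):
    """result = src * srcAlpha + dst * (1 - srcAlpha), per channel."""
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))
```

At alpha 1.0 the source fully replaces the destination, at 0.0 the destination shows through untouched, and in between you get the smooth anti-aliased edges.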

CGPROGRAM
#pragma surface surf BlinnPhong vertex:vert

sampler2D _MainTex;
sampler2D _BumpMap;
sampler2D _AOMap;
sampler2D _SpecMap;
fixed4 _Color;
uniform float _AOIntensity = 0;
uniform float _SpecIntensity = 0;

From here on out, most of this code was pulled from the official shader documentation. CGPROGRAM is what starts the actual shader code; this is a standard Unity thing. The #pragma line is where we declare the names of our surface and vertex shaders, as well as the lighting model we want to use. BlinnPhong is the lighting model for specular highlights, so that’s what I went with. The rest of this is where we declare the variables that actually feed into the shader code. These map by name, so they have to be exactly the same as they are defined in the properties block. I got the data types from looking at the included shaders; they seem to map 1:1 to the types in the properties section.

struct Input
{
    float2 uv_MainTex;
    float2 uv_BumpMap;
    float2 uv_AOMap;
    float2 uv_SpecMap;
    fixed4 color;
};

This section just defines what information is fed to our surface shader from the vertex shader. Since all of these values vary with position within the face of the quad that is holding our sprite, we need them all. The ones that aren’t affected by position, the intensity values for ambient occlusion and specular mapping, don’t need to be sent through. They were declared at the top level so we can access them there, and since they are constant within a given pass, we don’t need any more detail.

void vert (inout appdata_full v, out Input o)
{
    v.normal = float3(0,0,-1);

    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.color = v.color * _Color;
}

This is the vertex shader. I ripped this almost entirely from the built-in diffuse sprite shader. All this really does is override the vertex normal so it points straight at the camera, and then apply the color from the vertex. This makes it so that we can use the tint of the sprite itself or the tint applied to the shader; they will both work fine. The only thing I took out was the pixel-snapping code. My game doesn’t use pixel art, it’s all rasterized vectors, so I didn’t need or want it in there, but if you want to enable pixel snapping, you can modify this based on Unity’s diffuse sprite shader.

void surf (Input IN, inout SurfaceOutput o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * IN.color;
    fixed4 a = tex2D(_AOMap, IN.uv_AOMap);
    fixed4 s = tex2D(_SpecMap, IN.uv_SpecMap);
    o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
    o.Albedo = c.rgb * c.a;
    o.Albedo *= (1.0 - _AOIntensity) + (a.r * _AOIntensity);
    o.Albedo *= (1.0 - _SpecIntensity) + (s.r * _SpecIntensity);
    o.Alpha = c.a;
}
}

This is the real meat of the shader. This is what actually applies the different lighting concepts to the sprite. The first thing we do is grab the color at the current fragment from each of our source textures. Remember that we defaulted these to white, so if a texture isn’t supplied, we just get a fully opaque white pixel. We also multiply the main texture color by the color supplied by our vertex shader. This is standard practice from the built-in shader to make tinting and vertex colors work.

The bottom part is where we set the actual values used for this fragment on the screen. First we grab the normal from our normal map. This uses the built-in Unity function for unpacking normals from a normal map. I have no idea how it works, it’s magical, and it basically handles all of the normal mapping for you. All that’s left is setting the albedo. I have no idea what albedo even means, but it seems to want a color based on the shaders I looked at, so I started with that: the RGB value from the input texture, multiplied by its alpha. I don’t know why, exactly, you have to multiply the color by the alpha, but if you don’t, translucency just breaks completely.

From there it’s a simple matter of multiplying in the ambient occlusion and specular maps and setting the alpha. These maps are always grayscale, so I just grab the red channel, since all of the channels are the same. I will be honest, I have no idea if this is how baked ambient occlusion or specular mapping is supposed to work. I did some googling on how to bake those things in, and the general consensus was that you just multiply them against your input texture. This makes sense, and the results look like I would have expected, so I left it at that. The math in there is just for tuning the intensity of the effects while ensuring that the top end is always 1.0.
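That tuning math, (1 - intensity) + value * intensity, is just a linear interpolation from 1.0 (feature off) down to the raw map value. A tiny sketch to confirm the endpoints (Python for illustration; the function name is mine):

```python
def intensity_factor(v, intensity):
    """Multiplier applied to albedo: lerp(1.0, v, intensity)."""
    return (1.0 - intensity) + v * intensity
```

At intensity 0 the map is ignored entirely, at intensity 1 the map value is applied in full, and a white map (v = 1.0) always yields exactly 1.0, which is why white is a safe default.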

So that’s it, in a nutshell. There may be some issues, and if you find any I’m all ears. I wrote this up quick and dirty just to get it working, and the results actually came out pretty nice.

Usage Instructions

To use this shader, there are a couple of things you need to make sure of. First, your diffuse texture should be set as a sprite in Unity. None of the settings on the sprite really matter, though they will affect the image in the expected ways. Second, your normal map should be set as a normal map. If you’re using the output from Sprite Lamp, it needs to be normal only, not normal + depth or anything like that. When you set it to a normal map in Unity, it has a tendency to default the Create from Grayscale setting to on, but that will mess up the normals, so make sure that’s unchecked. Third, the other textures, ambient occlusion and specular map, should be set as plain textures. That’s it; plug your values in and go.

There are a number of features that this shader does not support right now. The first is shadow casting. I don’t have Unity Pro, so I can’t use shadows. I could implement something for self-contained shadows, but since that would look awful in my game without external shadows as well, I didn’t. As a result, there is really no use for the depth map, so I didn’t include that either. The second is the lack of emissive light. Since this shader supports global directional lighting, I didn’t see a need to include it or wraparound lighting. The last thing it doesn’t have is cel shading. I may end up implementing this later, but for right now I’m going forward without it to see how it turns out. I will update this article and the attached download if I make that change.

Download the shader

Click to view, right-click save as to download.

I mentioned in my post last week that I was on the fence about purchasing Sprite Lamp. I’m happy to announce that fence is long gone. After a cordial interaction with the developer, I went for it. This post is going to be mostly about that, so there will be lots of pretty pictures.

Getting Started

After getting my download link and cracking open the software, I was immediately struck by the fact that I had no images suitable for lighting. The body work I had done in the past was fragmented and didn’t apply to the new Spine animations. I wasn’t sure how this would work with the vector-graphics style I had going for my images so far, and I had no idea how to actually shade anything in that style either. I sketched out the outline of a Corinthian column header, exported it into GIMP, and set to work trying to make gradients wrap around complex shapes. Needless to say, this didn’t work. I found a plugin that did a good bevel effect and set about using that to shade my object. It sort of worked, but it felt very round and fuzzy.

Initial inputs and static output. Apparently I can’t tell left from right at times.

Not so much teddy bear fuzzy, as doesn’t brush its teeth fuzzy.

It worked, sort of, and I had an object reacting to the light. I was ecstatic. I knew I could do better though, so the next day I set out to do just that. I had always wanted something closer to cel shading than normal lighting for this game, and getting smoothly-lit input images had been an incredible pain anyway. I decided to try and bake the cel shading into the lighting images used to generate the normal maps. This carried a couple of benefits. The first was that I no longer had to deal with creating perfectly smooth light maps. Creating cel-shaded input images in Inkscape is pretty easy compared to making something perfectly smooth by hand or with linear gradients. It also means I don’t have to switch tools halfway through the process. The second benefit is that I can get something sort of like cel shading without the somewhat erratic lines of light that normally come with those types of shaders. I wanted to test this quickly, so I made something very simple.

Clearly my mastery of lines and circles shows how great of an artist I truly am.

Painstaking Detail

This proved the concept, but it didn’t exactly look amazing. It looks closer to claymation than it does to cel shading, but I was pleased with it nonetheless. To make sure the style held up in more complicated pieces, it was back to the column header. This took quite a while, as I’m no master artist. Figuring out what the shapes should actually look like with orthogonal lighting was the biggest challenge; there aren’t a lot of references with that kind of lighting. After several hours of tweaking nodes, I felt like I had something that might be passable. I have to say I’m shocked at how nice this turned out, though I guess I wasn’t expecting much.

It kind of looks like a bucket. Catch the falling Cthulhus mini-game, anyone?

Apparently it just needed a new toothbrush.

Lurking in the Shaders

With that, I decided I was content with my asset-creation pipeline and set out to actually put this into the game. One of the key questions I asked the developer before I bought this software was whether it had an implementation for Unity. He said it had one, but it wasn’t fully done, and it wasn’t a top priority. OK, not a big deal, I can work with partial support. It turned out, though, that partial support meant something different to me than it did to him. Hard edges on lights, no alpha transparency support, and a general approach that didn’t look like it was actually built for Unity. I was a bit dismayed, but to his credit, he did warn me. I initially tried to get it working as-is, and when I saw that lights just stopped at their edge instead of falling off, I knew I was going to be writing shaders for the rest of the weekend.

I kind of feel bad for that one off to the right. What did he do to deserve being ostracized?

Prior to this weekend, I had never touched 3D lighting. I’ve dabbled in shaders for special effects, but dealing with lights was completely foreign to me. The included implementation had a whole lot of vector math that I wasn’t comfortable with, so I prepared myself for the long haul. Just getting it to where I had it had taken the better part of a day; how long would it take to make a new one from scratch? Not long at all, it turns out. Unity does, occasionally, make something really easy for me. The built-in ShaderLab system made getting that shader up and running incredibly easy. It doesn’t support all of the features that Sprite Lamp offers, but for my purposes, it’s perfect. In a matter of a couple of hours, I had every feature besides shadows, depth, and pixel snapping up and running. A few minutes later I had the whole thing integrated into my existing project.

This is actually an awful screenshot to show off the shader. Programming: A+. Media: F-.

It felt like a very productive week, and I’m looking forward to integrating Spine and getting my procedural decorator up and running in the coming week. These should be the last two graphical development pieces for a good long while. After this it’s all content and game programming. This is when things get really exciting.