It turns out that PAX is a much more exhausting event than I anticipated. There is so much going on here that I wasn't expecting, and I'm far more tired than I planned for. As a result my brain hasn't really processed everything that's happened yet, so it's hard to form a cohesive summary of the events. Since I fell asleep last night before writing anything and I don't want to be late to the party today, I think the nightly recap is going to turn into an event recap. For now I need to get charged up and head back down; there's so much more to do still.
I wanted to write a quick update here for all of you that don’t read the twitter feed. In the past two months I’ve been pretty busy with life, and my game development efforts have taken a hit as a result. Such is the life of a solo developer. My wife and I bought our first home in December and just finished getting all of our stuff moved over. We also had a friend staying with us during that time who was having some issues in his personal life and needed a place to stay for a while. He has since moved out, and while we still have a bit of unpacking to do at the new house, my office is all set up. Next week life should return to something close to normal and I can get this crazy game development train back up and running at full speed.
For now though, I am at PAX South. I’ve never been to a convention of this size before, and never to one that wasn’t aimed at professionals. I’m excited to see what all goes on at a place like this, and to check out the booths of my fellow (if a bit more successful) developers. I would like to eventually present at a convention like this one, and while I’m not quite ready yet, I wanted to come take a look at how they work from the other side first. This way I can soak it all in and see what it’s like without the stress of having to run a booth as well. A little vacation after what’s gone on in the past couple of months won’t hurt either.
I’m going to try and stay active during this event, on twitter during the day and maybe a recap each night. I’m already feeling excited and ready to jump back into my work when I get back. For now though, I have to get going; I’m sure the line is already a mile long.
It is a widely held axiom among programmers that premature optimization is poison. If you let it get in your head, you can end up spending months optimizing code that isn't even a problem and waste countless hours better spent actually writing code. Being the level-headed individual I am, I decided to engage in exactly that behavior over the past couple of weeks. I didn't arrive at this point by impulse or recklessness though, and the results are pretty incredible so far. Let me explain how I got here.
I have made my fair share of tile-based engines in the past. I’ve used a variety of different languages and frameworks to do it, and I thought Unity would be no different. In a way I was right, just not in the way I expected. Up to this point I had been using Unity’s sprite renderer component to draw all of my images onto the screen. After learning that I was limited to one such component per object, I immediately took the brute force approach and just started adding more game objects to the scene, each with a renderer for a single tile. With the lights turned off, this worked fine, and Unity batched all of the renders together into a single pass, and everything was fast and efficient. Unfortunately, I’m making heavy use of lighting for this game, and when you turn the lights on with this approach, things start to fall apart.
Initially it wasn't all bad. I only had one renderer per terrain feature, which was just stretched across the size of the feature. This made everything look terrible, and was a hack to get the scene up and running. I started making more materials, one for each unique size of terrain, and telling the shader to tile the image out. This worked great, but I ended up with an unmanageable number of materials, and it didn't seem like a very scalable solution. I opted to switch to a single game object with a child object for each tile, each holding a single sprite renderer. A much more viable approach to the problem, but I noticed a peculiar spike in my draw call count and a dip in my frame rate. It wasn't much, so I ignored it; premature optimization is the devil, after all.
I intend to use about five layers to render the scenes in this game, and up to this point I had simply been using a sprite layer and the terrain layer. A solid black background makes for boring screenshots, and since I am trying to spread the word about this game early and often, I wanted something a bit more interesting to share. As I started to put in background, middle, and parallax layers, the funny little draw call spike turned into a hulking monster. One room, with only a handful of lights and a single character, was spiking to unreasonable levels. The draw call count was well over 300, and the frame rate on my computer was dropping dangerously close to 60. I have a pretty powerful computer; I can play AAA 3d titles with the settings maxed out and never drop below my refresh rate. My 2d game should not be dipping to 60 fps on this machine, and if it does, it probably won't run very well on a more standard desktop. Something had to be done.
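To put rough numbers on why this multiplies so fast (the formula and sample counts below are my own illustration, not profiler output from Unity): with a forward-style lighting setup, each batch of un-batched renderers effectively costs a base pass plus one additional pass per dynamic light touching it.

```python
def estimate_draw_calls(batches: int, lights_per_batch: int) -> int:
    """Rough sketch: one base pass per batch, plus one extra pass per
    dynamic light that touches the batch. Illustrative, not measured."""
    return batches * (1 + lights_per_batch)

# e.g. 60 un-batched sprite renderers under 4 lights already lands at 300
print(estimate_draw_calls(60, 4))  # 300
```

Even modest tile counts blow past 300 calls once lights stop the batching, which matches what I was seeing.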
The Cusp of Trouble
I think it had to be done, anyway. I did no profiling on this, and I didn't test it on other computers to see if it was really that slow. For all I know it could have been a fluke. Regardless, I panicked. I had no idea how to approach this problem; I didn't understand how Unity's sprite renderers did their batching, or what the potential ramifications were of pumping 10k game objects per room into the scene. I started looking into it, and while it seemed to be a common problem, nobody ever really had a solution that would work with my lighting system. The only thing I could think of was to stop using Unity's built-in 2D tools and write a tile renderer myself.
After digging through Unity’s sparse documentation on the matter, and then looking up several examples of people building mesh geometry through code, I figured out how to build the meshes and send them to Unity to be rendered. Essentially what I have now is a series of textured quads that are all built as part of the same mesh, with each quad representing a single tile on the screen. If this sounds like a simple and obvious solution to the problem, it is, but that’s part of why I’m confident in this decision.
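The construction itself is mostly index bookkeeping. Here's a minimal sketch of the idea in Python (the real version is a Unity script feeding these arrays into a Mesh, but the layout math is the same): four vertices and two triangles per tile, all appended into shared arrays so the whole layer renders as one mesh.

```python
def build_tile_mesh(cols: int, rows: int, tile_size: float = 1.0):
    """Build one mesh's worth of quads: 4 vertices and 2 triangles per tile.

    Returns (vertices, uvs, triangles) as flat arrays, laid out the way
    you'd hand them to a mesh renderer.
    """
    vertices, uvs, triangles = [], [], []
    for row in range(rows):
        for col in range(cols):
            base = len(vertices)  # index of this quad's first vertex
            x, y = col * tile_size, row * tile_size
            vertices += [(x, y), (x + tile_size, y),
                         (x, y + tile_size), (x + tile_size, y + tile_size)]
            uvs += [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
            # two triangles per quad, wound consistently
            triangles += [base, base + 2, base + 1,
                          base + 2, base + 3, base + 1]
    return vertices, uvs, triangles
```

Because every quad lives in the same mesh with the same material, the whole layer goes down in a single draw per lighting pass instead of one per tile.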
This comes with a few nice benefits on the side; answers to problems I hadn’t even thought about addressing yet. First and foremost, it lets me create an atlas for my tiles. I wasn’t sure how to do this in Unity without creating a separate material for each sprite in the atlas. This would mean that the extra textures for lighting would have to be split apart, which defeats the purpose of using an atlas in the first place. It also gives me a nice, single object to use as the view for each layer of map data. This cuts down on extra game objects in the scene, extra components in the code, and the extra complexity of grouping and managing tiles within a room to help decide when to stop processing that area because it’s too far away.
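The atlas part falls out naturally: each quad just gets its UVs remapped to the sub-rectangle for its tile. A quick sketch of that mapping (the tile and atlas sizes here are made-up example values):

```python
def atlas_uv_rect(tile_index: int, tiles_per_row: int,
                  tile_px: int, atlas_px: int):
    """Return (u0, v0, u1, v1) for one tile in a square atlas texture."""
    col = tile_index % tiles_per_row
    row = tile_index // tiles_per_row
    step = tile_px / atlas_px            # normalized size of one tile
    u0, v0 = col * step, row * step
    return (u0, v0, u0 + step, v0 + step)

# Tile 5 of a 256px atlas holding 64px tiles, 4 per row:
print(atlas_uv_rect(5, 4, 64, 256))  # (0.25, 0.25, 0.5, 0.5)
```

Since the lighting textures can be packed with the same layout, one set of UVs serves the diffuse and normal atlases alike.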
I really wanted to let Unity handle this for me, but it seems their sprite system and 3d lighting just don’t mix. At this point I’m not really using any of Unity’s 2d tools any more, which is a bit frustrating as this was supposed to save me time. If it makes for a better product though, I guess I’m alright with that. I’m hoping this is the last bit of infrastructure I have left, and I should be able to start focusing on making this into an actual game instead of a tech demo with a story.
Quick announcement today. I’ve been working on updating the back-end of the website to reduce spam comments so I could open them up for more general commenting. This was all so I could launch the new website for Project Dunwich. The project now has an official name. Check out the main website below for an updated description of the game. It’s not much yet, but at least I have something to call this game now.
I've been pretty quiet on the blog for the past month, and I'd like to apologize for that. I have a tendency to get caught up in my game's development and forget about pretty much everything else. This whole player interaction piece has always been hard for me, and I guess I just let it slip for a while. No matter; you're not here to listen to excuses, so let's get on to the good stuff.
No Seriously, We Do
My wife has recently been getting into 3D printing, and this was one of the first things she printed. It hasn't been sanded/finished yet in this picture, but it's pretty damn cool as-is. While my wife did this all of her own accord, it has spurred in me a desire to make some merchandise available for both the company and the game as it moves closer to the finish line. I'm going to try really hard on the marketing angle of this game once I get the first vertical slice ready, and shirts and other physical objects seem like a good way to aid that. I do eventually want to go to conventions to present, and this has got me thinking about aspects of that I hadn't even considered before. Plus, 3D printing is just cool.
One of the things I want to do differently this time around is the way I approach the promotion of the game. In Vivo was made in a very linear fashion, and I didn't have a demo ready until just a few weeks before release. This, combined with the fact that I was so set on releasing it that I didn't really give adequate time to content creation, meant there was very little time for marketing, and by the time I was able to I was so focused on finishing that I dropped the ball on it completely. The fact that it sold any copies at all given my efforts at marketing it still amazes me to this day. With that in mind, this time I'm trying to get a polished demo out as soon as possible. We call this the vertical slice.
Essentially this is a cross section of the whole game: a short amount of gameplay with the most important systems in place and all of the polish of the finished game. I'm still quite a ways off from that goal, but I feel like I've overcome the final hurdle with regard to things I've never done before. I've made procedural level systems, I've done stealth and platforming and AI, and I've done sound effects and music. Now that I have the lighting and animation systems finished, all that's left is making assets, and doing things that I've done before in code. That's not to say it will be easy or fast, but at least I can better predict how long it will take and what obstacles I'm likely to encounter.
Lights and Spirits
Since the last update I've been focused on integrating the Sprite Lamp shader that I wrote with the existing Spine animation system I had been using. This presented some unexpected and maddening challenges. While I'm not 100% satisfied with where it is right now, it's good enough to move forward for now. The crux of the Spine integration problems was the way Unity handles lights for objects on the same layer. It seems that Unity processes all objects on the same layer in one batch, first rendering the ambient and directional lights, and then adding each dynamic light one at a time on top of that initial render pass. Since the lights don't know or care which object was drawn first and occluded by other pieces, the lighting for every object gets blended onto the top of the first render. Things that were covered up before (like the arm behind the body) have their light values blended onto the image anyway. It creates a really bizarre ghostly image.
The good news is that after spending almost two weeks trying to fix this issue, I have something that works. It's not ideal, and creates some odd graphical artifacts that I haven't exactly figured out how to remove, but it works. The strategy here was to explode the Spine model on the z axis by a very small amount so that each piece sits on its own plane. This is irrelevant for rendering position, since the orthographic camera used for 2D ignores the z axis, but it lets me implement z-axis sorting for determining which pixels to write. Thankfully it looks like the guys over at Esoteric Software had planned on adding this as a feature at some point, and changing the runtime to make this work is literally a one-line change. Using this, I can test the z value at each pixel to determine which object is on top and only draw that object. The problem is that I can only pick one object, and it's all or nothing. This means I lose the translucent anti-aliasing effect on the edges of each object, since they can't blend down to the next lower object because it isn't even processed.
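The explode step itself is tiny. A sketch of the idea (the epsilon value, function name, and slot ordering here are my own placeholder choices, not the Spine runtime's API): each slot in the skeleton's draw order gets nudged onto its own depth plane, so the depth test can reject occluded pixels.

```python
def explode_slot_depths(num_slots: int, epsilon: float = 0.001):
    """Give each Spine slot its own z plane, in draw order.

    Later-drawn slots sit closer to the camera (more negative z in a
    typical Unity setup), so the depth test keeps only the topmost pixel
    at each screen position.
    """
    return [-i * epsilon for i in range(num_slots)]

print(explode_slot_depths(3))  # [0.0, -0.001, -0.002]
```

The offsets are small enough that the orthographic camera never notices them, but large enough that the depth buffer can tell the pieces apart.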
It’s functional, and I definitely prefer it over the bizarre apparition it was producing before, but I would love to lose those hard edges and weird unlit pixels on the fringes. I think I can fix some of them, and with how dark the game will be, the hard edges shouldn’t be too noticeable, but settling for less than ideal always stings. This works for now, and it may be what the game ships with barring a few bug fixes and optimizations. Either way, it’s off the plate for the vertical slice and I can finally move on to other things that aren’t lighting.
Sneaking, Walking, Running
The next step in creating something that looks presentable is creating a character that moves naturally. While I was working on the lighting system, I was continually tweaking the walking animation. It’s in a place now that I’m pretty happy with, but there’s a lot more to this game than just walking. I have laid out a list of animations that I need to create to handle all of the gameplay systems that will be in the slice, and though there’s quite a few, creating and tweaking the animations is incredibly easy with Spine. Here are the walking and running animations that I have so far, and over the next week I hope to add jumping, sneaking and maybe some of the ledge animations.
I should be back next week with more updates, and hopefully a real name for this game. I’ve got something picked out, but I have some paperwork to do before I can announce it. For now, though, I have to fix the tear in those pants.