I’ve been pretty quiet on the blog for the past month, and I’d like to apologize for that. I have a tendency to get caught up in my game’s development and forget about pretty much everything else. This whole player interaction piece has always been hard for me, and I guess I just let it slip for a while. No matter; you’re not here to listen to excuses, so let’s get on to the good stuff.

No Seriously, We Do

It’s tilted the wrong way because try as I might I could not get it to balance on the curve.

My wife has recently been getting into 3D printing, and this was one of the first things she printed. It hasn’t been sanded or finished yet in this picture, but it’s pretty damn cool as-is. While my wife did this entirely of her own accord, it has spurred in me a desire to make some merchandise available for both the company and the game as it moves closer to completion. I’m going to try really hard on the marketing angle of this game once I get the first vertical slice ready, and shirts and other physical objects seem like a good way to aid that. I do eventually want to go to conventions to present, and this has got me thinking about aspects of that I hadn’t even considered before. Plus, 3D printing is just cool.

The Slice

One of the things I want to do differently this time around is the way I approach the promotion of the game. In Vivo was made in a very linear fashion, and I didn’t have a demo ready until just a few weeks before release. This, combined with the fact that I was so set on releasing it that I didn’t really give adequate time to content creation, meant there was very little time for marketing, and by the time I was able to, I was so focused on finishing that I dropped the ball on it completely. The fact that it sold any copies at all given my efforts at marketing it still amazes me to this day. That being said, this time I’m trying to get a polished demo out as soon as possible. We call this the vertical slice.

Essentially this is a cross section of the whole game, a short amount of gameplay with the most important systems in place and all of the polish of the finished game. I’m still quite a ways off from that goal, but I feel like I’ve overcome the final hurdle in regards to things I’ve never done before. I’ve made procedural level systems, I’ve done stealth and platforming and AI, and I’ve done sound effects and music. Now that I have the lighting and animation systems finished, all that’s left is making assets, and doing things that I’ve done before in code. That’s not to say it will be easy or fast, but at least I can better predict how long it will take and what obstacles I’m likely to encounter.

Lights and Spirits

Since the last update I’ve been focused on integrating the Sprite Lamp shader that I wrote with the existing Spine animation system I had been using. This presented some unexpected and maddening challenges. While I’m not 100% satisfied with where it is right now, it’s good enough to move forward for now. The crux of the Spine integration problems was the way Unity handles lights for objects on the same layer. It seems that Unity processes all objects on the same layer in one batch, first rendering the ambient and directional lights, and then adding each dynamic light one at a time on top of that initial render pass. Since those light passes don’t know or care which piece was drawn first and then covered by other parts, the lighting for every piece gets blended onto the top of the first render. Things that were covered up before (like the arm behind the body) have their light values blended onto the image anyway, which creates a really bizarre ghostly image.

While this may be a cool effect for some of the more bizarre Lovecraftian horrors, it seems like a bad fit for any protagonist who isn’t Ghost Dad.

The good news is that after spending almost two weeks trying to fix this issue, I have something that works. It’s not ideal, and it creates some odd graphical artifacts that I haven’t exactly figured out how to remove, but it works. The strategy here was to explode the Spine model on the z axis by a very small amount so that each piece sits on its own plane. This is irrelevant for rendering position, since the orthographic camera used for 2D ignores the z axis, but it lets me implement z-axis sorting for determining which pixels to write. Thankfully it looks like the guys over at Esoteric Software had planned on adding this as a feature at some point, and changing the runtime to make this work is literally a one-line change. Using this, I can test the z values at each pixel to determine which object is in front and only draw that object. The problem is that I can only pick one object, and it’s all or nothing. This means I lose the translucent anti-aliasing effect on the edges of each object, since they can’t blend down to the next lower object because it isn’t even processed.
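For anyone curious what the z testing means shader-side, it boils down to something like this in the SubShader (a hedged sketch under the assumption that the runtime is writing a small per-slot z offset, not the exact lines from my shader):

// Write depth for each piece and reject fragments that sit behind
// something already drawn; the blending itself stays the same as before.
ZWrite On
ZTest LEqual
Blend SrcAlpha OneMinusSrcAlpha

The hard edges fall straight out of this: a translucent edge pixel that lands behind an already-drawn piece fails the depth test and is discarded instead of being blended.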

The hard edges are like knives in my eyes. On the plus side I get rim lighting for free. I have no idea where that effect came from, I sure didn’t do it.

It’s functional, and I definitely prefer it over the bizarre apparition it was producing before, but I would love to lose those hard edges and weird unlit pixels on the fringes. I think I can fix some of them, and with how dark the game will be, the hard edges shouldn’t be too noticeable, but settling for less than ideal always stings. This works for now, and it may be what the game ships with barring a few bug fixes and optimizations. Either way, it’s off the plate for the vertical slice and I can finally move on to other things that aren’t lighting.

Sneaking, Walking, Running

The next step in creating something that looks presentable is creating a character that moves naturally. While I was working on the lighting system, I was continually tweaking the walking animation. It’s in a place now that I’m pretty happy with, but there’s a lot more to this game than just walking. I have laid out a list of animations that I need to create to handle all of the gameplay systems that will be in the slice, and though there are quite a few, creating and tweaking the animations is incredibly easy with Spine. Here are the walking and running animations that I have so far, and over the next week I hope to add jumping, sneaking and maybe some of the ledge animations.

I should be back next week with more updates, and hopefully a real name for this game. I’ve got something picked out, but I have some paperwork to do before I can announce it. For now, though, I have to fix the tear in those pants.

I mentioned in my last weekly post that I had gotten a working shader for Sprite Lamp up and running. I’m going to explain what that shader does, and attach it here for you all to download and use if you feel so inclined. I will caveat this by saying that I am, by no means, an expert on shaders, Unity, or lighting. I figured all of this out with google searches on various topics and by looking at both the Unity documentation on writing shaders and the built-in shaders that Unity comes with. There may be mistakes and some features may not work as you would expect, but it fits my needs and so may fit some of yours as well. On with the show (or scroll to the bottom for the download link).

The Shader, Explained

Shader "Sprite Lamp/Default"
{
    Properties
    {
        _MainTex ("Diffuse Texture", 2D) = "white" {}
        _BumpMap ("Normal Map (no depth)", 2D) = "bump" {}
        _AOMap ("Ambient Occlusion", 2D) = "white" {}
        _AOIntensity ("Occlusion Intensity", Range(0, 1.0)) = 0.5
        _SpecMap ("Specular Map", 2D) = "white" {}
        _SpecIntensity ("Specular Intensity", Range(0, 1.0)) = 0.5
        _Color ("Tint", Color) = (1,1,1,1)
    }

    SubShader
    {
        Tags
        {
            "Queue"="Transparent"
            "IgnoreProjector"="True"
            "RenderType"="Transparent"
        }

        LOD 300
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf BlinnPhong vertex:vert

        sampler2D _MainTex;
        sampler2D _BumpMap;
        sampler2D _AOMap;
        sampler2D _SpecMap;
        fixed4 _Color;
        uniform float _AOIntensity = 0;
        uniform float _SpecIntensity = 0;

        struct Input
        {
            float2 uv_MainTex;
            float2 uv_BumpMap;
            float2 uv_AOMap;
            float2 uv_SpecMap;
            fixed4 color;
        };

        void vert (inout appdata_full v, out Input o)
        {
            v.normal = float3(0,0,-1);

            UNITY_INITIALIZE_OUTPUT(Input, o);
            o.color = v.color * _Color;
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * IN.color;
            fixed4 a = tex2D(_AOMap, IN.uv_AOMap);
            fixed4 s = tex2D(_SpecMap, IN.uv_SpecMap);
            o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
            o.Albedo = c.rgb * c.a;
            o.Albedo *= (1.0 - _AOIntensity) + (a.r * _AOIntensity);
            o.Albedo *= (1.0 - _SpecIntensity) + (s.r * _SpecIntensity);
            o.Alpha = c.a;
        }

        ENDCG
    }

    Fallback "Specular"
}

This shader handles normal mapping, ambient occlusion through calculated maps, and specular mapping. I’m going to start at the top and explain what each piece does.

Shader "Sprite Lamp/Default"
{
    Properties
    {
        _MainTex ("Diffuse Texture", 2D) = "white" {}
        _BumpMap ("Normal Map (no depth)", 2D) = "bump" {}
        _AOMap ("Ambient Occlusion", 2D) = "white" {}
        _AOIntensity ("Occlusion Intensity", Range(0, 1.0)) = 0.5
        _SpecMap ("Specular Map", 2D) = "white" {}
        _SpecIntensity ("Specular Intensity", Range(0, 1.0)) = 0.5
        _Color ("Tint", Color) = (1,1,1,1)
    }

This stuff is all pretty basic. The first line lets you declare the name and folder where the shader will show up in Unity’s shader selection list. The next chunk is the properties block. This is where you define which properties are exposed in the editor interface. I built all of my materials through the editor and just access them that way, and in the editor the properties are listed by their display names. Each property goes on its own line, and it’s pretty simple to set them up. The first part is the variable name, then in parentheses you define how it appears in the editor, with the display name as a string followed by the type. 2D gives you any texture, Range gives you a slider with the specified minimum and maximum values, and Color gives you the color picker.

After the equals sign you specify the default value for the property. This is important for the textures, since if you omit one, you want to make sure everything still works right. Since both the ambient occlusion and the specular mapping are done with multipliers, white means no change, which effectively turns that feature off if you don’t supply a texture for that slot. The same thing goes for the tint color.
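If you do want to set these from code instead, they appear to map to the Material API by the variable name. A minimal sketch, untested in my project since I go through the editor (hypothetical script name and values):

using UnityEngine;

// Hypothetical example: setting the shader's exposed properties from a script.
// The string names must match the Properties block exactly.
public class SpriteLampSettings : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.SetFloat("_AOIntensity", 0.75f);
        mat.SetFloat("_SpecIntensity", 0.25f);
        mat.SetColor("_Color", Color.white);
    }
}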

SubShader
{
    Tags
    {
        "Queue"="Transparent"
        "IgnoreProjector"="True"
        "RenderType"="Transparent"
    }

    LOD 300
    Blend SrcAlpha OneMinusSrcAlpha

The next block starts the actual shader logic. The tags let you specify some settings for the shader. I pulled these from the built-in sprite shader, so I’m a little fuzzy on what they do. Based on what I read in the official documentation, the Queue tag controls when objects using this shader are drawn relative to everything else (transparent things are drawn after the opaque geometry), the RenderType tag categorizes the shader for things like shader replacement, and IgnoreProjector keeps Unity’s Projector components from affecting the sprite. All three of these tags are in the default shaders, so I felt safe using them. There were two more that I took out that may go back in if this runs into performance problems later on, but these work fine for now.

The LOD tag sets the level of detail this shader runs at. Unity recommends 300 for bumped specular, so that’s what I went with. The blend mode sets standard alpha blending, which supports translucent pixels so you can have nice anti-aliased edges.

CGPROGRAM
#pragma surface surf BlinnPhong vertex:vert

sampler2D _MainTex;
sampler2D _BumpMap;
sampler2D _AOMap;
sampler2D _SpecMap;
fixed4 _Color;
uniform float _AOIntensity = 0;
uniform float _SpecIntensity = 0;

From here on out, most of this code was pulled from the official shader documentation. CGPROGRAM is what starts the actual shader code; this is a standard Unity thing. The pragma line is where we declare the names of our surface and vertex shader functions, as well as the lighting model we want to use. BlinnPhong is the built-in lighting model with specular highlights, so that’s what I went with. The rest of this is where we declare the variables that actually feed into the shader code. These map by name, so they have to be exactly the same as they are defined in the properties block. I got the data types from looking at the included shaders; they seem to map 1:1 to the types in the properties section.

struct Input
{

float2 uv_MainTex;
float2 uv_BumpMap;
float2 uv_AOMap;
float2 uv_SpecMap;
fixed4 color;

};

This section just defines what information is fed to our surface shader from the vertex shader. Since all of these values vary with position across the face of the quad that holds our sprite, we need them all. The ones that aren’t affected by position, the intensity values for both ambient occlusion and specular mapping, don’t need to be passed through here. They were declared at the top level so we can access them, and since they are constant within a given pass, we don’t need anything more.

void vert (inout appdata_full v, out Input o)
{

v.normal = float3(0,0,-1);

UNITY_INITIALIZE_OUTPUT(Input, o);
o.color = v.color * _Color;

}

This is the vertex shader. I ripped this almost entirely from the built-in diffuse sprite shader. All it really does is override the normal from the geometry so every vertex faces the camera, and then apply the color from the vertex. This makes it so that the tint of the sprite itself and the tint applied to the shader both work fine. The only thing I took out was the pixel snapping code. My game doesn’t use pixel art, it’s all rasterized vectors, so I didn’t need or want it in there, but if you want to enable pixel snapping, you can add it back based on Unity’s diffuse sprite shader.

void surf (Input IN, inout SurfaceOutput o)
{

fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * IN.color;
fixed4 a = tex2D(_AOMap, IN.uv_AOMap);
fixed4 s = tex2D(_SpecMap, IN.uv_SpecMap);
o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
o.Albedo = c.rgb * c.a;
o.Albedo *= (1.0 - _AOIntensity) + (a.r * _AOIntensity);
o.Albedo *= (1.0 - _SpecIntensity) + (s.r * _SpecIntensity);
o.Alpha = c.a;

}

This is the real meat of the shader; it’s what actually applies the different lighting concepts to the sprite. The first thing we do is grab the color of the current fragment from each of our source textures. Remember that we defaulted these to white, so if a texture isn’t supplied we just get a fully opaque white pixel. We also multiply the diffuse color by the color that was passed down from our vertex shader. This is standard practice from the built-in sprite shader and is what makes the tinting work.

The bottom part is where we set the actual values used for this fragment on the screen. First we grab the normal from our normal map. This uses the built-in Unity function for unpacking normals from a normal map. I have no idea how it works, it’s magical, and it basically handles all of the normal mapping for you. All that’s left is setting the albedo. I have no idea what albedo even means, but it seems to want a color based on the shaders I looked at, so I started with that: the rgb value from the input texture, multiplied by its alpha. I don’t know exactly why you have to multiply the color by the alpha, but if you don’t, translucency just breaks completely.

From there it’s a simple matter of multiplying in the ambient occlusion and specular maps and setting the alpha. These maps are always grayscale, so I just grab the red channel since all three channels are the same. I will be honest, I have no idea if this is how baked ambient occlusion or specular mapping is supposed to work. I did some googling on how to bake those things in, and the general consensus was that you just multiply them against your input texture. This makes sense, and the results look like I would have expected, so I left it at that. The math in there is just a linear blend between 1.0 (no effect) and the map value, so the intensity sliders tune how strong each effect is while ensuring that the top end is always 1.0.

So that’s it, in a nutshell. There may be some issues, and if you find any I’m all ears. I wrote this up quick and dirty just to get it working, and the results actually came out pretty nice.

Usage Instructions

To use this shader, there are a couple of things you need to make sure of. First, your diffuse texture should be set as a sprite in unity. None of the settings on the sprite really matter, though they will affect the image in the expected ways. Second, your normal map should be set as a normal map. If you’re using the output from Sprite Lamp, it needs to be normal only, not normal + depth or anything like that. When you set it to a normal map in unity, it has a tendency to default the Create from Grayscale setting on, but that will mess up the normals so make sure that’s unchecked. Third, the other textures, ambient occlusion and specular map, should be set as textures. That’s it, plug your values in and go.
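If it helps, here’s a rough sketch of wiring the whole thing up from code rather than through a material asset (hypothetical component and field names; as far as I can tell the SpriteRenderer supplies _MainTex from its own sprite):

using UnityEngine;

// Create a material that uses the Sprite Lamp shader and hand it to the
// SpriteRenderer. The extra maps are assigned by property name.
public class ApplySpriteLamp : MonoBehaviour
{
    public Texture2D normalMap;        // imported as a Normal Map, Create from Grayscale unchecked
    public Texture2D ambientOcclusion; // imported as a regular texture
    public Texture2D specularMap;      // imported as a regular texture

    void Start()
    {
        Material mat = new Material(Shader.Find("Sprite Lamp/Default"));
        mat.SetTexture("_BumpMap", normalMap);
        mat.SetTexture("_AOMap", ambientOcclusion);
        mat.SetTexture("_SpecMap", specularMap);
        GetComponent<SpriteRenderer>().material = mat;
    }
}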

There are a number of features that this shader does not support right now. The first is shadow casting. I don’t have Unity Pro, so I can’t use shadows. I could implement something for self-contained shadows, but since that would look awful in my game without external shadows as well, I didn’t. As a result, there is really no use for the depth map, so I didn’t include that either. The second is the lack of emissive light. Since this shader supports global directional lighting, I didn’t see a need to include it or wraparound lighting. The last thing it doesn’t have is cel shading. I may end up implementing this later, but for right now I’m going forward without it to see how it turns out. I will update this article and the attached download if I make that change.

Download the shader

Click to view, right-click save as to download.

I mentioned in my post last week that I was on the fence about purchasing Sprite Lamp. I’m happy to announce that fence is long gone. After a cordial interaction with the developer, I went for it. This post is going to be mostly about that, so there will be lots of pretty pictures.

Getting Started

After getting my download link and cracking open the software, I was immediately struck by the fact that I had no images suitable for lighting. The body work I had done in the past was fragmented and didn’t apply to the new Spine animations. I wasn’t sure how this would work with the vector graphics style I had going for my images so far, and I had no idea how to actually shade anything in that style either. I sketched out the outline of a Corinthian column header, exported it into gimp, and set to work trying to make gradients wrap around complex shapes. Needless to say, this didn’t work. I found a plugin that did a good bevel effect and set to using that to shade my object. It sort of worked, but it felt very round and fuzzy.

Initial inputs and static output. Apparently I can’t tell left from right at times.

Not so much teddy bear fuzzy, as doesn’t brush its teeth fuzzy.

It worked, sort of, and I had an object reacting to the light. I was ecstatic. I knew I could do better though, so the next day I set out to do just that. I had always wanted something closer to cel shading than normal lighting for this game, and getting smoothly-lit input images had been an incredible pain anyway. I decided to try to bake the cel shading into the lighting images used to generate the normal maps. This carried a couple of benefits. The first was that I no longer had to deal with creating perfectly smooth light maps. Creating cel-shaded input images in Inkscape is pretty easy compared to making something perfectly smooth by hand or with linear gradients. It also means I don’t have to switch tools halfway through the process. The second benefit is that I can get something sort of like cel shading without the somewhat erratic lines of light that normally come with those types of shaders. I wanted to test this quickly, so I made something very simple.

Clearly my mastery of lines and circles shows how great of an artist I truly am.

Painstaking Detail

This proved the concept, but it didn’t exactly look amazing. It looks closer to claymation than it does to cel shading, but I was pleased with it nonetheless. To make sure that the style held up in more complicated pieces, it was back to the column header. This took quite a while as I’m no master artist. Figuring out what the shapes should actually look like with orthogonal lighting was the biggest challenge; there aren’t a lot of references with that kind of lighting. After several hours tweaking nodes, I felt like I had something that might be passable. I have to say that I’m shocked with how nice this turned out, though I guess I wasn’t expecting much.

It kind of looks like a bucket. Catch the falling Cthulus mini-game, anyone?

Apparently it just needed a new toothbrush.

Lurking in the Shaders

With that, I was content with my asset creation pipeline and decided to actually try to put this into the game. One of the key questions I asked the developer before I bought this software was whether it had an implementation for Unity. He said it had one, but it wasn’t fully done, and it wasn’t a top priority. Ok, not a big deal, I can work with partial support. It turned out, though, that partial support meant something different to me than it did to him. Hard edges on lights, no alpha transparency support, and a general approach that didn’t look like it was actually built for Unity. I was a bit dismayed, but to his credit, he did warn me. I initially tried to get it working as-is, and when I saw that lights just stopped at their edge instead of falling off, I knew I was going to be writing shaders for the rest of the weekend.

I kind of feel bad for that one off to the right. What did he do to deserve being ostracized?

Prior to this weekend, I had never touched 3D lighting. I’ve dabbled in shaders for special effects, but dealing with lights was completely foreign to me. The included implementation had a whole lot of vector math that I wasn’t comfortable with, so I prepared myself for the long haul. Just getting it to where I had it had taken the better part of a day; how long would it take to make a new one from scratch? Not long at all, it turns out. Unity does, occasionally, make something really easy for me. The built-in ShaderLab system made getting that shader up and running incredibly easy. It doesn’t support all of the features that Sprite Lamp offers, but for my purposes, it’s perfect. In a matter of a couple of hours, I had every feature besides shadows, depth, and pixel snapping up and running. A few minutes later I had the whole thing integrated into my existing project.

This is actually an awful screenshot to show off the shader. Programming: A+. Media: F-.

It felt like a very productive week, and I’m looking forward to integrating Spine and getting my procedural decorator up and running in the coming week. These should be the last two graphical development pieces for a good long while. After this it’s all content and game programming. This is when things get really exciting.

Sorry I missed the post last week. I didn’t have work for Labor Day, and that threw me off schedule, and I just forgot. Never you worry though, I’ve got you covered today. Aside from some general housekeeping and refactoring, I’ve really focused on two major aspects of the game in the past two weeks: physics and rooms.

Relaxing the Bodies

When I initially set out to learn how to make a platformer in Unity, I tapped into my experience from working with Box2D on both In Vivo and my Ludum Dare 29 entry. I had it in my head that there was a right way to use a physics engine for platforming, despite all of the advice to not use a physics engine and just handle the collisions yourself. This started out really well. I had a collider for the body, to handle actually running into things, and some trigger colliders for detecting various states. There was a trigger beneath the main collider to determine if the player was standing on something and was therefore able to jump. Implementing ledge grabs introduced two more detectors, one for actually hitting the wall, and one above it so it could tell the difference between a ledge and a wall. As I went on to things like wall jumping, stalling, and various other acrobatic skills, the number of detectors ballooned. On top of that, there was a lot going on in the general way it handled movement that I wasn’t happy with. I needed a different approach.

So many triggers, and I hadn’t even added the tentacle detectors. It is a Lovecraft game…

After reading as many articles as I could find about platformers in Unity, I decided it was time to bite the bullet and actually roll my own physics code. Thankfully Unity makes this nice and easy on me, with things like ray and box casting to do the brunt of the math. Essentially, I let Unity handle detecting collisions, and then I step in and deal with responding to them. This also gives me a lot greater control over the way movement is handled, which means I don’t have to figure out the right mass ratios of my objects, or relative friction between entities and the floor. I don’t need that level of detail from my physics, I just need a guy to walk around. I have done collision detection and response in the past, and even written a physics platformer from scratch just to see if I could. This was a walk in the park by comparison.
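To give a flavor of what “let Unity detect, then respond myself” means, here’s a stripped-down sketch of a grounded check using a raycast (illustrative names and values, not my actual controller):

using UnityEngine;

// Cast a short ray down from the character's feet; if it hits anything on
// the ground layers, we're standing on something and can jump.
public class GroundProbe : MonoBehaviour
{
    public LayerMask groundLayers;
    public Transform feet;              // empty child object placed at the feet
    public float probeDistance = 0.05f;

    public bool IsGrounded()
    {
        RaycastHit2D hit = Physics2D.Raycast(feet.position, Vector2.down, probeDistance, groundLayers);
        return hit.collider != null;
    }
}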

A Place to Play

I talked a little bit about the map generation in my initial announcement post, but for those that don’t remember, it’s going to stitch together a map from a bank of possible room layouts. I have done procedural level generation in the past, and I have a good understanding of how to make a map that can be completed with the ability gates that are inherent to this type of game. I haven’t started on that algorithm quite yet, but I have laid out the design, and I know I’m going to need a lot of rooms and a way to load them. I’m using the Tiled map editor to create my levels. I used it for In Vivo and for one of my Ludum Dare entries, and I’m pretty comfortable with its ins and outs. I threw together a basic room layout using the shape layers and set to getting it into the game.

I know it looks really exciting like that. Unfortunately it will look nothing like that in the finished game…

This is obviously just a bare-bones outline of what the levels will look like, and it’s missing things like item caches, secrets, hiding spots and doors, but it’s enough to test out the concept of actually loading levels. Up until this point I hadn’t attempted to load anything from an external resource that wasn’t a graphic. It wasn’t that difficult, though I did run into some file format issues. I used to build web apps, and I know the pain of XML overhead all too well, so I had taken to the habit of exporting my levels in JSON format. It turns out that neither C# nor Unity has a built-in JSON parser, so instead of doing the obvious thing and using XML, I tried to find a good open source JSON library. After spending far too much time doing that, I snapped out of it, switched everything to XML and got the levels loading. Sure, they were upside down and nothing lined up, but it was a start. After a few hours working out all of the bugs and making everything speak the same language, I had the level loading in game, and I was able to walk around in it. Success!
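For the curious, the loading itself is nothing fancy. A cut-down sketch of reading rectangle objects out of a Tiled XML export looks something like this (hypothetical names, and my real loader does more than this):

using System.Xml;
using UnityEngine;

// Read the rectangle objects from a Tiled .tmx file. Tiled's y axis points
// down while Unity's points up, which is why everything loaded upside down
// at first; the flip below fixes that.
public static class TmxRoomLoader
{
    public static void Load(string path, float mapHeight) // mapHeight in the same units as the object coordinates
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);

        foreach (XmlNode node in doc.SelectNodes("//objectgroup/object"))
        {
            float x = float.Parse(node.Attributes["x"].Value);
            float y = float.Parse(node.Attributes["y"].Value);
            float w = float.Parse(node.Attributes["width"].Value);
            float h = float.Parse(node.Attributes["height"].Value);

            // Flip y so the room isn't upside down in Unity.
            Vector2 center = new Vector2(x + w / 2f, mapHeight - (y + h / 2f));
            // ...spawn a collider or platform at 'center' with size (w, h)...
        }
    }
}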

But What About…?

Prior to my foot injury, I had been working mostly on graphics, and that has come to a standstill. I’m very interested in using the Sprite Lamp tool to enhance my visuals beyond my shoddy attempts at 2D lighting, and I’m still trying to decide if I should go for it. In the meantime, I wanted to get something that people could actually play out sooner rather than later. It has been months since the announcement, and I don’t have anything playable yet. I’m hoping to get the fundamental platforming mechanics up and running and out to you all as soon as possible, so I can get the feel of movement just right. I would like for that release to also look pretty, but if I have to release it with a flat character and red squares for the ground, I will.

Red on purple is such a nice color scheme.

I’m hoping to make a decision on Sprite Lamp soon. It looks really nice on their site, and it can be used to great effect, like in Hive Jump (shameless plug: you should go pre-order the game, it looks very promising and I want the extra guns). As soon as I know if it will work with Spine in Unity I’ll have my decision, but since it changes the workflow I have to hold off on making things pretty for now. I’m pushing for a five-room demo, with all of the graphics, basic gameplay elements, and some sound within the coming month. It seems a bit ambitious, but now that I can actually focus on the project instead of the tools and framework, I’m hoping I can power through and start turning this crazy idea into a game.

This past weekend I participated in the 30th Ludum Dare 48-hour competition and created Fusebox, an energy management simulation game. What follows is a summary of my experiences creating it, and what I learned from doing so. I had a lot stacked against me, and while I missed some milestones that would have taken the game from mediocre to great, I think that I did really well considering the situation. Before we can assess that though, we have to start from the beginning.

Sure-footed as a Mountain Goat

One week before the Ludum Dare competition started, I was at the local rock gym with a friend of mine. They had more than just rock walls there, and in my first (and most likely last) attempt at this whole slack-lining thing, I fell and landed sideways on my foot. It instantly swelled up to the size of a potato, and I haven’t been able to walk properly since. I have made a pretty decent recovery so far, but one thing I can’t do is sit up. If my foot isn’t elevated above my head, it swells back up like a balloon and becomes incredibly painful. Since working on a computer with any amount of comfort necessitates putting your feet beneath the desk, I wasn’t sure how it would turn out.

This is what you get for hanging out with extroverted adrenaline junkies.

Over the course of the weekend, my foot turned out to be both more and less of a problem than I expected. By lowering my chair, throwing a pillow on my desk, and leaning back, I could actually sit somewhat comfortably for more than an hour at my desk. It required me to be a bit twisted, and it probably wasn’t good for my back, but I was actually able to sit at the computer. Had this not been possible, I had a backup plan of writing a game in javascript from my laptop, since it can’t run Unity. In retrospect, I probably should have gone that route, but I really wanted to try this out with Unity, so I suffered through.

The downside to this was that I couldn’t get into a position that was ideal for either my foot or for writing code. I was at least slightly uncomfortable the entire time, and several times throughout the weekend I had to stop and move to the couch to give my foot a proper rest. This had two side effects. The first was that I lost a lot of development time to lying on the couch with my foot up on the back of it. The second was that in order to take advantage of this time, I brought my notepad and did as much design and planning as I could while I was away from the computer. This is probably the main reason the game is so complicated and over-engineered.

This is the first time I’ve taken hand-written notes in years. I had to use one of those weird scratchy tube things to scrape my thoughts in stone.

What is it Even Uniting?

One of the main reasons I do these competitions is to force myself to try something new. I’ve used new engines, tools, or frameworks every time, and I’ve never made a game in a genre that I’ve done before. It’s a great way to learn a lot in a very short amount of time, and when you’ve been programming for a length of time measured in decades, it’s not a stretch to try to figure something out in that time frame. Since my current game project is in Unity, and I’ve been struggling with it since day one, I decided that I would force myself to figure out and use Unity for this competition. In retrospect, I’m glad that I did, but it definitely slowed me down quite a bit.

I also chose to do a very UI-intensive project this time around, for a couple of reasons. I felt that my foot might get in the way, and I wanted something I could work on from the couch if the need arose. I also know that I hate writing interface code, mostly because I’m not very good at designing interfaces and I find the whole thing very tedious. I may have been setting myself up for failure here, but the goal was never to win the competition. In all of the work I’ve done on Project Dunwich, I have not even touched the interface yet. At one point I actually had to look up how to make a button. I was starting from scratch here.

So… Much… UI Code…

The Thought

Despite using tools and techniques I was unfamiliar with, and dealing with Quato growing on my foot, I felt pretty good going into the competition. I had read through the list of themes, and I focused my thoughts on the highest rated candidates from the first four days of voting. This is by no means a fool-proof method of predicting the winner, and I wasn’t writing any code or committing anything to paper, just idly thinking about the design possibilities. I ran through some ideas while I went about my day, and initially I wanted to make something with more action, since my last two attempts sort of fell flat in that regard. Most of the ideas I thought of with any amount of action seemed either too obvious, or not connected enough to the theme, and UI was another focus area, so I settled for a management sim game.

Once the theme was actually announced, I was a bit relieved that the top theme won out, since it meant I already knew at least what genre of game I would make for it. The idea was simple: connect worlds together through some interface, but give those worlds multiple, intricate layers of connection. I like to make my games hit the theme in multiple ways, and that satisfied the requirement. Connecting worlds to an energy source was the obvious take on the theme, but having them be linked to each other as well added a nice extra layer of depth to the interpretation. I’m not sure anyone ever notices these little touches, but it makes me feel better about my interpretations.

I’m really happy with how the planet rendering turned out. It’s a shame I never got a chance to make that interface actually useful though.

The Look

I immediately started with graphics, since I didn’t think there would be very many, and I wanted to get it out of the way. I drew up a mock interface, some icons for the planet stats, and some graphics for the planets themselves, and actually had something passable by the end of the first night. I have used Inkscape quite a bit over the past few months, but I had never done clouds or noise in it, so that was a fun little challenge to overcome. In the end, I spent less than four hours total on graphics, and I’m glad I didn’t have to fight with them at all once I got into the interface code.

With graphics in hand, I set out to create the game objects and renderers that would use them to actually put the images on screen. Unity actually made this really easy, though I have no idea if the setup I used is proper for an entity-component system. Since most of the game objects were just data containers, that didn’t take very long, and well before the half-way mark all I had left was to write the code to process the interactions on the game objects, and then do a whole lot of UI work. After a brief stint on the couch to rest my foot, I started in on the UI.

It is a damn good thing I don’t have to actually draw interfaces. I drew it and I can’t even tell what’s going on in some spots.

As I mentioned earlier, I had no idea how to approach the interface. I created things using GUIText and GUITextures, I switched to converting screen coordinates to world coordinates and driving the UI with game objects, and eventually I discovered the OnGUI method and settled on using that. Throughout the day on Saturday I created as many interfaces as I could to enable interaction with the game objects. I could just as easily have started by coding all of that into the setup and working on the simulation, but that seemed like it would be harder to iterate on. Once I learned how to make the interfaces, it was a pretty smooth ride of create, test for usability, modify, repeat. I didn’t do things in a very efficient way, and there’s a lot of copy/pasted UI code, but I just kind of zoned out and started writing.
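The OnGUI pattern I settled into is about as simple as it sounds; a tiny sketch for reference (hypothetical labels, not the actual Fusebox code):

using UnityEngine;

// Immediate-mode UI: OnGUI runs for every GUI event, and GUI.Button
// returns true on the frame it gets clicked.
public class PlanetPanel : MonoBehaviour
{
    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 200, 25), "Hypothetical Planet");
        if (GUI.Button(new Rect(10, 40, 140, 30), "Advance Day"))
        {
            Debug.Log("advance the simulation here");
        }
    }
}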

The Logic

By the end of Saturday, I had about half of the interface done and none of the game logic. That seemed like a bad situation to be in, so I set out to right it first thing on Sunday. Since I had spent a fair amount of couch time writing out notes on how I wanted it to work, that actually went pretty fast. The logic is pretty complicated; there are a lot of moving parts that determine how the hardware will react and how the planets will respond to their situations. The biggest problem is that I couldn’t get the interfaces done in time to actually explain any of that to the player.

My half-baked attempt at a tutorial. The best part is that I didn’t even get to implement some of the stuff I explained in here. Super useful.

The final interfaces were the ones that told you what was going to happen when you advanced the day, and the one where you manage your circuit board. You know, where you actually play the game. I knew what needed to be done, but by Sunday my foot was in open revolt against me. I spent a lot of that final day on the couch resting, and with nothing to plan, I just sat there mentally writing interface code to draw out how I wanted it to look. The funny thing about mentally writing out code is that it’s a completely useless activity. When I felt good enough to try to implement it, everything fell apart on me. I had planned on whipping up those last two screens and then playtesting the game for bugs and balance. What ended up happening was a mad dash to get the interfaces working that ended about 15 minutes before the deadline.

At Least I Finished… Sort Of

I decided that I was done, and 15 minutes wasn’t enough time to get any of the last things I needed from where they were to where they needed to be to even be passable. I set out to build my project and upload it, and then Unity decided to remind me why I hate it so much. Apparently the method I was using to color the planets with HSV colors was only available when you ran the game through the editor. It wouldn’t even compile. Fortunately, HSV to RGB implementations are easy to code, so I started throwing one together. In the past I’ve worked with hue being an angle from 0 to 360; Unity’s method had it as a float from 0 to 1. No sweat, I thought, I’ll just multiply it by 255. If that doesn’t make sense to you, it’s because it shouldn’t. All of the planets turned green because I mixed up angles with rgb values, but I didn’t have time to figure out why. Up it went, and just like that it was all over.
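For anyone who wants to avoid my green-planet fate, a conversion that keeps hue in the 0 to 1 range Unity uses looks roughly like this (a sketch I’m fairly confident in, but test it earlier than fifteen minutes before a deadline):

using UnityEngine;

// HSV to RGB with hue, saturation, and value all in the 0..1 range, so a
// hue coming from Unity can be used directly (no scaling to 360, and
// definitely no multiplying by 255).
public static class ColorUtil
{
    public static Color HsvToRgb(float h, float s, float v)
    {
        float r = Mathf.Clamp01(Mathf.Abs(h * 6f - 3f) - 1f);
        float g = Mathf.Clamp01(2f - Mathf.Abs(h * 6f - 2f));
        float b = Mathf.Clamp01(2f - Mathf.Abs(h * 6f - 4f));

        // Fade from white toward the pure hue by saturation, then scale by value.
        Vector3 rgb = Vector3.Lerp(Vector3.one, new Vector3(r, g, b), s) * v;
        return new Color(rgb.x, rgb.y, rgb.z);
    }
}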

I’m glad that I made the choices I did. I learned more about Unity and its UI features this past weekend than I have in the past four months. Sure, I could have scrapped the planet interface and focused more on good UI, and I probably would have been better off spending time on tutorials rather than tweaking button positions. But given the constraints I was working under, I’m really happy with how things turned out. I do have some ideas for how to improve next time though.

  • Simple design with good balance – I had so much time away from the computer that my end product was an overly complicated mess, and there’s really no way for the average player to figure out what’s going on. A simpler design with better balance would have been a better approach. Instead of having four types of compatibility and a two-hundred line calculation for fuse load, cut it down to two stats and spend more time on making sure the numbers work out over the course of the game.
  • Interaction before eye candy – My planets look way better than they need to for a game that is mainly driven by button clicks. If I had put that off until the end, I would have been able to see that before wasting time on them, and I might have had time to implement things like a proper tutorial or a win condition.
  • Playtest as early as possible – I put the core logic off for so long that by the time I had it finished, I was already in crunch mode. This left me no time to make sure the numbers worked out, or that the game was even fun. With a game like this there’s really no excuse; I could have had unit tests written to test out the formulas and algorithms through all 100 days by Saturday morning if I had prioritized it. Good balance is going to be my main goal for next time.

That about covers it. I had a good time, and in the end I have another game to throw on the website and say “look what I can do in a weekend.” No matter how bad I do, or how stressful it is, that sense of accomplishment will always be worth it.

I apologize for being a little late with the weekly update post this week. If you’ve been following my twitter feed at all, you will have seen that I sprained my ankle pretty badly in an accident at my local rock gym. The result is that I’ve been unable to sit at a desk properly for about a week now, and that makes game development rather uncomfortable. It’s getting better, but it’s pretty slow going. Since I lost the entire weekend to lying on the couch with my foot in the air, I’ve really only gotten about two hours of development time since the last post, so there’s not a lot to update.

On a brighter note, Ludum Dare #30 is happening this weekend, and I’m going to attempt to participate. Since my foot is still pretty tender, it’s going to be a bit dicey, and my scope will have to be reduced to accommodate the reduction in development time. This will be my first attempt at using Unity and Spine to actually put a product out, so we’ll see how things go. I’m also going to attempt to stream the process, since the new computer can actually handle it. That all starts tomorrow, and I’m as ready as I’m going to be, so I’ll be back then with some updates.

 

I would like to start this post by saying that I’m moving the weekly posts from Sunday to Monday. It’s easier with my current schedule to do it that way, and really with as late as I was posting the Sunday stuff, it might as well have been Monday anyway. On to the good stuff.

Moving and Shaking

One of the biggest challenges for me on this project is the animation system I’ve chosen. I’ve done a lot of pixel art over the years, and I’m fairly familiar with animating frame by frame. I’ve done enough paper sketching that picking up the hand-drawn, cleaned-up vector art wasn’t a huge hurdle. What I’ve never even attempted before was skeletal animation. To put it bluntly, I have no idea what I’m doing. I move some bones around to try to approximate what I think walking should look like, and end up with something that would grant my character immediate access to Monty Python’s Ministry of Silly Walks. Add to that the fact that I’m using free-form deformation, so getting my joints to bend without everything looking awkward is taxing. I’m getting there, but it’s slow going. As of right now, I have a decent, albeit excessively deep, walk animation, an incredibly awkward jumping animation, and something that is supposed to be for grabbing ledges but looks more like a creepy robotic pitching machine. Progress.

I have no idea why I thought this would look good. I can’t even make my arms do that…

That doesn’t seem like a lot of progress, especially since I’ve been working on this for a few months now. That’s an entirely fair assessment, but there’s more going on here. One of the biggest benefits of skeletal animation is the ability to blend between keyframes of different animations. When the characters are standing still, rather than having to create transition animations, or having the next animation start immediately and relying on the low fidelity of pixel art to make it not seem so jumpy, I can simply blend. Since it’s all just bones with keyframes, the Spine runtimes gradually transition from wherever the bones are now to where they should be for the new animation. It looks amazing, when it works.
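In code, that blending comes down to setting a mix duration on the animation state. A rough sketch of what I mean (API names as I understand them in the Spine C# runtime, so double-check against your runtime version):

using UnityEngine;
// SkeletonAnimation comes from the Spine runtime for Unity; the namespace
// it lives in has varied between runtime versions.

public class CharacterAnimation : MonoBehaviour
{
    SkeletonAnimation skeletonAnimation;

    void Start()
    {
        skeletonAnimation = GetComponent<SkeletonAnimation>();
        // Crossfade the bones over 0.2 seconds whenever we switch between these two.
        skeletonAnimation.state.Data.SetMix("idle", "walk", 0.2f);
    }

    public void Walk()
    {
        skeletonAnimation.state.SetAnimation(0, "walk", true);
    }
}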

Undocumented Features

Another one of the biggest challenges I’ve faced on this project is that I’m learning a lot of new skills at once, and many of them aren’t well documented. I’m used to libraries and utilities from big companies, with massive libraries of documentation aimed at software engineers. I’m accustomed to being able to find the exact requirements of any given piece of functionality, with common pitfalls laid out. If you mess something up, it’s because you weren’t paying attention. The set of tools I use now doesn’t have this level of detail. The internet is awash with video tutorials, and for Unity specifically they seem to be the preferred means of conveying information. This bothers me. I’m used to reading technical documentation; I can chew through TechNet, MSDN articles, or javadocs with ease. I know how to find what I’m looking for. When you present me with an hour-long, rambling video that showcases more than it informs, I get a little irritable.

I guess the animation suite wasn’t expecting me to not have animations. I stand completely motionless all the time, so I don’t see why that’s a problem.

Unity is not the only offender here. I ran into an issue with Spine where my animations weren’t transitioning properly. I downloaded the examples and compared code; everything looked good. I checked the internet for common issues, and since there isn’t a Stack Exchange for Spine, I mostly ended up on the same few threads on the Spine support forums. None of the proposed solutions helped me at all; I was completely stuck. In the end, I opened the runtime source code and tracked down the cause of the issue, which was that my idle animation had a duration of zero. I double-checked the documentation to see if I had just missed it, but I couldn’t find it mentioned anywhere. At least it gave me a good reminder of why it’s important to be able to read other people’s code.

Which Leaves Us…

After fighting with animations and runtimes all week, I feel like I haven’t accomplished a whole lot. There’s very little to show, since it was mostly tracking down bugs and making animations that nobody would be proud of. On that front, it seems the Sprite Lamp update with Spine integration is coming out very soon. Once I can try that out and see if it works with my game, I can actually start working on production-quality animations to replace my grey-man placeholder. Hopefully then the screenshots will be a little more interesting.

What, your pants don’t do that? I guess there’s still some more work to do on those.

In the coming week, my goal is to focus on getting the sprite skinning and attachment done, so I can put pants on my character. This will let me build out a town’s worth of characters that don’t all look the same, and then I can start on some of the more involved gameplay mechanics. It’s really hard to sneak past anyone when you aren’t wearing pants, so I guess I have to do that first.

One of my favorite aspects of programming is solving new problems. This is part of why I spent so much time making prototypes without ever finishing anything. Once the initial problem was solved, I didn’t really care about the rest, I just needed a new problem. The reason I bring this up is that in creating this new project I set myself a lot of learning goals. Accomplishing these goals was an important step in getting this project off the ground, and the main reason I haven’t made any blog posts lately. There isn’t enough material in a given week of doing research to make for something interesting. Since I am, for the most part, past that phase now, regular blog updates should be returning.

Project Dunwich

I’m using this as an unofficial logo. Seems appropriate to have one to go with the unofficial name.

Uniting development

I have this notion in my head that at some point I want to target consoles with my games, and the code libraries I’ve used in the past don’t support that. The Unity game engine does. It offers a number of other benefits, chief among them being the fact that I can work in C#, which is what I use at my day job. I find it’s easier to focus on one language at a time, so that’s a huge plus for Unity. Between those two features, and the almost ubiquitous acceptance it has in the indie scene, it felt like a good choice.

I’ve written a lot of code in a lot of different languages in my day; code is where I thrive. Unity differs from most of what I’ve worked with in the past in that it’s largely driven by the editor interface. I don’t really care for drag-and-drop interfaces, or attaching disparate pieces of code to objects in a scene through some GUI. I just want to write code. I’ve figured out how to drive everything through code with Unity, but it was not as straightforward a task as I had initially expected. It doesn’t help that this is my first foray into the entity-component-system structure that Unity uses. It also doesn’t help that I am vehemently opposed to watching video tutorials for programming, and the vast majority of Unity tutorials are in video form. In the end, I have things working in code instead of through the interface, and I’m happy with it, but I spent a lot of time stumbling through that problem.
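As a simplified example of what driving everything through code looks like, the shape of it is something like this (a hedged sketch with made-up names, not actual project code):

using UnityEngine;

// Build a game object entirely from a script instead of wiring it up in a scene.
public class Bootstrap : MonoBehaviour
{
    public Sprite bodySprite; // the lone thing assigned through the inspector

    void Start()
    {
        GameObject player = new GameObject("Player");
        SpriteRenderer sr = player.AddComponent<SpriteRenderer>();
        sr.sprite = bodySprite;

        player.AddComponent<BoxCollider2D>();
        player.AddComponent<Rigidbody2D>();
        player.transform.position = new Vector3(0f, 1f, 0f);
    }
}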

Pictures of code are boring, but this is what I can produce in a short amount of time through code in unity. Mostly naked men.

After getting Unity set up to work more like a software library and less like a tool, I set to work implementing the most basic parts of the game system. Setting up a platformer was easy. Most of that stuff is handled automatically; I just have to tweak the numbers to make the physics engine run a platformer that feels right. After getting that up and running, with just basic movement, collision, and jumping, I started in on the game’s upgrade mechanics. Implementing ledge-hanging was so simple it surprised me. I was expecting it to be much more work, but Unity makes a lot of that stuff really easy. With a very rough gameplay prototype in hand, I feel confident in being able to implement the rest of the gameplay without too much trouble, at least from a programming standpoint.
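To illustrate the “tweak the numbers” part, basic movement really is just pushing a Rigidbody2D around until it feels right; a minimal sketch with illustrative values (no grounded check or ledge logic here):

using UnityEngine;

// Bare-bones platformer movement: set horizontal velocity from input and
// overwrite vertical velocity to jump, then tune the numbers for feel.
public class SimpleMover : MonoBehaviour
{
    public float moveSpeed = 5f;
    public float jumpSpeed = 8f;
    Rigidbody2D body;

    void Start()
    {
        body = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        float x = Input.GetAxis("Horizontal") * moveSpeed;
        body.velocity = new Vector2(x, body.velocity.y);

        if (Input.GetButtonDown("Jump"))
            body.velocity = new Vector2(body.velocity.x, jumpSpeed);
    }
}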

Lighting the way forward

Since this is a horror game, mood and atmosphere are incredibly important. I knew I wanted to move away from pixel art with this game and toward something a bit more realistic looking. I opted for vector art initially, rendered out to standard images, since basically no 3D graphics library supports vector rendering directly. The art style was something akin to a comic book, with hand-drawn shadows and lighting across the vector images. I had only dabbled in vector art in the past, so my initial attempts were pretty terrible, but I persisted. In the end, it looked pretty good, but it lacked reactivity. For a game with a lot of emphasis on mood, having the game world not react to the different lighting effects seemed less than ideal.

Some cultist hoods in the original style. Way too clean to be horrifying.

I haven’t worked on a 3d game since I was in high school. The concepts of lighting in a 3d environment are completely foreign to me. In light of that, I thought it would be a good idea to try and apply 3d lighting concepts to my 2d game. By applying normal maps to my 2d images, I get that reactivity that I wanted. My shadows aren’t hand-drawn onto every image, the lighting system takes care of that. This system worked great; I was able to put lights under faces and create that foreboding camp-fire storyteller feel. I was able to get a group of robed cultists to have long shadows cast over their faces to hide them without resorting to blacking out the front of the hood. It looked awesome, at least when everything was sitting still.

A world in motion

Once animation came into the mix, everything kind of fell apart. I was using skeletal animation with my vector images to make my character walk. Normally skeletal animation is very easy with vector graphics, but when you add in lighting, everything gets a little weird. The standard method of overlapping the joints to allow for rotation without breaking the shape just doesn’t work. Since each part is rotated independently, they get different values from the lighting system, so it becomes really obvious that the two parts are just sitting on top of each other. It looks bad when joints are rotated while standing still. It is incredibly distracting when parts start to move. I tweaked this for a few days and was able to get the base male body to animate with only minor lighting issues, and was, for the most part, pleased with the results. Then came the pants.

This looks pretty good for a horror game, but then you have…

This. Which, for as bad as this looks now, it looks significantly worse in motion.

One of the biggest issues with using the overlapping-shapes method of animation is that you can’t have patterns on the parts that rotate. Without thinking, I drew up some pants to put on my naked man, and I drew a seam down the leg. As soon as I started cutting it apart I noticed my error, but I decided to run with it to see what it would look like. It was awful. The line down the side just broke at every joint, and in the worst spots it created new seams perpendicular to the main seam as rotated parts started overlapping. This was a deal breaker. I didn’t want generic, single-tone pants, and that restriction would have applied to everything. I needed a better way.

This is where Spine comes in. Spine is a 2d skeletal animation tool that lets me do some really advanced animation techniques. One in particular caught my eye, called free-form deformation. This allowed me to set polygons under the images, and set each vertex to be weighted to different bones. This let me cut the pants into a few pieces and rotate the vertices between the different bones. With this, my pant seams simply crumpled up in the polygons at the hip of the pant, just like they would in real life. I wrote the check for the software after about two days of playing with the demo, and the results are already looking pretty amazing.

This was an early attempt at using Spine, so ignore the fact that it looks awful. At least the seam isn’t broken.
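
Under the hood, weighting mesh vertices to bones is standard linear blend skinning: each deformed vertex is a weighted average of where every influencing bone would carry it. Here is a tiny sketch of that math, with bones simplified to a 2D rotation plus offset and all values invented; this isn’t Spine’s runtime, just the idea.

```python
import math

def bone_transform(bone, point):
    """Apply a bone's rotation and translation to a rest-pose point."""
    angle, tx, ty = bone
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    return (x * c - y * s + tx, x * s + y * c + ty)

def skin_vertex(rest_pos, influences):
    """Linear blend skinning: average the positions each bone would give this
    vertex, weighted by how strongly the vertex is bound to that bone (the
    weights sum to 1). A seam vertex split between pelvis and thigh bends
    smoothly instead of shearing apart at the joint."""
    x = y = 0.0
    for bone, weight in influences:
        bx, by = bone_transform(bone, rest_pos)
        x += weight * bx
        y += weight * by
    return (x, y)

pelvis = (0.0, 0.0, 0.0)                 # (rotation, offset x, offset y): at rest
thigh  = (math.radians(30), 0.0, 0.0)    # leg swung forward 30 degrees

# A vertex right on the pant seam at the hip, bound half-and-half to each bone.
print(skin_vertex((0.2, -0.1), [(pelvis, 0.5), (thigh, 0.5)]))
```

The important part is that the seam now belongs to one continuous mesh, so the pattern bends with the polygons instead of breaking where two separate images overlap.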

The last bit is to get these Spine animations to work with the lighting system I had used earlier. My initial attempts had some quirks that needed to be worked out, but I’ve decided to put that part of the project on hold for a little while, because the Sprite Lamp tool seems to solve this issue for me, as well as making normal-mapped sprites easier to create and better looking than my own awful attempts. They don’t have a release that includes the Unity shader for Spine yet, so I’m going to keep an eye out for that. In the meantime I still have good animations with flat lighting, so that will have to do for now.

A goal in sight

Now that I have at least conceptually figured out the most important aspects of the game, the ones I initially had no idea how to handle, I can start turning this into a game. It’s still a long way off, and my initial estimate is probably going to be pretty wrong, since I didn’t anticipate spending this much time just learning techniques, but at least now I can start actually making the game. This also means I should have a lot more to write about every week, so I guess this is where the crazy roller coaster that is game development really starts. Buckle up, it’s probably going to be a bumpy one.

I have been hard at work these past few months learning all sorts of new ways to make games. The upside is that it’s going to make a big difference in the quality of my next project; the downside is that there hasn’t been any real, shareable progress on the new game yet. That should all change very soon, as I now have all of the skills I need to start putting together playable, animated builds, so thank you for your patience while I’ve silently toiled away, honing my craft.

As a sort of apology for not keeping the website up to date for a while, In Vivo is 50% off through Desura all weekend long. I’m going to start the weekly blog posts back up this Sunday, so remember to check back, or sign up for the newsletter and I’ll send the updates right to your inbox.

“I can’t say for sure what it is, but something isn’t right here. I can’t turn back now, not knowing what it is that puts me off so, but I fear that its discovery will undoubtedly end in tragedy.”

Project Dunwich

What is Project Dunwich?

Project Dunwich is the new game in development by Electric Horse Software. Inspired by the works of H. P. Lovecraft, it is a horror game in the metroidvania style, with stealth filling the role of conflict resolution instead of the traditional forms of combat. You will take on the role of an investigative reporter, hired to look into the disappearance of a small child last seen near a remote town just outside of city limits. While its inhabitants swear they’ve never seen the child, there is much more going on in the desolate town than meets the eye. Explore a town filled with hostile inhabitants, sneak past mobs of unsettling villagers, and try to discover what happened to the lost child without succumbing to the same fate yourself. Project Dunwich will not be the final name of the game; it’s simply a placeholder to reference the project until a suitable name is selected.

Run!

How Will it Play?

The core idea of the game comes from the chase scene in The Shadow Over Innsmouth. I wanted to recreate the feeling of running for your life from a horde of terrifying inhuman creatures bent on apprehending you for some unknown yet undoubtedly sinister purpose. Running for your life makes for a great story, but on its own it hardly makes for a compelling game. I want to create the feeling of terror and helplessness that story conveys, but doing so in a game that is also fun to play requires some adjustments; a single chase scene doesn’t provide enough engagement to fill an entire game. Instead, you will be running headlong into danger and then attempting to escape again, and the best way to do that while retaining the feeling of being lost and alone is with lots of exploration and backtracking. This is where the metroidvania style comes in.

With most metroidvania games, your progress is hindered by countless minor enemies. You can happily mow these creatures down, providing a bit of challenge to add engagement to the act of exploring, without turning the focus toward combat. Since this is a horror game, feeling powerful and slaughtering countless easy-to-kill enemies doesn’t fit. This is where the stealth aspect comes in. Instead of blasting your way through screen after screen of bats that go down in one hit, you will be sneaking and dodging your way through a town filled with creatures that don’t want you around. To keep this interesting you will have a slew of abilities that increase your stealth capability, but defeating your antagonists is not on the table.

Running away seems to fit pretty nicely as the main mechanic in a horror game, and while you will get better at it as you progress through the story, the threat around you will never diminish. As you backtrack, things will be easier, but only in that you can move through the map a bit better, or you may have some new tools to distract or mislead your enemies. The penalty for being caught will never diminish. Death would be too harsh, and would really break the feeling of the game, so with another nod to The Shadow Over Innsmouth you will instead be dumped on the outskirts of the town (or somewhere else equally inconvenient depending on where you were caught). This may be disorienting at times, as you won’t always know where you are when you wake, and it may not always be somewhere you’ve been, but that only adds to the feel.

Earth

Where Will it Take Place?

One of the biggest pitfalls of the horror genre, and of exploration games, is that there is very little replay value. Once you know what creepy things are around what corners, they lose much of their impact. Similarly, once you know all of the hidden nooks and crannies of a map, exploring it loses its thrill and becomes a chore. To counteract this, I will make sparing use of procedural content generation. The map rooms will all be crafted by hand, to ensure they are interesting and present the right kinds of challenges. Each room will also have a myriad of hidden passages, storage areas, and other secrets to discover, but they won’t all be active during any given playthrough. From the bank of rooms, a map will be put together, and just enough secrets will be enabled to hide all of the items you can find in that run. This means that subsequent plays will not only put you through rooms you might not have seen, arranged in ways you most certainly haven’t, but the same rooms will also have different hidden compartments to keep you on your toes.
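
For the curious, here’s a toy sketch of how I picture that assembly step working: rooms come from a hand-authored bank, each with more hiding spots than any single run needs, and only enough of them get switched on to hide that run’s items. The room names, secrets, items, and counts below are all invented for illustration; none of this is final game code.

```python
import random

# Hypothetical hand-authored room bank: every room ships with more potential
# hiding spots than any single playthrough will ever enable.
ROOM_BANK = {
    "chapel":    ["loose floorboard", "hollow pew", "crawlspace behind the altar"],
    "boathouse": ["false-bottomed crate", "gap under the dock"],
    "town hall": ["wall safe", "attic hatch", "bricked-over cellar"],
    "cannery":   ["drained vat", "ventilation duct"],
}

def build_run(items, rooms_per_run=3, seed=None):
    """Pick a set of rooms for this run, then enable just enough of their
    secrets to hide every item; the rest stay sealed until another playthrough."""
    rng = random.Random(seed)
    rooms = rng.sample(sorted(ROOM_BANK), rooms_per_run)
    candidates = [(room, secret) for room in rooms for secret in ROOM_BANK[room]]
    enabled = rng.sample(candidates, min(len(items), len(candidates)))
    return rooms, dict(zip(enabled, items))

rooms, placements = build_run(["lockpick", "old journal", "strange key"], seed=29)
print("rooms this run:", rooms)
for (room, secret), item in placements.items():
    print(f"  {item} hidden in the {secret} ({room})")
```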

In addition to randomizing the map, elements of the story will be different each time through. The main theme is always the same: the town is run by a secretive cult that seems to have taken the child, and you have to investigate what they are doing. What eldritch horror the cult worships, what creatures they interact with, how they are structured, and what rituals and rites they perform will all be pulled from a pool. There won’t be limitless scenarios, but each one should provide a unique and engaging experience. Each creature and horror will confer its own unique gift to its followers, which means each run will provide you with at least a few new tricks to master as you discover what twisted ways this cult defiles the natural order. Some of the creatures and horrors will be pulled directly from the stories of H. P. Lovecraft, while the rest will be new creations based on those works. The town and the cult are largely based on Innsmouth and the Esoteric Order of Dagon, but due to the variable nature of the game they will not be direct recreations.
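
The same idea applies to the story roll, and this sketch is equally hypothetical: each horror in the pool brings its own rites and grants its followers a different ability, so the scenario you draw changes both what the cult does and what you have to learn to sneak around.

```python
import random

# Invented examples only -- not the actual pool of horrors, rites, or gifts.
HORROR_POOL = [
    {"horror": "a patriarch of the deep", "rite": "tidal processions",
     "followers_gift": "can track you through water"},
    {"horror": "a nameless mist",         "rite": "silent night vigils",
     "followers_gift": "hear the faintest footsteps"},
    {"horror": "a burrowing thing",       "rite": "buried offerings",
     "followers_gift": "sense movement through the ground"},
]

def roll_scenario(seed=None):
    """Draw one scenario from the pool; the followers' gift becomes the new
    wrinkle the player has to work around on this run."""
    return random.Random(seed).choice(HORROR_POOL)

print(roll_scenario(seed=3))
```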

As I have a background in linguistics and have always been interested in languages, I will be creating a written language to go along with this world. Unlike the cipher I used in my last game, this one will be a fair bit more complex. Drawing from frequent references to hieroglyphs in the source literature, I have decided to create my own set of logograms to add to the atmosphere and frankly because it sounds like a fun thing to do. These will be used for decoration to add depth to the locations in the game, and will also act as keys when engraved onto stones like the images you see here. They will be another one of the many things that will have you trekking back and forth across the map to uncover secrets in new areas you couldn’t access before.

Time

When Can I Play It?

The idea for this game came from the concept I used in the 29th Ludum Dare competition. While this won’t be a direct follow-on, and is indeed being written in a different language, I intend to use that project as a baseline during the early stages. I don’t know exactly how long this will take, but I intend to make the development process as open and transparent as possible. Every Sunday night I will let you know what has happened with development in the past week, every day on Twitter you can see what progress I’m making, and I’m always here if you have any questions about what’s going on. As soon as I have a stable playable build, I will upload it here and update it each week.

Bear in mind that this is a side project for me. I have a day job that is unrelated, and during the week I won’t always get a ton of time to work on this. The last game I released took roughly six months from conception to release, and I moved across an ocean and lost access to my code in the middle of that, so really I got about four good months of work on it. This project is a little bit bigger, though the focus is on content creation instead of mechanics and puzzle design. I haven’t made a game like this before, so I can’t say what challenges it will present or how they will affect the timeline, but based on my previous work I’m going with a ballpark estimate of six months. As I get closer that will undoubtedly change, but for now that’s what I’m going with.

If you’re still reading at this point, then clearly your interest has been piqued. Stick around and join me on the crazy journey that is game development. It won’t always be pretty, and it won’t always be fun, but it will always be interesting, and at the end a game will come out that will hopefully be entertaining and will definitely be like nothing you’ve played before.