Monday, November 8, 2010

Doug Church and Venturelli on Game Design Tools

Intro

Last week I read Doug Church's article Formal Abstract Design Tools and Marcos Venturelli's article Space of Possibility and Pacing in Casual Game Design - A PopCap Case Study.

In this blog post I will discuss the design tools highlighted by both articles.

FADT

Doug Church uses a game design vocabulary, FADT, to look at existing games and extract tools from them. FADT stands for Formal Abstract Design Tools.

Formal - Precise definition, explainable to someone else.

Abstract - The underlying idea, not a specific game element such as a +2 magic sword. The FADT in that case would be the player's power-up curve.

Design - The process of design, because we're designers.

Tools - Tools are what we use to put the building blocks of game design together.

Tools

Doug Church focuses his article on Mario 64, and partially on the Final Fantasy series.

Intention: The player makes an implementable plan of their own, created in response to the current situation in the game world. Plans can be made on both small and large scales. A small-scale plan in Mario 64, for example, would be to navigate across a series of tricky moving platforms to reach the other side. A long-term plan in Mario 64 could be to collect every star in worlds 1-3.

Perceivable Consequence: The game world reacts to the player and gives them immediate feedback. If, for example, the player in Mario 64 tries to jump across a bottomless pit and misses the other side, they will immediately fall to their doom, and realise that next time they need to try a different kind of jump to get more distance.

Story: The narrative thread of a game. This can be both designer-driven and player-driven. An example of a designer-driven story is the Final Fantasy series. In Final Fantasy 8 (FF8) the player always starts the game at a set location, Balamb Garden, and finishes the game after defeating the main villain, Ultimecia. Every time the player plays the game, the fundamental aspects of the story remain unchanged; they will always occur.

An example of a game that can use player-driven narrative is Worms Reloaded. The Worms games have traditionally lacked complex stories; a reason why the worms wish to wage war and destroy each other is never made clear. Instead the game's story is told by the players, typically across the single player and multiplayer modes. The Worms series has always provided a great deal of customisation to the player, and Worms Reloaded is no different. You can change the standard options such as team names, worm names, worm voices, victory anthem and flag, with new additions such as the ability to give your team hats. The game contains a massive variety of hats, ranging from cowboy hats, space helmets and knight helmets to a crown and even a Boba Fett-esque hat.

In Worms each player takes timed turns, with the goal of navigating the landscape to attack and eventually kill enemy worms. The story of Worms is whatever a player does on their turn. If, for example, player A lobs a grenade across the map and it bounces off a mountain and lands next to one of player B's worms, that becomes part of the story. Player B's injured worm could retaliate by using the jetpack utility, flying over the map to player A's worm and using the baseball bat to knock it into the water for an instant kill. This action also continues the story. In this type of game the story is determined by the players' actions. Worms being what it is, the story can also be affected by semi-random factors such as the fuse time of mines or the contents of crates, but it is still ultimately the player's choice whether to risk moving past the mine, or which crate to collect.



Longer term stories are also possible in the game. To explain this I'll use an example taken directly from a succession of Worms Reloaded matches I recently played. When people play games they develop their own style of play, unique to them. Sometimes in tabletop games such as Warhammer 40K a player may grow attached to certain units in their army and not want them to fall to the enemy in battle, often because of their usefulness. My friend uses similar logic when playing Worms Reloaded. He has one worm, Sir Battenberg (nicknamed the Berg), who he tries to keep alive at all costs. If the Berg dies he loses all his morale and willpower to play strategically, and thus is much easier to defeat. (He also does his best to get swift, brutal revenge on the worm responsible for the Berg's demise.)

It has gotten to the point where I 'honour' the Berg by killing him only with an interesting weapon, or combination thereof, so at least if he dies, he dies in an interesting fashion; this appeases my friend somewhat. I should point out that he doesn't really mind when the Berg is killed, he only pretends to, since it adds an interesting new mechanic to the gameplay and keeps it interesting. This situation is similar to an ongoing 'story' in the game. It's a game within a game, if you will.

That's enough about story, now onto the Venturelli article.

Venturelli PopCap Tools

Venturelli identifies different tools from Church. I believe this is because he chose very different games to examine, mainly Plants vs. Zombies and Bejeweled.

Tools

Pacing: This is the time between every major decision the player makes. Tension, Threat and Movement Impetus are all used to control the pacing of the game.

Tension: This is the possibility that the player might become the weaker side of a conflict, weaker than the opponents they are required to overcome. If the tension keeps increasing, it is possible players will reach a state of 'perceived defeat': thinking they have no chance to complete the objective or level, and thus giving up before they are physically defeated.

Threat: This is the power of the directly opposing force in the conflict taking place within the game. In an RTS such as Command & Conquer Generals: Zero Hour, for instance, this is the base, the army and the competing player controlling them. In an FPS such as UT2004, the threat is other human players, or the AI (artificial intelligence), depending on the game mode being played. In a platform game such as Rayman, the threat would be the different types of enemies and bosses Rayman has to overcome to complete the game. These examples show that the form the threat takes will vary depending on the type or genre of game.

Movement Impetus: This is the player's desire to beat each level or world within the game, and ultimately to continue playing. If movement impetus is low it is possible that the player will lose the will to continue playing, and may quit the game out of boredom before being defeated.

Tempo: This is the intensity or speed of play, measured here as the time between each significant decision the player makes. A lower tempo value therefore represents quicker decision making, since the time between decisions is small; a higher value, by contrast, means slower play, since the time between decisions is longer.

Space of Possibility: This is the space of possible action that players will explore as they play the game. By space of possible action, I mean any and every action the player can carry out throughout the game. In a game with a defined rule-set like football, the space of possibility covers the movements of every player and every possible position of the ball. It also includes the different types of fouls, goals and so on. Basically it is every action that can be carried out in the game.

As the above example shows, the game of football has quite a wide space of possibility. Let's look at another game, Tic Tac Toe. Tic Tac Toe is a very simple game; the player can draw a circle or a cross depending on which side they decide to play, and either way they can only draw their shape in a very limited number of squares. This causes the game to get boring quickly and gives it little replayability.

Restricting the Space of Possibility: Venturelli also talks about the importance of restricting the space of possibility, of artificially limiting a player's options to stop them feeling overwhelmed. Going back to the above example, Tic Tac Toe is extremely simple and can be learnt in minutes by small children, whereas Chess is complicated and takes a lifetime to master. This is worth keeping in mind, particularly if the developer is making so-called 'casual' games, since this type of game is typically quite easy to pick up and play; the rules of the game are simple and obvious to the player.

These two examples show that there should be a balance between difficulty and complexity. This balance can be achieved by restricting the space of possibility: by limiting the number of moves the player can make, you automatically make the game simpler and thus easier to understand and play.

This got me thinking about games I've played recently and how big the space of possibility is in those games.

As a general rule I don't play so-called 'casual games' very often. (I own a Nintendo Wii, but mainly for the first party Nintendo titles, the Marios, the Zeldas, the Metroids, etc. This deep, immersive style of game is what I enjoy playing the most.)

That said, a few years ago I thoroughly enjoyed playing PopCap's Heavy Weapon, a 2D side-scroller. The premise of the game is relatively simple: the player controls a customisable tank and must move from left to right through each level, blasting all the enemies (usually planes, helicopters, blimps, tanks and so on) to reach the exit.


Heavy Weapon didn't strike me as a 'casual game' when I played it. It was challenging and fast, and gave me, as the player, some freedom, allowing me to choose which weapons I wanted to use and upgrade for each level of the game. I found this customisation to be the most appealing aspect of the game, since every level played differently with a different set of armaments.

The space of possibility in Heavy Weapon is small. Often the enemy planes are dropping so many bombs and firing so many missiles that it's usually a case of moving to a specific 'safe' spot on the screen to dodge them. Alternatively the player can shoot down enemy bombs and missiles before they reach the tank. I've found that constantly moving while shooting will ensure the player stays alive, at least until the end of the stage, where the player must fight a massive boss to advance to the next level.

This concludes my discussion of FADT and the different design tools used in PopCap games. Thanks for reading.

Bibliography

Church, Doug. "Formal Abstract Design Tools", 1999.
Venturelli, Marcos. "Space of Possibility and Pacing in Casual Game Design - A PopCap Case Study", 2009.

Games Referenced

Mario 64 (Nintendo 64) Nintendo EAD

Final Fantasy 8 (PlayStation) Square

Unreal Tournament 2004 (PC) Epic

Command and Conquer Generals: Zero Hour (PC) EA Games

Rayman Gold (PC, Playstation) Ubisoft

Worms Reloaded (PC, Steam) Team 17

Heavy Weapon (Xbox Live, Steam) PopCap

Blog Activity: Update

A few things to note about this and forthcoming blog posts:

I'm not a very spontaneous person, and I like to be organised; this is probably the reason I've never written a blog before. I'm not very good at writing a short, 1-2 paragraph summary in a short time period. I prefer to take my time over something, usually writing it up in a Word document with correct formatting. In this case, I first write the blog posts in Microsoft Word and then copy them into Blogger. This method has the downside of me half writing a blog entry one week and forgetting to finish and post it the next. That's the main reason why I'm not quite up to date with this blog. I also have a tendency to over-think things, especially when writing something that's supposed to be fairly brief... like this small blog update. Doh!

So the main point of this entry is to announce that over the next few days I plan to get 100% up to date with my blog. (Still, as the saying goes, better late than never, right? Assuming it's within the set uni deadlines :) )

The Delivery Platform

The game will be developed primarily for PC, but Xbox 360 and PS3 ports would be a great idea for extra revenue.

The Game Genre

I would classify the game as a First Person Shooter with Role Playing Game elements, similar in style to Fallout 3 and Borderlands. I chose FPS because of the level of immersion you get from looking out through the character's eyes, seeing what they see. I also chose to go down the RPG road because it suits the gameplay I was going for: levelling up your character the way you want to. Having the RPG element lets players decide how they want to play the game.

The Target Audience

The target audience for the game is young adults and adults aged 15 and up. The mature themes in the game, such as violence and foul language, make it unsuitable for young children.

Thursday, November 4, 2010

Tech Feature: Terrain geometry

Introduction
The past two weeks I have been working on terrain, and for two months or so before that I had (at irregular intervals) been researching and planning this work. Now, finally, the geometry-generation part of the terrain code is as good as complete.

The first thing I had to decide was what kind of technique to use. There are tons of ways to deal with terrain and a lot of papers/literature on it. I have some ideas on what the super secret project will need in terms of terrain, but I still wanted to keep it as open as possible, so that the tech I made now would not become unusable later on. Because of this I needed to use something that felt customizable and scalable, and that could fit any needs that might arise in the future.

Generating vertices
What I decided on was an updated version of geomipmapping. My main resources were the original paper from 2000 (found here) and the terrain paper for the Frostbite engine that powers Battlefield: Bad Company (see presentation here). Basically, the approach works by having a heightmap of the terrain and then generating all geometry on the GPU. This limits the game to Shader Model 3 cards (for NVIDIA at least; ATI only supports it in Shader Model 4 cards in OpenGL), as the heightmap texture needs to be accessed in the vertex shader. This means fewer cards will be able to play the game, but since we will not release until two years or so from now that should not be much of a problem. Also, it would be possible to add a version that precomputes the geometry if it was really needed.

The good thing about doing geomipmapping on the GPU is that it is very easy to vary the amount of detail used, and it saves a lot of memory (the heightmap takes about 1/10 of what the vertex data does). Before I go into the geomipmapping algorithm, I will first discuss how to generate the actual data. Basically, what you do is render one or several vertex grids that read from the heightmap and then offset the y-coordinate of each vertex. The normal is also generated, by taking four height samples around the current heightmap texel. Here is what it looks like in the G-buffer when normal and depth are generated from a heightmap (which is also included in the image):


Since I spent some time figuring out the normal generation algorithm, here is some explanation of it. The basic algorithm is as follows:

h0 = height(x+1, z); // sample one texel along +x
h1 = height(x-1, z); // sample one texel along -x
h2 = height(x, z+1); // sample one texel along +z
h3 = height(x, z-1); // sample one texel along -z
normal = normalize(h1-h0, 2 * height_texel_ratio, h3-h2);


What happens here is that the slope is calculated along the x-axis and then the z-axis. The slope is defined by:
slope = (h1 - h0) / (x1 - x0)
or, put in words, the difference in height divided by the difference in distance. Since the distance is always 2 units for both the x and z slopes, we can skip this division and simply go with the difference in height. Now for the y-part, which we want to be 1 when both slopes are 0 and then gradually lower as the other slopes get higher. For this algorithm we set it to 2 though, since we want to get rid of the division by 2 (which means multiplying all axes by 2). But a problem remains: the actual height value is not always in the same units as the heightmap texel spacing. To fix this, we need to add a multiplier to the y-axis, which is calculated like this:

height_texel_ratio = max_height / unit_size


I save the heightmap in a normalized form, which means all values are between 0 and 1, and max_height is what each value is multiplied with when calculating the vertex y-value. The unit_size variable is what a texel represents in world space.
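To tie the vertex and normal generation together, here is a small CPU-side sketch of the same idea (my own illustration, not the engine's actual shader; the Heightmap struct and all the names in it are assumptions):

#include <cmath>

// Minimal heightmap representation, for illustration only.
struct Heightmap {
    int size;            // texels per side
    float max_height;    // world-space height that a stored value of 1.0 maps to
    float unit_size;     // world-space spacing between neighbouring texels
    const float* data;   // normalized heights in [0,1], row-major

    float at(int x, int z) const { return data[z * size + x]; }
};

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// World-space y of the vertex sitting on texel (x, z).
float vertex_height(const Heightmap& hm, int x, int z)
{
    return hm.at(x, z) * hm.max_height;
}

// Central-difference normal for texel (x, z). The samples are scaled into
// world units first, so the y term is simply twice the texel spacing; the
// in-shader version instead keeps the samples normalized and folds the
// height/texel ratio into the y term. Edge texels would need clamping,
// which is omitted here.
Vec3 vertex_normal(const Heightmap& hm, int x, int z)
{
    float h0 = vertex_height(hm, x + 1, z);
    float h1 = vertex_height(hm, x - 1, z);
    float h2 = vertex_height(hm, x, z + 1);
    float h3 = vertex_height(hm, x, z - 1);
    return normalize({ h1 - h0, 2.0f * hm.unit_size, h3 - h2 });
}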

This algorithm is not that exact, as it does not take into account diagonal slopes and such. It works well enough though and gives nice results. Here is how it looks when shaded:


Note that there are some bumpy surfaces at the base of the hills. This is because of precision issues in the heightmap I was using (I only used 8 bits in the first tests) and is something I will get back to.


Geomipmapping
The basic algorithm is pretty simple: the further away a part of the terrain is from the camera, the fewer vertices are used to render it. This works by having a single grid mesh, called a patch, that is drawn many times, each time representing a different part of the terrain. When a terrain patch is near the camera, there is a 1:1 vertex-to-texel coverage ratio, meaning that the grid covers a small part of the terrain at the highest possible resolution. Then, as patches get further away, the ratio gets smaller and the grid covers a greater area with the same number of vertices. So for really far away parts of the environment the ratio might be something like 1:128. The idea is that because those parts are so far off, the details are not visible anyway, and each ratio can be called a LOD level.

The way this works internally is that a quadtree represents the different LOD levels. The engine traverses this tree, and if a node is found to be beyond a certain distance from the camera it is picked. The lowest level nodes, with the highest resolution (the 1:1 ratio), are always picked if no parent node meets the distance requirement. In this fashion the world is built up each frame.
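As a rough sketch of that traversal (my own reconstruction of the idea rather than the engine's code; the node layout and all names are assumptions):

#include <cmath>
#include <vector>

struct QuadNode {
    float center_x, center_z;   // node centre in world space
    float lod_distance;         // camera distance beyond which this node may be used
    int lod_level;              // 0 = finest (1:1), higher = coarser
    QuadNode* children[4];      // null for the finest (leaf) nodes
};

// Walk the quadtree and collect the patches to draw this frame. A node is
// used as soon as the camera is beyond its distance threshold; otherwise we
// recurse into its children, and the leaves are always used as the fallback.
void collect_patches(const QuadNode* node, float cam_x, float cam_z,
                     std::vector<const QuadNode*>& out)
{
    float dx = node->center_x - cam_x;
    float dz = node->center_z - cam_z;
    float dist = std::sqrt(dx * dx + dz * dz);   // 2D distance, for simplicity

    bool is_leaf = node->children[0] == nullptr;
    if (is_leaf || dist > node->lod_distance) {
        out.push_back(node);                     // draw this node at its LOD level
        return;
    }
    for (const QuadNode* child : node->children)
        collect_patches(child, cam_x, cam_z, out);
}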

The problem now is to determine the distance from which a certain LOD level is usable. The original paper has some equations for this, based on how much the height of the details changes, but I skipped such calculations and just let the distances be set by the user instead. This is how it looks in action:

White (grey) areas represent a 1:1 ratio, red 1:2 and green 1:4. Now a problem emerges when using grids of different levels next to one another: you get t-junctions where the grids meet (because where the 1:1 patch has two grid quads, the 1:2 patch has only one), resulting in visible seams. To fix this, there need to be special grid pieces at the intersections that create a better transition. The pieces look like this (for a 4x4 grid patch):

While there are 16 border permutations in total, only 9 are needed because of how the patches are generated from the quadtree. The same vertex buffer is used for all of these patch types, and only the index buffer is changed, saving some storage and speeding up rendering a bit (no switching of vertex buffers needed).
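One way to handle that bookkeeping (purely illustrative, since the article does not show the engine's code) is to build a bitmask of which neighbours are one level coarser and use it to look up a pre-built index buffer:

#include <cstdint>

struct IndexBuffer;   // engine-specific, left opaque here

// Bit flags for neighbouring patches that are one LOD level coarser. With a
// maximum difference of one level, these four bits cover every border case;
// only 9 of the 16 combinations can actually occur with the quadtree.
enum NeighbourCoarser : uint8_t {
    kNorth = 1 << 0,
    kSouth = 1 << 1,
    kEast  = 1 << 2,
    kWest  = 1 << 3
};

// One pre-built index buffer per border permutation, all referencing the
// same shared patch vertex buffer. Filled at startup.
static IndexBuffer* g_patch_index_buffers[16] = {};

IndexBuffer* pick_patch_indices(bool north_coarser, bool south_coarser,
                                bool east_coarser, bool west_coarser)
{
    uint8_t mask = 0;
    if (north_coarser) mask |= kNorth;
    if (south_coarser) mask |= kSouth;
    if (east_coarser)  mask |= kEast;
    if (west_coarser)  mask |= kWest;
    return g_patch_index_buffers[mask];
}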

The problem now is that there must be a maximum difference of one level between neighbouring patches. To make sure of this, the distance check I talked about earlier needs to take it into account. The distance for each level is calculated by taking the minimum distance of the previous level (0 for the 1:1 level) and adding the diagonal of the previous level's AABB (where height is the max height).
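Put as a small sketch (my reading of the recurrence; base_patch_size and the other names are assumptions):

#include <cmath>
#include <vector>

// Minimum camera distance at which each LOD level may be used. Level 0 is
// the finest (1:1) level and starts at distance 0; each following level
// starts where the previous one did, plus the diagonal of the previous
// level's AABB (patch extent in x/z, max terrain height in y).
std::vector<float> lod_min_distances(int num_levels, float base_patch_size,
                                     float max_height)
{
    std::vector<float> min_dist(num_levels, 0.0f);
    for (int level = 1; level < num_levels; ++level) {
        float prev_extent = base_patch_size * float(1 << (level - 1));
        float prev_diagonal = std::sqrt(prev_extent * prev_extent +
                                        prev_extent * prev_extent +
                                        max_height * max_height);
        min_dist[level] = min_dist[level - 1] + prev_diagonal;
    }
    return min_dist;
}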


Improving precision
As mentioned before, I used an 8-bit texture for height in the early tests. This gives pretty lousy precision, so I needed to generate one with a higher bit depth. Also, older cards must use 32-bit float textures in the vertex shader, so having this was crucial in several ways. To get hold of such a texture I used the demo version of GeoControl and generated a 32-bit heightmap in a raw uncompressed format. Loading that into the code I already had gave me this pretty picture:

To test how the algorithm worked with larger draw distances, I scaled up the terrain to cover 1x1 km and added some fog:

The sky texture is not very fitting, but I think this shows that the algorithm works quite well. Also note that I did no tweaking of the LOD level distances or patch size, so it changes LOD level as soon as possible and probably renders more polygons than needed because of the patch size.

Next up, I tried to pack the heightmap a bit, since I did not want it to take up too much disk space. Instead of writing some kind of custom algorithm, I went the easy route and packed the height data in the same manner as I do with depth in the renderer's G-buffer. The formula for this is:

r = height*256
g = fraction(r)*256
b = fraction(g)*256


This packs the normalized height value into three 8-bit color channels. This 24-bit data gives pretty much all the accuracy needed, and for further disk compression I also saved it as a PNG (which has non-lossy compression). This makes the heightmap data 50% smaller on disk, and it looks the same in game when unpacked:
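For reference, here is a hedged sketch of what the packing and the matching unpacking could look like (my own illustration of the formula above, not the engine's code; heights are assumed to be in [0,1)):

#include <cmath>
#include <cstdint>

struct RGB8 { uint8_t r, g, b; };

// Pack a normalized height into three 8-bit channels following the formula
// above: each channel stores a successively finer fraction of the value.
RGB8 pack_height(float height)
{
    float r = height * 256.0f;
    float g = (r - std::floor(r)) * 256.0f;   // fraction(r) * 256
    float b = (g - std::floor(g)) * 256.0f;   // fraction(g) * 256
    return { uint8_t(r), uint8_t(g), uint8_t(b) };
}

// Reverse the packing: each channel contributes 8 more bits of precision.
float unpack_height(RGB8 c)
{
    return c.r / 256.0f
         + c.g / (256.0f * 256.0f)
         + c.b / (256.0f * 256.0f * 256.0f);
}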

I also tried packing it as 16 bits, only using the R and B channels, which also looked fine. However, when I tried saving the 24-bit packed data as a JPEG (which uses lossy compression) the result was less than nice:


Final thoughts
There are a few bits left to fix on the geometry. For example, there is some popping when changing LOD levels, and this might be lessened by using a gradual change instead. I first want to see how this looks in game before getting into that, though. Some pre-processing could also be used to mark patches of terrain that never need the highest-detail LOD, and so on. Using hardware tessellation would also be interesting to try out; it should help make surfaces much smoother when close up.

These are things I will try later on though, as right now the focus is to get all the basics working. Next up will be some procedural content generation using Perlin noise and that kind of stuff!

And finally I will leave you with a screen containing terrain, water and SSAO:

Wednesday, November 3, 2010

Nick Rathbone: Codemasters

Position: Games Designer
Worked on: Silent Hill: Shattered Memories

Nick Rathbone, a University of Bolton graduate of 2008, gave us an interesting lecture on getting into the industry and what to expect from our first jobs, as well as continuing in the industry.

Nick started with an introduction of his time in the games industry. He started at Climax Studios, working on Silent Hill: Shattered Memories, before moving to Codemasters UK.

Since Nick was in our position only two years ago, the advice he gave was very helpful with regard to writing our CVs and building our portfolios, and he gave examples of how to make our CVs stand out. He also told us what to expect from interviews: studios may give us tasks to accomplish to see how we perform under pressure. Most important of all is to research the studio and tailor your CV and cover letter to appeal to them.