Tuesday, March 28, 2017

Wargame Design: The False Granularity of the D20

The use of dice as random number generators is a pretty common feature of wargames.  Therefore, designers often spend a great deal of time thinking about, utilizing, and rolling dice.  Lately, d20s and d10s have been all the rage over the humble d6.  The main reason is the wider range of possible results on a larger die compared to a smaller one.  Before proceeding, let me preface this by saying that I am not a huge stats/probability guy, but this is what I have gleaned along the way.

The question is: which method of using dice is best?  In truth, it is a trick question.  The best dice method is the one that does what you want it to do!

I see many games moving towards the d20 as the die of choice; popular games such as Frostgrave and Rogue Stars are good examples.  The designers claim this allows them more room to add granularity through modifiers to the results of the roll.  You have 20 possible results instead of 10, 12, or 6.  That seems intuitive enough.  However, is it true, and is it desirable?


The D20
Let’s take a look at the standard probability of the roll of a d20.  When you roll it you have a 5% chance of scoring any individual number.  So, if you need to roll exactly a 14, the chance is 5%.  If you make it a success test, you add up the 5% for each number at or above the target number to get the probability.  Therefore, if we are looking for a 14+, seven faces (14 through 20) succeed, for a probability of 35%.  In Rogue Stars you can initially activate on a 9+ when you roll 3d20; by the same math, each of those dice has a 60% chance of clearing that target.

So, what happens when you add modifiers to success tests?  Well, it is pretty simple.  Every point of modifier on a d20 shifts the odds by 5%, so if you need a 14+ and have a +2 modifier, you effectively need a 12+, which is a 45% chance.  Essentially, each point of modifier is worth exactly 5%.  So, these little modifiers do give you a level of granularity.
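The flat d20 math above is easy to check with a few lines of Python; the function name here is just for illustration:

```python
# Probability of rolling at or above a target number on a single d20,
# optionally with a flat modifier added to the roll.
def d20_chance(target, modifier=0):
    effective = target - modifier                 # a +2 modifier turns 14+ into 12+
    successes = max(0, min(20, 21 - effective))   # faces that meet or beat the target
    return successes / 20

print(d20_chance(14))      # 14+ unmodified: 0.35
print(d20_chance(14, 2))   # 14+ with a +2 modifier: 0.45
```

Every point of modifier moves the result by exactly one face, i.e. exactly 5%, which is the flat-distribution behaviour discussed above.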


Now, take a game like Frostgrave as an example.  In this game, you use a d20 for an opposed test.  If both players roll simultaneously and have no modifiers, there is no way to predict who will win; it is literally the roll of a die, as every number comes up with the same 5% chance.  Control over the result only comes into play with modifiers.  So, let’s pretend that Player A has a +6 modifier and Player B only has a +2.  The net difference of +4 is worth 20% in flat terms, and it shifts Player A’s chance of winning the opposed roll from 47.5% to roughly 66%.  That is a real edge, but the underlying rolls remain completely flat and swingy.  In essence, in Frostgrave, you get the granularity of results, but that granularity doesn’t lead to much predictable performance difference between models unless the modifier gap is large.
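Those opposed-roll percentages can be brute-forced by enumerating all 400 pairs of d20 results; this sketch assumes a simple "highest modified roll wins" contest, with ties as draws:

```python
from itertools import product

# Enumerate all 400 outcomes of an opposed d20 roll with flat modifiers,
# returning (chance A wins, chance of a tie).
def opposed_d20(mod_a, mod_b):
    wins = ties = 0
    for a, b in product(range(1, 21), repeat=2):
        if a + mod_a > b + mod_b:
            wins += 1
        elif a + mod_a == b + mod_b:
            ties += 1
    return wins / 400, ties / 400

print(opposed_d20(0, 0))   # no modifiers: (0.475, 0.05)
print(opposed_d20(6, 2))   # +6 vs +2: (0.66, 0.04)
```

So the +4 net modifier moves Player A from a coin flip to about a two-in-three favourite, but any single roll is still wide open.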

The Double D10
Now, if you are a statistician or deal with probability much, you have probably heard of something called a Bell Curve.  Most sets of measurements will eventually conform to a Bell Curve if they have a normal distribution.  A Bell Curve simply has tapered ends, with the extreme results being less likely to occur than the middle numbers.  This is important to understand, as a given result becomes easier to “predict” or “control” when the distribution of results falls on a Bell Curve.

Individual d20 rolls do not fall on a Bell Curve; any result is just as likely as any other.  The results of 2d10 rolls, however, do (strictly speaking, 2d10 gives a triangular distribution, but it behaves like a Bell Curve for our purposes).  By falling on a curve, the results of the dice become more predictable and easier for the designer to control.

(Chart source: https://glimmsworkshop.com/2011/08/22/core-mechanics-randomization/)
As you can see, the more extreme outlying results get weeded out, with the most likely results sitting at the center of the curve.  It is no longer a flat 5% per result; instead, changes such as modifiers sit on a sliding scale, with results near the middle having a proportionately bigger impact.  On 2d10, the most likely score, 11, comes up 10% of the time, as opposed to 5% for any single result on a d20.  At the extreme edges, the chance of scoring a 2 or a 20 is just 1% instead of 5%.
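The 2d10 curve is easy to tabulate yourself by counting the 100 equally likely pairs of dice:

```python
from collections import Counter
from itertools import product

# Count how often each 2d10 total occurs across all 100 equally likely pairs.
totals = Counter(a + b for a, b in product(range(1, 11), repeat=2))

for total in range(2, 21):
    print(total, totals[total] / 100)   # 11 peaks at 10%; 2 and 20 sit at 1%
```

The printout is exactly the triangle from the chart linked above: probabilities climb from 1% at a total of 2 up to 10% at 11, then fall back to 1% at 20.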

If you are rolling 2d10 against a target number for success, your chance varies more sharply with the difficulty of the task.  If you need a 14+, you have a 28% chance.  If your target number is 9+, it is 72%.

So, let’s pretend that Frostgrave used a 2d10 mechanism instead of a d20 system.  If Player A and Player B fight with the highest number winning, both players still have the same chance of winning, as each is just as likely as the other to roll higher.  However, the range of results will be less extreme, and drawn combats become slightly more likely (about 6.7% on 2d10 versus a flat 5% on a d20).

Now, let’s pretend that Player A has the same +6 while Player B has a +2.  What do you think will happen?  Well, due to the shape of the distribution the exact percentages are harder to work out by hand, but the player with the higher modifier is now even more likely to win than in the flat d20 case.  Each point of modifier is not a fixed +5% as in a flat distribution; its value depends on where it lands on the Bell Curve, and the same +4 net edge now wins the opposed roll about 72% of the time.  This method actually gives a player slightly more control over a potential dice result if they understand the Bell Curve.

Final Thoughts

The primary reason designers want a d20 is to create granularity.  The granularity of the d20 is false.  Instead, it leads to a more chaotic system.  In a game using a multi-dice system I can be confident that a modifier of +/-1 will have a meaningful impact on the game.  The same cannot be said of the flat distribution of a single die.  Therefore, multi-dice mechanics lead to greater effective granularity than a straight d20.  The granularity of the d20 is a false promise for game designers.
