vey2000 Novice
Joined: 21 May 2004 Posts: 32
Posted: Fri Mar 16, 2007 11:16 am
%dicedev
I was wondering how the %dicedev function calculates the standard deviation. I've been doing a bit of reading on Wikipedia and haven't really found a method for doing this.
Tech GURU
Joined: 18 Oct 2000 Posts: 2733 Location: Atlanta, USA
Posted: Sat Mar 17, 2007 5:42 am
The %dicedev function does calculate the standard deviation, but only for the specified dice roll. Here's the actual help text:
Syntax: %dicedev(d)
Returns the standard deviation of a dice roll from the dice d (floating point value). Dice have the format xdy, where x is the number of times to roll and y is the number of sides on each die. An optional +n or -n may be appended to the dice.
Example:
#ECHO %dicedev(2d6+2)
Displays 2.04. This means that 66% of the time, the dice roll will be within the average (9) plus or minus 2.04. Or, roughly 66% of the time the result will be between 7 and 11 inclusive.
_________________ Asati di tempari!
vey2000 Novice
Joined: 21 May 2004 Posts: 32
Posted: Sun Mar 18, 2007 12:02 am
Yes, I'm well aware of what the standard deviation is and how to use it. My question is: how does the function obtain its result? As it stands, the only approach I can think of is calculating a large number of dice rolls and then computing the standard deviation of that sample as an estimate. But that's not quite the same thing as the exact standard deviation of the dice.
Given that the function returns the same number to 14 decimal places, I doubt it's doing this, especially since providing that much precision would require sampling an unimaginably large number of rolls, and would take far more time and processing than zMUD seems to use when calling that function.
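For reference, the sampling approach described above can be sketched in Python (as a hypothetical illustration, not zMUD script; the function name is mine):

```python
import random
from math import sqrt

def sample_dicedev(x, y, trials=100_000):
    # Estimate the standard deviation of an xdy roll by brute force:
    # roll the dice many times and take the sample (population) deviation.
    rolls = [sum(random.randint(1, y) for _ in range(x))
             for _ in range(trials)]
    mean = sum(rolls) / trials
    return sqrt(sum((r - mean) ** 2 for r in rolls) / trials)

# For 2d6 this hovers around 2.42, but only to a couple of decimal
# places -- nowhere near the 14-digit precision %dicedev returns.
print(sample_dicedev(2, 6))
```

Since the estimate's precision grows only with the square root of the number of trials, matching 14 stable decimals this way is infeasible, which supports the conclusion that %dicedev uses a closed-form formula instead.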
The other matter is that I'm concerned that it may not be correct.
Never mind, I've finally figured out how %dicedev calculates the standard deviation, and it is indeed wrong. It currently uses the formula for the variance of a continuous uniform distribution, (b-a)^2/12 with a=1 and b=the number of sides on the die, whereas it should use the variance of a discrete uniform distribution, (n^2-1)/12 where n=the number of sides. From there it simply multiplies the variance by the number of dice and takes the square root to obtain the standard deviation.
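A quick Python sketch (function names are mine, not zMUD's) showing the two formulas side by side, and that the continuous-uniform version reproduces the 2.04 that %dicedev(2d6+2) reports (a constant offset like +2 shifts the average but not the deviation):

```python
from math import sqrt

def dicedev_continuous(x, y):
    # What %dicedev appears to compute: variance of a continuous
    # uniform distribution on [1, y], i.e. (y - 1)**2 / 12,
    # summed over x independent dice.
    return sqrt(x * (y - 1) ** 2 / 12)

def dicedev_discrete(x, y):
    # The correct value: variance of a discrete uniform die with
    # y sides, i.e. (y**2 - 1) / 12, summed over x independent dice.
    return sqrt(x * (y ** 2 - 1) / 12)

print(round(dicedev_continuous(2, 6), 2))  # 2.04, matching %dicedev(2d6+2)
print(round(dicedev_discrete(2, 6), 2))    # 2.42, the correct value
```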