#1
Statistical significance test for missing sets?
For my sanity, please help me quantify how bad I'm running.
Variables:
n : number of samples of pocket pairs seeing a flop
s : number of times a set or better is flopped

Constants:
p : probability of flopping a set or better (0.118)

Now, I understand the expected value of s is np, and using the binomial distribution I can compute the probability of hitting s or fewer sets. Unfortunately this doesn't add much information; it tells me I've been unlucky, which is hardly something I didn't know. My question is: which statistical significance test should I use, and how? I know online poker isn't rigged, but getting a decent statistical significance would confirm that I've run like it was.
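The exact binomial calculation described above can be sketched in a few lines of Python (standard library only; the sample counts below are hypothetical, purely for illustration):

```python
# Exact binomial tail probability P(S <= s), as described in the post.
from math import comb

P_SET = 0.118  # probability of flopping a set or better with a pocket pair

def prob_s_or_fewer(n, s, p=P_SET):
    """Exact P(S <= s) for S ~ Binomial(n, p): s or fewer sets in n flops seen."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(s + 1))

# Hypothetical sample: 200 pocket pairs see a flop, only 15 sets flopped.
n, s = 200, 15
print("expected sets:", n * P_SET)          # np = 200 * 0.118 = 23.6
print("P(S <= 15):", prob_s_or_fewer(n, s))
```

No approximation (normal, Poisson, etc.) is needed at these sample sizes; `math.comb` handles the exact sum directly.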
#2
Re: Statistical significance test for missing sets?
The probability of getting s or fewer sets is exactly what you want (and there isn't much more to infer from it than that you've been unlucky in terms of flopping sets). Any statistical test with a fancy name would just be an approximate method for calculating Pr(s <= s_observed), and if you can calculate that exactly from the binomial distribution, an approximation is not needed.
#3
Re: Statistical significance test for missing sets?
[ QUOTE ]
The probability of getting s or fewer sets is exactly what you want (and there isn't much more to infer from it than that you've been unlucky in terms of flopping sets). Any statistical test with a fancy name would just be an approximate method for calculating Pr(s <= s_observed), and if you can calculate that exactly from the binomial distribution, an approximation is not needed.
[/ QUOTE ]

I'm not sure I understand. If I take one coin flip and get tails, then use the binomial distribution to calculate the probability of hitting 0 heads in 1 trial, I'll get - surprise, surprise - 0.5. Now I'm having a hard time interpreting 0.5 as the probability of my coin being random. Any reasonable statistical significance indicator should give me a number very close to 1, shouldn't it?
#4
Re: Statistical significance test for missing sets?
I played about 1,000 hands this past weekend and never folded a pocket pair pre-flop. I figure I had 63 pocket pairs, and expected to hit a set with them about 8 times. I hit 0. What are the odds of that?
#5
Re: Statistical significance test for missing sets?
If the chance of hitting a set is .118, then the chance of missing 63 times in a row is (1-.118)^63.
This number is 0.000366877. Sorry dude.
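For anyone who wants to check that arithmetic, it's a one-liner (Python here just as an illustration):

```python
# Probability of missing a set on all 63 pocket pairs, as computed above.
p_hit = 0.118                  # chance of flopping a set or better each time
n = 63                         # pocket pairs seeing a flop
p_miss_all = (1 - p_hit) ** n
print(p_miss_all)              # about 0.000367, i.e. roughly a 1-in-2700 shot
```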
#6
Re: Statistical significance test for missing sets?
[ QUOTE ]
[ QUOTE ]
The probability of getting s or fewer sets is exactly what you want (and there isn't much more to infer from it than that you've been unlucky in terms of flopping sets). Any statistical test with a fancy name would just be an approximate method for calculating Pr(s <= s_observed), and if you can calculate that exactly from the binomial distribution, an approximation is not needed.
[/ QUOTE ]
I'm not sure I understand. If I take one coin flip and get tails, then use the binomial distribution to calculate the probability of hitting 0 heads in 1 trial, I'll get - surprise, surprise - 0.5. Now I'm having a hard time interpreting 0.5 as the probability of my coin being random. Any reasonable statistical significance indicator should give me a number very close to 1, shouldn't it?
[/ QUOTE ]

Not with a sample size of 1. You aren't going to get any test to give you a good answer with that.

Though, true, it's not exactly the "probability of your coin being random," as you sort of suspected. What it really is, is the probability of observing data as extreme as or more extreme than what you observed, given that your null hypothesis (in this case, that your coin is fair) is true. (To be even more precise, I should say that it's the long-run probability of observing data as or more extreme than what we saw, under repeated sampling from the model of our null hypothesis.) This is what all frequentist analyses (the most common approach) will give you.

If this probability is very small, then we claim there is evidence to reject the null hypothesis: if our null hypothesis were true, then what we actually observed would be quite unlikely, so that is evidence that our null hypothesis actually isn't true. It's kind of backwards thinking, but unless you are a Bayesian, it's all you can do.

So, calculating the probability of observing as many sets as you did or fewer is exactly what you want. If it's less than 0.05 (a somewhat arbitrary, but the most commonly used, cut-off), then you would conclude that you ran bad.
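The distinction drawn above can be made concrete with a short sketch (Python, standard library only; 0.05 is the conventional cut-off mentioned in the post):

```python
# One-sided p-value: P(S <= s_observed) under the null hypothesis, i.e. the
# chance of data as extreme as or more extreme than what was actually seen.
from math import comb

def p_value_at_most(n, s, p):
    """P(S <= s) for S ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(s + 1))

ALPHA = 0.05  # conventional (and somewhat arbitrary) significance cut-off

# One coin flip, zero heads: p-value = 0.5, far above 0.05, so a single
# flip is no evidence at all against a fair coin.
print(p_value_at_most(1, 0, 0.5))

# Zero sets in 63 pocket pairs (the earlier poster): well below 0.05,
# so by this test he "significantly" ran bad.
print(p_value_at_most(63, 0, 0.118))
```

Note the p-value is not "the probability the coin is random"; it only measures how surprising the data would be if the null hypothesis were true.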