Two Plus Two Newer Archives  

Two Plus Two Newer Archives > Internet Gambling > Internet Gambling

View Poll Results: KQo
raise 38 71.70%
fold 11 20.75%
call 4 7.55%
Voters: 53. You may not vote on this poll

  #961  
Old 05-10-2007, 03:01 AM
NOSUP4U NOSUP4U is offline
Senior Member
 
Join Date: Mar 2006
Posts: 275
Default Re: NL Bots on Full Tilt

BTW I really hope this turns out to be some crazy spoof the OP was pulling with the help of some friends. zomg that would be so classic. And comforting :)

Mark
  #962  
Old 05-10-2007, 03:02 AM
Drignatio Drignatio is offline
Junior Member
 
Join Date: May 2007
Posts: 20
Default Re: NL Bots on Full Tilt

[ QUOTE ]
ok so basically everything has got off track and no one actually thinks they're bots anymore? good, i've made my point then. i really don't care if you guys nitpick other points.

[/ QUOTE ]

LOL... SO NOW that Chuck has supposedly got us to set aside what he is hiding, he is satisfied. All of this is so obvious I can hardly believe what is going on.

1.) Is it possible that nlnut = nation or perhaps = BrandonJoseph47 as well? I mean honestly, who else would care this much about the issue to be posting nonstop for almost 12 hours? (BrandonJoseph47)

These two or three posters are actually the same person (just registered on 2+2, under 3 different emails?) Is there any way someone can see if their internet IPs match?

2.) Just because Full Tilt cleared his account for play doesn't mean he wasn't using bots... Full Tilt does not = FBI and it is definitely possible that they made a mistake and SHOULD have banned the accounts.

3.) GIVE ME STATISTICAL ANALYSIS ON THE RIVER NUMBERS PLEASE!!!!!!

you WILL be caught full_tilting/Charles Kuruzovich

ps: can someone explain "sweatshops"?
  #963  
Old 05-10-2007, 03:02 AM
ShaneP ShaneP is offline
Member
 
Join Date: Aug 2006
Posts: 80
Default Re: NL Bots on Full Tilt

[ QUOTE ]
Let me preface this by saying that I want to learn more about statistics testing, so please point out any mistakes.

As I said, I was just doing a quick test--I didn't remember the test immediately (something similar to an F-test) so I just did what I wrote. A full test is the best thing to do here, but that would give a false sense of precision. The assumptions in such a test are of iid draws, and we can't say they're identical (as has been said, tweaks to the 'system' had been made, thus removing the first i.) That's the first (and maybe largest) thing that says the SD is underestimated.

[ QUOTE ]
First, I said it fell outside the 95% hypothesis. But I think my results are a bit better than cherrypicking the two most dissimilar results and comparing just those--the issue is with all four of them.

[/ QUOTE ]

I'm picking the two results that have almost equal sample sizes, so whatever strategy change the guy made to the playbook would have equally influenced both of them.

First I used a Goodness of Fit test to test the hypothesis that the VPIPs of the 4 players were different:

http://forumserver.twoplustwo.com/sh...age=0&vc=1

That was almost 99% confident. Then somebody said that their strategy could change and therefore make the VPIPs of the two accounts with least played hands differ from the other two. So, I chose the two that had very similar hand samples. Since Trebek datamined randomly and he claimed the players played at almost the same times, we can assume whatever strategy change they made half-way through the game (or quarter through, or w/e) would affect both equally.

That's probably about the best one can do, but it's not perfect.

That's how I did my two sample population proportion test.

Also, like I said, you're rejecting something with 95% confidence based solely on "feel" and looking at them; you don't have any objective way. It seems like the only way you'd reject botting would be if all 4 differed from the mean by more than 3 SDs... which to me seems impossible. I really think the flaw is in your test.

Well, one other problem in both of our stuff is that the data is cherrypicked to some extent. The initial accounts were chosen because of their similarity, and the ones you chose happen to involve the one the furthest away from the others.

And I think it would be quite easy for them to be statistically different...my VPiP is about 24% for instance, which is most definitely statistically different than the players in question. And to be honest, if two of them were 3 or more SD away, I'd be willing to say that it would look like they're different (or at least not arising from a bot playing *every* hand). The trouble with 'just' a 2.9 SD result is that if the SD was underestimated by say 25%, then the 2.9 SD result suddenly becomes a 2.3 SD result.

Say for instance there were two algorithms in play here, each accounting for 1/2 of the play. The first plays 13% of hands, the second 15%. The SD expected for 14% (just looking at the mean and assuming the draws came from that distribution) would have sqrt(.14*.86) in the numerator (divided by sqrt n). The actual SD would be larger (ugh, I don't have the correct book to look the formula up, and I can't find a reference on the web for adding two distributions together). The SD becomes larger because for half the data essentially you're shifting 1% closer to the mean, and the other half 1% further. And since there's a square in the formula, the 1/2 that gets shifted away adds more than the 1/2 that gets shifted closer. Thus, it could be the SD is underestimated.


[ QUOTE ]
And I've dealt with enough tests to know that 2.5SD while according to the 'book' is enough to reject, especially with other issues going on. It reminds me of a quote from a physics prof here (about physics results): "half of all three sigma results are wrong".

What I'm saying is that 'rejecting' a 2.5 SD result while technically correct is a little quick. A slight tweak or human intervention a little bit could cause this difference, and thus just isn't convincing in my mind.

[/ QUOTE ]

You have a point that stats tests aren't 100% accurate, and you probably have a ton more experience with stats testing than I do. However, my test was over 2.9SDs away, and one of yours was 2.6, which is more than 2.5.

Generally what I've been taught is that 2 SD means you can't reject, and with data such as this, you really want at least 3 SD from the mean to reject. The area in the middle is sort of a grey area, where essentially you want more data. That's with Econ type data, where the underlying parameters can change. The 2 SD would be the correct test statistic if we went forward watching the players play in the future, and their strategy didn't change. Because of the changing of the underlying parameters and how the data was gathered (in the past rather than making a hypothesis and going forward), a 2 SD threshold overstates the significance of the results.

[ QUOTE ]
Oh, and I think you have an extra 0 in there? 3SD is 99%, so shouldn't that be 0.03, not 0.003?

[/ QUOTE ]

Surprisingly, no. The rule is 68-95-99.7, so 3 SDs is approximately 99.7%.

I was also surprised to see that it was .003, but you can verify it for yourself on a calc (I actually used the table in the back of my stats book and rounded Z to 2 decimal places).

http://www.fourmilab.ch/rpkp/experim...sis/zCalc.html

Enter Z as -2.675179739, and you'll see it's 0.003734.

[/ QUOTE ]

Eh, I'll claim lateness of the night on that one. Or I was thinking of 2.5 and not almost 2.7, or something stupid like that.
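For anyone who wants to replicate the arithmetic in this exchange, here is a minimal Python sketch of the two sample population proportion test described above. The VPIP counts below are hypothetical placeholders (the thread never posts the raw counts); only the z value -2.675179739 is taken from the post itself, and the normal CDF is built from math.erf so no stats package is needed.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_prop_z(x1, n1, x2, n2):
    """Two-sample proportion z-statistic with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# Hypothetical counts (NOT the thread's real data): account A voluntarily
# put money in on 390 of 3000 hands (13%), account B on 450 of 3000 (15%).
z = two_prop_z(390, 3000, 450, 3000)
print(f"z = {z:.3f}, one-tailed p = {normal_cdf(-abs(z)):.4f}")

# The z actually quoted in the thread reproduces the 0.003734 figure:
print(f"p = {normal_cdf(-2.675179739):.6f}")  # ~0.003734
```

A |z| near 2.7 sits exactly in the grey zone being argued about here: past the textbook 95% cutoff (|z| > 1.96) but short of the 3 SD bar, which is why an underestimated standard error matters so much to the conclusion.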
  #964  
Old 05-10-2007, 03:03 AM
NOSUP4U NOSUP4U is offline
Senior Member
 
Join Date: Mar 2006
Posts: 275
Default Re: NL Bots on Full Tilt

[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


I think pretty much everyone by now knows they're not botting, so now it's on to sweatshops. It's basically one far-fetched story vs another.

[/ QUOTE ]

And by everyone, you mean 3 of you guys right?

It's been constant lies by everyone involved in the scam. As someone else said, everyone with a halfway decent BS meter knows what's up.

Mark

[/ QUOTE ]

Sure, I think there is a reasonable chance that "something is up", but that something is very unlikely to be botting. I thought we moved on to sweatshops now.

[/ QUOTE ]

yes, if by unlikely you mean impossible not to be.

Mark
  #965  
Old 05-10-2007, 03:06 AM
SukitTrebek SukitTrebek is offline
Senior Member
 
Join Date: Feb 2006
Location: The day is mine!
Posts: 304
Default Re: NL Bots on Full Tilt

Nation,

When you went on and on a few months ago about not being able to switch affiliates for FT rakeback, was it for the team you were going to fund that already had FT accounts?
  #966  
Old 05-10-2007, 03:07 AM
cwar cwar is offline
Senior Member
 
Join Date: Dec 2005
Location: Cwar LLC
Posts: 2,491
Default Re: NL Bots on Full Tilt

[ QUOTE ]
nation shouldn't have been made a mod in the first place. someone get it revoked.

[/ QUOTE ]
I know nothing about this. Can you fill us in? How do we go about removing a mod?
  #967  
Old 05-10-2007, 03:08 AM
BrandonJoseph47 BrandonJoseph47 is offline
Member
 
Join Date: May 2007
Posts: 57
Default Re: NL Bots on Full Tilt

[ QUOTE ]
[ QUOTE ]
ok so basically everything has got off track and no one actually thinks they're bots anymore? good, i've made my point then. i really don't care if you guys nitpick other points.

[/ QUOTE ]

LOL... SO NOW that Chuck has supposedly got us to set aside what he is hiding, he is satisfied. All of this is so obvious I can hardly believe what is going on.

1.) Is it possible that nlnut = nation or perhaps = BrandonJoseph47 as well? I mean honestly, who else would care this much about the issue to be posting nonstop for almost 12 hours? (BrandonJoseph47)

These two or three posters are actually the same person (just registered on 2+2, under 3 different emails?) Is there any way someone can see if their internet IPs match?

2.) Just because Full Tilt cleared his account for play doesn't mean he wasn't using bots... Full Tilt does not = FBI and it is definitely possible that they made a mistake and SHOULD have banned the accounts.

3.) GIVE ME STATISTICAL ANALYSIS ON THE RIVER NUMBERS PLEASE!!!!!!

you WILL be caught full_tilting/Charles Kuruzovich

[/ QUOTE ]


You sound exactly like the jurors after the OJ trial, quoting Johnny Cochran about the glove. I already heard Full Tilt doesn't = FBI. Did you have any thoughts of your own? Or do you just quote your friends on this site? The same stuff doesn't blow my skirt up twice, Jack.
  #968  
Old 05-10-2007, 03:10 AM
nation nation is offline
Senior Member
 
Join Date: Dec 2005
Location: actually grinding now
Posts: 6,242
Default Re: NL Bots on Full Tilt

[ QUOTE ]
Nation,

When you went on and on a few months ago about not being able to switch affiliates for FT rakeback, was it for the team you were going to fund that already had FT accounts?

[/ QUOTE ]

no it was so i could get rakeback on my account. unfortunately i made an account long ago and got 10 free bucks for joining. now some affiliate gets all my rb. yay!
  #969  
Old 05-10-2007, 03:12 AM
ianisakson ianisakson is offline
Senior Member
 
Join Date: Sep 2006
Location: Madison, WI
Posts: 1,063
Default Re: NL Bots on Full Tilt

[ QUOTE ]
[ QUOTE ]
Nation,

When you went on and on a few months ago about not being able to switch affiliates for FT rakeback, was it for the team you were going to fund that already had FT accounts?

[/ QUOTE ]

no it was so i could get rakeback on my account. unfortunately i made an account long ago and got 10 free bucks for joining. now some affiliate gets all my rb. yay!

[/ QUOTE ]

more importantly, is it at all possible for me to get RB on my FTP account? if not, BOGUS.
  #970  
Old 05-10-2007, 03:15 AM
RagzMaster RagzMaster is offline
Senior Member
 
Join Date: Oct 2006
Location: Playing Jai Alai
Posts: 133
Default Re: NL Bots on Full Tilt

[ QUOTE ]
[ QUOTE ]
Let me preface this by saying that I want to learn more about statistics testing, so please point out any mistakes.

As I said, I was just doing a quick test--I didn't remember the test immediately (something similar to an F-test) so I just did what I wrote. A full test is the best thing to do here, but that would give a false sense of precision. The assumptions in such a test are of iid draws, and we can't say they're identical (as has been said, tweaks to the 'system' had been made, thus removing the first i.) That's the first (and maybe largest) thing that says the SD is underestimated.

[ QUOTE ]
First, I said it fell outside the 95% hypothesis. But I think my results are a bit better than cherrypicking the two most dissimilar results and comparing just those--the issue is with all four of them.

[/ QUOTE ]

I'm picking the two results that have almost equal sample sizes, so whatever strategy change the guy made to the playbook would have equally influenced both of them.

First I used a Goodness of Fit test to test the hypothesis that the VPIPs of the 4 players were different:

http://forumserver.twoplustwo.com/sh...age=0&vc=1

That was almost 99% confident. Then somebody said that their strategy could change and therefore make the VPIPs of the two accounts with least played hands differ from the other two. So, I chose the two that had very similar hand samples. Since Trebek datamined randomly and he claimed the players played at almost the same times, we can assume whatever strategy change they made half-way through the game (or quarter through, or w/e) would affect both equally.

That's probably about the best one can do, but it's not perfect.

That's how I did my two sample population proportion test.

Also, like I said, you're rejecting something with 95% confidence based solely on "feel" and looking at them; you don't have any objective way. It seems like the only way you'd reject botting would be if all 4 differed from the mean by more than 3 SDs... which to me seems impossible. I really think the flaw is in your test.

Well, one other problem in both of our stuff is that the data is cherrypicked to some extent. The initial accounts were chosen because of their similarity, and the ones you chose happen to involve the one the furthest away from the others.

And I think it would be quite easy for them to be statistically different...my VPiP is about 24% for instance, which is most definitely statistically different than the players in question. And to be honest, if two of them were 3 or more SD away, I'd be willing to say that it would look like they're different (or at least not arising from a bot playing *every* hand). The trouble with 'just' a 2.9 SD result is that if the SD was underestimated by say 25%, then the 2.9 SD result suddenly becomes a 2.3 SD result.

Say for instance there were two algorithms in play here, each accounting for 1/2 of the play. The first plays 13% of hands, the second 15%. The SD expected for 14% (just looking at the mean and assuming the draws came from that distribution) would have sqrt(.14*.86) in the numerator (divided by sqrt n). The actual SD would be larger (ugh, I don't have the correct book to look the formula up, and I can't find a reference on the web for adding two distributions together). The SD becomes larger because for half the data essentially you're shifting 1% closer to the mean, and the other half 1% further. And since there's a square in the formula, the 1/2 that gets shifted away adds more than the 1/2 that gets shifted closer. Thus, it could be the SD is underestimated.


[ QUOTE ]
And I've dealt with enough tests to know that 2.5SD while according to the 'book' is enough to reject, especially with other issues going on. It reminds me of a quote from a physics prof here (about physics results): "half of all three sigma results are wrong".

What I'm saying is that 'rejecting' a 2.5 SD result while technically correct is a little quick. A slight tweak or human intervention a little bit could cause this difference, and thus just isn't convincing in my mind.

[/ QUOTE ]

You have a point that stats tests aren't 100% accurate, and you probably have a ton more experience with stats testing than I do. However, my test was over 2.9SDs away, and one of yours was 2.6, which is more than 2.5.

Generally what I've been taught is that 2 SD means you can't reject, and with data such as this, you really want at least 3 SD from the mean to reject. The area in the middle is sort of a grey area, where essentially you want more data. That's with Econ type data, where the underlying parameters can change. The 2 SD would be the correct test statistic if we went forward watching the players play in the future, and their strategy didn't change. Because of the changing of the underlying parameters and how the data was gathered (in the past rather than making a hypothesis and going forward), a 2 SD threshold overstates the significance of the results.

[ QUOTE ]
Oh, and I think you have an extra 0 in there? 3SD is 99%, so shouldn't that be 0.03, not 0.003?

[/ QUOTE ]

Surprisingly, no. The rule is 68-95-99.7, so 3 SDs is approximately 99.7%.

I was also surprised to see that it was .003, but you can verify it for yourself on a calc (I actually used the table in the back of my stats book and rounded Z to 2 decimal places).

http://www.fourmilab.ch/rpkp/experim...sis/zCalc.html

Enter Z as -2.675179739, and you'll see it's 0.003734.

[/ QUOTE ]

Eh, I'll claim lateness of the night on that one. Or I was thinking of 2.5 and not almost 2.7, or something stupid like that.

[/ QUOTE ]

Damn, you guys took the words right out of my mouth! ;)

On a side note, are you really Brandon Joe or related to him?


Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2024, vBulletin Solutions Inc.