
Game theory question



NLSoldier
05-17-2006, 06:20 PM
My prof gave us this question on our econ final today. I'm curious as to whether I got it right because I changed my answer like 3 times during the test.

Two players are playing a game in which they are trying to maximize their own payoffs and are not interested in any spiteful outcomes. They are rational and have perfect information about the game before it starts.

The players are given $100 and player A makes an offer to player B for a distribution of the money, e.g. (60/40). If player B accepts, the game ends and they get their respective distributions. If player B declines, the amount of money goes down to $90 and now player B makes an offer to player A. If player A declines this offer, the amount goes down to $80 and player A makes an offer to player B. If player B declines, the amount is reduced to $0 and the game is over.

What is the outcome/solution?

felson
05-17-2006, 06:28 PM
here's my stab at it...

in the 3rd round, A offers B one cent, keeping 79.99 for himself. B is forced to accept this since he is not spiteful. A can achieve this outcome by making a greedy offer in the first round and declining any offer in the second round. so A can get at least 79.99.

knowing this, B must offer A at least 80 in the 2nd round, which he will accept as A can do no better in the 3rd round. there is no reason for B to offer any more than 80 either. so B gets 10 at least.

knowing this, in the 1st round, A should offer B at least $10. so A offers B $10.01 and keeps $89.99 for himself. B accepts the offer, and the game ends in the first round.

TomCollins
05-17-2006, 06:33 PM
[ QUOTE ]
My prof. gave us this question on our econ final today. Im curious as to whether I got it right because I changed my answer like 3 times during the test.

Two players are playing a game in which they are trying to maximize their own payoffs and are not interested in any spiteful outcomes. They are rational and have perfect information about the game before it starts.

The players are given $100 and Player A makes an offer to player B for a distribution of the money. e.g. (60/40) If player B accepts, the game ends and they get their respective distributions. If player B declines, the amount of money goes down to $90 and now player B makes an offer to player A. If player A declines this offer, the amount goes down to $80 and he makes an offer to player B. If player B declines, the amount is reduced to 0 and the game is over.

What is the outcome/solution?

[/ QUOTE ]

It depends on what you mean by spiteful outcomes. What will B prefer if he can get $0 and A will get $80, or they both will get $0? I am assuming he will be generous and let A have the $80.

If it gets to $80, A will propose $80 for himself and $0 for B, and B has to accept (since he is not spiteful).

So when it's at $90, B must offer A at least $80 or else he will not accept. So he offers A $80 and himself $10, and A will accept (since he is not spiteful).

So when it's $100, A has to offer B at least $10, so he takes $90 and offers $10 to B.

If they are spiteful (meaning they would prefer the other player to have less money if it's a tie), then it goes to the following (assuming whole-dollar proposals only).

At $80, A gets $79 and offers B $1; B must accept.
At $90, B gets $10 and offers A $80; A must accept.
At $100, A gets $89 and offers B $11; B must accept.

So it depends on what the tiebreaking situation is, but it will come out to roughly $90 for A, $10 for B.
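
To make the backward induction concrete, here is a minimal Python sketch (an illustration only, not anything from the exam). The function name, the `unit` step, and the `strict` flag are just labels I'm assuming for the two tie-breaking conventions discussed above: strict=False means the responder accepts when indifferent (the "not spiteful" reading), strict=True means he must be offered strictly more than what rejecting would get him.

# Minimal sketch (not from the exam): backward induction on the three-round game.
def solve(pots=(100, 90, 80), unit=1, strict=True):
    a_next, b_next = 0, 0                      # payoffs if the final offer is rejected
    for rnd in reversed(range(len(pots))):     # work backwards from the $80 round
        pot = pots[rnd]
        proposer_is_a = (rnd % 2 == 0)         # A proposes in rounds 1 and 3
        resp_next = b_next if proposer_is_a else a_next
        offer = resp_next + unit if strict else resp_next   # smallest acceptable offer
        keep = pot - offer
        if proposer_is_a:
            a_next, b_next = keep, offer
        else:
            a_next, b_next = offer, keep
    return a_next, b_next

print(solve(strict=True))                      # (89, 11): whole dollars, reject when indifferent
print(solve(strict=False))                     # (90, 10): accept when indifferent
print(solve(pots=(10000, 9000, 8000)))         # (8999, 1001) cents, i.e. $89.99 / $10.01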

CallMeIshmael
05-17-2006, 06:36 PM
[ QUOTE ]
It depends on what you mean by spiteful outcomes. What will B prefer if he can get $0 and A will get $80, or they both will get $0? I am assuming he will be generous and let A have the $80.

[/ QUOTE ]

I think he should reject. If he rejects 0, the other guy has to offer him 1, and therefore he does better by rejecting 0.

TomCollins
05-17-2006, 06:38 PM
[ QUOTE ]
[ QUOTE ]
It depends on what you mean by spiteful outcomes. What will B prefer if he can get $0 and A will get $80, or they both will get $0? I am assuming he will be generous and let A have the $80.

[/ QUOTE ]

I think he should reject. If he rejects 0, the other guy has to offer him 1, and therefore he does better by rejecting 0.

[/ QUOTE ]

Why would he reject? He gets $0 no matter what. OP said he is not spiteful. In this case, follow my second argument, where the answer becomes 89/11.

CallMeIshmael
05-17-2006, 06:48 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
It depends on what you mean by spiteful outcomes. What will B prefer if he can get $0 and A will get $80, or they both will get $0? I am assuming he will be generous and let A have the $80.

[/ QUOTE ]

I think he should reject. If he rejects 0, the other guy has to offer him 1, and therefore he does better by rejecting 0.

[/ QUOTE ]

Why would he reject? He gets $0 no matter what. OP said he is not spiteful.

[/ QUOTE ]


Because if he tells the opponent that he is going to reject 0, he knows the opponent will be forced to offer him 1. It's kind of weird: once the opponent makes the offer of 0, rejecting and accepting are the same, and always rejecting is spiteful. But BEFORE he offers 0, the statement "I'm going to reject all offers of 0" isn't spiteful, it's rational since it increases his payoff.


There are situations in game theory where, if you could somehow force yourself to make an irrational decision in later rounds, you can deter the opponent from making a decision that hurts your payoff. This is a somewhat similar situation.

felson
05-17-2006, 06:56 PM
[ QUOTE ]
Because if he tells the opponent that he is going to reject 0

[/ QUOTE ]

there is no mechanism for one opponent to deliver an ultimatum to the other.

anyway, the difference here is very small.

TomCollins
05-17-2006, 07:02 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
It depends on what you mean by spiteful outcomes. What will B prefer if he can get $0 and A will get $80, or they both will get $0? I am assuming he will be generous and let A have the $80.

[/ QUOTE ]

I think he should reject. If he rejects 0, the other guy has to offer him 1, and therefore he does better by rejecting 0.

[/ QUOTE ]

Why would he reject? He gets $0 no matter what. OP said he is not spiteful.

[/ QUOTE ]


Because if he tells the opponent that he is going to reject 0, he knows the opponent will be forced to offer him 1. It's kind of weird: once the opponent makes the offer of 0, rejecting and accepting are the same, and always rejecting is spiteful. But BEFORE he offers 0, the statement "I'm going to reject all offers of 0" isn't spiteful, it's rational since it increases his payoff.


There are situations in game theory where, if you could somehow force yourself to make an irrational decision in later rounds, you can deter the opponent from making a decision that hurts your payoff. This is a somewhat similar situation.

[/ QUOTE ]

He could also say "I'm going to reject all offers less than 79", and get $79 for himself. But that's not what the OP posted. The OP posted that each person will try to maximize his own profits, said nothing about communication, and said they are not spiteful.

CallMeIshmael
05-17-2006, 07:06 PM
Just an example of what I was talking about, since it is (IMO) kind of a cool idea.

Let's say there is a company (A) that has control of a market, and another company (B) that is considering trying to enter the market.

Let's say that if B doesn't enter, A gets a payoff of 5, since they have 100% of the market, and B gets 0, since they gain nothing.

Then, let's assume that IF B tries to enter, A has a choice: act tough or accommodate. For example, if they act tough they might run a smear campaign, but it might be pretty expensive. And if they accommodate they don't run the smear campaign, and share the market.

Let's assume that if A acts tough, A gets paid -2, and B gets -1 (the smear campaign is expensive to A, and it hurts B). If A accommodates, it gets 1 and company B gets 2.

The payoff matrix (A's payoff first, B's second; B picks the row, A picks the column) looks like:

--------- Tough ------- Accommodate --

Enter---- -2, -1 ------ 1, 2

Out------ 5, 0 -------- 5, 0



What is interesting about this is that company A can do best by being irrational. If B enters the market, company A does better for itself if it accommodates. BUT, if company A could credibly threaten that it would fight (even though it's irrational), company B would not enter, and that gives A the best payoff.

So, if, for example, A publicly declared in writing that they MUST act tough, regardless of cost, that company would do better than a company that acts rationally at each decision point.
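
A tiny Python sketch of this entry game (again only an illustration; the payoff numbers come from the post above, everything else is assumed). It finds the outcome by backward induction, then re-runs it with A's "accommodate" option removed, which is one crude way to model a binding public commitment to act tough:

# Sketch of the entry-deterrence example above.  Payoffs are (A, B);
# the dict key is (B's move, A's move).
PAYOFFS = {
    ("enter", "tough"):       (-2, -1),
    ("enter", "accommodate"): ( 1,  2),
    ("out",   "tough"):       ( 5,  0),
    ("out",   "accommodate"): ( 5,  0),
}

def outcome(a_options):
    # B moves first, anticipating A's best response to entry.
    def a_best_response(b_move):
        return max(a_options, key=lambda a: PAYOFFS[(b_move, a)][0])
    b_move = max(["enter", "out"],
                 key=lambda b: PAYOFFS[(b, a_best_response(b))][1])
    a_move = a_best_response(b_move)
    return b_move, a_move, PAYOFFS[(b_move, a_move)]

print(outcome(["tough", "accommodate"]))   # ('enter', 'accommodate', (1, 2))
print(outcome(["tough"]))                  # ('out', 'tough', (5, 0)): the commitment pays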

CallMeIshmael
05-17-2006, 07:09 PM
[ QUOTE ]
He could also say "I'm going to reject all offers less than 79", and get $79 for himself.

[/ QUOTE ]

No he can't. It's not a credible threat. The post I just made is quite similar.

[ QUOTE ]
The OP posted that each person will try to maximize his own profits, said nothing about communication, and said they are not spiteful.

[/ QUOTE ]

communication like this is implicitly assumed in game theory problems

felson
05-17-2006, 07:12 PM
[ QUOTE ]
communication like this is implicitly assumed in game theory problems

[/ QUOTE ]

no.

NLSoldier
05-17-2006, 07:24 PM
Oh one thing I forgot to add is that all dealings are in whole dollars.

NLSoldier
05-17-2006, 07:26 PM
My answer was 89 and 11. Sweet. From talking to other people in the class I feel I may be the only one who got it right. All others I talked to were somewhere between 60/40 and 50/50. They are not smart :)

felson
05-17-2006, 07:34 PM
[ QUOTE ]
whole dollars

[/ QUOTE ]

89/11, then. good job.

can i ask which class this was, and which prof? i'm thinking about taking an econ class at ucsd this summer.

NLSoldier
05-17-2006, 08:07 PM
[ QUOTE ]
[ QUOTE ]
whole dollars

[/ QUOTE ]

89/11, then. good job.

can i ask which class this was, and which prof? i'm thinking about taking an econ class at ucsd this summer.

[/ QUOTE ]

I go to USD.... it was just intermediate micro. Prof. Narwold.

felson
05-17-2006, 08:11 PM
oh, okay. thanks!

Thythe
05-17-2006, 09:31 PM
[ QUOTE ]
[ QUOTE ]
communication like this is implicitly assumed in game theory problems

[/ QUOTE ]

no.

[/ QUOTE ]

I agree that the answer is 89/11... also, unless otherwise stated in game theory problems, I would always assume that each player is perfectly rational, works to maximize profit, etc. This is why the result of the Jason T problem was 2,2 even though many didn't want it to be.

CallMeIshmael
05-17-2006, 10:39 PM
[ QUOTE ]
[ QUOTE ]
communication like this is implicitly assumed in game theory problems

[/ QUOTE ]

no.

[/ QUOTE ]

Yes.

AlphaWice
05-18-2006, 01:31 AM
This game is called the Ultimatum Game. In a non-iterated version of it, the game-theoretical solution is to offer $0 and accept $0.

moorobot
05-18-2006, 02:01 AM
When people actually play this game in experiments, the outcome is most often 50/50. Whether or not this is the rational solution according to economists has already been debated here.

moorobot
05-18-2006, 02:10 AM
[ QUOTE ]


My answer was 89 and 11. Sweet. From talking to other people in the class I feel I may be the only one who got it right. All others I talked to were somewhere between 60/40 and 50/50. They are not smart


[/ QUOTE ] It confuses people, because a roughly even split is what they themselves would choose to offer, and economics professors have told them they are rational in the neoclassical view of rationality.

When this game is played in behavioral experiments, and it has been hundreds of times with university students as subjects, in countries including but not limited to the U.S., Japan, Israel, Slovakia, Indonesia, and Russia, the vast majority of proposers offer between 40 and 50 percent, as your classmates thought. Equally striking is that offers of 25 percent or less are frequently rejected; responders would rather have nothing than an unfairly small piece of the pie. A few years ago, the guys who won the Nobel Prize in economics (who were psychologists by trade, but their lifetime work was important for econ) used this and several other studies to show that the "economic man" of textbooks in neoclassical econ was not an accurate representation of human nature. Unfortunately, due to the importance of this assumption to neoclassical economics, amongst other reasons, the textbooks have not quite caught up to this recent, widely accepted research.

So real humans would offer between 40 and 50 most of the time. But the man that economics texts posit would not.

madnak
05-18-2006, 03:04 AM
The idiots are the ones who act according to game theory. "Rational" in game theory actually means irrational based on most practical definitions of rationality. Any game theory opponent is by definition extremely stupid, shortsighted, and foolish.

More importantly, it assumes that neither opponent has a psychology. Any psychology. The introduction of psychology into the situation makes game theoretical opponents pathetically weak. Which is exactly why human beings have evolved traits that are antithetical to game theoretically correct action.

The correct answer to this question is 50/50, and game theory is completely irrelevant to the situation. I believe those who suggest otherwise are honestly mentally deficient.

hmkpoker
05-18-2006, 03:15 AM
[ QUOTE ]
Two players are playing a game in which they are trying to maximize their own payoffs and are not interested in any spiteful outcomes. They are rational and have perfect information about the game before it starts.

[/ QUOTE ]

The point is to analyze the behavior of rational players trying to maximize their own profits.

CallMeIshmael
05-18-2006, 03:16 AM
[ QUOTE ]
The idiots are the ones who act according to game theory.

[/ QUOTE ]

For the most part, people don't really use game theory to make real-life decisions, since GT relies on complete rationality, which is never attained in real life. So, I def. agree with what you're saying (though I def. argue that 50/50 is not the right answer here).


BUT, organisms (including humans) make unconscious decisions every day that are directly predicted by game theory, since evolution seeks the solutions predicted by GT.


[ QUOTE ]
I believe those who suggest otherwise are honestly mentally deficient.

[/ QUOTE ]

ummm...

W
T
F
?

moorobot
05-18-2006, 03:43 AM
[ QUOTE ]
complete rationality, which is never attained in real life.

[/ QUOTE ]

[ QUOTE ]
BUT, organisms (including humans) make unconscious decisions everyday that are directly predicted by game theory since evolution seeks the solutions predicted by GT.

[/ QUOTE ] Not when they use this definition of "rationality", i.e. when they equate rationality with maximizing material self-interest; it won't predict their behavior, because evolution did not make people "rational" in this sense. It is widely agreed upon by biologists that humans do not follow this model of rationality. See "The Selfish Gene" by Dawkins, which explains why evolution made humans altruistic and moral to some degree; humans act upon what is in the interest of their genes, not their own "utility", and that requires moral and altruistic behavior.

Sociologists and anthropologists also disagree; the studies done by them indicate humans act according to social norms (including "rules of thumb" and "morals") first and in self-interest second.

Also see Amartya Sen's article "Rational Fools". The implications of the concept of economic man are even worse than Sen suggests, however: mental health professionals use the term SOCIOPATH to refer to a person whose behavior is governed entirely by the calculation of self-interest; sociopaths have no sense of right or wrong and no concern for the well-being or pain of others.

I'll end what has become a lengthy post with a quote from Charles Darwin himself, from The Descent of Man :

"When two tribes of primeval man, living in the same country, came into competition, if...one tribe included a great number of courageous, sympathetic and faithful members, who were always ready to warn each other of danger (and) to aid and defend each other, this tribe would succeed better and conquer the other....Selfish and contentious people will not cohere, and without coherence nothing can be effected". (Ch 5, "On the development of the intellectual and moral faculties during primeval and civilized times").

The point of Darwin's statement is clear: in competitions among groups, those whose members have learned to cooperate (that is, NOT TO COMPETE with one another) often win. Think of team sports. This same reasoning applies to firms, neighborhoods, ethnic groups, and nations. Evolution makes people act against self-interest, contra game theory.

Economic man is a deeply flawed concept that has caused much unnecessary hardship and poor economic policy advice, and is pushed upon us and celebrated by apologists for inequality and hierarchy. The sooner it is stricken from the record the better. Cooperation is necessary for progress; economics in the mainstream has vastly overrated the importance of competition.

moorobot
05-18-2006, 03:49 AM
Right, but this does not describe in any way how real humans act.

I am reminded of a post by you on the GDP:

[ QUOTE ]
I come up with a machine that can measure the total combined length of my scrotal hairs. Of what use is it?


[/ QUOTE ] Who cares about what "rational players trying to maximize their own profits" do? What relevance does this have to the study of real humans taking real actions?

CallMeIshmael
05-18-2006, 03:54 AM
[ QUOTE ]
Not when they use this definition of "rationality"; when they equate rationality with matieral maximizing self-interest; it won't predict their behavior because evolution did not make people "rational" in this sense. It is widely agreed upon by biologists that humans do not follow this model of rationality.

[/ QUOTE ]

Keep in mind that I'm not talking about conscious decisions. We make decisions based on game theory, where our inclusive fitness is the unit of payout. In this regard all organisms do make decisions based on those predicted by GT and its definition of rationality.


[ QUOTE ]
The point of Darwin's statement is clear: in competitions among groups, those whose members have learned to cooperate-that is, NOT TO COMPETE with one another, often win.

[/ QUOTE ]


Keep in mind that, for the most part, group selection is now considered incorrect. The reason people cooperate is because it is in their own self-interest (see reciprocal altruism, for example).


(FWIW, within the past few years "new" group selection has emerged, but it's still controversial.)


EDIT: This (http://en.wikipedia.org/wiki/Evolutionarily_stable_strategy) touches on what I'm talking about. Natural selection causes organisms to make decisions predicted by game theory.

DougShrapnel
05-18-2006, 04:33 AM
I think moorobot brings up some interesting points, and that there are several correct answers to the question:
Altruistic: 0/100
Fair: 50/50
Chance fair: 55/45
Min acceptance: 60/40
Game theory: 89/11
Greed: 100/0
I'd be willing to bet that in all the times this game has been played, never once did someone offer altruistically. I think this game is good proof that altruism doesn't exist except as a dysfunction.
I think that the GT prediction will be declined nearly every time, because people get a sense of well-being when they are able to punish incorrect behavior.
I offer 45 and keep 55.

moorobot
05-18-2006, 04:35 AM
[ QUOTE ]
The reason people cooperate is because it is in their own self interest. (see reciprocal altruism, for example)


[/ QUOTE ] Regardless, what is in one's evolutionary self-interest is not automatically, and not even usually, in their self-interest as defined materially or by utility. The game theory model presented in the OP is an example of the latter, as are most models used by economists.

CallMeIshmael
05-18-2006, 04:54 AM
[ QUOTE ]
Regardless, what is in one's evolutionary self-interest is not automatically, and not even usually, in their self-interest as defined materially or by utility.

[/ QUOTE ]

Yes it is.

Think about it like this:

- Two animals are playing a game, where there are only 2 strategies
- Our payoff is in terms of fitness (reproduction is linked to performance in the game, and this may be something like hunting for food or seeking a mate)
- We are playing different strategies
- One has a higher payoff than the other
- Therefore, the population shifts towards the higher paying strategy in the next generation

Now, let's assume that the population is now 100% the better strategy. BUT, what if a new strategy is created by a mutation or immigration, and it does better? Well, the population goes through the above again, until we are 100% the new population.

The only point at which this stops is when there is no better strategy. This occurs at a Nash equilibrium. You calculate Nash equilibria using the assumptions of game theory.



Again, I will stress that I'm not talking about games like the one in the OP, but specifically games that directly relate to fitness in the EEA. The game in the OP was not present in the EEA, so natural selection couldn't have led us to always play the Nash equilibrium.

We make these decisions at the unconscious level, but we still make them every day.

If you wish, I can post studies that show humans and non-humans alike acting based on GT tomorrow.
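
As a toy illustration of that dynamic (my own sketch in Python; the hawk/dove payoffs are standard textbook numbers, not from any particular study): a population mix of two strategies is updated each generation in proportion to fitness, and it settles at the mixed Nash equilibrium / ESS rather than at either pure strategy.

# Toy replicator dynamic for the hawk/dove game (illustrative numbers:
# resource V=2, fight cost C=4, so the mixed ESS is V/C = 50% hawks).
# PAYOFF[(x, y)] is the payoff to strategy x against strategy y.
PAYOFF = {("H", "H"): -1.0, ("H", "D"): 2.0,
          ("D", "H"):  0.0, ("D", "D"): 1.0}
BASELINE = 3.0   # background fitness, keeps total fitness positive

def step(p_hawk):
    # expected payoff of each strategy against the current population mix
    pi_h = p_hawk * PAYOFF[("H", "H")] + (1 - p_hawk) * PAYOFF[("H", "D")]
    pi_d = p_hawk * PAYOFF[("D", "H")] + (1 - p_hawk) * PAYOFF[("D", "D")]
    avg  = p_hawk * pi_h + (1 - p_hawk) * pi_d
    # each strategy's share grows in proportion to its fitness
    return p_hawk * (BASELINE + pi_h) / (BASELINE + avg)

p = 0.9                        # start with 90% hawks
for _ in range(200):
    p = step(p)
print(round(p, 3))             # 0.5 -- the mixed Nash equilibrium / ESS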

madnak
05-18-2006, 11:29 AM
[ QUOTE ]
For the most part, people dont really use game theory to make real life decisions since GT relies on complete rationality, which is never attained in real life.

[/ QUOTE ]

It's not a matter of complete rationality. That is one of the many absurd premises on which game theory is based. But it's not the biggest one. The total removal of psychology is the biggest problem with game theory and the reason it will never apply to real actors. Even machines have a certain kind of psychology.

[ QUOTE ]
BUT, organisms (including humans) make unconscious decisions everyday that are directly predicted by game theory since evolution seeks the solutions predicted by GT.

[/ QUOTE ]

That's just plain false. Back it up.


[ QUOTE ]
[ QUOTE ]
I believe those who suggest otherwise are honestly mentally deficient.

[/ QUOTE ]

ummm...

W
T
F
?

[/ QUOTE ]

I firmly believe most of the prominent game theorists were mentally disabled.

madnak
05-18-2006, 11:34 AM
Your bet would be wrong. If my opponent were struggling to feed his family, I would choose the altruistic strategy every single time.

TomCollins
05-18-2006, 11:51 AM
You have to realize that there are other things that contribute to utility. People care about what others think of them. They don't want to be viewed as selfish bastards. That's why they offer 50-50 or something similar. Their utility for $50 + not screwing your opponent is higher than getting $89.

Also, most people play suboptimally. Most people cannot think through such things, so they try to offer a "fair" solution. Same thing in poker. If your opponents were playing perfectly, you would have to play a game-theory-optimal solution. However, if they are making mistakes, you have to alter your strategy to maximize profits, even though this would open you up to exploitation if your opponents caught on.

So the fact that empirical studies show different results means one of two things:
1) People value things other than money.
2) People are not perfectly rational thinkers.

madnak
05-18-2006, 12:08 PM
[ QUOTE ]
Yes it is.

Think about it like this:

- Two animals are playing a game, where there are only 2 strategies
- Our payoff is in terms of fitness (reproduction is linked to performance in the game, and this may be something like hunting for food or seeking a mate)
- We are playing different strategies
- One has a higher payoff than the other
- Therefore, the population shifts towards the higher paying strategy in the next generation

[/ QUOTE ]

This is a gross oversimplification and a highly unrealistic scenario. It will never happen. "Two animals are playing a game, where there are only 2 strategies." This doesn't happen. "We are playing different strategies." That's some assumption. Often the best case for both actors is playing the same strategy, and this happens quite often in nature. "One has a higher payoff than the other" Categorically? This isn't how the world works. Every action has costs and benefits. The specific utility of that action is determined based on environment and circumstances. Since we're being absurd, in an environment in which lightning strikes every non-altruistic actor, altruistic action will become the norm. In an environment that punishes game theoretical decisions, those decisions won't happen. "Therefore, the population shifts towards the higher paying strategy in the next generation." This doesn't follow at all. The number of latent assumptions here is very high. For one thing, this situation needs to occur repeatedly in order for this to follow. And for another, if there is some other situation that rewards the "losing" trait here, and it is more frequent or significant, then the losing trait may be the one that's propagated.

And this isn't game theory anyhow. You're couching the scenario in terms that are similar to those of game theory in order to push your position. I could use different terms to make moorobot's position look valid. When I play tic-tac-toe, I play according to the game theoretical recommendation. That's not because I follow game theory - it's because the actions I choose and the outcomes I desire happen to correspond to game theory. That's exactly why it's called game theory in the first place, games correspond more closely to its recommendations than anything else in the real world. That doesn't mean that games, even at the highest levels of mastery, are played according to game theory. Even in a game like chess, game theory will never be ideal because it may always be possible to exploit the psychology and incomplete "rationality" and information of the opponent. In poker it's especially important to avoid strict game theory because the point of the game is to take advantage of poor (irrational) players, and because game theoretical recommendations will result in undesirable reciprocal action. The bare fact here is that neither actor has all the information and neither actor is perfectly rational. Just because evolutionary outcomes sometimes have similarity to game theoretical outcomes doesn't imply that evolution happens according to game theory. Evolution and game theory sometimes predict the same result. That's it.

[ QUOTE ]
Now, lets assume that the population is now 100% the better strategy.

[/ QUOTE ]

Huge assumption. It ignores context and biological mechanics entirely. Again, it assumes that one strategy is categorically better than the other. This doesn't happen in nature.

[ QUOTE ]
BUT, what if a new strategy is created by a mutation or immigration, and it does better? Well, the population goes through the above again, until we are 100% the new population.

[/ QUOTE ]

If there's another possible strategy, then your original assumption that there are only two strategies is false and your entire scenario fails to hold up. Which is it? Game theory involves a strictly-enumerated set of options by definition. If the set of options can't be strictly enumerated, then it's not game theory.

[ QUOTE ]
The only point at which this stops is when there is no better strategy. This occurs at a Nash equilibrium. You calculate Nash equilibria using the assumptions of game theory.

[/ QUOTE ]

This is true in some cases. When the mechanics of evolution work according to the mechanics of game theory, game theory can make useful predictions. But game theory isn't a model for evolution. Game theory can predict my actions on a tic-tac-toe board. That doesn't mean game theory is a model for my mind.

[ QUOTE ]
If you wish I can post studies that show humans and non-humans alike acting based on GT tomorrow

[/ QUOTE ]

Do so. I'd love to hit this at the source and your wikipedia link just doesn't cut it.

And let's make one thing clear. Nobody is disputing that evolution is about maximizing the outcome or achieving self-interest. You seem to be putting a lot of focus on that straw man. The reality is that games, according to the strict definition upon which game theory is predicated, do not exist in the real world. Real-world events are forced into game theory molds through an isolation of context, a disregard of inconvenient variables (usually under the assumption that these variables are not, in fact, variable), and a deliberately artificial interpretation of how to quantify abstract properties. Even then, the real world manages to fail at presenting game situations as described by game theory.

DougShrapnel
05-18-2006, 01:37 PM
[ QUOTE ]
Your bet would be wrong. If my opponent were struggling to feed his family, I would choose the altruistic strategy every single time.

[/ QUOTE ]A vast majority of people would. And even more would lie that they would. But just because an act has concern for others does not make it altruistic. If you want to call it altruistic to give the full $100 to help starving children when game theory dictates that you need only give 11, and psych 101 says you could get away with about 60, fine. But why must they be starving? A true altruist would give away the $100 when he himself was starving and the opponent was well off. I don't wish to redefine altruism away; I am trying to go with the definition of a selfless act. If anyone has read the research on ultimatum games, can you confirm my in-the-dark guess that the altruistic offer has never been made?

moorobot
05-18-2006, 02:34 PM
The reasons for people's behavior in ultimatum game experiments are debatable.

Three possible alternative explanations are available, and I believe all three are part of human nature:

A tendency to be generous toward another person as long as you are treated well by the other person, but a willingness to pay good money to punish someone who has crossed or insulted you, even if you will never see that person again: call this one a tendency towards reciprocity. In my view, based primarily on the psychological work done by the Darwinians Samuel Bowles and Herbert Gintis, "homo reciprocans" is probably the most important component of "human nature".

A second explanation is that the proposers' high offers reflect unconditional generosity toward the responders, or a concern for their well-being independent of any behavior on the responders' part. If that is correct, people have a tendency towards altruism (this is distinct from utter and complete altruism, which you are proposing). These preferences lead them to act to benefit others at some cost to themselves (even with no expectation of reciprocal benefits).

A third explanation is that a proposer could have well-informed beliefs and be selfish. Suppose the proposer believes that the responder will not play the game like an economic man, one willing to accept a penny. Making a 50-50 offer could be nothing more than self-interest guided by prudence.

The important thing is that the game demonstrates that economic man, the person posited by most neoclassical models and explanations, is flawed; even the self-interested but prudent proposer just described does not believe that the responder is an economic man. And in virtually all cases the proposers assume that the responders will depart from the assumption of perfect selfishness.

madnak
05-18-2006, 02:36 PM
This depends on your definition of altruism. I consider reciprocal altruism to be "real" altruism. In fact, I consider action "against your interests" to be a contradiction in terms at a theoretical level.

This is like the free will debate. I don't believe in free will. That hardly means I don't make choices. Similarly, I don't believe in action that isn't self-interested. But that hardly means I can't act to help others to my own immediate detriment. Call it what you will. It sounds like we agree on everything but the semantics.

moorobot
05-18-2006, 02:40 PM
[ QUOTE ]
[ QUOTE ]
Regardless, what is in one's evolutionary self-interest is not automatically, and not even usually, in their self-interest as defined materially or by utility.

[/ QUOTE ]

Yes it is.

[/ QUOTE ] What benefits our genes is not this.

[ QUOTE ]

- Two animals are playing a game, where there are only 2 strategies
- The our payoff is in terms of fitness. (reproduction is linked to performance in the game, and this may be something like hunting for food or seeking a mate)
- We are playing different strategies
- One has a higher payoff than the other
- Therefore, the population shifts towards the higher paying strategy in the next generation

[/ QUOTE ] All you have shown here is that animals act in a way that maximizes their genetic fitness, not their material self-interest or their utility.

For example, adultery in the form of a one-night stand in a male-dominated society with a mate that has high genetic fitness usually leads a woman to lose utility (defined in terms of happiness and/or long-term rational pursuit of preferences) and risks losing economic benefits from the husband involved.

Human beings are designed to benefit the long-term success of their genes, not to benefit themselves. A self-interested gene makes a human that is not fully self-interested, because a person who cared only about his/her own material gain would be terrible at reproducing, and a person that cares only about his/her own utility also often faces disadvantages.

CallMeIshmael
05-18-2006, 02:58 PM
There is a lot here...

2 major problems:

"This isn't how the world works. Every action has costs and benefits"

In every single game since the beginning of game theory, payoffs have equaled (benefits - costs). I wasn't proposing anything else.

"This is a gross oversimplification and a highly unrealistic scenario."

Never said it wasn't. All models in biology require some simplification, and the one I presented wasn't even really a model; I was just trying to use a simple example of how the situation works.


OK, since there is clearly quite a bit of interest in this subject, I've recalled this book (http://www.amazon.com/gp/product/0195137906/sr=8-4/qid=1147978253/ref=sr_1_4/102-8842247-9495329?%5Fencoding=UTF8) from one of our libraries, and will post maybe 10 studies from it that are good examples of what I'm talking about.

BUT, it may take a few days before the book gets here. If anyone thinks this is a cop-out, I'll shop around for a different book at one of our libraries (I chose this one because I have read it before).

This is the description from amazon:

"Game theory has revolutionized the study of animal behavior. The fundamental principle of evolutionary game theory--that the strategy adopted by one individual depends on the strategies exhibited by others--has proven a powerful tool in uncovering the forces shaping otherwise mysterious behaviors. In this volume, the first since 1982 devoted to evolutionary game theory, leading researchers describe applications of the theory to diverse types of behavior, providing an overview of recent discoveries and a synthesis of current research. The volume begins with a clear introduction to game theory and its explanatory scope. This is followed by a series of chapters on the use of game theory to understand a range of behaviors: social foraging, cooperation, animal contests, communication, reproductive skew and nepotism within groups, sibling rivalry, alternative life-histories, habitat selection, trophic-level interactions, learning, and human social behavior. In addition, the volume includes a discussion of the relations among game theory, optimality, and quantitative genetics, and an assessment of the overall utility of game theory to the study of social behavior. Presented in a manner accessible to anyone interested in animal behavior but not necessarily trained in the mathematics of game theory, the book is intended for a wide audience of undergraduates, graduate students, and professional biologists pursuing the evolutionary analysis of animal behavior."

CallMeIshmael
05-18-2006, 02:59 PM
[ QUOTE ]
Human beings are designed to benefit the long term success of their genes, not to benefit themselves.

[/ QUOTE ]

Everything I've said assumes the gene-centred view.

I didn't mean to imply that I think that humans make decisions to benefit themselves. Always the genes. Perhaps we don't disagree, since I didn't disagree with anything you said.

moorobot
05-18-2006, 03:21 PM
[ QUOTE ]
Everything I've said assumes the gene-centred view.

[/ QUOTE ] Unfortunately, most models neoclassical economists use assume the individual-centered view.

madnak
05-18-2006, 03:25 PM
[ QUOTE ]
There is a lot here...

2 major problems:

"This isn't how the world works. Every action has costs and benefits"

In every single game since the beginning of game theory, payoffs has equaled (benefits - costs). I wasnt proposing anythign else.

[/ QUOTE ]

Sorry, I was unclear. I meant to say that different costs and benefits tend to be qualitatively different and can't be condensed into a single quantity. In terms of evolution, different traits have different costs and benefits relative to the environment. The traits that are valuable in an environment of abundance are different from those that are valuable in an environment of scarcity. The environment and ecosystem are always changing.

It's very popular to confuse the mechanics of emergence with those of game theory. I believe the two must remain rigidly separate in spite of their seeming similarities. I think it's clear that life on Earth represents an emergent system, not a game.

Game theory was developed as a way to analyze specific mathematical situations. Now people are starting to use it as a way to analyze decision-making processes, and I think that is a big mistake.

CallMeIshmael
05-18-2006, 04:39 PM
I did a quick search and found the following PDFs.

Now, I don't think any of them are great for what I'm talking about, but it is certainly better than posting nothing.

(note: they are all from scientific journals, so they can be sort of long-winded, etc.)

http://www.geocities.com/call_me_ishmael_2002/001.pdf
http://www.geocities.com/call_me_ishmael_2002/002.pdf
http://www.geocities.com/call_me_ishmael_2002/003.pdf
http://www.geocities.com/call_me_ishmael_2002/004.pdf
http://www.geocities.com/call_me_ishmael_2002/kern.pdf


EDIT: I still plan to post better examples once I get that book

hmkpoker
05-18-2006, 04:47 PM
Wrong.

[ QUOTE ]
Two players are playing a game in which they are trying to maximize their own payoffs and are not interested in any spiteful outcomes. They are rational and have perfect information about the game before it starts.


[/ QUOTE ]

The question assumes rational, greedy opponents with perfect information. There is only one correct answer.

atrifix
05-18-2006, 05:05 PM
[ QUOTE ]
I'd be willing to bet that in all the times this game has been played never once did someone offer alturistically. I think this game is a good proof that alruism doesn't exist except as a dysfunction.

[/ QUOTE ]

OK, I'll bet. What are the odds?

You would lose that bet. Link to study (http://webuser.bus.umich.edu/henrich/gamesvol/alvard.doc); the mean offer was 57%.

Edit: also note that players were more likely to reject hyperfair offers than merely fair ones.

ctj
05-18-2006, 05:41 PM
See "The Evolution of Co-operation" by Robert Axelrod (Basic Books, 1980)

He shows how co-operation is +EV and how it can become established in a population -- in an "Iterated Prisoner's Dilemma" environment it benefits an individual to punish 'cheaters'. Surprisingly, a simple 'tit-for-tat' strategy (punish once, then go back to co-operation until the next instance of cheating) was the most effective in computer simulations.

Note that he shows how co-operation can develop without resorting to 'group selection' (a no-no in evolutionary theory).

To relate back to the OP: Since people have been selected for dealing with complex "iterated prisoner's dilemmas" we might expect them to make co-operative offers. To encourage them to treat it in a more 'selfish' way, make it clear that there won't be any further beneficial co-operation with the villain, perhaps by making it a problem related to settling an acrimonious divorce.
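
A toy version of that kind of tournament is easy to write (this is only a sketch with the usual prisoner's dilemma payoffs, not Axelrod's actual code or entrants): tit-for-tat sustains cooperation with a cooperator and quickly starts punishing an always-defector.

# Toy iterated prisoner's dilemma in the spirit of Axelrod's tournament.
# Standard payoffs: both cooperate 3/3, both defect 1/1, lone defector 5, sucker 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]   # copy the opponent's last move

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, always_cooperate))   # (600, 600): mutual co-operation
print(play(tit_for_tat, always_defect))      # (199, 204): punished from round 2 on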

-- C.T. Jackson

CallMeIshmael
05-18-2006, 05:46 PM
[ QUOTE ]

See "The Evolution of Co-operation" by Robert Axelrod (Basic Books, 1980)

He shows how co-operation is +EV and how it can become established in a population -- in an "Iterated Prisoner's Dilemma" environment it benefits an individual to punish 'cheaters'. Surprisingly, a simple 'tit-for-tat' strategy (punish once, then go back to co-operation until the next instance of cheating) was the most effective in computer simulations.

Note that he shows how co-operation can develop without resorting to 'group selection' (a no-no in evolutionary theory).

To relate back to the OP: Since people have been selected for dealing with complex "iterated prisoner's dilemmas" we might expect them to make co-operative offers. To encourage them to treat it in a more 'selfish' way, make it clear that there won't be any further beneficial co-operation with the villain, perhaps by making it a problem related to settling an acrimonious divorce.

-- C.T. Jackson

[/ QUOTE ]


Excellent post.

Question, re: "(a no-no in evolutionary theory)":


Have you read much about "new" group selection, aka multilevel selection?


I was talking to a prof recently, and he said that if someone had told him three years ago that he would be teaching group selection, he would have shot himself. But now he is.

It's pretty controversial; Dawkins doesn't like it, Wilson does.

DougShrapnel
05-18-2006, 05:51 PM
[ QUOTE ]
[ QUOTE ]
I'd be willing to bet that in all the times this game has been played never once did someone offer alturistically. I think this game is a good proof that alruism doesn't exist except as a dysfunction.

[/ QUOTE ]

OK, I'll bet. What are the odds?

You would lose that bet. Link to study (http://webuser.bus.umich.edu/henrich/gamesvol/alvard.doc); the mean offer was 57%.

Edit: also note that players were more likely to reject hyperfair offers than merely fair ones.

[/ QUOTE ] Cool, thanks, I lose the bet, but I still think I'm right. "Data from dictator games (DG) support the idea that offers in the UG are high in order to avoid rejection." Another thing is that it's villagers who know each other, with a shared history and a shared future. "Hoffman et al. (1994) suggest that players are not only strategically motivated to avoid rejection, but also play as if they are concerned about what others think of them. "

[ QUOTE ]
Hoffman et al. (1996a) created a number of additional treatments to the DG designed to increase the degree of "social distance" between players and experimenters. Offers were substantially lower when the experimental design was such that complete anonymity was assured to the proposers. In these trials, nobody, including the experimenter, knew the offer. Only 11% of the subjects gave 30% or more to their partner. Hoffman increased assurance of social anonymity in these cases, and "fair" behavior essentially evaporated. Similar results were obtained when the experiment was repeated – 64% of offers were $0 (Hoffman et al., 1996a).

[/ QUOTE ] Not what I consider altruism. I do grant that it's mainly a semantic debate hardly worth getting in a fuss over.

DougShrapnel
05-18-2006, 06:32 PM
[ QUOTE ]
Wrong.

[ QUOTE ]
Two players are playing a game in which they are trying to maximize their own payoffs and are not interested in any spiteful outcomes. They are rational and have perfect information about the game before it starts.


[/ QUOTE ]

The question assumes rational, greedy opponents with perfect information. There is only one correct answer.

[/ QUOTE ]You assume that a rational and greedy person would place cash as the only payoff to be maximized under expected utility. GT dictates only one correct answer using that assumption, true. I don't think that rational, greedy, and unspiteful persons always agree.

ctj
05-18-2006, 06:58 PM
[ QUOTE ]
[ QUOTE ]

See "The Evolution of Co-operation" by Robert Axelrod (Basic Books, 1980)

He shows how co-operation is +EV and how it can become established in a population -- in an "Iterated Prisoner's Dilemma" environment it benefits an individual to punish 'cheaters'. Surprisingly, a simple 'tit-for-tat' strategy (punish once, then go back to co-operation until the next instance of cheating) was the most effective in computer simulations.

Note that he shows how co-operation can develop without resorting to 'group selection' (a no-no in evolutionary theory).

To relate back to the OP: Since people have been selected for dealing with complex "iterated prisoner's dilemmas" we might expect them to make co-operative offers. To encourage them to treat it in a more 'selfish' way, make it clear that there won't be any further beneficial co-operation with the villain, perhaps by making it a problem related to settling an acrimonious divorce.

-- C.T. Jackson

[/ QUOTE ]


Excellent post.

Quetsion, re: "(a no-no in evolutionary theory)."


Have you read much about "new" group seleciton, aka multilevel selection.


I was talking to a prof recently, and he said that if three years ago someone said he was going to be teaching group slection, he would have shot himself. But, now he is.

Its pretty controversial, Dawkins doesnt like it, Wilson does.

[/ QUOTE ]

I don't know much about the 'new, improved' Group Selection. I'm a little out of date, except for reading Dawkins' books. My speculation, fwiw, would be that group selection might not be hard to demonstrate, but group evolution would be -- you would have to show that the groups replicated with reasonable fidelity. Of course, I suppose that you could take the (high-level) view that an individual's genome ( a group of thousands of individual genes) is selected on, and evolves, as a group. Dawkins talks about this in 'The Extended Phenotype'.

Moving even further into speculation, you could consider selection operating on groups of memes (a "meme" -- Dawkins' term -- is roughly equal to "an idea"). Consider religions as "meme-complexes" and consider their differential survival:
-- Shakerism (celibacy for all, no proselytizing)
-- Mormonism (marry young, have lots of children, no caffeine or drugs, required proselytizing, tithing)

It's not surprising that there are a lot of minds with the Mormon meme-complex, while the Shaker meme-complex has all but died out (although it flourished briefly in the 19th century). Note that memes and meme-complexes don't replicate in the same way that genes do, so Darwinian evolution doesn't apply to memes.

-- C.T. Jackson

morphball
05-18-2006, 07:46 PM
The problem is actually flawed.

It seems to me that the "A gets $89" answer assumes that the last person to offer a deal has the power. The power actually lies with B, because without B's agreement A gets 0.

The problem is flawed because we are to assume that neither player is spiteful. Consequently, this means they can't be rational, because the only way B can maximize any return is the credible threat of spitefully declining the last offer.

B should reject any offer from A less than $89; on his turn B offers A $1 with the proviso that he will reject any offer less than $80. If A knows this, he has to offer B $89.

This is why the 50/50 split moorobot spoke of occurs in real life. Rational beings have to act in spiteful manners in order to obtain their maximum interest. Because a rational being knows his rational opponent will employ spiteful means if necessary, A has to offer $50 and B should accept.

atrifix
05-18-2006, 07:56 PM
I don't know if it's altruism per se. Whether reciprocal altruism is really alturism is probably more a question of metaethics and philosophy of mind. But there have been cases where players offer, on average, more than 50% to the other player--although those are extremely rare.

TomCollins
05-18-2006, 08:41 PM
[ QUOTE ]
The problem is actually flawed.

It seems to me that the A gets $89 answer assumes that the last person to offer a deal has the power. The power actually lies in B, because without B's agreement A gets 0.

The problem is flawed because we are to assume that neither player is spiteful. Coincidentally, this means they can't be rational, because the only way B can maximize any return is the credible threat of spitefully declining of the last offer.

B should reject any offer from A less than $89, on his turn B offers A $1 with the proviso that he will reject any offer less than $80. If A knows this, he has to offer B $89.

This why the 50/50 split moorobot spoke of occurs in real life. Rational beings have to act in spiteful manners in order to obtain their maximum interest. Because a rational being knows his rational opponent will employ spiteful means if necessary, A has to offer $50 and B should accept.

[/ QUOTE ]

Uh, no.

Sephus
05-18-2006, 08:58 PM
[ QUOTE ]
The problem is actually flawed.

It seems to me that the A gets $89 answer assumes that the last person to offer a deal has the power. The power actually lies in B, because without B's agreement A gets 0.

The problem is flawed because we are to assume that neither player is spiteful. Coincidentally, this means they can't be rational, because the only way B can maximize any return is the credible threat of spitefully declining of the last offer.

B should reject any offer from A less than $89, on his turn B offers A $1 with the proviso that he will reject any offer less than $80. If A knows this, he has to offer B $89.

This why the 50/50 split moorobot spoke of occurs in real life. Rational beings have to act in spiteful manners in order to obtain their maximum interest. Because a rational being knows his rational opponent will employ spiteful means if necessary, A has to offer $50 and B should accept.

[/ QUOTE ]

lol please stop trying. in a one time game there is no way it will help B to be spiteful and not accept $1 on the final turn.

that's why B can't threaten to reject any offer if A doesn't take $1. in order for B's threat to be credible, it has to be beneficial for B to carry out the threat when A doesn't give in. at the final stage B has to realize "crap, my threat didn't work, now i should accept any offer because it's either that or nothing."

since A knows this, he doesn't care about B's threat.

remember it's a one time game.

atrifix
05-18-2006, 09:59 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
communication like this is implicitly assumed in game theory problems

[/ QUOTE ]

no.

[/ QUOTE ]

Yes.

[/ QUOTE ]No.

In general, it doesn't matter anyway, because communication by itself should do nothing to change the outcome of a game. Only some kind of enforced contract is substantive enough to change the modeled solution. Of course, in experimentation, communication makes a big difference.

sweetjazz
05-19-2006, 01:32 AM
[ QUOTE ]
That's exactly why it's called game theory in the first place, games correspond more closely to its recommendations than anything else in the real world.

[/ QUOTE ]

No, that's not why it is called game theory.

FWIW, I think the people who are arguing about the limitations of applying game theory to human behavior are missing the point. Game theoretic problems are interesting because they can be analyzed in depth and they do tell us something interesting. Even if what they tell us is not the end of the story in terms of predicting human behavior.

DougShrapnel
05-19-2006, 05:52 AM
[ QUOTE ]
Game theoretic problems are interesting because they can be analyzed in depth and they do tell us something interesting.

[/ QUOTE ] Exactly what is interesting about 89, 11? I don't really think it's that interesting; however, I do find it interesting that people gloss over the fact that not all rejections of a GT split are spiteful. I also find it interesting that hyper-fair offers have been made in the past to ensure acceptance. And I find it interesting that ultra hyper-fair offers also are often rejected. Moreover, I assure you that in an econ class I would write the answer 89,11. Admittedly, I very likely could be missing something of interesting depth regarding the 89,11 answer that GT dictates.

This to me is the non-spiteful solution (A's share, B's share):
R3: 80, 0
R2: 80, 10
R1: 90, 10

Actually I would put 90/10 on the test. But I don't think the 90/10 vs 89/11 distinction is of any interest at all.

morphball
05-19-2006, 09:26 AM
[ QUOTE ]
[ QUOTE ]
The problem is actually flawed.

It seems to me that the A gets $89 answer assumes that the last person to offer a deal has the power. The power actually lies in B, because without B's agreement A gets 0.

The problem is flawed because we are to assume that neither player is spiteful. Coincidentally, this means they can't be rational, because the only way B can maximize any return is the credible threat of spitefully declining of the last offer.

B should reject any offer from A less than $89, on his turn B offers A $1 with the proviso that he will reject any offer less than $80. If A knows this, he has to offer B $89.

This why the 50/50 split moorobot spoke of occurs in real life. Rational beings have to act in spiteful manners in order to obtain their maximum interest. Because a rational being knows his rational opponent will employ spiteful means if necessary, A has to offer $50 and B should accept.

[/ QUOTE ]

lol please stop trying. in a one time game there is no way it will help B to be spiteful and not accept $1 on the final turn.

that's why B can't threaten to reject any offer if A doesn't take $1. in order for B's threat to be credible, it has to be beneficial for B to carry out the threat when A doesn't give in. at the final stage B has to realize "crap, my threat didn't work, now i should accept any offer because it's either that or nothing."

since A knows this, he doesn't care about B's threat.

remember it's a one time game.

[/ QUOTE ]

lol - you must be one of the forum's numerous posters with an IQ exceeding 184.

[ QUOTE ]
in a one time game there is no way it will help B to be spiteful and not accept $1 on the final turn.


[/ QUOTE ]

There are three separate bargaining events. In your model A offers B $11, which B should accept if he's not spiteful. B rejects that, so A immediately knows that B has already acted spitefully.

When B offers $1, A should believe that B will reject the final offer as B has already rejected the most he could possibly obtain under your rubric.

lol - you are so smart.

This is why the problem is flawed; it's impossible to be rational and not spiteful in many situations.

Rduke55
05-19-2006, 01:48 PM
[ QUOTE ]
[ QUOTE ]

See "The Evolution of Co-operation" by Robert Axelrod (Basic Books, 1980)

He shows how co-operation is +EV and how it can become established in a population -- in an "Iterated Prisoner's Dilemma" environment it benefits an individual to punish 'cheaters'. Surprisingly, a simple 'tit-for-tat' strategy (punish once, then go back to co-operation until the next instance of cheating) was the most effective in computer simulations.

Note that he shows how co-operation can develop without resorting to 'group selection' (a no-no in evolutionary theory).

To relate back to the OP: Since people have been selected for dealing with complex "iterated prisoner's dilemmas" we might expect them to make co-operative offers. To encourage them to treat it in a more 'selfish' way, make it clear that there won't be any further beneficial co-operation with the villain, perhaps by making it a problem related to settling an acrimonious divorce.

-- C.T. Jackson

[/ QUOTE ]


Excellent post.

Quetsion, re: "(a no-no in evolutionary theory)."


Have you read much about "new" group seleciton, aka multilevel selection.


I was talking to a prof recently, and he said that if three years ago someone said he was going to be teaching group slection, he would have shot himself. But, now he is.

Its pretty controversial, Dawkins doesnt like it, Wilson does.

[/ QUOTE ]

Did you ever read that Wilson article we talked about?

CallMeIshmael
05-19-2006, 02:15 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]

See "The Evolution of Co-operation" by Robert Axelrod (Basic Books, 1980)

He shows how co-operation is +EV and how it can become established in a population -- in an "Iterated Prisoner's Dilemma" environment it benefits an individual to punish 'cheaters'. Surprisingly, a simple 'tit-for-tat' strategy (punish once, then go back to co-operation until the next instance of cheating) was the most effective in computer simulations.

Note that he shows how co-operation can develop without resorting to 'group selection' (a no-no in evolutionary theory).

To relate back to the OP: Since people have been selected for dealing with complex "iterated prisoner's dilemmas" we might expect them to make co-operative offers. To encourage them to treat it in a more 'selfish' way, make it clear that there won't be any further beneficial co-operation with the villain, perhaps by making it a problem related to settling an acrimonious divorce.

-- C.T. Jackson

[/ QUOTE ]


Excellent post.

Question, re: "(a no-no in evolutionary theory)."


Have you read much about "new" group selection, aka multilevel selection?


I was talking to a prof recently, and he said that if someone had told him three years ago that he would be teaching group selection, he would have shot himself. But, now he is.

It's pretty controversial; Dawkins doesn't like it, Wilson does.

[/ QUOTE ]

Did you ever read that Wilson article we talked about?

[/ QUOTE ]


Yeah, it's actually quite applicable.


For those who don't know about it: new group selection is the idea that a trait that has negative fitness within a group can still be selected for as long as it has positive fitness between groups.

Again, it's still very controversial.

EDIT: link to article (http://www.pnas.org/cgi/reprint/102/38/13367?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=1&author1=wilson&author2=Holldobler&andorexacttitle=and&andorexacttitleabs=and&andorexactfulltext=and&searchid=1&FIRSTINDEX=0&sortspec=relevance&fdate=5/1/1996&resourcetype=HWCIT)

atrifix
05-19-2006, 02:19 PM
[ QUOTE ]
There are three separate bargaining events. In your model A offers B $11, which B should accept if he's not spiteful. B rejects that, so A immediately knows that B has already acted spitefully.

[/ QUOTE ]

B doesn't reject because rejection is irrational and B is a rational agent.

[ QUOTE ]
When B offers $1, A should believe that B will reject the final offer as B has already rejected the most he could possibly obtain under your rubric.

[/ QUOTE ]

Again, questions of psychology do not enter the framework. Because B is a rational agent and 398/400 strategies can be eliminated by iteration, there are only two possible answers -- 89, 11 and 90, 10.
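
If anyone wants to check the iteration, here is a quick backward-induction sketch (my own throwaway code, assuming whole-dollar offers; solve() and the accept_ties flag are just names I picked). The single tie-breaking assumption is what separates the two answers:

# Backward-induction sketch for this exact game (whole-dollar offers).
# 'accept_ties' says whether a player who is exactly indifferent takes
# the offer; that one assumption separates 90/10 from 89/11.
def solve(pots=(100, 90, 80), accept_ties=True):
    val_A, val_B = 0, 0                      # payoffs if the last offer is refused
    for i, pot in reversed(list(enumerate(pots))):
        proposer_is_A = (i % 2 == 0)         # A proposes in rounds 1 and 3
        if proposer_is_A:
            need = val_B if accept_ties else val_B + 1   # least B will take
            val_A, val_B = pot - need, need
        else:
            need = val_A if accept_ties else val_A + 1   # least A will take
            val_A, val_B = need, pot - need
        # (in this game the proposer always prefers settling now, so no
        #  extra check is needed)
    return val_A, val_B

print(solve(accept_ties=True))    # (90, 10)
print(solve(accept_ties=False))   # (89, 11)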

atrifix
05-19-2006, 02:26 PM
[ QUOTE ]
Exactly what is interesting about 89, 11?

[/ QUOTE ]

Well--I don't know that I'll convince you--but I find this solution somewhat interesting. I think it's interesting that two rational players playing this game will reach an outcome that is so unintuitive to most people, and I think it's interesting that each assumption--which seems so innocuous by itself--leads to predictions which are wildly incompatible with human behavior. I think it's interesting to try to decide which assumption to get rid of to more accurately model human behavior, precisely because each seems so innocuous.

Rduke55
05-19-2006, 02:32 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]

See "The Evolution of Co-operation" by Robert Axelrod (Basic Books, 1980)

He shows how co-operation is +EV and how it can become established in a population -- in an "Iterated Prisoner's Dilemma" environment it benefits an individual to punish 'cheaters'. Surprisingly, a simple 'tit-for-tat' strategy (punish once, then go back to co-operation until the next instance of cheating) was the most effective in computer simulations.

Note that he shows how co-operation can develop without resorting to 'group selection' (a no-no in evolutionary theory).

To relate back to the OP: Since people have been selected for dealing with complex "iterated prisoner's dilemmas" we might expect them to make co-operative offers. To encourage them to treat it in a more 'selfish' way, make it clear that there won't be any further beneficial co-operation with the villain, perhaps by making it a problem related to settling an acrimonious divorce.

-- C.T. Jackson

[/ QUOTE ]


Excellent post.

Question, re: "(a no-no in evolutionary theory)."


Have you read much about "new" group selection, aka multilevel selection?


I was talking to a prof recently, and he said that if someone had told him three years ago that he would be teaching group selection, he would have shot himself. But, now he is.

It's pretty controversial; Dawkins doesn't like it, Wilson does.

[/ QUOTE ]

Did you ever read that Wilson article we talked about?

[/ QUOTE ]


Yeah, it's actually quite applicable.


For those who don't know about it: new group selection is the idea that a trait that has negative fitness within a group can still be selected for as long as it has positive fitness between groups.

Again, it's still very controversial.

EDIT: link to article (http://www.pnas.org/cgi/reprint/102/38/13367?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=1&author1=wilson&author2=Holldobler&andorexacttitle=and&andorexacttitleabs=and&andorexactfulltext=and&searchid=1&FIRSTINDEX=0&sortspec=relevance&fdate=5/1/1996&resourcetype=HWCIT)

[/ QUOTE ]

Of course Wilson and Holldobler are huge.
The existing model seems so...right and I can't wrap my head around why they think their model has equal or better explanatory power. Actually I can't imagine their model getting off the ground or why they even came up with it.
And I'm often suspicious of these paradigm-shifting papers when they are in PNAS, since it's such an old boys' network, with that weird peer review they do.

The first couple of times I read this paper, while I was having trouble getting at why they were saying what they were saying, I thought of an E.O. Wilson quote:

"Sometimes a concept is baffling not because it is profound but because it is wrong."

madnak
05-19-2006, 02:37 PM
I don't consider the assumptions innocuous at all. As far as I'm concerned, they're obscene. No rational person (pun intended) would ever accept those premises in anything but a purely hypothetical context. They don't apply in the real world. Ever. Under any circumstances.

How you see them as innocuous is beyond me.

atrifix
05-19-2006, 02:40 PM
What I mean is not that people are actually playing games in their everyday lives, but rather that in controlled experiments the assumptions that go into a "rational person" (of, say, preference transitivity, perfect information, etc.) seem rather innocuous.

TomCollins
05-19-2006, 02:55 PM
[ QUOTE ]
I don't consider the assumptions innocuous at all. As far as I'm concerned, they're obscene. No rational person (pun intended) would ever accept those premises in anything but a purely hypothetical context. They don't apply in the real world. Ever. Under any circumstances.

How you see them as innocuous is beyond me.

[/ QUOTE ]

Let me guess, when they did word problems in school, you probably called them ridiculous as well?

If a train leaves Chicago at 3:00....?
These problems are ridiculous too, at least according to your definition.

However, at the very least, they are very good for logical thinking. Further, they present concepts that are useful in other applications. Although simplistic, it is hard to do anything complex until you get the basics down.

morphball
05-19-2006, 02:56 PM
[ QUOTE ]
[ QUOTE ]
There are three separate bargaining events. In your model A offers B $11, which B should accept if he's not spiteful. B rejects that, so A immediately knows that B has already acted spitefully.

[/ QUOTE ]

B doesn't reject because rejection is irrational and B is a rational agent.

[ QUOTE ]
When B offers $1, A should believe that B will reject the final offer as B has already rejected the most he could possibly obtain under your rubric.

[/ QUOTE ]

Again, questions of psychology do not enter the framework. Because B is a rational agent and 398/400 strategies can be eliminated by iteration, there's only two possible answers -- 89, 11 and 90, 10.

[/ QUOTE ]

I agree that the correct answer for the economics test was 89/11, with A getting the goods.

I was trying to point out that B can only thwart this outcome by being spiteful, and if B is rational, then B has to be spiteful. I guess I was trying to convey that the examples moorobot introduced were natural because rational opponents who assume their opponents to be rational account for the fact that their opponents will be spiteful in order to maximize their returns, and maximizing their returns is certainly a rational goal. Thus, I believe that rational opponents who use game theory and expect their opponents to behave rationally (which means spitefully here) should offer $50. That's why I said the problem was flawed.

The rational but not spiteful example is about as real as the "ideal frictionless plane" in physics.

I came off a little hard in putting this forward, but I get mad when people laugh at me because I'm slightly irrational... /images/graemlins/wink.gif

madnak
05-19-2006, 03:01 PM
Preference transitivity... Is that when a person (for example) prefers A over B, and B over C, but prefers C over A? I'm not familiar with the term.

Regardless, this is exactly what I'm talking about. The assumptions of game theory within a game-theoretical context are pretty innocuous. The attempt to transfer them from the strict hypothetical confines of pure mathematics into concrete reality is out of line, in my opinion. I think the fact the assumptions aren't bearing out tends to validate my opinion.

This kind of mistake came very close to destroying major population centers. Hell, Nash and von Neumann and Russell all believed that pre-emptive nuclear strikes were literally necessary to prevent nuclear holocaust during the Cold War. Guess what? They were 100% wrong. In fact, it was their own recommendations that came close to starting a nuclear holocaust.

The problems of misapplication of game theory go well beyond baffled psychologists. So I consider it a very big and extremely dangerous mistake. I loved Wargames as much as the next person, but the fact is these things never apply in the real world and it's inappropriate to pull them out of pure math as if they do.

madnak
05-19-2006, 03:07 PM
[ QUOTE ]
Let me guess, when they did word problems in school, you probably called them ridiculous as well?

If a train leaves Chicago at 3:00....?
These problems are ridiculous too, at least according to your definition.

[/ QUOTE ]

No they aren't. The situations aren't even remotely analogous.

[ QUOTE ]
However, at the very least, they are very good for logical thinking. Further, they present concepts that are useful in other applications. Although simplistic, it is hard to do anything complex until you get the basics down.

[/ QUOTE ]

I agree with all this regarding game theory. What's your point? That doesn't justify its direct application in the real world.

TomCollins
05-19-2006, 03:11 PM
Game theory is not about predicting human behavior. I don't know why you think it is. It's about finding a non-exploitable strategy. Game theory strategies are rarely the optimal strategy, as they assume perfectly rational opponents. However, knowing a game theory optimal strategy can easily identify mistakes opponents are making, and how to exploit them. Sklansky uses Game Theory in poker all the time to find and exploit mistakes opponents are making, and to prevent himself from getting into a situation where he can be exploited.

I would like to know where you are getting this information about Nash and nuclear war. If anything, it sounds like you have taken something out of context. Game theory would not try to analyze what actions someone would take, merely find a non-exploitable solution. If the game theory solution was to pre-emptively strike, this does not mean it was the only way to stop a holocaust. It may have been the only way to guarantee a lack of a holocaust. But our opponent must have been playing suboptimally, and we benefited from their mistake.

I agree that game theory can be misapplied. Just because game theory comes to one solution doesn't mean that it will produce the optimal results. People make mistakes and are not rational. There are psychological considerations and mathematical ones. That is why poker is a perfect example of it. People ask if it's a psychological game or a math game. It is both. You need to know both components to play optimally. In any real world "game theory"-ish example that is misapplied, it is probably both as well.
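
For one concrete, standard illustration of the poker use -- my own numbers and function name, not anything quoted from Theory of Poker -- the unexploitable frequencies for a single river bet fall straight out of making the opponent indifferent:

# Standard unexploitable frequencies for a single river bet (textbook
# result with illustrative numbers -- not a quote from the book).
def unexploitable(pot, bet):
    bluff_share = bet / (pot + 2 * bet)   # bluffs as a share of the betting
                                          # range: makes calling break even
    call_freq = pot / (pot + bet)         # calling frequency that makes
                                          # bluffing break even for the bettor
    return bluff_share, call_freq

bluffs, calls = unexploitable(pot=100, bet=100)
print(bluffs, calls)   # a third of the betting range as bluffs, call half the time
# Knowing these numbers is how you spot exploitable mistakes: someone
# bluffing far more or less often than this can be punished.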

atrifix
05-19-2006, 03:23 PM
[ QUOTE ]
Preference transitivity... Is that when a person (for example) prefers A over B, and B over C, but prefers C over A? I'm not familiar with the term.

[/ QUOTE ]
Well, that would be non-transitivity. Preference transitivity is just that if someone prefers A to B, and B to C, then he also prefers A to C.

[ QUOTE ]
Regardless, this is exactly what I'm talking about. The assumptions of game theory within a game-theoretical context are pretty innocuous. The attempt to transfer them from the strict hypothetical confines of pure mathematics into concrete reality is out of line, in my opinion. I think the fact the assumptions aren't bearing out tends to validate my opinion.

[/ QUOTE ]

I think that there is an even more fundamental problem than that. The assumptions don't even work in a pure mathematical context. Even when you create a purely contrived situation, the predictions don't line up with what is observed. That's what I think is interesting. How can anyone even start to apply these things to economies, labor markets, etc., if they don't even work in controlled experiments?

atrifix
05-19-2006, 03:28 PM
[ QUOTE ]
Game theory is not about predicting human behavior. I don't know why you think it is. It's about finding a non-exploitable strategy. Game theory strategies are rarely the optimal strategy, as they assume perfectly rational opponents. However, knowing a game theory optimal strategy can easily identify mistakes opponents are making, and how to exploit them. Sklansky uses Game Theory in poker all the time to find and exploit mistakes opponents are making, and to prevent himself from getting into a situation where he can be exploited.

[/ QUOTE ]

I don't agree with this. The game theory of Nash and von Neumann and Morgenstern was based on finding a non-exploitable solution, assuming optimal play. But simulations, experiments, etc. show that people often do better for themselves when they play the "irrational" strategies. To me that indicates that there is something wrong with game theory rather than something wrong with the way people are playing.

madnak
05-19-2006, 04:00 PM
[ QUOTE ]
Game theory is not about predicting human behavior. I don't know why you think it is. It's about finding a non-exploitable strategy. Game theory strategies are rarely the optimal strategy, as they assume perfectly rational opponents. However, knowing a game theory optimal strategy can easily identify mistakes opponents are making, and how to exploit them. Sklansky uses Game Theory in poker all the time to find and exploit mistakes opponents are making, and to prevent himself from getting into a situation where he can be exploited.

[/ QUOTE ]

But he doesn't play poker based on game theory. He plays poker based on psychology. Deviations from game theoretically correct strategy in systems that correspond to models of games can be exploited. And perfect game theory strategy can only be countered with equally perfect game theory strategy within such a system. That's not the same as saying GT applies to the real world.

[ QUOTE ]
I would like to know where you are getting this information about Nash and nuclear war. If anything, it sounds like you have taken something out of context. Game theory would not try to analyze what actions someone would take, merely find a non-exploitable solution. If the game theory solution was to pre-emptively strike, this does not mean it was the only way to stop a holocaust. It may have been the only way to guarantee a lack of a holocaust. But our opponent must have been playing suboptimally, and we benefited from their mistake.

[/ QUOTE ]

For one thing, game theory doesn't apply to war in the first place. You can't find an optimal strategy in a situation like war. War has no rules. There is no set of assumptions in any theory of war that can be interpreted as a game theoretical context.

In the second place, this is exactly what I'm saying. Game theory should not try to analyze what actions someone would take. But it is often used to do so in economics, psychology, and yes, war.

John von Neumann believed game theory applied to just about everything. He pushed very hard for nuclear war and used his ideas to justify it. I had heard Nash had similar views, but I'm not finding anything so maybe he didn't. And if he did, maybe it was just because he was a [censored] psycho anyhow. Von Neumann certainly managed to convince Bertrand Russell, a prominent mathematician and philosopher.

[ QUOTE ]
I agree that game theory can be misapplied. Just because game theory comes to one solution doesn't mean that it will produce the optimal results. People make mistakes and are not rational. There are psychological considerations and mathematical ones. That is why poker is a perfect example of it. People ask if it's a psychological game or a math game. It is both. You need to know both components to play optimally. In any real world "game theory"-ish example that is misapplied, it is probably both as well.

[/ QUOTE ]

War isn't both. War isn't like poker. It doesn't have a set of rules, a discrete level of information, or even a definite concrete goal. Clausewitz described this clearly and concisely. There are some elements of war that can be considered in a mathematical context, but in the general sense war is not a math game. It's a psychological game toward which math can sometimes be applied. And it's based on dramatically limited information and unstable variables of high impact.

madnak
05-19-2006, 04:02 PM
[ QUOTE ]
Well, that would be non-transitivity. Preference transitivity is just that if someone prefers A to B, and B to C, then he also prefers A to C.

[/ QUOTE ]

Ah, yeah, that's what I meant. Thanks for the clarification.

[ QUOTE ]
I think that there is an even more fundamental problem than that. The assumptions don't even work in a pure mathematical context. Even when you create a purely contrived situation, the predictions don't line up with what is observed. That's what I think is interesting. How can anyone even start to apply these things to economies, labor markets, etc., if they don't even work in controlled experiments?

[/ QUOTE ]

You may be right here. I agree with your conclusion, in any case.

TomCollins
05-19-2006, 04:23 PM
[ QUOTE ]

But he doesn't play poker based on game theory. He plays poker based on psychology. Deviations from game theoretically correct strategy in systems that correspond to models of games can be exploited. And perfect game theory strategy can only be countered with equally perfect game theory strategy within such a system. That's not the same as saying GT applies to the real world.

[/ QUOTE ]

Have you read Theory of Poker? There are tons of applications in there. I never claimed Game Theory was superior to psychology or vice versa. Just that both are useful.

The same thing applies in war. Even though there are no rules, there are a series of strategies that an opponent can use against a series of strategies that you use. They produce outcomes. War is a lot more complicated than poker, which is much more complicated than the simple game used as an example. I never disagreed that people have made oversimplifications or confused the ideas of optimal strategy vs. non-exploitable strategies. I'm sure that's the case. But to discount game theory as not useful at all is just as absurd. The more you know about your opponent's strategy or line of thinking, the more you deviate from game theory to take advantage of these weaknesses. No idea how a simple math problem turned into a philosophical debate about the value of certain theories.

Sephus
05-19-2006, 04:30 PM
[ QUOTE ]
[ QUOTE ]
The problem is actually flawed.

It seems to me that the A gets $89 answer assumes that the last person to offer a deal has the power. The power actually lies in B, because without B's agreement A gets 0.

The problem is flawed because we are to assume that neither player is spiteful. Coincidentally, this means they can't be rational, because the only way B can maximize any return is the credible threat of spitefully declining the last offer.

B should reject any offer from A less than $89, on his turn B offers A $1 with the proviso that he will reject any offer less than $80. If A knows this, he has to offer B $89.

This is why the 50/50 split moorobot spoke of occurs in real life. Rational beings have to act in a spiteful manner in order to obtain their maximum interest. Because a rational being knows his rational opponent will employ spiteful means if necessary, A has to offer $50 and B should accept.

[/ QUOTE ]

lol please stop trying. in a one time game there is no way it will help B to be spiteful and not accept $1 on the final turn.

that's why B can't threaten to reject any offer if A doesn't take $1. in order for B's threat to be credible, it has to be beneficial for B to carry out the threat when A doesn't give in. at the final stage B has to realize "crap, my threat didn't work, now i should accept any offer because it's either that or nothing."

since A knows this, he doesn't care about B's threat.

remember it's a one time game.

[/ QUOTE ]

[ QUOTE ]
lol - you must be one of the forum's numerous posters with an IQ exceeding 184.

[ QUOTE ]
in a one time game there is no way it will help B to be spiteful and not accept $1 on the final turn.


[/ QUOTE ]

There are three separate bargaining events. In your model A offers B $11, which B should accept if he's not spiteful. B rejects that, so A immediately knows that B has already acted spitefully.

When B offers $1, A should believe that B will reject the final offer as B has already rejected the most he could possibly obtain under your rubric.

[/ QUOTE ]

this is pathetic. in the context of this conversation when someone is acting "spitefully" they're reducing their OWN payoff in order to reduce the other person's payoff.

if A believes that B is rational, he assumes that B has rejected the initial offer because B believes that he can get a higher payoff for himself by doing so. he does not start assuming that B cares about the payoff of A, because caring about the other person's payoff is not part of the problem. it's irrelevant whether people care about the other person's payoff in real life, this is a theoretical question with its own rules. that doesn't make the problem flawed, it makes the problem simpler.

there is still no reason to believe B will reject the offer of $1 on the last turn, because NO RATIONAL PLAYER CHOOSES A PAYOFF OF ZERO OVER ONE in a one-time game.


[ QUOTE ]
lol-you are so smart.
[/ QUOTE ]

yeah, no smart people ever type "lol"

[ QUOTE ]
This is why the problem is flawed; it's impossible to be rational and not spiteful in many situations.
[/ QUOTE ]

yeah, convincing the opponent that you'll be spiteful leads to a higher payoff in real life. if both players are completely rational, it's not possible. it doesn't make the problem flawed, it's a result of the simplicity of the question.

madnak
05-19-2006, 04:34 PM
I don't really remember either, but I think moorobot started it. /images/graemlins/grin.gif

I disagree for the reasons atrifix described.

Sephus
05-19-2006, 05:09 PM
to clarify, you're wrong that a player who acts spitefully has a higher expected payoff. acting spitefully does NOT help you against a rational opponent.

the reason B would tend to win more in real life than theory would predict is that people tend to care about the other person's payoff. because A knows that B might forsake some of his own payoff to lower A's payoff, he doesn't play the same way he would against a player who ONLY cares about his own payoff.

in real life, if you could somehow get people to maximize their individual payoffs from this game and totally ignore the UTILITY they might get from being spiteful, you would see the theory-predicted results if you had intelligent people and they played the game (anonymously) over and over and learned how to maximize their payoffs.

it does NOT help you to actually follow through on a threat and reject A's final offer, because you're lowering your payoff. because of this, (again) if both players are rational B can NOT make any kind of threat to reject on the final turn. the reason the threat might work is that A might believe B might get extra utility by lowering his own payoff.

am i making this clear? you can't maximize your payoff by being spiteful, it's the THREAT that you might be spiteful that increases your payoff. unfortunately for B, the threat is not believable (if A knows that B really IS trying to maximize his own payoff) which is the only assumption you can make about a rational player.

i know i repeated myself a lot but i'm taking this as a personal challenge to explain this to you in a way that makes you understand.

CallMeIshmael
05-19-2006, 05:30 PM
I just want to once again go over the fact that game theory is applicable to situations where natural selection is at play.

Using game theory to explain the decisions people would make in the situation described in the OP does have problems, since this isn't a problem that selection shaped us to play rationally; situations like this weren't present in the EEA.

RDuke and I have exchanged some PMs regarding this thread. He gave me permission to quote this part "Two things you said that I think people glossed over in that thread are you mentioning game theory and the evolution of animal behavior (but typically economics folk are biased about people) and that we're talking unconscious stuff shaped by evolution. Logic, etc. has nothing to do with it."



Game theory explains why there is a 50:50 sex ratio in humans.

This works with frequency-dependent selection. If we produced more females than males, males would be at an advantage. Therefore, it would pay to produce more males, and a mutation that caused more males would be favoured. But, once the population gets to be male-dominated, females would be favoured. This goes back and forth until you are at a nash equilibrium (50:50), since all responses are a best response.

Game theory also explains why more fit animals tend to have more males, and less fit animals tend to have more females (this is true up to and including humans), since the fitness of a low-quality female is higher than that of a low-quality male.


http://biology.queensu.ca/~bio210/pdf/21...almon%20mating' (http://biology.queensu.ca/~bio210/pdf/210lecture7.pdf#search='Hooknose%20and%20jack%20salmon%20mating')

Is an article I found about mating strategies, and how game theory explains them.

In some animals, there are completely different mating strategies (in the example, there are sneakers and non-sneakers).

The frequency of sneakers vs non-sneakers spirals toward the nash equilibrium point predicted by game theory.
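
A toy way to see the frequency-dependent logic (my own simplification, not the model from the linked paper): if the population produces a fraction r of sons, give a son a marginal value proportional to 1/r and a daughter 1/(1-r), and ask which way a mutant should deviate.

# Toy check of the frequency-dependent argument: the rarer sex is worth
# more, so the only ratio nobody can profitably deviate from is 50:50.
def fitness(p_sons, r_population):
    return p_sons / r_population + (1 - p_sons) / (1 - r_population)

for r in (0.3, 0.4, 0.5, 0.6, 0.7):
    sons_only, daughters_only = fitness(1.0, r), fitness(0.0, r)
    if sons_only > daughters_only:
        best = "more sons"
    elif daughters_only > sons_only:
        best = "more daughters"
    else:
        best = "indifferent"
    print(f"population at {r:.0%} sons -> a mutant does best with {best}")
# Only at 50% is every mix a best response -- the nash equilibrium the
# post describes.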

CallMeIshmael
05-19-2006, 05:44 PM
[ QUOTE ]
How can anyone even start to apply these things to economies, labor markets, etc., if they don't even work in controlled experiments?

[/ QUOTE ]

I'll preface this by saying I do not know much about economics, and how markets work. So, I don't know for sure if game theory applies. Though I will say I know enough about evolution to say that game theory certainly does apply to animal behavior, and anyone who says I'm wrong is uninformed.

With that being said, I will note the important difference between markets and people, and I will use the game in context as an example.


- If Player A offers player B $1, and player B rejects (which we define as irrational), he loses $1. Not really a big deal. It doesn't affect his ability to continue living.

- If there are two corporations, and A offers B $1, and B rejects, B can now be "invaded" by a competitor who is willing to accept the $1 offer.



Like, let's say that there are a bunch of companies of type A, and a bunch of type B, and they are both doing what you call rational, and playing the 50:50 split at the beginning.

If some company comes in, and says "Hey guys, I'm type B, and I'm willing to take only 45". What are type A companies going to do??? Well, I would bet they would ALL go to that one company B. So, that company B now has 100% of the market, and they are making a ton.

But, what if a new company B comes in and says they'll take 30. Well, all of the A's will now go there.

Do you see where this is going?
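
Very roughly, in code (made-up numbers, just to show the ratchet):

# Sketch of the invasion-by-undercutting story above. Each new type-B
# entrant accepts a bit less than the going share; every type-A firm
# switches to the cheapest B, so the accepted share ratchets down toward
# whatever floor a B firm can still tolerate.
def undercut(start=50.0, step=5.0, floor=1.0):
    share, history = start, [start]
    while share - step >= floor:
        share -= step              # a new entrant undercuts...
        history.append(share)      # ...and takes the whole market
    return history

print(undercut())   # [50.0, 45.0, 40.0, ..., 10.0, 5.0]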

Sephus
05-19-2006, 06:30 PM
[ QUOTE ]
[ QUOTE ]
Game theory is not about predicting human behavior. I don't know why you think it is. It's about finding a non-exploitable strategy. Game theory strategies are rarely the optimal strategy, as they assume perfectly rational opponents. However, knowing a game theory optimal strategy can easily identify mistakes opponents are making, and how to exploit them. Sklansky uses Game Theory in poker all the time to find and exploit mistakes opponents are making, and to prevent himself from getting into a situation where he can be exploited.

[/ QUOTE ]

I don't agree with this. The game theory of Nash and von Neumann and Morgenstern was based on finding a non-exploitable solution, assuming optimal play. But simulations, experiments, etc. show that people often do better for themselves when they play the "irrational" strategies. To me that indicates that there is something wrong with game theory rather than something wrong with the way people are playing.

[/ QUOTE ]

if people really tried to maximize their payoff from the game, if they knew that the other person was trying to maximize their own payoff, and they knew the other person knew that they themselves were maximizing their own payoff (and so on), and you really could eliminate the possibility of creating a reputation (if it's supposed to be a one-time game), you would see theory-predicted results in games that were not too complicated to "solve."

the main problem is that people don't actually try to maximize their payoff, and people know this, etc...

morphball
05-19-2006, 06:44 PM
[ QUOTE ]
you're wrong that a player who acts spitefully has a higher expected payoff. acting spitefully does NOT help you against a rational opponent.

[/ QUOTE ]

As A gets to make the final offer, B's non-spiteful outcome is $11. If B behaves spitefully, A has to offer $50. B's expected outcome increases by $39 by being willing to screw A. If you define being rational as increasing your expected return, then B behaves rationally by screwing A.

[ QUOTE ]
the reason B would tend to win more in real life than theory would predict is that people tend to care about the other person's payoff.

[/ QUOTE ]

In the 50/50 example, A cares about his own payoff. A knows that if B does not get half, A will get zero. In order for A to maximize his return, he has to offer $50. Altruism has nothing to do with it. Your argument about people caring is akin to arguing altruism is the reason MAD is an effective deterrent. I.e., you are saying the Russians refrained from nuking us not because they knew we would destroy them in return, but because they didn't want to see us hurt.

[ QUOTE ]
in real life, if you could somehow get people to maximize their individual payoffs from this game and totally ignore the UTILITY they might get from being spiteful, you would see the theory-predicted results if you had intelligent people and they played the game (anonymously) over and over and learned how to maximize their payoffs.

[/ QUOTE ]

In real life, if you are A you had better offer me $50 or you're getting $1 or $0. You can play it as rationally as you want; B can only win by being spiteful. If B gains utility by being spiteful, being spiteful is the rational course of action.

[ QUOTE ]
it does NOT help you to actually follow through on a threat and reject A's final offer

[/ QUOTE ]

MAD works precisely because an opponent will spitefully send a counterstrike that cannot possibly be to his benefit.

[ QUOTE ]
this is pathetic. in the context of this conversation when someone is acting "spitefully" they're reducing thier OWN payoff in order to reduce the other person's payoff.

[/ QUOTE ]

B increases his payoff by $39, he has not reduced it.

[ QUOTE ]
if A believes that B is rational, he assumes that B has rejected the initial offer because B believes that he can get a higher payoff for himself by doing so.

[/ QUOTE ]

So far so good...

[ QUOTE ]
he does not start assuming that B cares about the payoff of A, because caring about the other person's payoff is not part of the problem.

[/ QUOTE ]

Now you're losing me. B only cares that he can only get $11 by not being spiteful, but makes $50 by being spiteful. Appears to me B is only looking out for himself and does not care about A.

[ QUOTE ]
this is a theoretical question with its own rules. that doesn't make the problem flawed, it makes the problem simpler.

[/ QUOTE ]

The problem is no doubt simpler, but it's still flawed in that B's spitefulness is the rational course of action. In other words, the problem should read A is rational and nonspiteful while B is irrational and nonspiteful. B cannot be rational and nonspiteful at the same time, because being nonspiteful reduces his expectation by $39, which a rational person seeking to maximize his utility would not do.

Sephus
05-19-2006, 06:56 PM
let me try this a different way.

say i'm A and you're B.

you reject my initial offer because you're playing your "spiteful" strategy.

then you offer me $1 (or any amount less than $79) and say "if you don't take this, i'm going to reject your next offer of less than X"

i reject your offer.

then i offer you $1. you can either take it or leave it. now you MUST accept. you don't have a choice. you can't choose 0 over 1, because you are trying to maximize your payoff.

since i know this, i will never accept anything less than $79 dollars as an offer from you on the previous turn. why would i ever accept $50 if i know for a FACT that you MUST accept $1 and give me $79 when i make the final offer.

and because you know that i will inevitably reject all offers of less than $79, you can not increase your payoff by rejecting the initial offer of $11.

therefore, being spiteful does not improve your payoff.

B can't convince A that he's not maximizing his payoff, because A knows B is rational and choosing greater payoffs over lesser ones is the very definition of a rational player.

and you can't say "B has already shown that he's not maximizing his payoff when he rejects the first offer" because now you've already violated the rules of the game that say both players are maximizing their payoff. A makes every decision under the impression that B will respond in a way that maximizes his own payoff. that's what the mutual rationality is for.

p.s. the reason MAD works is that you commit to it beforehand. if there were some way for you to force yourself to hold to a rule where you rejected certain offers on the last turn, then your threat would work. but in the game there's no way to make that commitment. you will be forced to say "crap that didn't work" and take the dollar.
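
to put a number on the commitment point (a quick sketch with whole dollars and a responder who needs strictly more than his continuation value to say yes -- the 'commitment' parameter is my own addition, the threshold B has somehow bound himself to in advance):

# sketch: whole dollars; a responder needs strictly more than his
# continuation value to accept; B may have bound himself in advance to
# refuse any final-round offer below 'commitment'.
def outcome(commitment=0):
    # round 3, pot 80, A proposes: B takes nothing less than max(1, commitment)
    need3 = max(1, commitment)
    a3, b3 = (80 - need3, need3) if need3 <= 80 else (0, 0)
    # round 2, pot 90, B proposes: A needs strictly more than his round-3 value
    need2 = a3 + 1
    a2, b2 = (need2, 90 - need2) if 90 - need2 >= b3 else (a3, b3)
    # round 1, pot 100, A proposes: B needs strictly more than his round-2 value
    need1 = b2 + 1
    return (100 - need1, need1) if 100 - need1 >= a2 else (a2, b2)

print(outcome(0))     # (89, 11): no commitment, the thread's answer
print(outcome(40))    # (50, 50): a binding "i refuse anything under 40 at the end"
print(outcome(79))    # (11, 89): the commitment taken to its extreme
# the catch is the word "binding": if B can back down on the last turn, A
# simply ignores the threshold and we are back at (89, 11).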

CallMeIshmael
05-19-2006, 08:42 PM
RDuke sent me 2 PMs before he had to go out.

"I think Maynard-Smith's 1973 nature paper could clear things up for those guys. I have to run now but, if you don't, I'll post on it tomorrow. "

I agree this is a paper worth reading. Maynard Smith is like THE guy in evolutionary game theory.

http://www.nature.com/nature/focus/maynardsmith/pdf/1973.pdf



"I'm not sure gender in most species would fit into game theory b/c no decision is made. "


Ill respond to this here.

I think it comes down to how you define "decision." Now, no human ever says "there are more boys than girls in this population, therefore I should have a girl."

BUT... there is evidence that humans are capable of skewing their reproductive sex ratio. Like I said above, low-quality girls are better than low-quality boys in terms of fitness. This is because males tend to have more variance in terms of the number of children they produce. More males than females produce 5 kids, but more males than females also fail to reproduce at all. As such, if you fear your child will be of low mate quality, then have a female if you can. This pattern has been witnessed in humans, since there is a positive correlation between the probability of someone having a male child and the amount of money they have.

Now... how do humans skew their reproductive sex ratio? Like, what mechanism makes poor people have more females than males? I have no clue. Frankly, I don't care either. Proximate causes bore me.

But the fact of the matter is, humans can in fact skew this ratio. As such, it is a "decision" in some sense.

Copernicus
05-19-2006, 09:10 PM
[ QUOTE ]
I just want to once again go over the fact that game theory is applicable to situations where natural selection is at play.

Using game theory to explain the decisions people would make in the situation described in the OP does have problems, since this isn't a problem that selection shaped us to play rationally; situations like this weren't present in the EEA.

RDuke and I have exchanged some PMs regarding this thread. He gave me permission to quote this part "Two things you said that I think people glossed over in that thread are you mentioning game theory and the evolution of animal behavior (but typically economics folk are biased about people) and that we're talking unconscious stuff shaped by evolution. Logic, etc. has nothing to do with it."



Game theory explains why there is a 50:50 sex ratio in humans.

This works with frequency-dependent selection. If we produced more females than males, males would be at an advantage. Therefore, it would pay to produce more males, and a mutation that caused more males would be favoured. But, once the population gets to be male-dominated, females would be favoured. This goes back and forth until you are at a nash equilibrium (50:50), since all responses are a best response.

Game theory also explains why more fit animals tend to have more males, and less fit animals tend to have more females (this is true up to and including humans), since the fitness of a low-quality female is higher than that of a low-quality male.


http://biology.queensu.ca/~bio210/pdf/21...almon%20mating' (http://biology.queensu.ca/~bio210/pdf/210lecture7.pdf#search='Hooknose%20and%20jack%20salmon%20mating')

Is an article I found about mating strategies, and how game theory explains them.

In some animals, there are completely different mating strategies (in the example, there are sneakers and non-sneakers).

The frequency of sneakers vs non-sneakers spirals toward the nash equilibrium point predicted by game theory.

[/ QUOTE ]

Perhaps changing the numbers doesn't change your point, in fact may reinforce it, but the ratio of the genders is generally not 50:50.

At birth there is a statistically significant bias toward males. This makes sense because young males would have been more susceptible to early death than females, so to have a sufficient population of males during the reproductive years they would need to start out higher.

Historically the ratio of males to females declined to the point where there were more females during the reproductive years. This also makes sense from an evolutionary perspective because a polyamorous male could reproduce and support more than one female and their offspring, while females are reproductively unavailable to men during pregnancy.

In the last 50 years the ratio of males has trended significantly higher even during the reproductive years, because they no longer face the threats to their lives to the extent that they used to...helped by not participating in major wars to the extent they were before 1950.

Female longevity advantages after the reproductive years have also narrowed, primarily because their lifestyle choices have become more similar to males.

NLSoldier
05-19-2006, 09:57 PM
[ QUOTE ]

let me try this a different way.

say i'm A and you're B.

you reject my initial offer because you're playing your "spiteful" strategy.

then you offer me $1 (or any amount less than $79) and say "if you don't take this, i'm going to reject your next offer of less than X"

i reject your offer.

then i offer you $1. you can either take it or leave it. now you MUST accept. you don't have a choice. you can't choose 0 over 1, because you are trying to maximize your payoff.

since i know this, i will never accept anything less than $79 dollars as an offer from you on the previous turn. why would i ever accept $50 if i know for a FACT that you MUST accept $1 and give me $79 when i make the final offer.

and because you know that i will inevitably reject all offers of less than $79, you can not increase your payoff by rejecting the initial offer of $11.

therefore, being spiteful does not improve your payoff.

[/ QUOTE ]

Yeah this is clearly right. I think that guy is just arguing for the sake of arguing.

I can't believe how big this thread has gotten /images/graemlins/smile.gif

CallMeIshmael
05-19-2006, 11:30 PM
[ QUOTE ]
Perhaps changing the numbers doesn't change your point, in fact may reinforce it, but the ratio of the genders is generally not 50:50.

At birth there is a statistically significant bias toward males. This makes sense because young males would have been more susceptible to early death than females, so to have a sufficient population of males during the reproductive years they would need to start out higher.

Historically the ratio of males to females declined to the point where there were more females during the reproductive years. This also makes sense from an evolutionary perspective because a polyamorous male could reproduce and support more than one female and their offspring, while females are reproductively unavailable to men during pregnancy.

In the last 50 years the ratio of males has trended significantly higher even during the reproductive years, because they no longer face the threats to their lives to the extent that they used to...helped by not participating in major wars to the extent they were before 1950.

Female longevity advantages after the reproductive years have also narrowed, primarily because their lifestyle choices have become more similar to males.

[/ QUOTE ]


I actually did not know that the primary reproductive ratio differed from 50:50, but the fact that it does only helps my case!

The fact that it deviates from 50:50 means that there is slightly more reproductive value in having a male than a female at a population of 50:50.


Beyond that, I just read that the old ratio of roughly 106:100 is going down for the reasons that you mention! That is, the nash equilibrium point is shifting slightly.

Leaky Eye
05-20-2006, 09:00 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
It depends what you mean by spiteful outcomes. What will A prefer if he can get $0 and B will get $80, or they both will get $0. I am assuming he will be generous and let B have the $80.

[/ QUOTE ]

I think he should reject. If he rejects 0, the other guy has to offer him 1, and therefore he does better by rejecting 0.

[/ QUOTE ]

Why would he reject? He gets $0 no matter what. OP said he is not spiteful.

[/ QUOTE ]


Because if he tells the opponent that he is going to reject 0, he knows the opponent will be forced to offer him 1. It's kind of weird: once he makes the offer of 0, rejecting and accepting are the same, and always rejecting is spiteful. But, BEFORE he offers 0, the statement "I'm going to reject all offers of 0" isn't spiteful, it's rational since it increases his payoff.


There are situations in game theory where, if you could somehow force yourself to make an irrational decision in later rounds, you can prevent the opponent from making a decision that hurts your payoff. This is a somewhat similar situation.

[/ QUOTE ]

This is a one time game. Where it would be irrational to keep such a promise once the offer was made. It doesn't apply here. Your company example is not a one time game; the company's declaration applies to and sets an example for future competitors as well.

Copernicus
05-20-2006, 10:47 AM
[ QUOTE ]
[ QUOTE ]
Perhaps changing the numbers doesn't change your point, in fact may reinforce it, but the ratio of the genders is generally not 50:50.

At birth there is a statistically significant bias toward males. This makes sense because young males would have been more susceptible to early death than females, so to have a sufficient population of males during the reproductive years they would need to start out higher.

Historically the ratio of males to females declined to the point where there were more females during the reproductive years. This also makes sense from an evolutionary perspective because a polyamorous male could reproduce and support more than one female and their offspring, while females are reproductively unavailable to men during pregnancy.

In the last 50 years the ratio of males has trended significantly higher even during the reproductive years, because they no longer face the threats to their lives to the extent that they used to...helped by not participating in major wars to the extent they were before 1950.

Female longevity advantages after the reproductive years have also narrowed, primarily because their lifestyle choices have become more similar to males.

[/ QUOTE ]


I actually did not know that the primary reproductive ratio differed from 50:50, but the fact that it does only helps my case!

The fact that it deviates from 50:50 means that there is slightly more reproductive value in having a male than a female at a population of 50:50.


Beyond that, I just read that the old ratio of roughly 106:100 is going down for the reasons that you mention! That is, the nash equilibrium point is shifting slightly.

[/ QUOTE ]

The shift in female longevity is almost totally beyond the reproductive years, so it is more likely to be environmental than genetic, so I wouldn't call that a shifting of the NE point.

The shift (I should have pointed out these are US statistics) during the reproductive years, with men now outnumbering women, could have evolutionary consequences though, so a shift toward fewer male births would seem to be in order if the original birth ratios were truly an advantageous NE point.

CallMeIshmael
05-20-2006, 02:25 PM
[ QUOTE ]
This is a one time game. Where it would be irrational to keep such a promise once the offer was made.

[/ QUOTE ]

No it wouldn't.

Rejecting an offer of 0 isn't irrational. You are indifferent to 0.

Now, you might be able to argue that rejecting 0 100% of the time is spiteful (I would disagree), but it is certainly not irrational.


Even if we opt to reject 0 50% of the time (a strategy that I would have a very hard time seeing as spiteful), the villain must offer us $1.

Leaky Eye
05-21-2006, 05:26 PM
I wasn't referring to what spiteful means, or how it affects the answer here. I was saying that rejecting an otherwise favorable offer in a one time game is not a rational strategy, regardless of any statements made previous to the offer.

morphball
05-22-2006, 11:20 AM
[ QUOTE ]
then you offer me $1 (or any amount less than $79) and say "if you don't take this, i'm going to reject your next offer of less than X"

i reject your offer.

then i offer you $1. you can either take it or leave it. now you MUST accept. you don't have a choice. you can't choose 0 over 1, because you are trying to maximize your payoff.


[/ QUOTE ]

Well why doesn't A offer $0 on the last turn then? If B is not spiteful, then A can say, "look you get zero if you accept and zero if you decline. Declining is simply spiteful."

[ QUOTE ]
B can't convince A that he's not maximizing his payoff, because A knows B is rational and choosing greater payoffs over lesser ones is the very definition of a rational player.


[/ QUOTE ]

First, the problem says perfect information, not that the players know each other to be rational, but the econ professor probably meant that with his crapola question, so I won't push this point.

This response is flawed because you overlook the red herring. What do you mean "B can't convince A that he's not maximizing his payoff"?

B already has. A offered the maximum payout he could get under your scenario and B rejected it.

[ QUOTE ]
the reason MAD works is that you commit to it beforehand.

[/ QUOTE ]

Not exactly. MAD works because the other actor believes his opponent will launch a spiteful retaliation because he has previously threatened to do so. In our game, it would equate to A believing B's threat to reject offers below a certain point. Incidentally, it seems to me that MAD is a one time game as well.

JMAnon
05-22-2006, 01:07 PM
[ QUOTE ]
I was saying that rejecting an otherwise favorable offer in a one time game is not a rational strategy, regardless of any statements made previous to the offer.

[/ QUOTE ]

And he responded that an offer of $0 is not favorable.

Sephus
05-22-2006, 03:06 PM
[ QUOTE ]
then you offer me $1 (or any amount less than $79) and say "if you don't take this, i'm going to reject your next offer of less than X"

i reject your offer.

then i offer you $1. you can either take it or leave it. now you MUST accept. you don't have a choice. you can't choose 0 over 1, because you are trying to maximize your payoff.


[/ QUOTE ]

[ QUOTE ]
Well why doesn't A offer $0 on the last turn then? If B is not spiteful, then A can say, "look you get zero if you accept and zero if you decline. Declining is simply spiteful."

[/ QUOTE ]

i keep trying to tell you, it doesn't matter at ALL what either player SAYS.

the reason A doesn't offer B zero on the last turn is that the problem didn't specify what B does when he is indifferent, so we don't know whether B will accept 0 or not as opposed to 1.

by the way, if you were paying close attention you saw that i made a point of defining spiteful as "reducing your own payoff in order to reduce the other player's payoff." rejecting an offer of zero is not reducing your payoff, because you get zero either way.

[ QUOTE ]
B can't convince A that he's not maximizing his payoff, because A knows B is rational and choosing greater payoffs over lesser ones is the very definition of a rational player.


[/ QUOTE ]

[ QUOTE ]
First, the problem says perfect information, not that the players know each other to be rational, but the econ professor probably meant that with his crapola question, so I won't push this point.

[/ QUOTE ]

perfect information about the game includes knowing your opponent is rational.

[ QUOTE ]
This response is flawed because you overlook the red herring. What do you mean "B can't convince A that he's not maximizing his payoff"?

B already has. A offered the maximum payout he could get under your scenario and B rejected it.

[/ QUOTE ]

i'm not going in circles on this any longer. B will never try to convince A that B isn't maximizing his payoff when the fact that each player is maximizing his payoff is a defined part of the universe of the game, and each player according to the construction of the game knows that the other player IS going to choose higher payoffs over lower ones.

if there is perfect information, then A knows everything that B knows. therefore, B will never try to "fool" A into believing something that's false.

[ QUOTE ]
the reason MAD works is that you commit to it beforehand.

[/ QUOTE ]

[ QUOTE ]
Not exactly. MAD works because the other actor believes his opponent will launch a spiteful retaliation because he has previously threatened to do so. In our game, it would equate to A believing B's threat to reject offers below a certain point. Incidentally, it seems to me that MAD is a one time game as well.

[/ QUOTE ]

yeah my response was intended for a "doomsday device" which is more literally "assured."

the thing that you seem to be refusing to get is that human beings get positive "payoffs" from revenge. it's not correct to make the analogy that nuclear retaliation comes with a zero payoff even though both countries are destroyed, because whoever is "pushing the button" has some INCENTIVE to do so.

in the game the only incentive is PAYOFF.

i'm done with you, if you still don't get it at this point you can go ahead and sound like an idiot when you talk about game theory with people who understand it to any degree.

CallMeIshmael
05-22-2006, 03:36 PM
[ QUOTE ]
[ QUOTE ]
I was saying that rejecting an otherwise favorable offer in a one time game is not a rational strategy, regardless of any statements made previous to the offer.

[/ QUOTE ]

And he responded that an offer of $0 is not favorable.

[/ QUOTE ]


Wait, I don't understand what Leaky was saying.

Was he saying that rejecting 0 is irrational, because 0 is favourable?

speakerfreak
05-22-2006, 10:28 PM
Wow, 10 pages for a question about basic induction!

Initially A will offer the second-minimum amount above $0 (say $0.02); if B refuses then it is B's turn, which A will refuse (and B knows he will refuse this); then A will offer the minimum amount on the final turn, which B has to accept. Since both know how the game will pan out and B wants to maximise his profit from the game, the answer is as follows:

A ends up with $99.98
B ends up with $0.02

TomCollins
05-22-2006, 10:47 PM
10 pages of replies and you still got it wrong.

hmkpoker
05-22-2006, 11:08 PM
[ QUOTE ]
Wow, 10 pages for a question about basic induction!

Initially A will offer the second-minimum amount above $0 (say $0.02); if B refuses then it is B's turn, which A will refuse (and B knows he will refuse this); then A will offer the minimum amount on the final turn, which B has to accept. Since both know how the game will pan out and B wants to maximise his profit from the game, the answer is as follows:

A ends up with $99.98
B ends up with $0.02

[/ QUOTE ]

Result: B rejects, and offers A a split of (A)$80.00 - (B)$10.00. If A rejects this offer, it is impossible for him to make more money on the next turn. He can make the same amount at best, and risks B rejecting the requisite offer of nothing. A accepts.

speakerfreak
05-23-2006, 07:20 AM
Ah so my last post was rubbish...will teach me for posting when I should be writing my dissertation!

So, working backwards:

In the last stage A would offer B $0.01, which B accepts otherwise he gets nothing; this gives A $79.99.
In the last-but-one stage B would offer A $80, which A would accept since in the next stage he would be getting less; this would leave B with $10.
Therefore in the first stage, if A offers B $10.01 this is the most B can hope to obtain, and it leaves A with $89.99, which is more than A can get in any other stage.

Answer (hopefully): A gets $89.99, B gets $10.01

jogsxyz
05-23-2006, 06:09 PM
[ QUOTE ]
My answer was 89 and 11. Sweet. From talking to other people in the class I feel I may be the only one who got it right. All others I talked to were somewhere between 60/40 and 50/50. They are not smart /images/graemlins/smile.gif

[/ QUOTE ]

How do you know this is the right answer???? This assumes (B)ob has no bargaining position. Call the first player (A)be. Both Abe and Bob know that if this game reaches the second level, the total value of the game will drop from 100 to 90. Therefore it is in the interest of both to agree during the first level. On this first level Abe is in the driver's seat. Abe offers the deals. But both know that on the second level the power shifts to Bob. Therefore Abe should offer a 'fair' amount.

I reject your answer of 89 to 11.
53 to 47 sounds like a 'fair' offer.

TomCollins
05-23-2006, 08:51 PM
Here we go again.

CallMeIshmael
05-24-2006, 01:09 AM
[ QUOTE ]
How do you know this is the right answer???? This assumes (B)ob has no bargaining position. Call the first player (A)be. Both Abe and Bob know that if this game reaches the second level, the total value of the game will drop from 100 to 90. Therefore it is in the interest of both to agree during the first level. On this first level Abe is in the driver's seat. Abe offers the deals. But both know that on the second level the power shifts to Bob. Therefore Abe should offer a 'fair' amount.

I reject your answer of 89 to 11.
53 to 47 sounds like a 'fair' offer.

[/ QUOTE ]


You bring up excellent points. 53 to 47 is the correct answer.

morphball
05-24-2006, 09:39 AM
[ QUOTE ]
Here we go again.

[/ QUOTE ]

Well, in my defense I did say that the answer for the test was 89/11; I just took issue with saying they were rational and non-spiteful. I stand by my argument that a rational person would have to be spiteful to maximize his return in real life.

jogsxyz
05-24-2006, 07:02 PM
[ QUOTE ]


Have you read Theory of Poker? There are tons of applications in there. I never claimed Game Theory was superior to psychology or vice versa. Just that both are useful.

The same thing applies in war. Even though there are no rules, there are a series of strategies that an opponent can use against a series of strategies that you use. They produce outcomes. War is a lot more complicated than poker, which is much more complicated than the simple game used as an example. I never disagreed that people have made oversimplifications or confused the ideas of optimal strategy vs. non-exploitable strategies. I'm sure thats the case. But to discount game theory as not useful at all is just as absurd. The more you know about your opponents strategy or line of thinking, the more you deviate from game theory to take advantages of these weaknesses. No idea how a simple math problem turned into a philosophical debate about the value of certain theories.

[/ QUOTE ]

You are confusing optimal strategy with the more inclusive term game theory. When you know your opponent's weakness, you may devise an exploitive strategy to take advantage. This is still game theory. It's just not optimal strategy.

You assume opp's strategy is fixed, while yours is flexible. Of course sometimes opp outfoxes you by changing his strategy to exploit your revised strategy.

CallMeIshmael
05-24-2006, 07:10 PM
[ QUOTE ]
[ QUOTE ]


Have you read Theory of Poker? There are tons of applications in there. I never claimed Game Theory was superior to psychology or vice versa. Just that both are useful.

The same thing applies in war. Even though there are no rules, there are a series of strategies that an opponent can use against a series of strategies that you use. They produce outcomes. War is a lot more complicated than poker, which is much more complicated than the simple game used as an example. I never disagreed that people have made oversimplifications or confused the ideas of optimal strategy vs. non-exploitable strategies. I'm sure thats the case. But to discount game theory as not useful at all is just as absurd. The more you know about your opponents strategy or line of thinking, the more you deviate from game theory to take advantages of these weaknesses. No idea how a simple math problem turned into a philosophical debate about the value of certain theories.

[/ QUOTE ]

You are confusing optimal strategy with the more inclusive term game theory. When you know your opponent's weakness, you may devise an exploitive strategy to take advantage. This is still game theory. It's just not optimal strategy.

You assume opp's strategy is fixed, while yours is flexible. Of course sometimes opp outfoxes you by changing his strategy to exploit your revised strategy.

[/ QUOTE ]

You seem to think the 89/11 solution is wrong.

Please explain B's strategy of rejecting 11 that also exploits A's strategy of offering 89.

TomCollins
05-24-2006, 08:49 PM
The optimal strategy is entirely based on your opponent's strategy. I'm not confused. If the opponent's strategy is non-static, optimal strategy involves variations to throw off your opponent. None of these things have anything to do with Game Theory.

CallMeIshmael
05-24-2006, 10:24 PM
[ QUOTE ]
The optimal strategy is entirely based on your opponent's strategy.

[/ QUOTE ]

This is incorrect.


An optimal strategy is one which is the correct response to a perfect opponent. An optimal strategy is the one at Nash equilibrium, and thus completely independent of whatever strategy your opponent opts to play.

A maximal strategy is the best response to your opponent's play, and is not necessarily at Nash (and is therefore completely dependent on what your opponent is doing).



EDIT: it's not like these definitions really mean much, but the distinction is made.

jogsxyz
05-25-2006, 12:09 AM
[ QUOTE ]
The optimal strategy is entirely based on your opponent's strategy. I'm not confused. If the opponent's strategy is non-static, optimal strategy involves variations to throw off your opponent. None of these things have anything to do with Game Theory.

[/ QUOTE ]

This quote is from Matt Matros in a CardPlayer article.

[ QUOTE ]

The optimal strategy is when you could tell your opponent your plan and there wouldn’t be anything he could do to change your expectation in the game. Of course, you couldn’t tell him what cards you were holding as you played against him, but you could say, for example, “I’m going to be bluffing 40 percent of the time and value betting 60 percent of the time. What are you going to do about it?” A person playing optimal strategy, and executing it without giving off any tells, cannot be beaten in the long run. If you were to play such an opponent heads up for a few thousand hours, your best case would be that you both lose money to the house.


[/ QUOTE ]

In other words you are indifferent to opp's strategy.

Best Response or Maximally Exploitive: The counter-strategy that maximizes equity given that you know your opponent or opponents' strategies.

The field of study is game theory. Game theory includes both optimal strategy and exploitive strategies.

CallMeIshmael
05-25-2006, 12:17 AM
[ QUOTE ]
This quote is from Matt Matros in a CardPlayer article.

[ QUOTE ]

The optimal strategy is when you could tell your opponent your plan and there wouldn’t be anything he could do to change your expectation in the game. Of course, you couldn’t tell him what cards you were holding as you played against him, but you could say, for example, “I’m going to be bluffing 40 percent of the time and value betting 60 percent of the time. What are you going to do about it?” A person playing optimal strategy, and executing it without giving off any tells, cannot be beaten in the long run. If you were to play such an opponent heads up for a few thousand hours, your best case would be that you both lose money to the house.


[/ QUOTE ]

In other words you are indifferent to opp's strategy.



[/ QUOTE ]



Though I agree with pretty much everything you posted, this is incorrect.

Just because you are playing an optimal strategy doesn't mean you are indifferent to your opponent's strategy. I mean, you still want him to play the worst possible strategy; it just happens that the worst you can do with your strategy is break even against someone playing a maximal response (which is also optimal). (I'm still sticking with the poker analogy, and ignoring rake.)

jogsxyz
05-25-2006, 12:21 AM
[ QUOTE ]
You seem to think the 89/11 solution is wrong.

Please explain B's strategy of rejecting 11 that also exploits A's strategy of offering 89.

[/ QUOTE ]

The 89/11 solution assumes Bob has no bargaining power. Bob rejects the offer. Go to round 2.
Now Bob offers a 79/11 split. Abe rejects Bob's offer.
Round 3. Abe offers a 69/11 split, and so on.

It is really in both Abe's and Bob's mutual interest to agree during round 1. The value of the game is highest in round 1. Each successive round is worth 10 units less. Therefore Abe must offer a deal that Bob would accept.

Abe is in power when the game is worth 100. Bob is in power when the game is worth less, only 90 in round 2. Therefore Abe is entitled to a larger share than Bob.

The 53/47 is arbitrary. Maybe 52/48 is fairer.

CallMeIshmael
05-25-2006, 12:26 AM
[ QUOTE ]
The 89/11 solution assumes Bob has no bargaining power.

[/ QUOTE ]

[ QUOTE ]
It is really in both Abe's and Bob's mutual interest to agree during round 1.

[/ QUOTE ]


The above quotes lead me to believe you are (perhaps not even consciously) thinking of this in terms of cooperation/sharing.

Abe and Bob have no common interest. Each is completely selfish, and wants to maximize their own return.



Do you think Bob will ever reject an offer of $1 in the final round? (just assume that we got to the third round)

jogsxyz
05-25-2006, 10:13 AM
[ QUOTE ]
[ QUOTE ]
It is really in both Abe's and Bob's mutual interest to agree during round 1.

[/ QUOTE ]


The above quotes lead me to believe you are (perhaps not even consciously) thinking of this in terms of cooperation/sharing.

Abe and Bob have no common interest. Each is completely selfish, and wants to maximize their own return.



Do you think Bob will ever reject an offer of $1 in the final round? (just assume that we got to the third round)

[/ QUOTE ]

The final round is round 10. Bob will be the one offering the deals.
If both players are playing logically and playing well, there will be a deal in round 1.
If this game reaches round 10, both players are too stubborn to compromise. No deal can be made. Both will receive nothing.

It is in the interest of both parties to offer a fair deal to the other. This is the only way to maximize the total value of the game, the highest joint EV.

TomCollins
05-25-2006, 02:51 PM
Uh, no it's not. The final round is 70. You may have better luck solving simple problems in the future if you learn to read the damn question.

Edit: On second thought, after reading your last reply where they are "too stubborn to accept", reading it properly might not help you solve this right.

atrifix
05-25-2006, 03:10 PM
http://en.wikipedia.org/wiki/Backward_induction

What do people really think about players who care about both their own and their opponents' payoffs? Whether it's linear or logarithmic, etc., is this a reasonable assumption?

CallMeIshmael
05-25-2006, 03:33 PM
[ QUOTE ]
The final round is round 10. Bob will be the one offering the deals.

[/ QUOTE ]

Well, this is probably where a lot of the problems start.


The final round is round 3; if there is no deal in round 3, the game ends and both get 0.

The fact that Abe offers in round 3 gives him all the power.

CallMeIshmael
05-25-2006, 03:38 PM
[ QUOTE ]
What do people really think about players who care about both their own and their opponents' payoffs? Whether it's linear or logarithmic, etc., is this a reasonable assumption?

[/ QUOTE ]


Jealousy is sometimes added into payoff functions, where your payoff is the sum of an increasing function of what you get and a decreasing function of what your opponent gets.

I'd say it's pretty reasonable to assume that humans often DO care about the payoffs of others, though when complete rationality is assumed in the problem, this isn't possible.

madnak
05-25-2006, 03:56 PM
It's not about joint EV. This is game theory; there is no context to give cooperation any intrinsic value. Both players are entirely selfish.

atrifix
05-25-2006, 04:28 PM
Jealousy doesn't strike me as plausible in most cases. Certainly not in ultimatum games, but there are a large variety of other games where jealousy generally seems unreasonable. I am more concerned with cooperative behavior. But if you think there is (widespread) justification for the assumption of jealousy, enlighten me.

jogsxyz
05-25-2006, 04:53 PM
[ QUOTE ]
Uh, no it's not. The final round is 70. You may have better luck solving simple problems in the future if you learn to read the damn question.

Edit: On second thought, after reading your last reply where they are "too stubborn to accept", reading it properly might not help you solve this right.

[/ QUOTE ]
[ QUOTE ]

If player B declines, the amount is reduced to 0 and the game is over.

[/ QUOTE ]

At some point the value is reduced to zero. Whether this is round 4 or round 11, it's still zero.

TomCollins
05-25-2006, 04:55 PM
So I give you a choice: you can have $1 or $0. Which do you choose?

CallMeIshmael
05-25-2006, 04:56 PM
[ QUOTE ]
Jealousy doesn't strike me as plausible in most cases. Certainly not in ultimatum games, but there are a large variety of other games where jealousy generally seems unreasonable. I am more concerned with cooperative behavior. But if you think there is (widespread) justification for the assumption of jealousy, enlighten me.

[/ QUOTE ]


When the ultimatum game is done, people routinely reject offers of like 75/25.

Why? No freaking clue.

But it seems that people have some inherent jealousy in their payoffs.

Now, their payoff might not be (us - them), but it might be something more like:

(us - 0.7*them)

or

(us - them^0.5)


But people seem to act irrationally simply because they aren't getting enough relative to their opponent. This seems like jealousy, or a close relative.
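
For what it's worth, here is a toy Python sketch (my own numbers and code, not from any study) of how a payoff like (us - 0.7*them) changes the accept/reject decision in a one-shot $100 ultimatum game; a purely selfish responder accepts any positive offer, while this "jealous" one starts rejecting once the proposer's share gets big enough:

# Toy comparison of a selfish responder and a "jealous" responder in a one-shot
# $100 ultimatum game. The jealous utility (my_share - 0.7 * their_share) is just
# the illustrative form mentioned above, not a fitted model of anyone's behaviour.

def selfish(my_share, their_share):
    return my_share

def jealous(my_share, their_share):
    return my_share - 0.7 * their_share

def accepts(utility, my_share, total=100):
    their_share = total - my_share
    # rejecting leaves both players with 0, so accept iff the offer beats (0, 0)
    return utility(my_share, their_share) > utility(0, 0)

for offer in (50, 42, 41, 25, 11, 1):
    print(offer, accepts(selfish, offer), accepts(jealous, offer))
# The selfish responder takes every positive offer; the jealous one rejects once
# my_share - 0.7 * (100 - my_share) <= 0, i.e. anything below about $41.18.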

atrifix
05-25-2006, 05:46 PM
[ QUOTE ]
When the ultimatum game is done, people routinely reject offers of like 75/25.

Why? No freaking clue.

[/ QUOTE ]

Sure, but people also extremely rarely make offers like 75/25. Maybe it's because they think their opponents will reject; I don't know. Maybe they would like their opponents to have a fairer share. Also, people are much more inclined to accept offers like 75/25 if they are made randomly by a computer.

[ QUOTE ]
But it seems that people have some inherent jealousy in their payoffs.

Now, their payoff might not be (us - them), but it might be something more like:

(us - 0.7*them)

or

(us - them^0.5)


But people seem to act irrationally simply because they aren't getting enough relative to their opponent. This seems like jealousy, or a close relative.

[/ QUOTE ]

I would imagine that the behavior observed in the ultimatum game can also be modeled by a cooperative utility function, although perhaps we are talking about different sides of it. I think it's interesting that offers of 75/25 are so rarely made in practice. What I'm wondering is: have there been any (relatively successful) attempts to come up with a more generalized utility equation?

diddle
05-25-2006, 06:11 PM
The question needs to specify whether B will accept or reject $0 in the final offer.

Rejecting $0 is spiteful.

diddle
05-25-2006, 06:13 PM
[ QUOTE ]
The question needs to specify whether B will accept or reject $0 in the final offer.

Rejecting $0 is spiteful.

[/ QUOTE ]

If B is rational and not spiteful, then B ACCEPTS every offer from A in the last round.

A has an EV of at least $80.

morphball
05-25-2006, 06:20 PM
[ QUOTE ]
[ QUOTE ]
The question needs to specify whether B will accept or reject $0 in the final offer.

Rejecting $0 is spiteful.

[/ QUOTE ]


If B is rational and not spiteful, then B ACCEPTS every offer from A in the last round.

A has an EV of at least $80.

[/ QUOTE ]

This is getting toward my point: a rational B would employ a spiteful strategy in this game.

diddle
05-25-2006, 06:20 PM
[ QUOTE ]
[ QUOTE ]
The question needs to specify whether B will accept or reject $0 in the final offer.

Rejecting $0 is spiteful.

[/ QUOTE ]

If B is rational and not spiteful, then B ACCEPTS every offer from A in the last round.

A has an EV of at least $80.

[/ QUOTE ]

Working backwards:

Last round has EV of $80 for A and $0 for B. Nash equilibrium is A offers 80/0 and B accepts.

2nd round:

B must offer A at least $80 for A to accept the deal, so B offers $80/$10. Equilibrium is 80/10.

1st round:

A must offer B at least $10 for B to accept the deal, so A offers 90/10. B ACCEPTS because he can do no better in subsequent rounds.

A wins $90. B wins $10. Everyone is happy.

diddle
05-25-2006, 06:24 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The question needs to specify whether B will accept or reject $0 in the final offer.

Rejecting $0 is spiteful.

[/ QUOTE ]


If B is rational and not spiteful, then B ACCEPTS every offer from A in the last round.

A has an EV of at least $80.

[/ QUOTE ]

This is getting toward my point: a rational B would employ a spiteful strategy in this game.

[/ QUOTE ]

A strictly rational B is indifferent between A $80/ B $0 and A $0/ B $0.

The question needs to specify which one B prefers. If B prefers the first option, then the game's answer is A $90/ B $10. If B prefers the 2nd option, then the game's answer is A $89/ B $11.
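
Both readings are easy to check mechanically. Here is a small Python sketch (whole-dollar offers, with the tie-break made explicit as a flag; my own framing, not anything official) that reproduces the two answers:

# Whole-dollar backward induction with the tie-break made explicit. A responder
# offered exactly his continuation value is indifferent; the flag says whether
# he accepts anyway.
# accepts_ties=True  -> B takes the 80/0 offer in round 3, and the game solves to 90/10
# accepts_ties=False -> each proposer must give a dollar above the continuation value, giving 89/11

def solve(accepts_ties):
    pots = [100, 90, 80]
    proposers = ["A", "B", "A"]
    value = {"A": 0, "B": 0}                 # payoffs if no deal is ever reached
    for pot, proposer in zip(reversed(pots), reversed(proposers)):
        responder = "B" if proposer == "A" else "A"
        offer = value[responder] + (0 if accepts_ties else 1)
        value = {proposer: pot - offer, responder: offer}
    return value

print(solve(accepts_ties=True))     # {'A': 90, 'B': 10}
print(solve(accepts_ties=False))    # {'A': 89, 'B': 11}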

CallMeIshmael
05-25-2006, 06:28 PM
[ QUOTE ]
Rejecting $0 is spiteful.

[/ QUOTE ]

How so? He is indifferent to 0. Rejecting and accepting are the same.

Is it also spiteful if player B flips a coin and rejects or accepts 0 with equal probability?

CallMeIshmael
05-25-2006, 06:31 PM
[ QUOTE ]
What I'm wondering is have there been any (relatively successful) attempts to come up with a more generalized utility equation?

[/ QUOTE ]

Though I don't know for sure, it seems like this would be impossible, since it varies so much from person to person.

TomCollins
05-25-2006, 06:34 PM
By his same logic, the "best" strategy for B would be to reject any offer under $79. He could say "give me $79 or I will reject it" ahead of time, and be spiteful and reject it otherwise. But this has been discussed many times, and the word spiteful was used in the problem in error, in my opinion, because it is not clear what it means. The only assumption in this game is that a player will seek to maximize his own returns.

By the way, has anyone heard of the Monty Hall problem?

CallMeIshmael
05-25-2006, 06:35 PM
[ QUOTE ]
The question needs to specify which one B prefers. If B prefers the first option, then the game's answer is A $90/ B $10. If B prefers the 2nd option, then the game's answer is A $89/ B $11.

[/ QUOTE ]

FWIW, the generally accepted solution to situations like this goes as follows:

Since it's unknown how player B will react to an offer of 80/0 (that is, the definitions in the OP do not tell us how he acts at this point in the game), player A must offer 79/1 in the final round, since he knows that offer will be accepted.

diddle
05-25-2006, 06:36 PM
[ QUOTE ]
By his same logic, the "best" strategy for B would be to reject any offer under $79. He could say "give me $79 or I will reject it" ahead of time, and be spiteful and reject it otherwise. But this has been discussed many times, and the word spiteful was used in the problem in error, in my opinion, because it is not clear what it means. The only assumption in this game is that a player will seek to maximize his own returns.

By the way, has anyone heard of the Monty Hall problem?

[/ QUOTE ]

Are you disagreeing that the answer is either $90/10 or $89/11, based on the interpretation of B as rational?

CallMeIshmael
05-25-2006, 06:37 PM
[ QUOTE ]
By his same logic, the "best" strategy for B would be to reject any offer under $79. He could say "give me $79 or I will reject it" ahead of time, and be spiteful and reject it otherwise.

[/ QUOTE ]

You replied to me, so I'm going to assume this was directed my way, but I might be wrong.


Again, no, he cannot credibly say that he will only accept 79, since once the offer of 1 is made, there is no way he can credibly threaten to reject it.

The only offer he can credibly reject is 0, since he is indifferent.

[ QUOTE ]
By the way, has anyone heard of the Monty Hall problem?

[/ QUOTE ]

Yes

atrifix
05-25-2006, 07:08 PM
[ QUOTE ]
Though I don't know for sure, it seems like this would be impossible, since it varies so much from person to person.

[/ QUOTE ]

I'm not interested in cardinal utility functions. I mean is a jealous linear function a reasonable approximation; e.g. is

own payoff - a * opponent's payoff

where a is on the interval [0,1], a reasonable assumption? I don't think that it is in most cases.

CallMeIshmael
05-25-2006, 07:13 PM
[ QUOTE ]
I'm not interested in cardinal utility functions. I mean is a jealous linear function a reasonable approximation; e.g. is

own payoff - a * opponent's payoff

where a is on the interval [0,1], a reasonable assumption? I don't think that it is in most cases.

[/ QUOTE ]


No, it certainly is not.

But, a linear jealousy function is probably better than the complete rationality assumption, since the former is a better predictor of human behaviour than is the latter

jogsxyz
05-25-2006, 08:12 PM
On round 3 Bob should reject any offer less than a 40/40 split. Bob has power. Abe gets nothing if no deal is made. Why should Bob accept 1 and allow Abe 79?
That's what farm workers are accepting to pick fruit.

In the farming game, if Bob rejects 1, Abe just offers the deal to Cal, then Dan, and so on. Bob has no bargaining power.

But in this game, if Bob rejects the final offer, both parties get zero. Bob should stand firm and bargain for a fair deal.

madnak
05-25-2006, 08:47 PM
That's not jealousy. And it's not mathematical. Psychology can't be quantified like that. It is not a game theory problem. No experiment involving human beings has anything to do with game theory. None, nada, zero.

TomCollins
05-25-2006, 09:15 PM
No, just the new addition who claims that a spiteful strategy is rational. I agree the intended answer is 89/11, but the wording is not the clearest.

CallMeIshmael
05-26-2006, 12:52 AM
[ QUOTE ]
Why should Bob accept 1 and allow Abe 79?

[/ QUOTE ]

Because Bob, like most people, prefers $1 to $0.

CallMeIshmael
05-26-2006, 12:57 AM
[ QUOTE ]
That's not jealousy.

[/ QUOTE ]

Yes it is. Your happiness is not only related to how much you have, but also to how much you are jealous of what others have.


[ QUOTE ]
No experiment involving human beings has anything to do with game theory.

[/ QUOTE ]


If you actually believe this, you have a very narrow view of what game theory really is.


Are you trying to say that humans never act as game theory would predict? Because, that is still wrong, but much less wrong.



EDIT: 2 book related notes:

FWIW, I recently read this (http://www.amazon.com/gp/product/0691096228/sr=8-1/qid=1148620725/ref=pd_bbs_1/002-3217910-8864013?%5Fencoding=UTF8) book for a class. It discusses the application of game theory to behaviour choices (once again I will stress that humans (and other animals) make unconscious decisions predicted by GT all the time).

It also deals with mating. Now, IIRC we had a discussion some time ago (and if it wasn't you, my apologies) where I argued that women and men have different reproductive interests, and that women should be less willing to engage in casual sex than men. And you argued that both men and women should get it on, and also that there wasn't polygyny in humans. This book (as with most work on these subjects) backs me.


Also, that book I said I recalled from the library is in, and I'll have access to it tomorrow.

jogsxyz
05-26-2006, 01:35 AM
[ QUOTE ]
That's not jealousy. And it's not mathematical. Psychology can't be quantified like that. It is not a game theory problem. No experiment involving human beings has anything to do with game theory. None, nada, zero.

[/ QUOTE ]

http://en.wikipedia.org/wiki/Game_theory

Wikipedia disagrees with you. Most game theory problems involve the interactions of human beings; it's just that oftentimes there's no clear-cut optimal strategy.

CallMeIshmael
05-26-2006, 01:40 AM
Also,

this (http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?z=y&isbn=0199261857&itm=1) book is probably my favourite book I have read on pretty much any subject.

It is a look at how game theory underlies much of society. It is also the book from which I got the Jason_t/vase problem I posted in OOT (the writer is the inventor of that problem)

CallMeIshmael
05-26-2006, 01:47 AM
[ QUOTE ]
On round 3 Bob should reject any offer less than a 40/40 split. Bob has power. Abe gets nothing if no deal is made. Why should Bob accept 1 and allow Abe 79?
That's what farm workers are accepting to pick fruit.

In the farming game, if Bob rejects 1, Abe just offers the deal to Cal, then Dan, and so on. Bob has no bargaining power.

But in this game, if Bob rejects the final offer, both parties get zero. Bob should stand firm and bargain for a fair deal.

[/ QUOTE ]


You and Morph are making similar arguments. And I do see the point.

I just fail to see how, in the third round, AFTER Abe offers 79/1, Bob will actually not accept. He would be preferring $0 to $1, and that goes against the definition of rational.

madnak
05-26-2006, 01:59 AM
[ QUOTE ]
Yes it is. Your happiness is not only related to how much you have, but also to how much you are jealous of what others have.

[/ QUOTE ]

No. I would refuse an unfair offer, even 60/40, and it has nothing to do with jealousy. It's a complex matter of reciprocal value, identity, image, patterned or habitual behavior, and it could be affected by mood or diet or level of exhaustion and stress. Jealousy wouldn't even enter into it for me, and your assumption it would is a clear example of rigid and limited thinking. Game theory encourages such thinking, but in the real world it's generally disadvantageous.

[ QUOTE ]
[ QUOTE ]
No experiment involving human beings has anything to do with game theory.

[/ QUOTE ]


If you actually believe this, you have a very narrow view of what game theory really is.


Are you trying to say that humans never act as game theory would predict? Because, that is still wrong, but much less wrong.

[/ QUOTE ]

On the contrary, I think humans often act as game theory would predict. But I don't think game theory ever predicts human action. Sometimes game theory produces the same result as psychology. But that doesn't mean the result was arrived at through game theory; the result was arrived at through psychology, or in some situations and contexts through chemical mechanics or simple probability or environmental conditions. That these mechanics often happen to produce the same result as game theory is irrelevant.

[ QUOTE ]
FWIW, I recently read this (http://www.amazon.com/gp/product/0691096228/sr=8-1/qid=1148620725/ref=pd_bbs_1/002-3217910-8864013?%5Fencoding=UTF8) book for a class. It discusses the application of game theory to behaviour choices (once again I will stress that humans (and other animals) make unconscious decisions predicted by GT all the time).

[/ QUOTE ]

Is game theory the only model that results in these predictions, and does it have perfect accuracy?

[ QUOTE ]
It also deals with mating. Now, IIRC we had a discussion some time ago (and if it wasn't you, my apologies) where I argued that women and men have different reproductive interests, and that women should be less willing to engage in casual sex than men. And you argued that both men and women should get it on, and also that there wasn't polygyny in humans. This book (as with most work on these subjects) backs me.

[/ QUOTE ]

My argument was a bit more complex than that. It was a few months ago so I don't recall in detail. I think I argued that any different reproductive interests between men and women were a result of the environment in which our species evolved, and that in current society promiscuity is a better strategy. I also argued that serial monogamy with regular promiscuous infidelity is the most common human mating strategy, rather than any form of polygamy. I definitely didn't argue there's no polygyny in humans. I grew up in Utah, after all /images/graemlins/wink.gif

My sister's graduating college this weekend so I'll be busy. I'll try to take a look.

CallMeIshmael
05-26-2006, 02:43 AM
[ QUOTE ]
No. I would refuse an unfair offer, even 60/40, and it has nothing to do with jealousy. It's a complex matter of reciprocal value, identity, image, patterned or habitual behavior, and it could be affected by mood or diet or level of exhaustion and stress. Jealousy wouldn't even enter into it for me, and your assumption it would is a clear example of rigid and limited thinking. Game theory encourages such thinking, but in the real world it's generally disadvantageous.

[/ QUOTE ]

Let's compare 4 situations:


- Someone walks up to you and says "here is $5." You can accept or reject it, and get 0.

- You are playing a game with someone where they offer you between 0 and $10 (and they keep the rest). They offer you $5. If you reject, you get 0.

- You are playing a game with someone where they offer you between 0 and $100 (and they keep the rest). They offer you $5. If you reject, you get 0.

- You are playing a game with someone where they offer you between 0 and $1000 (and they keep the rest). They offer you $5. If you reject, you get 0.

You are 100% sure that you will never play any of these games again.

Are you accepting the $5 in some but not all of the scenarios?

If so, you are using how much THEY get to determine your decision, even though it offers you no benefit. It seems you are displaying jealousy under this definition, no?:

"Jealousy is an emotion by one who perceives that another person is giving something that he/she wants or feels is due to them (often attention, love, respect or affection) to an alternate. For example, a child will likely become jealous when their parents give sweets to a sibling but not to them"

[ QUOTE ]
On the contrary, I think humans often act as game theory would predict. But I don't think game theory ever predicts human action. Sometimes game theory produces the same result as psychology. But that doesn't mean the result was arrived at through game theory; the result was arrived at through psychology, or in some situations and contexts through chemical mechanics or simple probability or environmental conditions. That these mechanics often happen to produce the same result as game theory is irrelevant.

[/ QUOTE ]

There is more than one way to explain behaviour.

For example, if you are hungry, you are hungry because you haven't eaten food. The feelings you have are caused by your hypothalamus (sp?), which was triggered by a lack of glucose.

(or something like that, physiology BLOWS)

But that doesn't explain WHY this happened; it explains how.

The reason we get hungry is that food gives us nutrients. And organisms that are triggered to eat probably are more successful than those that don't have this mechanism.


Similarly, just because psychology dictates that humans do something doesn't mean that game theory isn't the reason that psychological mechanism evolved.




[ QUOTE ]
Is game theory the only model that results in these predictions, and does it have perfect accuracy?

[/ QUOTE ]

Game theory is applicable when:

- The problem was present in the EEA
- The problem is still present, and unchanged
- The problem has fitness repercussions

In this situation, yes, game theory is applicable and highly accurate.

It is not 100% accurate, simply because we cannot model the real world completely with variables. But, using enough variables to get a decent picture allows us to make very good predictions.

[ QUOTE ]
I think I argued that any different reproductive interests between men and women were a result of the environment in which our species evolved

[/ QUOTE ]

I'm sure neither of us wants to go down that road again, but I think I might misunderstand the above.

Are you saying that the fact that men should be more willing to have casual sex than women is a function of the environment? It's def. a function of our biology.

[ QUOTE ]
My sister's graduating college this weekend so I'll be busy. I'll try to take a look.

[/ QUOTE ]

Congrats to her!!

I'm a junior but I have a lot of friends graduating (I'm the age of a senior, and took time off). Exciting times.

CallMeIshmael
05-26-2006, 03:30 AM
[ QUOTE ]
Is game theory the only model that results in these predictions, and does it have perfect accuracy?

[/ QUOTE ]

Also, complete accuracy seems like a pretty steep requirement.

I remember reading about someone who was monitoring a turtle population that was decreasing in size, and they were getting worried it might become extinct

What the person did was put the turtles into 4 categories (newborn, child, adult, parent), where a parent is an adult who just had a child.

They also found the chances of group transition. Like, how often a newborn dies, or how often an adult gives birth, etc.

Using JUST that information, they were able to determine where to focus their efforts for preservation.

Ie. with the probabilities of birth, death, etc., they found it was best to save the life of a child (perhaps it was adult, I can't remember).

Basically, if you were going to kill 1 turtle from each group except one, you should choose to save the child, since it has the highest return (ie. each newborn is expected to produce 0.96 new turtles, while each child is expected to produce 1.02).

With this in mind, they focused their attention on helping to ensure the children survived (ie. trying to improve the environment where the children tended to be) but they ignored all the other ones.

Now, this model isn't perfect. Not by a long shot. There are more than 4 types of turtle, and the chances of one living are also related to its size and its location. But that doesn't mean the model doesn't have merit, and it actually helped them save the species.

Similarly, since it is often just about impossible to model all of the variables in a situation, it's difficult for GT to predict with 100% accuracy how an organism will react to a situation. But that still doesn't mean it isn't often a very good predictor.
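
To make the turtle example concrete, here is a small Python sketch with invented transition numbers (I don't have the study's figures either). It builds the kind of stage-by-stage bookkeeping described above into a projection matrix and asks which stage's survival, when nudged up, buys the most long-run growth, which is the calculation that tells you where to spend the conservation effort:

import numpy as np

# Four stages: newborn, child, adult, parent ("parent" = adult that just bred).
# All numbers are invented for illustration; column j says where an individual
# currently in stage j ends up next year, and the top row counts the newborns
# each stage produces.
def projection_matrix(newborn_surv=0.2, child_surv=0.7, adult_surv=0.85):
    return np.array([
        [0.0,          0.0,        0.0,               3.0],   # births come from parents
        [newborn_surv, 0.0,        0.0,               0.0],   # newborn -> child
        [0.0,          child_surv, 0.30,              0.80],  # child/adult/parent -> adult
        [0.0,          0.0,        adult_surv - 0.30, 0.0],   # surviving adults that breed -> parent
    ])

def growth_rate(M):
    # long-run growth rate is the dominant eigenvalue of the projection matrix
    return max(abs(np.linalg.eigvals(M)))

base = growth_rate(projection_matrix())
for label, kwargs in [("newborn", {"newborn_surv": 0.30}),
                      ("child",   {"child_surv": 0.80}),
                      ("adult",   {"adult_surv": 0.95})]:
    print(label, growth_rate(projection_matrix(**kwargs)) - base)
# The stage whose extra survival buys the biggest jump in growth rate is where
# the limited conservation effort should go.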

madnak
05-26-2006, 03:38 AM
[ QUOTE ]
Let's compare 4 situations:

- Someone walks up to you and says "here is $5." You can accept or reject it, and get 0.

- You are playing a game with someone where they offer you between 0 and $10 (and they keep the rest). They offer you $5. If you reject, you get 0.

- You are playing a game with someone where they offer you between 0 and $100 (and they keep the rest). They offer you $5. If you reject, you get 0.

- You are playing a game with someone where they offer you between 0 and $1000 (and they keep the rest). They offer you $5. If you reject, you get 0.

You are 100% sure that you will never play any of these games again.

Are you accepting the $5 in some but not all of the scenarios?

[/ QUOTE ]

I might take $5 in all the scenarios, or I might take $0 in all the scenarios. It would depend on the context and set of circumstances that made the decisions meaningful. With that caveat, I'll assume that this is true for the sake of argument.

[ QUOTE ]
If so, you are using how much THEY get to determine your decision, even though it offers you no benefit. It seems you are displaying jealousy under this definition, no?:

"Jealousy is an emotion by one who perceives that another person is giving something that he/she wants or feels is due to them (often attention, love, respect or affection) to an alternate. For example, a child will likely become jealous when their parents give sweets to a sibling but not to them"

[/ QUOTE ]

No, not at all. For one thing, I don't see how my decisions in these situations can possibly be tied to any specific emotion in a deductive fashion. Regardless, this emotional state is completely inconsistent with what I would feel on rejecting these offers. The specifics would very much depend on the situation. I might feel compassion, anger, amusement, or a sense of justice depending on the circumstances. Envy might even be a factor, but the description you gave just doesn't fit many of my emotional experiences.

[ QUOTE ]
There is more than one way to explain behaviour.

For example, if you are hungry, you are hungry because you haven't eaten food. The feelings you have are caused by your hypothalamus (sp?), which was triggered by a lack of glucose.

(or something like that, physiology BLOWS)

But that doesn't explain WHY this happened; it explains how.

[/ QUOTE ]

I'm sorry, you just lost me. I'm a determinist so I don't believe in any grand "why" of human hunger. We get hungry due to biological mechanisms. End of story, as far as I'm concerned.

[ QUOTE ]
The reason we get hungry is that food gives us nutrients. And organisms that are triggered to eat probably are more successful than those that don't have this mechanism.

[/ QUOTE ]

Be careful, we are talking math so we have to be specific. You're not asking why we get hungry, you're asking how our species evolved to get hungry. Big difference.

[ QUOTE ]
Similarly, just because psychology dictates that humans do something doesn't mean that game theory isn't the reason that psychological mechanism evolved.

[/ QUOTE ]

True enough. It doesn't mean that it is, either.

[ QUOTE ]
Game theory is applicable when:

- The problem was present in the EEA
- The problem is still present, and unchanged
- The problem has fitness repercussions

In this situation, yes, game theory is applicable and highly accurate.

It is not 100% accurate, simply because we cannot model the real world completely with variables. But, using enough variables to get a decent picture allows us to make very good predictions.

[/ QUOTE ]

This may be a major point of contention. I don't think game theory would be 100% accurate even with all the variables accounted for, unless considered within such a limited framework that its accuracy would be implied by definition.

[ QUOTE ]
[ QUOTE ]
I think I argued that any different reproductive interests between men and women were a result of the environment in which our species evolved

[/ QUOTE ]

I'm sure neither of us wants to go down that road again, but I think I might misunderstand the above.

Are you saying that the fact that men should be more willing to have casual sex than women is a function of the environment? It's def. a function of our biology.

[/ QUOTE ]

No. The environment in which our species evolved. In particular, I think the mortality rate was much higher as we were evolving, especially among children and especially among males. There were also predators and environmental hazards that had to be dealt with. And basic resources like food were scarce.

Our technology has radically changed our environment. I think the modern world is a world conducive to promiscuity. The relative abundance and lack of danger render our mating strategies obsolete. Unfortunately our social environment has become as relevant as our physical environment, and I worry that "memes" will tend to influence our future evolutionary path, perhaps to our disadvantage. But that's a different subject.

[ QUOTE ]
Congrats to her!!

I'm a junior but I have a lot of friends graduating (I'm the age of a senior, and took time off). Exciting times.

[/ QUOTE ]

Hehe. Well, I'm 24 but I'll just be starting as a freshman this fall. She's only 22, so she's your age. I don't see her much so I'm looking forward to it.

madnak
05-26-2006, 03:47 AM
[ QUOTE ]
Also, complete accuracy seems like a pretty steep requirement.

I remember reading about someone who was monitoring a turtle population that was decreasing in size, and they were getting worried it might become extinct

What the person did was put the turtles into 4 categories (newborn, child, adult, parent), where a parent is an adult who just had a child.

They also found the chances of group transition. Like, how often a newborn dies, or how often an adult gives birth, etc.

Using JUST that information, they were able to determine where to focus their efforts for preservation.

Ie. with the probabilities of birth, death, etc., they found it was best to save the life of a child (perhaps it was adult, I can't remember).

Basically, if you were going to kill 1 turtle from each group except one, you should choose to save the child, since it has the highest return (ie. each newborn is expected to produce 0.96 new turtles, while each child is expected to produce 1.02).

With this in mind, they focused their attention on helping to ensure the children survived (ie. trying to improve the environment where the children tended to be) but they ignored all the other ones.

Now, this model isn't perfect. Not by a long shot. There are more than 4 types of turtle, and the chances of one living are also related to its size and its location. But that doesn't mean the model doesn't have merit, and it actually helped them save the species.

Similarly, since it is often just about impossible to model all of the variables in a situation, it's difficult for GT to predict with 100% accuracy how an organism will react to a situation. But that still doesn't mean it isn't often a very good predictor.

[/ QUOTE ]

There's a big difference between being a good predictor and being an accurate model. Newton's mechanics are a good predictor, but they don't describe how our universe actually works.

This doesn't seem like game theory to me; perhaps my view of it is restricted. Is every consideration of expected value considered game theory? I'm going by wiki, "A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies." I'm considering only situations such as this as "game theoretical" situations. Your example misses on criteria 2 and 3.

CallMeIshmael
05-26-2006, 08:29 AM
Just to clarify, the study I referenced was not game theory; it was just an example of how a model, despite being incomplete, still has worth.


I found an article I had in a drawer as I was cleaning out my room for the year. It is entitled "Putting game theory to the test" and was published in Science in 1995 (sadly I can't find a copy online).

Here are 2 quotes from it:

"Maynard Smith proposed treating a given behavior as a strategy in a game and assuming that strategies evolve just as physical characteristics do. Thus, any well adapted population will follow the "best" strategy"

"Over the past 20 years theorists have modeled nearly every animal behavior imaginable as an ESS: aggression, cooperation, foragine, hunting, rivalry and more"

(the article also claims that there have been thousands of articles published that use game theory to explain animal behaviour)


Finally, when I was searching for the above article online, I found a syllabus which said:

"The main focus is on developing the basic tools of game theory
through lectures and exercises and putting these tools to work by applying them to issues
that arise in many diverse areas of the Social Sciences such as Economics, Sociology, Political
Science, and Law."


To be honest, I don't really see much worth in continuing the debate as to whether or not game theory can be applied to animal behaviour. The belief that game theory underlies animal behaviour is held by pretty much 100% of ethologists, and I have yet to see a solid argument as to why I should believe they are all wrong.




FWIW:

I remembered a study I read a few years ago that actually relates quite a bit to the OP.

In paper wasps, there is often a dominant female and a subordinate female (sometimes there is just a dominant). These are the only two breeding females. The dominant female does the bulk of the breeding, but lets the subordinate do a small percentage of it. Essentially, the dominant says "I will give you x% of the breeding" and the subordinate either accepts and stays to help, or rejects and attempts to go off and make a nest for herself. This is pretty similar to the OP, in as much as one person makes an offer, and the other accepts or rejects.

The question is, how much mating does the dominant queen offer?

Well, there are 2 players to the game, the dominant and subordinate.

The dominant wants the difference:

(payoff with subordinate helping) - (payoff without help) to be greater than 0.

The subordinate wants the difference:

(payoff with a given % of mating) - (payoff of finding own nest) to be > 0


Now, the above payoffs can be modeled. They are clearly a function of three variables:

- The offer itself. A large offer means less mating for the dominant. But an offer too small means that the subordinate will leave since it expects to do better elsewhere

- Relatedness (a subordinate that is more related needs less of an offer since it gets more out of the reproduction of the dominant)

- How well the subordinate can do elsewhere

The first equation is greater than 0 when the offer is fairly small. This is because the percentage of mating the dominant gives up is more than made up for by the help the subordinate gives in raising the young.

The second is greater than 0 when the subordinate gets enough out of the sharing (both in terms of inclusive fitness because of a possible relation and the mating she is given) to cover what she is losing by not starting her own nest.


Now, here is where the game theory comes in. People modeled the above equations. Ie. they estimated how much the EV of starting a new nest is, etc. And they found the line of best fit for the offer of the dominant was right on the Nash equilibrium.

That is, the dominant offered just enough for the subordinate to stay, but no more than that. This is the EXACT solution that game theory predicts. Similarly, in the OP player A should offer just enough for the second player to accept.
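
Here is a toy Python version of that calculation, with invented numbers rather than the actual wasp data. It just searches for the smallest share of the breeding that makes staying at least as good for the subordinate as leaving, given her relatedness to the dominant and her outside option; the "just enough to stay, no more" offer described above is exactly that minimal share:

# G: offspring the nest produces if the subordinate stays and helps
# D: what the dominant would produce alone; S: the subordinate's expected solo output
# r: relatedness between the two females; p: share of the breeding offered
# (all values invented for illustration)

def staying_value(p, G, r):
    # subordinate's inclusive fitness if she stays: her own share of the
    # group's output, plus r times the dominant's share
    return p * G + r * (1 - p) * G

def leaving_value(S, D, r):
    # her own solo output, plus r times what the dominant manages without help
    return S + r * D

def minimal_offer(G=10.0, D=5.0, S=3.0, r=0.5, steps=1000):
    for i in range(steps + 1):
        p = i / steps
        if staying_value(p, G, r) >= leaving_value(S, D, r):
            return p
    return None   # no share is enough; the subordinate leaves regardless

print(minimal_offer())   # 0.1 with these numbers: a 10% share is just enough

The dominant would also want to check that keeping the helper at that share still beats nesting alone, but the point stands: the equilibrium offer sits right at the subordinate's indifference point.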

CallMeIshmael
05-26-2006, 08:41 AM
Question to all who say that 89/11 is not the correct answer:


I will present a very specific situation and ask how you would react.


- You decide to take part in an experiment at a local university

- The professor (we'll call him Stephen) and his assistant (we'll call him Bobby) are running the experiment

- Stephen knows who you are, but is not present at the experiment, since Bobby is running it (and Bobby was never told the names of the participants)

- At the experiment you always wear a bag over your head

- You are put in a room. Bobby announces that he has given a person 100 dollars. He will then offer you some amount. You can reject it, and both of you get 0, or take it, and you each get your share.

- After the game, Stephen will drive you home, still unaware of what position you played in the game.



I present the absurd situation above since it makes it a completely anonymous one-shot game.

Under these circumstances are you still rejecting $1? I mean, you literally either walk away with 0 and the knowledge that you punished someone for making a crappy offer, or you can take the dollar with the knowledge that someone out there took $99, but there is no chance you will ever know who the other player was.

jogsxyz
05-26-2006, 12:12 PM
It's not about being spiteful or being jealous. Bob must have respect for himself and not set a poor precedent.
If Bob accepts 89/11 this time, he will be offered the short straw every time.

Change the game. One round only. A young mother returns with 100 pieces of candy. She tells her two young sons to divide the candy as they wish. If the sons do not agree, the candy will go to the neighbor's kids.
The older son offers an 89/11 split. The younger son should refuse. Both get no candy. But if the younger son accepts this unequal share, he will have set a precedent that he is willing to be screwed. He will always receive the short straw.

madnak
05-26-2006, 12:37 PM
[ QUOTE ]
In paper wasps, there is often a dominant female and a subordinate female (sometimes there is just a dominant). These are the only two breeding females. The dominant female does the bulk of the breeding, but lets the subordinate do a small percentage of it. Essentially, the dominant says "I will give you x% of the breeding" and the subordinate either accepts and stays to help, or rejects and attempts to go off and make a nest for herself.

[/ QUOTE ]

"Essentially?" Obviously the dominant doesn't literally say "I will give you x% of the breeding." So how does she say it?

[ QUOTE ]
This is pretty similar to the OP, in as much as one person makes an offer, and the other accepts or rejects.

[/ QUOTE ]

Again, make this concrete. How is the offer presented, and how is it accepted or rejected? Does the dominant female hold a conference?

[ QUOTE ]
The question is, how much mating does the dominant queen offer?

Well, there are 2 players to the game, the dominant and subordinate.

[/ QUOTE ]

There are more than 2 players. There may be other females, there's the rest of the hive, predators, etc. To assume that none of them can affect the outcome of the game is a big error IMO.

[ QUOTE ]
The dominant wants the difference:

(payoff with subordinate helping) - (payoff without help) to be greater than 0.

The subordinate wants the difference:

(payoff with a given % of mating) - (payoff of finding own nest) to be > 0

[/ QUOTE ]

I don't think the dominant or the subordinate thinks in terms of mathematics. You're talking about what's advantageous.

[ QUOTE ]
Now, the above payoffs can be modeled. They are clearly a function of three variables:

- The offer itself. A large offer means less mating for the dominant. But an offer too small means that the subordinate will leave since it expects to do better elsewhere

[/ QUOTE ]

I don't believe this is always clear-cut.

[ QUOTE ]
- Relatedness (a subordinate that is more related needs less of an offer since it gets more out of the reproduction of the dominant)

[/ QUOTE ]

This I'll grant.

[ QUOTE ]
- How well the subordinate can do elsewhere

[/ QUOTE ]

There is no way this is possible to quantify. How well the subordinate can do elsewhere is dependent on too many variables. I'd like to see the method used to quantify this.

[ QUOTE ]
The first equation is greater than 0 when the offer is fairly small. This is because the percentage of mating the dominant gives up is more than made up for by the help the subordinate gives in raising the young.

The second is greater than 0 when the subordinate gets enough out of the sharing (both in terms of inclusive fitness because of a possible relation and the mating she is given) to cover what she is losing by not starting her own nest.


Now, here is where the game theory comes in. People modeled the above equations. Ie. they estimated how much the EV of starting a new nest is, etc. And they found the line of best fit for the offer of the dominant was right on the Nash equilibrium.

[/ QUOTE ]

By "right on" I assume you mean "closely approximate." There's no way to achieve a mathematical degree of precision in a study like this.

[ QUOTE ]
That is, the dominant offered just enough for the subordinate to stay, but no more than that. This is the EXACT solution that game theory predicts. Similarly, in the OP player A should offer just enough for the second player to accept.

[/ QUOTE ]

If someone offers me either one or two pieces of pie, and I'm trying to maximize my level of pie, game theory indicates that I should choose two pieces of pie. My actual choice in this situation is two pieces of pie. Does that mean I made my choice according to game theory? No.

The Nash Equilibrium found in this model represents the most effective strategy for the wasps. That doesn't mean the wasps make their decisions by finding the Nash Equilibrium, nor does it mean that game theory is the only way to arrive at the optimal strategy. To dramatically oversimplify - in evolution a number of strategies are used by different organisms. The strategy that maximizes fitness becomes more and more clearly represented among the population as the organisms using less fit strategies die off. There is no game theory calculation involved - it's more of a "shotgun" approach.

Game theory is one way we can identify the strategy. But it's not how "nature" reaches the same conclusion. And I believe there are better alternatives to game theory that can identify the same strategy.

To use an analogy, let's take an abstract problem. Say x-2=0, and you're trying to solve for x. It's possible to solve this problem by identifying the underlying mathematical pattern and applying operations in a reasoned way. If you add 2 to each side, you see that x=2. Call this method "game theory." Now imagine that you discover a computer program that can always solve for x given this kind of pattern. You may assume that the computer is using the same method you are when it approaches the problem. It makes sense as an assumption.

But let's say I programmed this computer. And frankly, I'm not very good at it. My program works through trial and error. We can call the program "evolution." The first thing evolution does is try the number 1 and see if it works as a value for x. Then it tries -1, and 2, -2, 3, -3 in sequence. It continues until it finds a value that works for x.

In this situation you can clearly see that, while "evolution" always has the same result as "game theory," it uses a completely different process.
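
The two "programs" in this analogy are easy to write down. A tiny Python sketch (the labels are mine) of the same-answer-different-process point:

# "Game theory": solve x - 2 = 0 by reasoning about the structure of the equation.
def solve_by_algebra():
    return 0 + 2                      # add 2 to both sides

# "Evolution": blind trial and error over the candidates 1, -1, 2, -2, 3, -3, ...
def solve_by_trial_and_error():
    n = 1
    while True:
        for x in (n, -n):
            if x - 2 == 0:
                return x
        n += 1

print(solve_by_algebra(), solve_by_trial_and_error())   # both print 2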

To be fair, you've demonstrated that game theory is useful in some applications. I'm opposed to it largely because I'm a purist, and I don't believe it's the best method. I worry that scientists will overlook a superior alternative due to their reliance on game theory. And I believe the mechanics of game theory are never the process by which results are arrived at in reality.

madnak
05-26-2006, 12:47 PM
This assumes that the sons have a continuing relationship. Game theory assumes the opposite. Also I'm not sure fairness would necessarily be indicated even with an arbitrary number of iterations. Someone correct me if I'm wrong, but in this case player A has a clear strategic advantage and player B has no recourse. Player A knows player B is completely rational and player B knows the same of player A. Therefore the tactics of manipulation won't work - both players know the equilibrium and know that they'll both profit most by going straight to the equilibrium solution.

To put it another way, we can look at an arbitrarily long series of trials and start with the last one. 89/11 is clearly the answer here because there will be no further contact. Since we know that 89/11 will be the offer on the last trial, no matter what, and since B isn't spiteful and will accept that value on the last trial, we know that the second-to-last trial will have no effect on the last trial. Since the second-to-last trial can have no bearing on future interactions, the offer for that trial must also be 89/11. But if that's true then the third-to-last trial has no effect on subsequent trials either, and so on. You can work this all the way back to the first trial.

diddle
05-26-2006, 01:04 PM
Anyone have Cliff notes for this thread?

Game-theoretic answers don't necessarily correspond to what happens in real life.

blah blah blah psychology blah blah jealousy blah blah. The question has nothing to do with these human constructs.

CallMeIshmael
05-26-2006, 01:25 PM
[ QUOTE ]
To dramatically oversimplify - in evolution a number of strategies are used by different organisms. The strategy that maximizes fitness becomes more and more clearly represented among the population as the organisms using less fit strategies die off. There is no game theory calculation involved - it's more of a "shotgun" approach.


[/ QUOTE ]

Game theory is precisely how nature reaches its conclusions.

The 'shotgun' approach spirals towards a Nash Equilibrium.



You keep pointing out that the animals aren't using the process of game theory to make their choices. Of course they aren't; they are animals without the ability to understand what game theory is.

These choices are all made instinctively, and this is because evolution forces animals to make the play predicted by game theory.
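
As a cartoon of that "shotgun" claim, here is a toy Python simulation (entirely my own construction, not from any of the sources linked in this thread). Responders in a one-shot $100 ultimatum game differ only in an inherited acceptance threshold; nobody computes anything, and strategies simply reproduce in proportion to what they earn. Because a lower threshold never earns less against the same offers, selection drags the population toward "accept any positive offer", which is the responder play backward induction prescribes; running the same logic for proposers against such a population drags the offers down too:

import random

random.seed(0)
N, GENERATIONS = 200, 300

# each agent is just an acceptance threshold for a $100 ultimatum game
thresholds = [random.uniform(0, 100) for _ in range(N)]

for generation in range(GENERATIONS):
    # every responder faces one random offer between $1 and $99 this generation
    offers = [random.uniform(1, 99) for _ in range(N)]
    income = [offer if offer >= t else 0.0 for t, offer in zip(thresholds, offers)]
    # fitness-proportional reproduction with a little mutation; the tiny epsilon
    # only avoids the all-zero-weights edge case
    parents = random.choices(thresholds, weights=[x + 1e-9 for x in income], k=N)
    thresholds = [min(100.0, max(0.0, t + random.gauss(0, 1))) for t in parents]

print(sum(thresholds) / N)   # ends up far below the ~50 it started at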



Also, "There is no way this is possible to quantify. How well the subordinate can do elsewhere is dependent on too many variables. I'd like to see the method used to quantify this." I dont have thu study in front of me, but I've seen tougher things than this estimated before. For starters, they probably watched many queens that tried to start their own nest, determined how often they succeeded, and what their reproductive success was (on average) if they succeeded.



Also, I've posted like 7 or 8 links in this thread to outside info that backs up my contention that game theory underlies animal behaviour. On top of that, one of them stated that there have been thousands of studies conducted which provide evidence for game theory's application to behaviour.

Can you back up your viewpoint with references to literature? Because if you can't, it might be one more reason why maybe, just maybe, you are the one not thinking about this correctly.

TomCollins
05-26-2006, 01:55 PM
[ QUOTE ]
Change the game. One round only. A young mother returns with 100 pieces of candy. She tells her two young sons to divide the candy as they wish. If the sons do not agree, the candy will go to the neighbor's kids.
The older son offers an 89/11 split. The younger son should refuse. Both get no candy. But if the younger son accepts this unequal share, he will have set a precedent that he is willing to be screwed. He will always receive the short straw.

[/ QUOTE ]

This implies multiple games. The game in question is one time only. If there will never be an offer of candy again, it would be completely foolish to reject any offer where you receive any candy.

madnak
05-26-2006, 02:03 PM
[ QUOTE ]
Game theory is precisely how nature reaches its conclusions.

The 'shotgun' approach spirals towards a Nash Equilibrium.

[/ QUOTE ]

That doesn't make it a game-theoretical approach. It's not about players making decisions, it's an emergent process plain and simple. The decisions made by individual players correspond to "correct" game theoretical decisions in some cases because those correct results are the outcome of the emergent process, not a process of decision-making. In reality there are no decisions involved, just static biological mechanisms. Humans are probably the only species even capable of making reasoned decisions, and humans generally act on biological imperatives over reason anyhow.

[ QUOTE ]
You keep pointing out that the animals aren't using the process of game theory to make their choices. Of course they aren't; they are animals without the ability to understand what game theory is.

[/ QUOTE ]

Then how is it game theory? Game theory involves situations in which players select options in order to secure a particular outcome. The processes of game theory are clearly outlined and they don't exist in nature. You can't call something game theory based on the outcome - game theory is a matter of process, not outcome. Neither the process by which individual organisms make choices nor the process by which species evolve traits correspond to the processes outlined by game theory. Game theory doesn't work through "shotgun spirals" any more than algebra works through trial and error or calculus through hard processing of power series.

[ QUOTE ]
These choices are all made instinctively, and this is because evolution forces animals to make the play predicted by game theory.

[/ QUOTE ]

So what?

[ QUOTE ]
Also, "There is no way this is possible to quantify. How well the subordinate can do elsewhere is dependent on too many variables. I'd like to see the method used to quantify this." I dont have thu study in front of me, but I've seen tougher things than this estimated before. For starters, they probably watched many queens that tried to start their own nest, determined how often they succeeded, and what their reproductive success was (on average) if they succeeded.

[/ QUOTE ]

And you're suggesting there's no margin of error here? The word "estimate" implies a lack of precision at minimum.

[ QUOTE ]
Also, Ive posted like 7 or 8 links in this thread to outside info that backs up my contention that game theory underlies animal behaviour. On top of that, one of them stated that there have been thousands of studies conducted which provide evidence for game theory's application to behaviour.

[/ QUOTE ]

I've only skimmed them, but none seem to contradict what I'm saying other than putting way too much emphasis on game theory as a predictive tool. Nothing you've linked indicates that game theory exists in nature, only that it can be used to make approximate predictions in limited cases. They all seem to treat GT as a tool, not as any indication of the underlying structure of nature. Economics is the major exception, but it deals with human beings so the consideration of rational choice is more relevant. I still think it's naive to assume humans will act rationally in an economic context, and I think most economists would agree with me.

[ QUOTE ]
Can you back up your viewpoint with references to literature? Because if you cant, it might be one more reason why maybe, just maybe, you are the one not thinking about this correctly.

[/ QUOTE ]

I'm not familiar with the literature. Everything I get is going to come from google. But okay.

this (http://www.santafe.edu/research/publications/wpabstract/199804027) seems like a wonderful treatment of the subject.
this (http://planning.cs.uiuc.edu/node475.html) touches on it
this (http://muse.jhu.edu/cgi-bin/access.cgi?uri=/journals/world_politics/v053/53.2munck.html) is excellent based on the abstract, exactly what I'm talking about

You want more, I'll dig. But frankly you can find this stuff as easily as I can. The fact I don't know the literature doesn't affect the validity of my arguments.

CallMeIshmael
05-26-2006, 02:46 PM
[ QUOTE ]
this (http://www.santafe.edu/research/publications/wpabstract/199804027) seems like a wonderful treatment of the subject.
this (http://planning.cs.uiuc.edu/node475.html) touches on it
this (http://muse.jhu.edu/cgi-bin/access.cgi?uri=/journals/world_politics/v053/53.2munck.html) is excellent based on the abstract, exactly what I'm talking about

You want more, I'll dig. But frankly you can find this stuff as easily as I can. The fact I don't know the literature doesn't affect the validity of my arguments.

[/ QUOTE ]

I couldn't read the third one, but the first two don't touch on why game theory can't be used to explain behaviour.

I'm growing tired of this debate. If you want to provide some evidence for why GT can't be used to explain behaviour, please do so. If you want to google "game theory" + "problems" and link me to posts, I'll pass.

Ive spoken to two people about this debate via PM, and both agree that this is probably beyond the point of no return.

Out of curiosity, what do you think is in books like:

Game Theory and Animal Behavior (http://www.amazon.com/gp/product/0195137906/sr=8-1/qid=1148668853/ref=sr_1_1/002-3217910-8864013?%5Fencoding=UTF8)

or

The Survival Game : How Game Theory Explains the Biology of Cooperation and Competition (http://www.amazon.com/gp/product/0805076999/sr=8-2/qid=1148668853/ref=sr_1_2/002-3217910-8864013?%5Fencoding=UTF8)

if GT cant be used to describe animal behaviour?



EDIT: not that I'm dying for more evidence, but I just looked up game theory at wiki and it says "Beginning in the 1970s, game theory has been applied to animal behavior, including species' development by natural selection."

atrifix
05-26-2006, 03:34 PM
[ QUOTE ]
You can't call something game theory based on the outcome - game theory is a matter of process, not outcome. Neither the process by which individual organisms make choices nor the process by which species evolve traits correspond to the processes outlined by game theory. Game theory doesn't work through "shotgun spirals" any more than algebra works through trial and error or calculus through hard processing of power series.

[/ QUOTE ]

I don't agree with this. The branch of game theory called evolutionary game theory works essentially in the way you describe. Equilibria actually arise through trial and error, but the function of game theory is to explain why those equilibria evolved, or why they were adaptive.
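A toy illustration of that trial-and-error process (my own sketch with the standard Hawk-Dove payoffs V=2, C=4, not from anything linked in the thread): nobody in the population calculates anything, yet blind selection on payoffs drives the hawk frequency to V/C = 0.5, exactly the mixed equilibrium game theory predicts.

# Replicator dynamics for Hawk-Dove: pure population-level trial and error
# converges to the ESS without any player "doing" game theory.  Toy sketch.

V, C = 2.0, 4.0
payoff = {('H', 'H'): (V - C) / 2, ('H', 'D'): V,
          ('D', 'H'): 0.0,         ('D', 'D'): V / 2}

x = 0.1                                   # initial fraction of hawks
dt = 0.01
for _ in range(20000):
    f_hawk = x * payoff[('H', 'H')] + (1 - x) * payoff[('H', 'D')]
    f_dove = x * payoff[('D', 'H')] + (1 - x) * payoff[('D', 'D')]
    f_avg = x * f_hawk + (1 - x) * f_dove
    x += dt * x * (f_hawk - f_avg)        # replicator equation, Euler step

print(round(x, 3))                        # -> 0.5, i.e. hawks at V/C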

[ QUOTE ]
They all seem to treat GT as a tool, not as any indication of the underlying structure of nature. Economics is the major exception, but it deals with human beings so the consideration of rational choice is more relevant. I still think it's naive to assume humans will act rationally in an economic context, and I think most economists would agree with me.

[/ QUOTE ]

I don't think that anyone thinks that GT should be used to explain the methodology of psychophysical phenomena. Rather it is used as a way for us to understand why psychophysical phenomena occur. That is, we can formulate a model that resembles a certain species and ask why a certain trait was selected for, or if that trait was selected in spite of its adaptive value, etc.

If GT made very strong predictions, we could classify observed behavior that conflicts with GT's predictions as irrational. However, the predictions of GT are often weak. But I don't think that points to a defect in the fundamental idea that a mathematical study of behavior is possible. I think it points to our lack of understanding in GT, probably because the discipline is so new.

atrifix
05-26-2006, 03:36 PM
[ QUOTE ]
Anyone have Cliff notes for this thread?

Game-theoretic answers don't necessarily correspond to what happens in real life.

blah blah blah psychology blah blah jealousy blah blah. The question has nothing to do with these human constructs.

[/ QUOTE ]

The answer is 89/11, or possibly 90/10. Determining the answer really only takes half a minute or so. The rest is more interesting, anyway.

madnak
05-26-2006, 03:59 PM
All three of those touch on why game theory can't be used to explain behavior. I'd point out the relevant quotes but it's clear that wouldn't get me anywhere. You've cut out the bodies of my last two posts and have refused to respond to my arguments. You've made it clear you're only willing to debate on your own terms, and I won't jump through your hoops if you aren't even willing to dignify my points with a response.

madnak
05-26-2006, 04:41 PM
[ QUOTE ]
I don't agree with this. The branch of game theory called evolutionary game theory works essentially in the way you describe. Equilibria actually arise through trial and error, but the function of game theory is to explain why those equilibria evolved, or why they were adaptive.

[/ QUOTE ]

I looked it up. That's much more palatable to me than classical game theory, but I still don't agree that it's necessarily a good perspective. I can see where there are valid applications of evolutionary game theory, and I suppose some of the anarcho-capitalist reasoning I engage in over in politics could be considered "game-theoretical" in that context. So I'll concede that there's applicability. But I still firmly believe that GT is a tool and a limited one, not a good basis for the entire body of analytical theory regarding subjects like evolution. A hammer is very useful, but some problems are better solved with more subtlety. I don't think GT is very subtle at all. In particular a GT model must rely on many assumptions, and it makes convenient post-hoc explanations very easy to fall victim to.

I don't like the term "why" in this kind of context either - it implies a purpose and therefore a direction or will. Maybe I'm being nitpicky here, I just think many of the common misconceptions about evolution stem from this kind of language.

It also puts evolution in a very brutal light. That's not necessarily a bad thing but isn't, in my opinion, the only valid perspective. The emphasis is on struggle, and on individuals in the system, and on a kind of abstraction represented as agency (even if it's acknowledged that agency isn't "really" involved). Game theory seems to lose the forest for the trees pretty often.

[ QUOTE ]
I don't think that anyone thinks that GT should be used to explain the methodology of psychophysical phenomena.

[/ QUOTE ]

Unless I'm radically misinterpreting him, that seems to be exactly what CMI is arguing through this thread.

[ QUOTE ]
Rather it is used as a way for us to understand why psychophysical phenomena occur. That is, we can formulate a model that resembles a certain species and ask why a certain trait was selected for, or if that trait was selected in spite of its adaptive value, etc.

[/ QUOTE ]

Again, that's at the risk of artificially constructing an economy and system of evaluation based on limited information. In this sense it's possible to apply GT to anything almost by definition, including chemistry and physics. You could call virtually any natural state of equilibrium a "Nash equilibrium" and work backwards from there. Do you think that's necessarily a good idea? Should we consider covalent bonding as a cooperative game between atoms? Wouldn't it be valid? Would it help or hinder further understanding of chemistry?

[ QUOTE ]
If GT made very strong predictions, we could classify observed behavior that conflicts with GT's predictions as irrational.

[/ QUOTE ]

But why? It would be irrational only according to the definition provided specifically by GT. One of the most disturbing trends that I see in game theorists is the tendency to equate the mathematical definition of rational in GT with dictionary definitions like "consistent with or based on reason; logical" or even "of sound mind; sane." These are separate ideas and shouldn't be used interchangeably. Even if a game theorist believes a logical person will always act according to GT, it doesn't follow implicitly.

[ QUOTE ]
However, the predictions of GT are often weak. But I don't think that points to a defect in the fundamental idea that a mathematical study of behavior is possible. I think it points to our lack of understanding in GT, probably because the discipline is so new.

[/ QUOTE ]

As new branches of GT arise that reject some of the assumptions of classical GT, the entire discipline will become something else. And since things like probability mechanics are now being labeled as "game theory," eventually the subject may be so broad as to render my objections irrelevant. Right now I don't think that's the case. Its main application involves the tautological concept that all action is based on adaptation, that the value of every action is defined by "fitness." But "fitness" itself is really just a jumbling-together of a large number of needs and goals, many of which are relative or circumstantial. Reducing such a complex system into a single value of a quantifiable "self-interest" seems deeply misguided to me.

CallMeIshmael
05-26-2006, 05:00 PM
Here's the thing... this debate started with this:

[ QUOTE ]
[ QUOTE ]
BUT, organisms (including humans) make unconscious decisions every day that are directly predicted by game theory since evolution seeks the solutions predicted by GT.

[/ QUOTE ]

That's just plain false. Back it up.

[/ QUOTE ]


You tell me I am plain false, then I post articles showing how GT was used to predict behaviour. Now that I have clearly backed it up (note: the post didn't say that decisions used the process of game theory, simply that GT predicted behaviours), you have changed this into an argument about whether or not the spiral towards equilibrium that occurs through natural selection can be defined as game theory.

I'm not dying to argue semantics.

"Game theory is a branch of applied mathematics that studies strategic situations where players choose different actions in an attempt to maximize their returns." Since the spiral is the result of players maximizing their fitness against the strategies of other players, I choose to call this game theory. FWIW, Atrifix's post said it better than I ever could.

madnak
05-26-2006, 05:04 PM
That's not where this started.

[ QUOTE ]
The idiots are the ones who act according to game theory. "Rational" in game theory actually means irrational based on most practical definitions of rationality. Any game theory opponent is by definition extremely stupid, shortsighted, and foolish.

More importantly, it assumes that neither opponent has a psychology. Any psychology. The introduction of psychology into the situation makes game theoretical opponents pathetically weak. Which is exactly why human beings have evolved traits that are antithetical to game theoretically correct action.

The correct answer to this question is 50/50, and game theory is completely irrelevant to the situation. I believe those who suggest otherwise are honestly mentally deficient.

[/ QUOTE ]

I already conceded that point. But if you need me to do it explicitly, okay. You're right, I'm wrong, game theory does predict human decisions.

CallMeIshmael
05-26-2006, 05:06 PM
[ QUOTE ]
[ QUOTE ]
I don't think that anyone thinks that GT should be used to explain the methodology of psychophysical phenomena.

[/ QUOTE ]

Unless I'm radically misinterpreting him, that seems to be exactly what CMI is arguing through this thread.

[/ QUOTE ]


I know very little of psychophysics, and I dont *think* I argued that, but to be honest Im not 100% sure what you claim I am arguing.


My argument:

- organisms are presented with decisions that affect fitness
- evolution will have pushed decision making process towards an ESS

CallMeIshmael
05-26-2006, 05:10 PM
[ QUOTE ]
I already conceded that point. But if you need me to do it explicitly, okay. You're right, I'm wrong, game theory does predict human decisions.

[/ QUOTE ]


Then what are we even debating now?

Just whether or not the spiral is technically game theory? I mean, if that's it, we should just let this die, since that's semantics.

madnak
05-26-2006, 05:12 PM
[ QUOTE ]
My argument:

- organisms are presented with decisions that affect fitness
- evolution will have pushed decision making process towards an ESS

[/ QUOTE ]

I don't understand how some of your posts were designed to support this position. Maybe I've just misinterpreted statements like "Game theory is precisely how nature reaches its conclusions." To me that seems like an assertion that evolution works according to game theory rather than trial and error. I suppose you meant it as "evolution will have pushed decision making process towards an ESS?"

madnak
05-26-2006, 05:13 PM
[ QUOTE ]
Then what are we even debating now?

[/ QUOTE ]

The value of a game theoretical approach, philosophically. That's been my interpretation of what this debate is about since moorobot brought it up early in the thread.

CallMeIshmael
05-26-2006, 05:29 PM
[ QUOTE ]
I don't understand how some of your posts were designed to support this position. Maybe I've just misinterpreted statements like "Game theory is precisely how nature reaches its conclusions." To me that seems like an assertion that evolution works according to game theory rather than trial and error. I suppose you meant it as "evolution will have pushed decision making process towards an ESS?"

[/ QUOTE ]

Evolution acts by trial and error.

- It favours the organisms that are playing a game-theoretically superior strategy (though it also favours other things... you can make good decisions but still be too weak to survive... but, all else equal, the better decision makers are favoured). When I say a GT-superior strategy, I mean they are making better decisions than their conspecifics, given the environment and the other players.

- Eventually, the participants reach an ESS, or, put another way, a Nash equilibrium.


This is what I meant by the statement.
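To be concrete about what I mean by an ESS, here's a rough sketch of my own using Maynard Smith's textbook conditions, where E(a, b) is the payoff to strategy a against strategy b: a resident strategy is an ESS if no rare mutant can invade it. The payoff numbers below are the usual one-shot Prisoner's Dilemma values, just for illustration.

# My own sketch of the ESS test: resident x resists mutant y if it does
# strictly better against the resident, or ties against the resident but
# beats the mutant in the mutant-vs-mutant matchup.

def is_ess(E, x, mutants):
    for y in mutants:
        if y == x:
            continue
        if E(x, x) > E(y, x):
            continue                      # strictly better against the resident
        if E(x, x) == E(y, x) and E(x, y) > E(y, y):
            continue                      # tie, but wins the tiebreak
        return False
    return True

# One-shot Prisoner's Dilemma: always-defect is an ESS, always-cooperate is not.
table = {('D', 'D'): 1, ('D', 'C'): 5, ('C', 'D'): 0, ('C', 'C'): 3}
E = lambda a, b: table[(a, b)]
print(is_ess(E, 'D', ['C', 'D']), is_ess(E, 'C', ['C', 'D']))   # True False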

madnak
05-26-2006, 05:30 PM
Okay, I see. It's probably my fault I misinterpreted, I've been trigger-happy lately. Sorry to waste your time.

CallMeIshmael
05-26-2006, 05:34 PM
[ QUOTE ]
Okay, I see. It's probably my fault I misinterpreted, I've been trigger-happy lately. Sorry to waste your time.

[/ QUOTE ]

I can't believe this entire thread was the result of a misunderstanding.


HAHHAHA

It's all good :)

atrifix
05-26-2006, 05:43 PM
[ QUOTE ]
I don't like the term "why" in this kind of context either - it implies a purpose and therefore a direction or will. Maybe I'm being nitpicky here, I just think many of the common misconceptions about evolution stem from this kind of language.

[/ QUOTE ]

I don't agree. The term "why"--in this instance--refers to our understanding of said phenomena. To take Putnam's example, there are (at least) two ways of explaining why a square peg doesn't fit into a round hole: one based on its shape, diameter, etc., and the other based on the spatiotemporal location of all of its atoms. Both of these explain why the square peg doesn't fit into the round hole (Putnam doesn't think so, but I do). Presumably the former is a better explanation than the latter. But the "why" doesn't indicate that there is a direction or will, and certainly doesn't ascribe those properties to the peg or the hole.

[ QUOTE ]
It also puts evolution in a very brutal light. That's not necessarily a bad thing but isn't, in my opinion, the only valid perspective. The emphasis is on struggle, and on individuals in the system, and on a kind of abstraction represented as agency (even if it's acknowledged that agency isn't "really" involved). Game theory seems to lose the forest for the trees pretty often.

[/ QUOTE ]

I agree with this. I think that it's a problem with the construct and some of the common assumptions (common knowledge of rationality, etc.), rather than something fundamentally wrong with the idea of mathematically analyzing decision theory.

[ QUOTE ]
[ QUOTE ]
I don't think that anyone thinks that GT should be used to explain the methodology of psychophysical phenomena.

[/ QUOTE ]

Unless I'm radically misinterpreting him, that seems to be exactly what CMI is arguing through this thread.

[/ QUOTE ]

I don't think so, but perhaps I'm misinterpreting him.

[ QUOTE ]
Again, that's at the risk of artificially constructing an economy and system of evaluation based on limited information. In this sense it's possible to apply GT to anything almost by definition, including chemistry and physics. You could call virtually any natural state of equilibrium a "Nash equilibrium" and work backwards from there. Do you think that's necessarily a good idea? Should we consider covalent bonding as a cooperative game between atoms? Wouldn't it be valid? Would it help or hinder further understanding of chemistry?

[/ QUOTE ]

Well, we don't want to work backward for everything in the theory. That would be bad. But for some things I think it is valid and useful.

Should we view covalent bonding as a coordination game? I don't think so. On the one hand, it seems strange to apply game theory to disciplines like physics or chemistry, where the entire discipline can probably be summed up in natural laws. It makes more sense to apply it to things like biology, where there are selection mechanisms in place. Now, you could argue that certain stable atoms/molecules are "selected"; we could view radioactive emission as a kind of "natural selection", for instance. But that seems very strange.

Is there a fundamental difficulty with applying GT to covalent bonding? I don't see what it is. I don't think it would be useful anytime in the foreseeable future. I don't think that it would give us a better explanation than the ones we could formulate in more standard language. I do think that applying GT to things like evolution or rational decisions could give us a better explanation than the ones we currently have.

[ QUOTE ]
[ QUOTE ]
If GT made very strong predictions, we could classify observed behavior that conflicts with GT's predictions as irrational.

[/ QUOTE ]

But why? It would be irrational only according to the definition provided specifically by GT.

[/ QUOTE ]

Well, if the theory made very accurate predictions, we could bite the bullet and accept some casualties, so to speak. No one thinks behavior is rational at every given moment in the history of the universe. The fact that the theory makes very inaccurate predictions is a good indicator that there is something wrong with the theory rather than with the world.

[ QUOTE ]
One of the most disturbing trends that I see in game theorists is the tendency to equate the mathematical definition of rational in GT with dictionary definitions like "consistent with or based on reason; logical" or even "of sound mind; sane." These are separate ideas and shouldn't be used interchangeably. Even if a game theorist believes a logical person will always act according to GT, it doesn't follow implicitly.

[/ QUOTE ]

I agree, but I think game theorists have gotten much better about this as time has gone by.

[ QUOTE ]
As new branches of GT arise that reject some of the assumptions of classical GT, the entire discipline will become something else.

[/ QUOTE ]

I don't know. I think of classical game theory, behavioral game theory, evolutionary game theory, bounded rationality models, etc., as all the same discipline. They all involve attempts to capture what's going on in multiperson decision theory, even though they have very different methodologies. If you were to reject the axiom of choice, you'd still be doing set theory.

[ QUOTE ]
And since things like probability mechanics are now being labeled as "game theory," eventually the subject may be so broad as to render my objections irrelevant. Right now I don't think that's the case. Its main application involves the tautological concept that all action is based on adaptation, that the value of every action is defined by "fitness." But "fitness" itself is really just a jumbling-together of a large number of needs and goals, many of which are relative or circumstantial. Reducing such a complex system into a single value of a quantifiable "self-interest" seems deeply misguided to me.

[/ QUOTE ]

Well, I agree that there are fundamental difficulties in imagining that we can accurately determine an individual's preferences. But I don't think that is a deathblow to game theory. We can construct reasonable models that are analogously similar to observed behavior. And we can also form a rough idea of individuals' preferences, and, as the theory progresses, hope to make that margin of error smaller.

CallMeIshmael
05-26-2006, 05:47 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
I don't think that anyone thinks that GT should be used to explain the methodology of psychophysical phenomena.

[/ QUOTE ]

Unless I'm radically misinterpreting him, that seems to be exactly what CMI is arguing through this thread.

[/ QUOTE ]

I don't think so, but perhaps I'm misinterpreting him.

[/ QUOTE ]


Im actually kind of an expert in the field of what Im thinking.

I am, however, lacking in understanding of what exactly "the methodology of psychophysical phenomena" means. I mean, I know what all of those words mean. But, when I put them together I have problems.

Little help?

atrifix
05-26-2006, 05:55 PM
[ QUOTE ]
Im actually kind of an expert in the field of what Im thinking.

I am, however, lacking in understanding of what exactly "the methodology of psychophysical phenomena" means. I mean, I know what all of those words mean. But, when I put them together I have problems.

Little help?

[/ QUOTE ]

I just mean the mechanisms by which they develop. That would have been a better phrase than methodology. Obviously species don't evolve because they are thinking about their payoffs.

CallMeIshmael
05-26-2006, 05:57 PM
FWIW, I would also contend that humans always act rationally. (though Im a lot less sure that this is correct than other statements Ive made in this thread)


For example, if we use the OP: a person giving 50/50 can be rational because they value the idea of fairness more than the $49 they lose by not offering less.


Someone who gives to charity is rational because the cost of giving is outweighed by the satisfaction of giving to charity. But, at the same time, someone else is rational for NOT giving to charity because they may simply get less satisfaction out of giving.


Someone who drives drunk is (horribly) rational because they feel they are not too drunk to drive.



I think the statement "humans sometimes don't act rationally" is probably more accurately stated as "psychology leads humans to have payouts different from those predicted by the math, and so varied that we can't begin to describe them."

atrifix
05-26-2006, 06:04 PM
[ QUOTE ]
FWIW, I would also contend that humans always act rationally. (though Im a lot less sure that this is correct than other statements Ive made in this thread)

[/ QUOTE ]

I suppose that you'd need to indicate what rationality means in this context. If rationality is defined as "the way a human acts", then it's true but totally uninteresting. If you're arguing for something like rational choice theory (e.g., that people's preferences are always transitive), I disagree.

CallMeIshmael
05-26-2006, 06:13 PM
[ QUOTE ]
[ QUOTE ]
FWIW, I would also contend that humans always act rationally. (though Im a lot less sure that this is correct than other statements Ive made in this thread)

[/ QUOTE ]

I suppose that you'd need to indicate what rationality means in this context. If rationality is defined as "the way a human acts", then it's true but totally uninteresting.

[/ QUOTE ]

This is essentially what I'm saying. And I agree the definition is uninteresting.

BUT, what is interesting is devising payoff functions from observations, given we know that at the time of the decision, the given path had the maximum payoff of all possible options.



For example, let's say the ultimatum game is played twice.

The first time with 100 $1 bills, and the second with 100 $100 bills.

So, player A offers $1 in the first game and $100 in the second.

I'm certain there is someone out there who would reject in the first game and not in the second.

We have just learned that this person values not getting a crappy offer in the game as more than $1 but less than $100.
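Here's a rough sketch (mine, toy numbers) of that kind of inference: each observed accept/reject at a known offer size tightens the bounds on the dollar value the person puts on refusing a crappy offer.

# Toy revealed-preference bound (my own sketch): rejecting an unfair $1 offer
# but accepting an unfair $100 offer pins the value of "not taking a crappy
# offer" between those two numbers.

def bound_refusal_value(observations):
    # observations: list of (offer_in_dollars, accepted) pairs
    lower, upper = 0, float('inf')
    for offer, accepted in observations:
        if accepted:
            upper = min(upper, offer)     # refusing is worth less than this offer
        else:
            lower = max(lower, offer)     # refusing is worth more than this offer
    return lower, upper

print(bound_refusal_value([(1, False), (100, True)]))   # (1, 100)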

madnak
05-26-2006, 06:24 PM
[ QUOTE ]
I don't agree. The term "why"--in this instance--refers to our understanding of said phenomena. To take Putnam's example, there are (at least) two ways of explaining why a square peg doesn't fit into a round hole: one based on its shape, diameter, etc., and the other based on the spatiotemporal location of all of its atoms. Both of these explain why the square peg doesn't fit into the round hole (Putnam doesn't think so, but I do). Presumably the former is a better explanation than the latter. But the "why" doesn't indicate that there is a direction or will, and certainly doesn't ascribe those properties to the peg or the hole.

[/ QUOTE ]

You have a point. I don't know, I'm more comfortable with "how" where GT is concerned. I guess it seems more functional than structural, or something.

[ QUOTE ]
I agree with this. I think that it's a problem with the construct and some of the common assumptions (common knowledge of rationality, etc.), rather than something fundamentally wrong with the idea of mathematically analyzing decision theory.

[/ QUOTE ]

I agree, but I also think assumptions about the discrete nature of decision-making are relevant. If someone offers me $5, I can accept it or reject it. But I can also punch him in the face, or perform showtunes, or try to wheedle more out of him, or pull out a gun and ask for his wallet too. Is there any part of game theory that applies when the range of decisions is qualitatively infinite?

[ QUOTE ]
Well, we don't want to work backward for everything in the theory. That would be bad. But for some things I think it is valid and useful.

Should we view covalent bonding as a coordination game? I don't think so. On the one hand, it seems strange to apply game theory to disciplines like physics or chemistry, where the entire discipline can probably be summed up in natural laws. It makes more sense to apply it to things like biology, where there are selection mechanisms in place. Now, you could argue that certain stable atoms/molecules are "selected"; we could view radioactive emission as a kind of "natural selection", for instance. But that seems very strange.

Is there a fundamental difficulty with applying GT to covalent bonding? I don't see what it is. I don't think it would be useful anytime in the foreseeable future. I don't think that it would give us a better explanation than the ones we could formulate in more standard language. I do think that applying GT to things like evolution or rational decisions could give us a better explanation than the ones we currently have.

[/ QUOTE ]

I agree with "better" in the sense of "more useful," but I don't know about "better" in the sense of "more correct."

[ QUOTE ]
Well, if the theory made very accurate predictions, we could bite the bullet and accept some casualties, so to speak. No one thinks behavior is rational at every given moment in the history of the universe. The fact that the theory makes very inaccurate predictions is a good indicator that there is something wrong with the theory rather than with the world.

[/ QUOTE ]

But why the connection with substantive rationality in the first place? If GT could perfectly predict everything in the universe, everything would be rational by definition, wouldn't it? It would still be impossible to make valuative distinctions based on GT.

[ QUOTE ]
I agree, but I think game theorists have gotten much better about this as time has gone by.

[/ QUOTE ]

That's a perspective I lack. If it's "getting better," that's promising.

[ QUOTE ]
I don't know. I think of classical game theory, behavioral game theory, evolutionary game theory, bounded rationality models, etc., as all the same discipline. They all involve attempts to capture what's going on in multiperson decision theory, even though they have very different methodologies. If you were to reject the axiom of choice, you'd still be doing set theory.

[/ QUOTE ]

Can you elaborate on the specific definition of "multiperson decision theory" that's relevant? What, mathematically, defines game theory as a discipline? Is it clear-cut or is there gray area?

[ QUOTE ]
Well, I agree that there are fundamental difficulties in imagining that we can accurately determine an individual's preferences. But I don't think that is a deathblow to game theory. We can construct reasonable models that are analogously similar to observed behavior. And we can also form a rough idea of individuals' preferences, and, as the theory progresses, hope to make that margin of error smaller.

[/ QUOTE ]

I think there's probably a limit to the applicability of game theory. At a certain point either it will cease to be a useful approach for many problems, or there will be a margin of error that can't be mitigated.

madnak
05-26-2006, 06:30 PM
This is intuitive, but I have reason to believe it's not true. I think the brain is designed to be rational in this sense, but I don't know if the actual chemical mechanism of the brain can be relied upon. What happens when the brain short-circuits? For example, I'd say that a seizure doesn't represent rational action. But I think this might also apply to conscious action in cases where something just isn't "working right," such as severe psychosis or perhaps some kinds of chemical interaction.

(And now I'm using "designed." Shoot me.)

madnak
05-26-2006, 06:35 PM
[ QUOTE ]
We have just learned that this person values not getting a crappy offer in the game as more than $1 but less than $100.

[/ QUOTE ]

The person might reject $1 even if there's no game involved. Hey, I'd reject a nickel no matter what. Not worth carrying around. Who uses nickels, anyway?

I'm being tongue-in-cheek here, and am not actually disputing you. After all, if a nickel has negative value for me, then needless to say I value not getting a crappy offer more than a nickel. But that would be true even if I didn't care about crappy offers one way or the other - a value of 0 is higher than a negative value.

atrifix
05-26-2006, 07:13 PM
[ QUOTE ]
You have a point. I don't know, I'm more comfortable with "how" where GT is concerned. I guess it seems more functional than structural, or something.

[/ QUOTE ]

I don't understand this distinction. Care to elaborate?

[ QUOTE ]
I agree, but I also think assumptions about the discrete nature of decision-making are relevant. If someone offers me $5, I can accept it or reject it. But I can also punch him in the face, or perform showtunes, or try to wheedle more out of him, or pull out a gun and ask for his wallet too. Is there any part of game theory that applies when the range of decisions is qualitatively infinite?

[/ QUOTE ]

You have a point there. I don't know of any such thing, and I imagine it'd be exceptionally difficult to do because as you introduce more decisions, players, etc., the game increases exponentially in complexity. Game theorists have an incredibly difficult time solving 3-person games, let alone infinite-person games.

[ QUOTE ]
I agree with "better" in the sense of "more useful," but I don't know about "better" in the sense of "more correct."

[/ QUOTE ]

I think we are in agreement. I probably have a minority view with regards to this aspect of explanation, but I don't think that there are explanations that are "more correct" than other ones--I think that "more correct" is a misnomer. An explanation is either true or false, but certain ones are more useful than others.

[ QUOTE ]
[ QUOTE ]
Well, if the theory made very accurate predictions, we could bite the bullet and accept some casualties, so to speak. No one thinks behavior is rational at every given moment in the history of the universe. The fact that the theory makes very inaccurate predictions is a good indicator that there is something wrong with the theory rather than with the world.

[/ QUOTE ]

But why the connection with substantive rationality in the first place? If GT could perfectly predict everything in the universe, everything would be rational by definition, wouldn't it? It would still be impossible to make valuative distinctions based on GT.

[/ QUOTE ]

If GT perfectly predicted everything in the universe, then I think everything would be rational, but it wouldn't be empty. We don't want the theory to reduce to emptiness. But GT could be used to explain why what we see is rational, and what we don't see is irrational. We certainly don't want to lump everything imaginable into rationality. At any rate, I don't think that's a problem at the moment ;)

If we had an intuitive, adequate model that successfully explained why people playing the ultimatum game commonly make and accept offers around 50-50, and if that model also made accurate predictions about other behavior not yet observed, that would be a strong starting point. Presumably there would be some behavior in conflict with the model. But we would have a much better case to say that the behavior is irrational rather than say that the model is wrong. As it is, almost all experiments--prisoner's dilemmas, centipede games, ultimatum games, dictator games--have produced behavior wildly incompatible with the results predicted by game theory. That is not a strong starting point.

[ QUOTE ]
Can you elaborate on the specific definition of "multiperson decision theory" that's relevant?

[/ QUOTE ]

A wise person wrote that game theory is really a misnomer for multiperson decision theory. I forget who that was.

[ QUOTE ]
What, mathematically, defines game theory as a discipline? Is it clear-cut or is there gray area?

[/ QUOTE ]

Well, games are clear-cut. The study of game theory as a discipline is not as much. We have games (multiperson decisions) when we have well-defined sets of players, strategies, preferences, and outcomes. I suppose we could say that game theorists study whether games tell us anything meaningful.

[ QUOTE ]
I think there's probably a limit to the applicability of game theory. At a certain point either it will cease to be a useful approach for many problems, or there will be a margin of error that can't be mitigated.

[/ QUOTE ]

Probably, but I don't think it speaks against it as a discipline. Physics is basically useless when trying to describe the behavior of cells, because cells are complex things and physics studies simple things. But physics is a pretty useful science.

atrifix
05-26-2006, 07:15 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
FWIW, I would also contend that humans always act rationally. (though Im a lot less sure that this is correct than other statements Ive made in this thread)

[/ QUOTE ]

I suppose that you'd need to indicate what rationality means in this context. If rationality is defined as "the way a human acts", then it's true but totally uninteresting.

[/ QUOTE ]

This is essentially what I'm saying. And I agree the definition is uninteresting.

BUT, what is interesting is devising payoff functions from observations, given we know that at the time of the decision, the given path had the maximum payoff of all possible options.



For example, let's say the ultimatum game is played twice.

The first time with 100 $1 bills, and the second with 100 $100 bills.

So, player A offers $1 in the first game and $100 in the second.

I'm certain there is someone out there who would reject in the first game and not in the second.

We have just learned that this person values not getting a crappy offer in the game as more than $1 but less than $100.

[/ QUOTE ]

This is interesting to me only as a psychological/cognitive study. It really has nothing to do with rationality. If the definition of rationality reduces to "people prefer to do what they prefer to do", then it becomes completely uninteresting.

jogsxyz
05-26-2006, 07:19 PM
[ QUOTE ]
This assumes that the sons have a continuing relationship. Game theory assumes the opposite. Also I'm not sure fairness would necessarily be indicated even with an arbitrary number of iterations. Someone correct me if I'm wrong, but in this case player A has a clear strategic advantage and player B has no recourse. Player A knows player B is completely rational and player B knows the same of player A.

[/ QUOTE ]
The original question seems to be paraphrased from a class.
Neither player has recourse. No deal, and each receives zero, nada. Both players know neither will receive anything if there's no deal. The argument that Bob must either accept 11 or get nothing applies equally to Abe. If Abe doesn't offer an acceptable deal, Abe gets nothing.

[ QUOTE ]

Therefore the tactics of manipulation won't work - both players know the equilibrium and know that they'll both profit most by going straight to the equilibrium solution.

[/ QUOTE ]

Obviously we don't agree on what the equilibrium is for this problem. We only agree that if the two players make a deal on the first round, the joint EV is 100.

[ QUOTE ]

To put it another way, we can look at an arbitrarily long series of trials and start with the last one. 89/11 is clearly the answer here because there will be no further contact. Since we know that 89/11 will be the offer on the last trial, no matter what, and since B isn't spiteful and will accept that value on the last trial, we know that the second-to-last trial will have no effect on the last trial. Since the second-to-last trial can have no bearing on future interactions, the offer for that trial must also be 89/11. But if that's true then the third-to-last trial has no effect on subsequent trials either, and so on. You can work this all the way back to the first trial.

[/ QUOTE ]

89/11 is not clearly anything. Abe is spiteful in offering so little. Bob should refuse. Both get nothing.

If it were a farmer with an apple tree, he could make an unfair offer. If the first picker doesn't accept, the farmer could look for another picker until he finds one willing to accept.

madnak
05-26-2006, 11:32 PM
[ QUOTE ]
[ QUOTE ]
You have a point. I don't know, I'm more comfortable with "how" where GT is concerned. I guess it seems more functional than structural, or something.

[/ QUOTE ]

I don't understand this distinction. Care to elaborate?

[/ QUOTE ]

It seems like game theory is about what things do, rather than what they are. So "how" seems more appropriate. But "why" is appropriate the way you used it. It's not a big deal.

[ QUOTE ]
You have a point there. I don't know of any such thing, and I imagine it'd be exceptionally difficult to do because as you introduce more decisions, players, etc., the game increases exponentially in complexity. Game theorists have an incredibly difficult time solving 3-person games, let alone infinite-person games.

[/ QUOTE ]

Yeah. I read a book a while ago called Finite and Infinite Games (http://www.amazon.com/gp/product/0345341848/sr=8-1/qid=1148699795/ref=pd_bbs_1/002-5536401-9603259?%5Fencoding=UTF8). It wasn't exactly scientific, but interesting food for thought. Will game theory ever be able to explain the appeal of concepts like Taoism, or the mechanics of functions like imagination? Even in extremely finite games, optimal strategy is hard to calculate. I hear Go, for all its apparent simplicity, is almost impossible to crack.

[ QUOTE ]
I think we are in agreement. I probably have a minority view with regards to this aspect of explanation, but I don't think that there are explanations that are "more correct" than other ones--I think that "more correct" is a misnomer. An explanation is either true or false, but certain ones are more useful than others.

[/ QUOTE ]

Well, I believe in shades of truth. I brought up Newton's mechanics as an example - strictly speaking, they're false. They fail to account, apparently, for relativity mechanics and similar issues. But they are definitely useful.

[ QUOTE ]
If we had an intuitive, adequate model that successfully explained why people playing the ultimatum game commonly make and accept offers around 50-50, and if that model also made accurate predictions about other behavior not yet observed, that would be a strong starting point. Presumably there would be some behavior in conflict with the model. But we would have a much better case to say that the behavior is irrational rather than say that the model is wrong. As it is, almost all experiments--prisoner's dilemmas, centipede games, ultimatum games, dictator games--have produced behavior wildly incompatible with the results predicted by game theory. That is not a strong starting point.

[/ QUOTE ]

I agree. Is there any work on decision theory that involves "fuzzy logic" and incomplete information and conflicting objectives, outside of artificial intelligence?

[ QUOTE ]
Well, games are clear-cut. The study of game theory as a discipline is not as much. We have games (multiperson decisions) when we have well-defined sets of players, strategies, preferences, and outcomes. I suppose we could say that game theorists study whether games tell us anything meaningful.

[/ QUOTE ]

Does this apply to applications of game theory? Does it work? What about erratic behavior that doesn't fit the well-defined sets of strategies and outcomes? How is that handled under GT? Ignore it? Update the model?

atrifix
05-27-2006, 12:54 AM
[ QUOTE ]
Yeah. I read a book a while ago called Finite and Infinite Games (http://www.amazon.com/gp/product/0345341848/sr=8-1/qid=1148699795/ref=pd_bbs_1/002-5536401-9603259?%5Fencoding=UTF8). It wasn't exactly scientific, but interesting food for thought. Will game theory ever be able to explain the appeal of concepts like Taoism, or the mechanics of functions like imagination?

[/ QUOTE ]

Maybe. Not anytime in the foreseeable future. Edit: Thanks for the reference; it looks interesting.

[ QUOTE ]
Even in extremely finite games, optimal strategy is hard to calculate. I hear Go, for all its apparent simplicity, is almost impossible to crack.

[/ QUOTE ]

I'd believe that, although I've never learned to play Go. I am a strong chess player, though. It's almost impossible to use game theory in things like poker and chess except in extremely limited circumstances, because the game almost immediately reaches a complexity that is outside the realm of game theory. But the development of game theory has actually been highly useful in those fields for AI algorithms. Computers are now next to unbeatable at chess, and are quickly approaching that level in poker.

[ QUOTE ]
[ QUOTE ]
If we had an intuitive, adequate model that successfully explained why people playing the ultimatum game commonly make and accept offers around 50-50, and if that model also made accurate predictions about other behavior not yet observed, that would be a strong starting point. Presumably there would be some behavior in conflict with the model. But we would have a much better case to say that the behavior is irrational rather than say that the model is wrong. As it is, almost all experiments--prisoner's dilemmas, centipede games, ultimatum games, dictator games--have produced behavior wildly incompatible with the results predicted by game theory. That is not a strong starting point.

[/ QUOTE ]

I agree. Is there any work on decision theory that involves "fuzzy logic" and incomplete information and conflicting objectives, outside of artificial intelligence?

[/ QUOTE ]

I don't know. I've only scratched the surface of the literature on GT, and I come from a philosophical background rather than a mathematical one, so some of the models are beyond me. There is a very large body of work on games of incomplete information, even in classical game theory. I should also mention that there is a large body of work on infinitely repeated games, since those are much easier to analyze than games with an infinite number of players.

There are about 5 assumptions that go into classical GT:

(1) Players know the structure of the game.
(2) Players are rational. That is, they are completely selfish, and their preferences can be modeled by a '>' relation on a set, so their preferences are complete and transitive. Choice follows preference.
(3) Players know that other players are rational (CKR).
(4) Players are capable of performing the necessary calculations.
(5) Players have perfect recall.

These assumptions can be reformulated in a variety of ways, but that's the basic idea. They all seem more or less reasonable--particularly for simple games like the ultimatum game--but when you put people in a laboratory you get wildly conflicting behavior. So one of the assumptions has to go.
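For what it's worth, assumption (2) is the easiest one to state precisely. Here's a rough sketch of my own of what "complete and transitive" cashes out to for a finite set of outcomes, where prefers(a, b) means "a is at least as good as b":

# Sketch (mine) of the rationality assumption: a weak preference relation is
# "rational" in the GT sense if it is complete (any two outcomes compare) and
# transitive (a >= b and b >= c imply a >= c).

from itertools import product

def is_rational(outcomes, prefers):
    complete = all(prefers(a, b) or prefers(b, a)
                   for a, b in product(outcomes, repeat=2))
    transitive = all(prefers(a, c)
                     for a, b, c in product(outcomes, repeat=3)
                     if prefers(a, b) and prefers(b, c))
    return complete and transitive

# A purely money-loving player: weakly prefers more dollars to fewer.
print(is_rational([0, 1, 10, 11, 89], lambda a, b: a >= b))   # True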

No one has managed to come up with any kind of generalized model that explains observed behavior in a reasonable way. Whoever does so will probably win the Nobel Prize. Camerer's book, Behavioral Game Theory, is the most interesting on this subject.

[ QUOTE ]
[ QUOTE ]
Well, games are clear-cut. The study of game theory as a discipline is not as much. We have games (multiperson decisions) when we have well-defined sets of players, strategies, preferences, and outcomes. I suppose we could say that game theorists study whether games tell us anything meaningful.

[/ QUOTE ]

Does this apply to applications of game theory? Does it work?

[/ QUOTE ]

I'm not sure what you mean by that. Personally, I am skeptical of most applications of game theory, except perhaps in biology. For example, Hobbes seems to think that prisoner's dilemmas are the underlying structure that gives rise to political bodies, whereas Rousseau thinks it is more like a stag hunt. Obviously those guys weren't using game theory because it hadn't been invented yet, but it's not difficult to reinterpret their arguments to account for GT. Who's to say who is correct?

[ QUOTE ]
What about erratic behavior that doesn't fit the well-defined sets of strategies and outcomes? How is that handled under GT? Ignore it? Update the model?

[/ QUOTE ]

Well, if we have erratic behavior not in the set, then we don't have a game ;) That's all I meant by well-defined. For example, you have two people play an ultimatum game. One proposes a split of 100 $100 bills, and the other accepts or rejects. The player who is supposed to propose the split grabs the money and runs out of the room. I don't think our theory should be required to account for such an action. If I found that in the experimental results, I would interpret it as a joke.

dms
05-27-2006, 02:49 AM
[ QUOTE ]
FWIW, I would also contend that humans always act rationally. (though Im a lot less sure that this is correct than other statements Ive made in this thread)


[/ QUOTE ]

This is just not true unless you're going to say that people intend to act so as to maximize their utility. People are too irrational/stupid/however you want to put it to always maximize their utility. (Addictions are one example of this: a drunk will drink every day of his life and this will not maximize his utility.)

dms
05-27-2006, 03:00 AM
The advantage/power lies with A. This is a one-time trial. The two people never interact after it is over. A can offer B $1 in the last round and B is better off accepting than rejecting the offer.

The problem is not flawed. It is specific. It's not going to model every/most real life situations you want to compare it to.

CallMeIshmael
05-27-2006, 03:58 PM
[ QUOTE ]
[ QUOTE ]
FWIW, I would also contend that humans always act rationally. (though Im a lot less sure that this is correct than other statements Ive made in this thread)


[/ QUOTE ]

This is just not true unless you're going to say that people intend to act so as to maximize their utility. People are too irrational/stupid/however you want to put it to always maximize their utility. (Addictions are one example of this: a drunk will drink every day of his life and this will not maximize his utility.)

[/ QUOTE ]

Well, this depends on his utility function.

The drunk is maximizing his utility if he values getting drunk more than his other options.

(Ironically, I mentioned this (http://www.amazon.com/gp/product/0199261857/sr=8-1/qid=1148759827/ref=sr_1_1/002-7038021-9995245?%5Fencoding=UTF8) book earlier in the thread as one of the best things I've ever read. It actually has a section explaining how addiction can result from rational decision making.)

jogsxyz
05-27-2006, 07:29 PM
War Games

Thermonuclear War.

One side strikes first. The other side retaliates. Both sides are annihilated. The solution is not to play.
-----
This game.

Abe offers an uneven split. Bob must accept or receive zero. Except it's not just Bob who receives zero. Both Abe and Bob receive zero. Bob must exert his leverage to assure his fair share. When Bob is both willing to receive zero and Abe knows that Bob is willing, Abe must offer a fair split or receive zero himself.

dms
05-27-2006, 07:59 PM
I hear what you're saying, but let's be serious.

Do you really think that a person who destroys his life by drinking/gambling money away/etc. is really going to be as happy overall as he would've been if he'd taken control of his addiction and was therefore healthier, more successful in his endeavours, more loved, etc./whatever?

I'm not saying it isn't theoretically possible for someone to gain so much from getting wasted that it would be the right choice, but regarding extreme abuse of alcohol, anyone who thinks that all alcoholics are maximizing their utility is slightly delusional. (not saying at all that this is what you are claiming, just that I think it proves my point that people are not always rational.)

CallMeIshmael
05-27-2006, 08:04 PM
[ QUOTE ]
Abe offers an uneven split. Bob must accept or receive zero. Except it's not just Bob who receives zero. Both Abe and Bob receive zero. Bob must exert his leverage to assure his fair share. When Bob is both willing to receive zero and Abe knows that Bob is willing, Abe must offer a fair split or receive zero himself.

[/ QUOTE ]

The problem with your argument is an irrationality inherent to humans.

I assure you the argument hinges on the fact that you are a spiteful person (as all humans are) who places a value of more than $1 on not giving him the money (which, despite being irrational by definition, is probably OK, and perhaps even good).


Let's change the game slightly: it is being played with million-dollar bills (let's just pretend they exist) and is only 1 round.

So, Abe has $100 million, and can offer Bob {0, $1 million, $2 million, ...}; since he only has million-dollar bills, he can't cut them.


Now, your logic still works. Bob can tell Abe he will refuse any unfair offer (for whatever definition we give "unfair"), but I ask you:

If you were Bob, and Abe offered $1 Million, would you reject the offer and take 0?


Unless Bob is EXTREMELY wealthy, the value of not getting pushed around in the deal is going to be much less than the million he has in front of him, and he is going to take the deal every time.

(For example, if Bill Gates were player 2, the utility of $1 million is almost 0, and he could credibly threaten to reject an offer of one million.)



Now, since the definition in the OP assumes rational and non-spiteful players, the offer of $1 is accepted by definition. Thus, the only correct solution is 89/11.
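Rough toy version of that point (mine; the log utility and the "pushed around" cost are made-up assumptions, not anything from the OP): Bob takes the lowball offer iff the utility of the cash beats the utility cost of getting pushed around, and with diminishing marginal utility the same $1 million clears that bar for a broke Bob but not for a Gates-rich one.

# My own sketch: accept a lowball offer iff the utility gain from the money
# exceeds a fixed "pushed around" cost.  Log utility and the cost of 2.0 are
# arbitrary assumptions, purely for illustration.

import math

def accepts(offer, wealth, pushed_around_cost=2.0):
    gain = math.log(wealth + offer) - math.log(wealth)   # utility of the cash
    return gain > pushed_around_cost

print(accepts(1_000_000, wealth=10_000))           # True  -> take the million
print(accepts(1_000_000, wealth=50_000_000_000))   # False -> can credibly refuse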

atrifix
05-27-2006, 08:58 PM
I'm not sure why this is in response to me, but it's clearly just wrong. Given these assumptions:

(1) Both players are rational. They care only about their own payoff, and they prefer more to less.
(2) There is common knowledge of rationality.

The game will clearly end with 89/11 or 90/10.

dms
05-27-2006, 09:03 PM
jogsxyz's posts in this thread make it obvious that he has no experience with game theory. He should not be posting. The problem is what it is, not what jogsxyz wants the problem to be.

CallMeIshmael
05-27-2006, 09:55 PM
[ QUOTE ]
anyone who thinks that all alcoholics are maximizing their utility is slightly delusional. (not saying at all that this is what you are claiming, just that I think it proves my point that people are not always rational.)

[/ QUOTE ]

I think this depends on the definition of rationality.


I mean, I view their decisions as irrational, as do you. And I think many would agree with this.

But that doesn't mean the person isn't acting to maximize their utility. They just view the positive utility of drinking as higher than the negative consequences of drinking.

Now, if we want to define the fact that they love drinking so much as irrational, then yes, you are right. But I don't know if irrational is the best word for what we're describing. I mean, they are still acting to maximize utility; they just have a poorer judgement of utility than we do.



OK, I can't think of a great example... (I just had friends over, and I'm kinda intoxicated, so forgive me)... but here goes:

Say for example that you are running a business, and you have a choice between being ruthless and not. Let's also assume that being ruthless somehow pays $1 million more than not being ruthless. But there are problems with being ruthless in that you have to do something you would prefer not to do (that is, if being ruthless and not had the same monetary payoff, you would prefer to not be ruthless).


Now, I would argue that it is perfectly rational for some to be ruthless and some to not be. I mean, simply because people put different values on the morals behind not being ruthless doesnt mean either is irrational, it just means they have different morals.


The problem with making judgements like "alcoholics are irrational" is that it involves making subjective judgements. Its irrational to prefer $10 to $8. But, its not irrational to prefer a red shirt to three green shirts.

I mean, even if drinking somehow costs you $50,000, it doesnt mean drinking is irrational. It just means you value it at higher than $50,000. A claim that it is irrational involves YOU making the judgement that this person is paying too much for alcohol, which is subjective.

dms
05-27-2006, 10:43 PM
I really don't think there is any chance that all alcoholics gain more utility by drinking their lives away than they could otherwise. That is my opinion, but I think it's incredibly solid and believe that it's enough to base my case on.

Assuming I'm right, if person A chooses to keep drinking and gets X in return instead of staying sober and getting Y in return, and X < Y, then person A has acted irrationally.

My argument is worthless if you think that X > Y for every single person who has ever chosen X. Otherwise, at least one person has acted irrationally at one point in regard to this example. Again, I admit it is theoretically possible, but seriously...

jogsxyz
05-27-2006, 11:15 PM
[ QUOTE ]
jogsxyz's posts in this thread make it obvious that he has no experience with game theory. he should not be posting. the problem is what it is, not what jogsxyz wants the problem to be.

[/ QUOTE ]

STAT 168. I took a course in GT probably before you were born. 1970.

It's just as logical to say Abe will get zero if he doesn't offer Bob 89. With no agreement both get nothing. It's not only Bob who gets nothing.
You guys seem to think that Abe is guaranteed some positive amount. He isn't. Abe has no absolute power over Bob. If he did, he would offer 1, not 11.

jogsxyz
05-27-2006, 11:21 PM
[ QUOTE ]

If you were Bob, and Abe offered $1 Million, would you reject the offer and take 0?


[/ QUOTE ]

Yes, I would reject the offer and demand a 50/50 split.
Abe and I would get the same amount. Both get $50M or zero.

dms
05-27-2006, 11:26 PM
Maybe it's because it's been so long since you took GT that you're a little rusty. Whatever the case, you don't understand this problem, and your posts combined would've gotten you 0 points.

Bottom line: the last guy to offer has the power, because he knows that the other guy will accept anything greater than 0, since this is not a repeated game. The other guy would be irrational to take 0 instead of 1 in the final round, and accepting or rejecting that final offer is the only decision left to him. This is the starting point for solving the problem by backward induction.

The fact that your posts show you don't understand this is why I was so blunt. Everyone's wrong sometimes and your arguments aren't nonsense, they just don't relate to this problem.
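
For what it's worth, the backward induction can be sketched mechanically. The snippet below is only my illustration, not part of the problem statement: it assumes whole-dollar offers and a responder who accepts only if the offer strictly beats what he expects from rejecting.

# Backward induction for the shrinking-pot game (pots 100, 90, 80, then 0).
# Assumptions (mine, not in the problem): whole-dollar offers, and a responder
# who accepts only if the offer strictly beats his continuation value.

def solve(pots=(100, 90, 80), proposers=("A", "B", "A")):
    value = {"A": 0, "B": 0}  # if the last offer is rejected, both get nothing
    # walk the rounds from last to first
    for pot, proposer in reversed(list(zip(pots, proposers))):
        responder = "B" if proposer == "A" else "A"
        offer = value[responder] + 1  # smallest strictly acceptable amount
        # (in this game keeping pot - offer always beats letting the round pass,
        # so the proposer never prefers to be rejected)
        value = {proposer: pot - offer, responder: offer}
    return value

print(solve())  # -> {'A': 89, 'B': 11}

Drop the "+ 1" (i.e. assume the responder also accepts when indifferent) and the same loop returns 90/10, which are exactly the two answers given earlier in the thread.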

CallMeIshmael
05-28-2006, 12:42 AM
[ QUOTE ]
[ QUOTE ]

If you were Bob, and Abe offered $1 Million, would you reject the offer and take 0?


[/ QUOTE ]

Yes, I would reject the offer and demand a 50/50 split.
Abe and I would get the same amount. Both get $50M or zero.

[/ QUOTE ]


I'm sorry, but let me get this straight.

Abe: here is one million dollars. You can say OK and take it, or reject and leave it.

You: I'll reject!



There is no demanding a fair split. You either reject or accept. That is it.

CallMeIshmael
05-28-2006, 12:58 AM
[ QUOTE ]
I really don't think there is any chance that all alcoholics gain more utility by drinking their lives away than they could otherwise. That is my opinion, but I think it's incredibly solid and believe that it's enough to base my case on.

Assuming I'm right, if person A chooses to keep drinking and gets X in return instead of staying sober and getting Y in return, and X < Y, then person A has acted irrationally.

My argument is worthless if you think that X > Y for every single person who has ever chosen X. Otherwise, at least one person has acted irrationally at one point in regard to this example. Again, I admit it is theoretically possible, but seriously...

[/ QUOTE ]


Think of this: how much will drinking/smoking one more day affect your life in the long run?

i.e. Is the payoff for quitting smoking today, for the rest of your life, that much different from the payoff for quitting starting tomorrow?

This is the first step in the common explanation for addiction being rational.

jogsxyz
05-28-2006, 01:10 AM
[ QUOTE ]
Maybe it's cause it's been so long that you took GT that you're a little rusty. Whatever the case, you don't understand this problem and you posts combined would've gotten you 0 points.

Bottom line: the last guy to offer has the power because he knows that the other guy will accept anything greater than 0 because this is not a repeated game. The other guy would be irrational to take 0 instead of 1 in the final round and this is the only decision he can make in the game. To accept or deny the final offer. This is the starting point in solving the problem using backward induction.

The fact that your posts show you don't understand this is why I was so blunt. Everyone's wrong sometimes and your arguments aren't nonsense, they just don't relate to this problem.

[/ QUOTE ]

The same argument you use for Bob accepting the short straw applies equally to Abe.
Abe having the power to make the offer is misdirection by the prof, designed to mislead you into thinking Abe has more control over the split than Bob.
The value of the game is 100 when both agree on a split in round one. The value of the game is zero if they don't ever agree. Bob actually has equal control over the final split.
Once Abe realizes Bob will refuse an unequal share and Abe will also get nothing, he will offer a 50/50 split.

jogsxyz
05-28-2006, 01:21 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]

If you were Bob, and Abe offered $1 Million, would you reject the offer and take 0?


[/ QUOTE ]

Yes, I would reject the offer and demand a 50/50 split.
Abe and I would get the same amount. Both get $50M or zero.

[/ QUOTE ]


Im sorry, but let me get this straight.

Abe: here is one million dollars. You can say OK and take it, or reject and leave it.

You: Ill reject!



There is no demanding a fair split. You either reject or accept. That is it.

[/ QUOTE ]

Yes, I reject it. I am rational and have perfect information.
Abe would not make such a lowball offer if he truly wanted a chunk of that $100M. He would know in advance that I would reject the offer and that he would also receive nothing.
[ QUOTE ]

Two players are playing a game in which they are trying to maximize their own payoffs and are not interested in any spiteful outcomes. They are rational and have perfect information about the game before it starts.

The players are given $100 and Player A makes an offer to player B for a distribution of the money. e.g. (60/40) If player B accepts, the game ends and they get their respective distributions. If player B declines, the amount of money goes down to $90 and now player B makes an offer to player A. If player A declines this offer, the amount goes down to $80 and he makes an offer to player B. If player B declines, the amount is reduced to 0 and the game is over.

What is the outcome/solution?

[/ QUOTE ]

dms
05-28-2006, 01:27 AM
Certainly this is a probable justification that would go through someone's mind. But are you really going to deny my claim that at least one drunk's overall utility would have been higher had they decided to get sober?

jogsxyz
05-28-2006, 01:33 AM
Hope this pleases you.

If Bill Gates offered me $1M, I would accept in a heartbeat. I have zero leverage over him.

atrifix
05-28-2006, 01:36 AM
Again, I think you are going to have problems with rationality reducing to emptiness. What makes it irrational to prefer $8 to $10? That is, what could possibly be irrational? If the claim is tautologous it loses a lot of force.

dms
05-28-2006, 01:44 AM
The problem assumes that you prefer more money to less money. Money is just used as a representation of utility. Etc, etc, etc...if you refuse any amount of money in a non-repeating game to instead get 0 where retaliation does not benefit you, you are not rational.

CallMeIshmael
05-28-2006, 02:37 AM
[ QUOTE ]
Hope this pleases you.

If Bill Gates offered me $1M, I would accept in a heartbeat. I have zero leverage over him.

[/ QUOTE ]


I just want to clarify this once more: once he has offered you the money, his part of the game is over.

How do you have no leverage against Bill Gates after he offers 1 million, but leverage over a random dude, considering that ONCE THEY MAKE THE OFFER THE GAME IS COMPLETELY IN YOUR HANDS?

CallMeIshmael
05-28-2006, 02:53 AM
[ QUOTE ]
Certainly this is a probable justification that would go through someone's mind. But are you really going to deny my claim that at least one drunk's overall utility would have been higher had they decided to get sober?

[/ QUOTE ]

Ohh, I don't deny it at all.


But my point is, when a person is given the options:

1. Quit drinking tonight
2. Enjoy the alcohol tonight, but quit tomorrow

The payoff for drinking tonight and living clean for (N-1) days can be higher than living clean for N days.


This is how addiction works in general. The cost of the addictive product is less than its enjoyment for any ONE day, but more than its enjoyment for a large number of days.



EDIT: just because I didn't quite address your question: no, I agree that essentially all drunks will have wished they weren't drunks, and would have had higher utility. BUT a series of rational decisions can come to that outcome.

CallMeIshmael
05-28-2006, 02:56 AM
[ QUOTE ]
What makes it irrational to prefer $8 to $10?

[/ QUOTE ]

Technically you are correct. I should have said no one prefers 8 utils to 10 utils. Money just, in general, follows the same pattern for essentially all people.

dms
05-28-2006, 04:29 AM
[ QUOTE ]
But my point is, when a person is given the options:

1. Quit drinking tonight
2. Enjoy the alcohol tonight, but quit tomorrow

The payoff for drinking tonight and living clean for (N-1) days can be higher than living clean for N days.


[/ QUOTE ]

Although I didn't have much problem with this when I first read it, I think this is wrong except for very small N. For instance, if N is 5 (and maybe heroin would be more clear-cut for this situation), then the 5 days that a heroin addict lives for after going cold turkey will be miserable (I hear). And shooting up at any point during those last 5 days will increase overall utility.

It is this initial period, during which using actually has higher utility due to withdrawal symptoms, that clouds the issue, I think.

If we're going to assume that being sober is higher utility long term than being drunk/using, then the long-term utility (S-l) for a sober day must be greater than the long-term utility (D-l) for a drunk day. So S-l > D-l.

Let's call the variables during the initial period S-i and D-i, where S-i < D-i.

Bleh, so you can make utility functions with those based on different schedules, which I'm not gonna do because it's obvious and tedious, haha, but yeah.

Anyways, however you want to set it up (i.e. 5 days of withdrawal, 20 days of withdrawal, etc.) and whatever values you assign to the variables, if the way you set it up favors long-term sobriety, then you maximize utility by becoming sober the very first day.

This is because each sober day after the withdrawal period has more utility than each drunk day before the withdrawal period, and the withdrawal period is a fixed amount of time. So each day that the drunk/user waits is a waste of that difference in utility, even though the initial sober days during withdrawal are not maximizing utility in the short term. The withdrawal period is fixed at whatever you say it is and is just a bridge that needs to be crossed.

So, as long as N is big enough that sobriety's overall utility is greater than drinking's overall utility (and N and N-1 aren't sitting exactly on the dividing line where N is just barely long enough and N-1 is just not quite long enough), the utility of that last day of being drunk plus the rest of the days being sober is smaller than the utility of being sober for all of the days.


way too tired to be doing this, hope it doesn't come out as total nonsense...
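
Actually, a quick numeric version is easy enough: no discounting, and some made-up per-day utilities (my numbers, chosen only so that long-term sobriety beats long-term drinking) — a drunk day is worth 4, a withdrawal day 0, a post-withdrawal sober day 5, over a 365-day horizon with a 20-day withdrawal.

# "Quit on day k" = k drunk days, then a 20-day withdrawal, then sober days,
# truncated to the horizon.  All the numbers here are illustrative assumptions.

def total_utility(quit_day, horizon=365, drunk=4, withdrawal=0, sober=5,
                  withdrawal_len=20):
    days = [drunk] * quit_day + [withdrawal] * withdrawal_len
    days += [sober] * max(0, horizon - len(days))
    return sum(days[:horizon])

best = max(range(365), key=total_utility)
print(best, total_utility(0), total_utility(1))  # -> 0 1725 1724

Without discounting, every day of delay costs exactly (sober - drunk) utils, so quitting immediately maximizes the total, which is the point I'm making above.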

dms
05-28-2006, 04:43 AM
The final offer will not be a 50/50 split because the problem states that both players are rational and the game is non-repeating. Both are also assumed to prefer more money to less money. This means that Bob will accept any offer above 0 in the final round, because otherwise his payoff will be 0. Your argument assumes at least one irrational player, or that the game would be repeated, which would give Bob more power.

That is not this problem. If you answer anything close to an even split on the test, your grade will suck hard. I don't know how many ways people can show you you're wrong.

jogsxyz
05-28-2006, 11:12 AM
[ QUOTE ]
[ QUOTE ]
Hope this pleases you.

If Bill Gates offered me $1M, I would accept in a heartbeat. I have zero leverage over him.

[/ QUOTE ]


I just want to clarify this once more: once he has offered you the money, his part of the game is over.

How do you have no leverage against Bill Gates after he offers 1 million but leverage over a random dude, considering that ONCE THEY MAKE THE OFFER THE GAME IS COMPLETELY IN YOUR HANDS.

[/ QUOTE ]

Abe and Bob are two homeless people. Bill Gates offers them 100 $20 bills. Abe has complete information, meaning he knows Bob will refuse an uneven split. Either Abe and Bob each receive 50 $20 bills, or Gates leaves the two worthless bums empty-handed.
That's the game. Abe must offer 50 or Abe will receive nothing. That's what complete information is: total advance knowledge of the other player's strategy.
What Bob chooses has no effect on Gates. Gates will always have billions no matter what Bob does. It's an entirely different situation.

TomCollins
05-28-2006, 11:24 AM
You are misunderstanding what perfect information means. It does not mean knowing what people will or won't accept.

jogsxyz
05-28-2006, 01:19 PM
[ QUOTE ]
You are misunderstanding what perfect information means. It does not mean knowing what people will accept or won't accept.

[/ QUOTE ]

Perfect information means Abe knows Bob's strategy.

You guys think Abe holds the trump card, when in fact there is no trump card. There's only a veto card, and Bob holds the veto card.

CallMeIshmael
05-28-2006, 01:27 PM
[ QUOTE ]
[ QUOTE ]
But my point is, when a person is given the options:

1. Quit drinking tonight
2. Enjoy the alcohol tonight, but quit tomorrow

The payoff for drinking tonight and living clean for (N-1) days can be higher than living clean for N days.


[/ QUOTE ]

Although I didn't have much problem with this when I first read it, I think this is wrong except for very small N. For instance, if N is 5 (and maybe heroin would be more clear-cut for this situation), then the 5 days that a heroin addict lives for after going cold turkey will be miserable (I hear). And shooting up at any point during those last 5 days will increase overall utility.

It is this initial period during which using is actually higher utility due to withdrawal symptoms that clouds the issue I think.

If we're going to assume that being sober is higher utility long term than being drunk/using, then the long-term utility (S-l) for a sober day must be greater than the long-term utility (D-l) for a drunk day. So S-l > D-l.

Let's call the variables during the intial period S-i and D-i, where S-i < D-i.

Bleh, so you can make utility functions with those based on different schedules, which I'm not gonna do cause it's obvious and tedious, haha, but yeah.

Anyways, however you want to set it up (ie 5 days of withdrawal, 20 days of withdrawal, etc.) and whatever values you assign to the variables, if the way you set it up favors long-term sobriety, then you maximize utility by becoming sober the very first day.

This is because each sober day after the withdrawal period has more utility than each drunk day before the withdrawal period and the withdrawal period is a fixed amount of time. So each day that the drunk/user waits is a waste of that difference in utility. Even though the initial sober days during the withdrawal are not maximizing utility in the short term. The withdrawal period is fixed as whatever you say it is and is just a bridge that needs to be crossed.

So, as long as N is big enough to make it so sobriety's overall utility is greater than being drunk's over all utility (and N and N-1 aren't such that they are the specific dividing line between making drunkiness's overall utility overtake sobriety's because N is just barely long enough and N-1 is just not quite long enough), then the utility of that last day of being drunk plus the rest of the days being sober is smaller than the utility of being sober for all of the days.


way too tired to be doing this, hope it doesn't come out as total nonsense...

[/ QUOTE ]


This is a well thought-out argument, but I think it is mistaken.

From what I'm reading, we are assuming the payoffs to be:

Quitting today: S-i + q*S-i + q^2*S-i + ... + q^19*S-i + q^20*S-L + q^21*S-L + ...

Drinking today and quitting tomorrow: D-i + q*S-i + q^2*S-i + ... + q^20*S-i + q^21*S-L + q^22*S-L + ...



Where q is the appropriate discount factor, and we are assuming a 20-day period of negative effects.


Now, the difference (payoff of quit tomorrow) - (payoff of quit today) =

D-i - q^19*S-i - q^20*(S-L)/(1-q) + q^21*(S-L)/(1-q)


Now, it depends on the values of the different variables, but the last two terms almost cancel out, and since D-i > S-i, the first two terms together are going to be positive.


The reason people sober up is that they finally realize that the above will be true for every day, and thus their decision to "drink today and quit tomorrow" never happens. But until the realization is made, they are making rational decisions.


(FWIW, there is a decent chance I got some math wrong from what you were saying, but I'm not gonna be around today, so if I did, I won't be replying until late tonight or tomorrow.)

CallMeIshmael
05-28-2006, 01:44 PM
[ QUOTE ]
[ QUOTE ]
You are misunderstanding what perfect information means. It does not mean knowing what people will accept or won't accept.

[/ QUOTE ]

Perfect information means Abe know Bob's strategy.

You guys think Abe holds the trump card, when in fact there is no trump card. There's only a veto card and Bob holds the veto card.

[/ QUOTE ]


I just want to point out that your argument holds only when the following is actually capable of happening:

Player A: I offer you 1 million dollars
Player B: NO THANKS!!!!

Just think about that fact, and maybe, just maybe, ponder that you might be wrong.



I think you are not realizing that there are 2 phases to this game.

Phase 1 occurs before Player A makes an offer, and phase 2 occurs after the offer is made. (Assume this is just the one-round version of the game in the OP.)


Now, player B can say whatever he wants in phase 1. He can say "if you don't offer 50:50, then I will refuse, and you are screwed... hahah!!"

But that doesn't change the fact that once phase 2 hits, player B accepts anything over 0. In phase 2, player A is irrelevant. Player B either gets something or nothing. By definition, you prefer something.



FWIW, I actually got in contact with a GT prof I know fairly well here about this problem. I assure you the answer the experts give is the same as many in the thread say: 89/11 (unless you add the assumption that he accepts 0, in which case it's 90/10). He also said that every year there are a few people in his class who make similar arguments, and you just have to accept that some people just won't get it. I assure you that you are one of those people, and I don't mean to offend.

atrifix
05-28-2006, 04:13 PM
[ QUOTE ]
Perfect information means Abe know Bob's strategy.

[/ QUOTE ]
Perfect information is not usually used in this context. The conventional use of the phrase "perfect information" means that both Abe and Bob know the rules of the game. Abe does know Bob's strategy because both players are rational and there is common knowledge of rationality, but generally game theorists do not refer to CKR as perfect information.

If you still don't agree that the assumptions of classical GT entail that Abe will offer 89/11 and Bob will accept, you would probably be better served by buying a textbook on introductory game theory or googling "ultimatum game". Or you could enumerate all 350 strategies and iteratively eliminate all but 348 of them.

atrifix
05-28-2006, 04:20 PM
[ QUOTE ]
Player A: I offer you 1 million dollars
Player B: NO THANKS!!!!

Just think about that fact, and maybe, just maybe, ponder that you might be wrong.

[/ QUOTE ]

The "raising the stakes" argument carries a certain force, because if people's preferences were really captured by an isomorphism onto dollars, then we would observe behavior similar to the predicted solution. If those were really someone's preferences, then he'd have to be stupid not to accept.

Nevertheless, I'm always skeptical when people say that raising the stakes will eliminate "irrational" behavior. All the empirical evidence suggests otherwise. As you increase the stakes in this game, people begin making even fairer offers because they are afraid of rejection.

jogsxyz
05-28-2006, 04:36 PM
[ QUOTE ]


Just think about that fact, and maybe, just maybe, ponder that you might be wrong.



I think you are not realizing that there are 2 phases to this game.

Phase 1 occurs before Player A makes an offer, and phase 2 occrs after the offer is made. (assume this is just the one round version of the game in the OP)

[/ QUOTE ]

Our disagreement is over the definition of complete information.

[ QUOTE ]

FWIW, I actually got in contact with a GT prof I know fairly well here about this problem. I assure you the answer the experts give is the same as many in the thread say: 89/11 (unless you state the assumption that he accepts 0, in which case its 90/10). He also said that every year there are a few people in his class that make similar arguments, and you just have to accept that some people just wont get it. I assure you that you are one of those people, and I dont mean to offend.

[/ QUOTE ]

Fine, you and your GT prof are Abe. I'll be Bob. I assure you we will both receive zero.

Second play-through: I'll be Abe, you and your GT prof are Bob. I get 89. You guys get 11.

Bob's strategy: if offered 50 or more, accept; if offered 49 or less, refuse.

Abe knows how Bob will react before Abe makes his offer.

The value of the game for Abe:

Offer Bob 51 or more. Value equals 100 minus the offer.

Offer Bob exactly 50. Value equals 50.

Offer Bob 49 or less. Value is zero.

In every game of game theory the payoff matrix is always known.

If you disagree with this solution, what does complete information mean to you?

dms
05-28-2006, 04:58 PM
Forgive my late night babbling and introducing variables when I'm not motivated enough to actually go through with a proof.

Since I'm lazy, I'll say this for the moment.

If you're going to say that being sober has greater utility than drinking in the long run, then I don't think you can say that being sober for N-1 days has greater utility than being sober for N days.

I believe these two claims to contradict each other. If you use induction regarding an N and N-1 claim, you'll come to the conclusion that long-term drinking is better than sobriety. (again, too lazy to go through any proofs)

If it is the case that long-term sobriety is better than long-term drinking, then the per day utility of post-withdrawal sobriety must be greater than the per day utility of pre-withdrawal drinking. Otherwise, long-term drinking would obviously be utility maximizing. And in the argument for N-1 days of sobriety, you are trading 1 day of post-withdrawal sobriety for 1 day of pre-withdrawal drinking. Which can be accounted for with a big enough discount factor, but so could trading daily sex with Jessica Alba starting tomorrow for a dollar in your pocket today.

dms
05-28-2006, 05:04 PM
The disagreement with your solution has nothing to do with complete information. It is because everyone else understands that both players are said to be utility maximizing and rational. You are not.

jogsxyz
05-28-2006, 05:50 PM
[ QUOTE ]
The disagreement with your solution has nothing to do with complete information. It is because everyone else understands that both players are said to be utility maximizing and rational. You are not.

[/ QUOTE ]

It's irrational for Abe to offer Bob an amount Abe knows Bob will refuse. It guarantees Abe will receive zero.

dms
05-28-2006, 05:53 PM
Stop being stupid. His offer depends on what Bob will accept. Bob is rational and utility-maximizing and, to clarify, not spiteful. Bob will accept $1.

TomCollins
05-28-2006, 07:11 PM
So why doesn't Bob let him know ahead of time that he will accept only $99 or $100? Why settle for $50?

CallMeIshmael
05-29-2006, 02:17 AM
[ QUOTE ]
If you're going to say that being sober has greater utility than drinking in the long run, then I don't think you can say that being sober for N-1 days has greater utility than being sober for N days.

[/ QUOTE ]

I don't disagree with that. Being sober for N-1 days pays less than N days.

BUT, the difference between them is pretty small, since we are looking at adding only one more day to a significant number of days.

The difference is so small that it is less than what you gain by drinking that one day.

CallMeIshmael
05-29-2006, 02:19 AM
[ QUOTE ]
Second play thru.

[/ QUOTE ]

This is a single-shot game.

dms
05-29-2006, 06:09 AM
"I dont disagree with that. Being sober for N-1 days pays less than N days.

BUT, the difference between them is pretty small. Since we are looking at adding only 1 more day to a significant number of days.

The difference is so small that it is less than what you gain for drinking that one day."


If the difference is so small that it is less than what you gain for drinking that one day, then you are saying that a day of drinking pre-withdrawal has greater utility than a sober day post-withdrawal. If you say this, you cannot say that long-term sobriety has greater utility than long-term drinking.

dms
05-29-2006, 06:14 AM
Also, I was unclear when I said this, but I meant for the N-1 situation to include the utility of the extra day of drinking.

madnak
05-29-2006, 12:02 PM
The problem here is that Bob is destroyed by his own rationality (selfishness). If, on the final round, Abe has offered Bob $1, Bob, according to the definition of rationality given in game theory, must take the $1. He is unable to refuse it. At this point Abe has already made the offer, therefore Bob no longer has any leverage over Abe whatsoever. As Bob isn't spiteful, he must accept Abe's offer of $1 on the final round.

*Abe knows this.* He knows that, at bare minimum, he can get $79 on the final round. Bob *can't* choose to reject that offer according to the GT definition of rationality. Once the offer has been made, Bob must accept it. Abe knows it, Bob knows it. That is the certain outcome of the third round, if the third round happens.

As a result, in the second round, Abe must receive more than $79 or he'll reject the offer and get his *guaranteed* $79 in the third round. Bob must offer Abe at least $80/$10 to have any chance of seeing his offer accepted. And since the alternative is a payoff of only $1, Bob *must* make this offer.

Abe knows all this. He knows that Bob will receive $10 in round two, and he must therefore offer at least $11 in round 1 or be rejected. To maximize his own value, he must offer the $89/$11 split. He doesn't have a choice. Because Bob *must* accept a $1 offer in round 3, he *must* also accept the $11 in round 1. Ultimately, that's how much his option to refuse is "worth"; that's the most he can expect to be paid for it.

The answer changes based on assumptions about how decisions are made when accepting and rejecting have the same payoff - with wacky enough assumptions the answer could go anywhere from 88/12 to 91/9, but it will usually be 89/11 or 90/10. No matter what, the answer is what it is.

This isn't necessarily the "best" solution for Bob, or even Abe. In some situations like the Jason_t airplane/vase situation, two irrational players will both receive much greater payoff than two rational players. That's part of the problem I have with GT. But the fact is it's mathematically demonstrable based on the assumptions of Game Theory that this kind of result will happen in a GT context. Every time.
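
Just to make the tie-breaking point concrete, here is a rough sketch (mine, and it again assumes whole-dollar offers, which the problem doesn't state) that runs the backward induction under each combination of "accepts when indifferent" / "rejects when indifferent" for the two players:

# Pots 100/90/80 with proposers A, B, A; if the last offer is rejected, both
# get 0.  "Rejects ties" means that responder demands strictly more than his
# continuation value.  Whole-dollar offers are my assumption.

from itertools import product

def solve(b_rejects_ties, a_rejects_ties):
    extra = {"A": 1 if a_rejects_ties else 0, "B": 1 if b_rejects_ties else 0}
    value = {"A": 0, "B": 0}
    for pot, proposer in reversed(list(zip((100, 90, 80), ("A", "B", "A")))):
        responder = "B" if proposer == "A" else "A"
        offer = value[responder] + extra[responder]  # cheapest acceptable offer
        value = {proposer: pot - offer, responder: offer}
    return value["A"], value["B"]

for b_rej, a_rej in product((False, True), repeat=2):
    print(b_rej, a_rej, solve(b_rej, a_rej))
# -> (90, 10), (91, 9), (88, 12), (89, 11) for the four tie-breaking rules,
#    exactly the 88/12-to-91/9 range described above.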

jogsxyz
05-29-2006, 01:48 PM
This game doesn't fit the classical game theory mold. It's missing the element of incomplete information. Both Abe and Bob know $100 is being offered. Both know that failing to agree results in both receiving nothing. Abe knows in advance that Bob will refuse any offer of an uneven split.

NLS, TC, dms, CMI, you guys obviously don't play tournament bridge. Bridge has full disclosure: each partnership must have a convention card which explains all partnership understandings for all bids.

------
UC at Berkeley.

For years the University has been crying poverty. They have raised tuitions and fees of students as they reduced services and classes. Audits have discovered that the University has been lavishing execs with unauthorized salaries and perks. Departed execs have been paid over $300,000 a year so that they will qualify for undeserved pensions.
The school is refusing lowly janitors $13/hr. Crying poverty. Take the school's offer or leave it.
We, the Juans and Bennys of the world, aren't going to take this anymore.

CLEAN YOUR OWN FILTHY TOILETS.

jogsxyz
05-29-2006, 02:04 PM
[ QUOTE ]
So why doesn't Bob let him know ahead of time that he will accept only $99 or $100. Why settle for $50?

[/ QUOTE ]

This is the crux of my argument. Bob will not settle for less than $50. Abe also will not settle for less than $50. Therefore only a 50/50 split is acceptable to both.

Also both know this before Abe makes his offer.

TomCollins
05-29-2006, 02:25 PM
But why settle for $50? You have not mentioned it once. Why can't Bob say "hey, I'm only going to take $99".

CallMeIshmael
05-29-2006, 04:08 PM
[ QUOTE ]
If the difference is so small that it is less than what you gain for drinking that one day, then you are saying that a day of drinking pre-withdrawal has greater utility than a sober day post-withdrawal. If you say this, you cannot say that long-term sobriety has greater utility than long-term drinking.

[/ QUOTE ]

Keep in mind the discount factors. You are comparing D-T to q^x*S-L.

This is part of human psychology that explains a lot of seemingly irrational behaviour.

Even though it doesn't seem to make sense, I think you can see how the following statement is logical in the context of a human saying it: "I'd rather be a non-drinker than a drinker a year from now, but I'd prefer to be a drinker to a non-drinker today."


You can take this one step further; I think even this makes sense: "I want to quit drinking starting tomorrow."


Humans inherently view the "now" as far more important than the "future". Using logic similar to the addiction case, you can use math to show why humans procrastinate.

i.e. You have a paper due in 30 days. I think we can agree that it's better to have it done than not have it done, i.e. D-D > D-ND. But a day of doing it is much worse than a day of not doing it. The reason we procrastinate is that the gains of doing it (living the days of having it done) are in the future, and subject to discounting that doesn't occur when we compare the choice of opting to do it on any given day.

dms
05-29-2006, 04:57 PM
If you want to include a discount factor that is large enough to make drinking-for-one-day/then-sober the utility maximizing choice between that choice and being-sober-starting-today, then you cannot say that long-term sobriety is a better choice than long-term drinking.

"Id rather be a non-drinker than a drinker a year from now, but I'd prefer to be a drinker to a non-drinker today"

Someone who would say this is either thinking illogically or forgetting the existence of the withdrawal period. If drinking for one more day is utility maximizing because of a large enough discount factor, then they do not in fact want to be a non-drinker in a year in any sense in which they do not want to be a non-drinker right now (they just want it to happen magically, without the negative of withdrawal).

CallMeIshmael
05-29-2006, 05:04 PM
[ QUOTE ]
If you want to include a discount factor that is large enough to make drinking-for-one-day/then-sober the utility maximizing choice between that choice and being-sober-starting-today, then you cannot say that long-term sobriety is a better choice than long-term drinking.

[/ QUOTE ]

This is incorrect.

I'm unpacking now, but once I'm done I'll provide the function that does exactly this. (It's a bit long and mathy, and I don't quite have the time right now.)

CallMeIshmael
05-29-2006, 07:31 PM
OK, I don't know where the book I read this in is (and to be honest I'm not even 100% sure which book I'm looking for), so this is from memory, and I'm pretty sure it's at least a tad off, because I can see a hole in it. But I will present what I can remember in the hope that someone else has seen a similar line of reasoning and can help out. BUT first I will respond under our framework.


Our framework:

"If you want to include a discount factor that is large enough to make drinking-for-one-day/then-sober the utility maximizing choice between that choice and being-sober-starting-today, then you cannot say that long-term sobriety is a better choice than long-term drinking."


We are looking at this equation:

D-i - q^19*S-i - q^20*(S-L)/(1-q) + q^21*(S-L)/(1-q)


Let us define things as:

S-L > D-i > S-i

That is, the best day is a sober day after withdrawal; the second-best day is drinking while still addicted, which is better than not drinking while still addicted.

If we set S-L = 5, D-i = 4, S-i = 0 and q = 0.98,

we come to the conclusion that drinking today is better than not drinking today, despite the knowledge that after the withdrawal period you can get an even higher payout by being sober.
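
As a sanity check (my own arithmetic, just summing the two payoff streams above with those values, using a closed form for the infinite tail of sober days):

# S_L = 5, D_i = 4, S_i = 0, q = 0.98, 20-day withdrawal, as above.
S_L, D_i, S_i, q, W = 5.0, 4.0, 0.0, 0.98, 20

def discounted(first_days, tail_utility):
    """Sum q^t * u_t over the listed days, then add an infinite tail of
    tail_utility per day starting right after them (geometric series)."""
    head = sum(q**t * u for t, u in enumerate(first_days))
    return head + q**len(first_days) * tail_utility / (1 - q)

quit_today  = discounted([S_i] * W, S_L)          # 20 withdrawal days, then sober
drink_today = discounted([D_i] + [S_i] * W, S_L)  # one more drunk day first
print(round(quit_today, 1), round(drink_today, 1))  # -> 166.9 167.6

So drinking today and quitting tomorrow comes out slightly ahead, and the same comparison holds again the next morning, which is why the quitting day never actually arrives.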




What follows is a more general concept from what I had read, but I can't remember exactly what was written (the idea was quite eloquent and sadly beyond my ability).


Define a strategy for drinking as a binary string, i.e. (1,0,1,1,1,0,0inf...),

where a 1 represents a day of drinking and a 0 represents a day of non-drinking. Also let 0inf be an infinite string of 0s (i.e. he has stopped drinking totally) and 1inf an infinite string of 1s.

Let x and y be strategies. That is,

x = (x1,x2,x3...) and
y = (y1,y2,y3...),

where x1,x2,...,y1,y2,... are all either 1s or 0s.


So, we need to find a utility function u such that:

A. if x and y are the same except at one point, i.e. xk = yk for all k except some t, and xt = 1 and yt = 0, then

u(x) > u(y)

B. if x = (1inf) and y = (0inf), then u(y) > u(x).



Now, property A states that you can improve any strategy by taking any individual day where you planned not to drink and drinking instead. And property B states that a long-term plan of drinking is worse than a long-term plan of not drinking.



Now, if we define the function u as:


u(x) =

f(x1) + q*f(x2) + q^2*f(x3)... (when sum(xn) < inf)

and

g(x1) + q*g(x2) + q^2*g(x3)... (when sum(xn) is inf)


and we set f and g to be arbitrary functions with the property that f(1) > f(0) > g(0) > g(1), then we have found a u such that both properties A and B are true.
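
A toy version of that construction in code (my sketch, with arbitrary values for f and g; property A is only checked here for strategies that eventually quit, since on the always-drinking branch the flip uses g and would lower utility, which may be the hole I mentioned):

# A strategy is a finite prefix of 0/1 days followed by an infinite tail of
# its last symbol; f scores strategies with finitely many 1s, g scores the rest.
q = 0.9
f = {1: 4.0, 0: 3.0}   # eventually quits:  f(1) > f(0)
g = {0: 2.0, 1: 1.0}   # drinks forever:    g(0) > g(1), and f(0) > g(0)

def u(prefix, tail):
    score = f if tail == 0 else g               # finitely vs. infinitely many 1s
    head = sum(q**t * score[x] for t, x in enumerate(prefix))
    return head + q**len(prefix) * score[tail] / (1 - q)   # infinite tail

# Property A: flipping a planned sober day to a drinking day raises utility.
print(u([1, 1, 1, 0], 0) > u([1, 0, 1, 0], 0))  # True, since f(1) > f(0)

# Property B: never drinking beats always drinking.
print(u([], 0) > u([], 1))                      # True, since f(0) > g(1)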

dms
05-29-2006, 08:22 PM
"drinking today is better than not drinking today, despite the knowledge that after the withdrawl period you can get an even higher payout by being sober."

If this is what you want to claim, I don't think we're disagreeing. My claim is that drinking today can be better than not drinking today for the single day. Exclusive drinking can also be better than not drinking long term. Exclusive not drinking could be better than drinking long term given different conditions.

However, if you want to claim that in terms of the overall utility (including your discounting) that long-term sobriety > long-term drinking AND that drinking the first day + long-term sobriety > long-term sobriety, then I disagree. This is what I'm claiming not to be true and the statement here at the top does not contradict that.

CallMeIshmael
05-29-2006, 09:10 PM
"However, if you want to claim that in terms of the overall utility (including your discounting) that long-term sobriety > long-term drinking AND that drinking the first day + long-term sobriety > long-term sobriety, then I disagree. This is what I'm claiming not to be true and the statement here at the top does not contradict that."


Yes, this is what I'm claiming.


Let's address the model. Given your definitions, do we agree that the payouts are:

Quitting today: S-i + q*S-i + q^2*S-i + ... + q^19*S-i + q^20*S-L + q^21*S-L + ...

Drinking today and quitting tomorrow: D-i + q*S-i + q^2*S-i + ... + q^20*S-i + q^21*S-L + q^22*S-L + ...


(assuming a 20-day estimated withdrawal)?

dms
05-29-2006, 09:22 PM
Looks good.

CallMeIshmael
05-29-2006, 09:30 PM
I think we should also define:


Long term NON-drinking = S-L + q*S-L + q^2*S-L...

Long term drinking = D-i + q*D-i + q^2*D-i...


Agree?


(Note: D-i and D-L are the same value, since we have no need to differentiate between pre- and post-withdrawal drinking: if we drink every day there is no withdrawal.)


EDIT: I forgot the "NON".

dms
05-29-2006, 09:35 PM
NON-drinking needs to include the withdrawal period, but good other than that.

CallMeIshmael
05-29-2006, 09:50 PM
[ QUOTE ]
NON-drinking needs to include the withdrawal period

[/ QUOTE ]

No, it doesn't.

The argument is whether a person who knows that he would prefer to be sober rather than a drinker can still make the rational choice to drink on a given day.

Hence, the long-term payoffs simply need to compare S-L to D.

dms
05-29-2006, 09:50 PM
He can't just magically become sober, he has to go through withdrawal still...no?

CallMeIshmael
05-29-2006, 10:03 PM
[ QUOTE ]
He can't just magically become sober, he has to go through withdrawal still...no?

[/ QUOTE ]

Most certainly.

But we aren't talking about any one person's decision here; we are talking more about conditions of the world.

The equations

Quitting today: S-i + q*S-i + q^2*S-i + ... + q^19*S-i + q^20*S-L + q^21*S-L + ...

Drinking today and quitting tomorrow: D-i + q*S-i + q^2*S-i + ... + q^20*S-i + q^21*S-L + q^22*S-L + ...

represent his decision for that day.




The equations for long-term drinking/non-drinking illustrate not HIS choice, but the fact that being a non-drinker is better than being a drinker. (The original question dealt with a person knowing that drinking is worse than non-drinking, yet still rationally drinking on a given day.)

dms
05-29-2006, 10:09 PM
The situation of a person starting as a non-drinker has not entered my mind at any point up til now. My bad if I was confusing, but I have always assumed each hypothetical situation to start with the person being a drinker.

CallMeIshmael
05-29-2006, 10:27 PM
[ QUOTE ]
The situation of a person starting as a non-drinker has not entered my mind at any point up til now. My bad if I was confusing, but I have always assumed each hypothetical situation to start with the person being a drinker.

[/ QUOTE ]


Ahh, perhaps this is where we differ.

I mean, they all do start with the person being a drinker.

We don't really need to define the S-L and D equations in terms of q; we simply need to say that S-L > D.



The point I was trying to make (and which clearly got muddled up) was that what appear to be irrational results can come from rational decisions.

For example, if you took a drunk and examined his payoffs over the previous year for IF he had stopped drinking versus if he kept drinking, you would get:

365*D

vs

20*S-i + 345 * S-L


Now, clearly the latter has larger utility. But humans unfortunately put a lot of emphasis on the here and now, and the act that maximizes total expected utility can somehow fail to maximize total realized utility (with realized utility NOT using a discount factor).

Similarly with procrastination: the person maximizes total realized utility by working the first day, but if every day they opt to maximize expected utility, they end up waiting until the last day.
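
(With the illustrative values from before — D = 4, S-i = 0, S-L = 5 — that hindsight comparison is 365*4 = 1460 against 20*0 + 345*5 = 1725, even though, as the earlier numeric check showed, each individual day's discounted comparison still favoured drinking that day.)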

dms
05-29-2006, 10:35 PM
"But, humans unfortunatley put a lot of emphasis on the here and now, and the act that maximizes total expected utility can somehow not maximize total realized utility. (with realized utility NOT using a discount factor)"

Is this what you have been arguing for? One situation discounted while the other isn't? That seems to be more of an issue of whether or not you think discounting future events is reasonable. In my claims, I intend for every situation to be either all discounted or not discounted at all.

DMACM
05-29-2006, 10:40 PM
I'm sorry if everyone has already said this, but if I am player B I reject all offers less than 79. I do this because I know I can make a 79-1 offer on turn 2 and player A will accept it, because he isn't spiteful. B-79, A-1. B has the complete upper hand in this problem because he is last to act.

CallMeIshmael
05-30-2006, 12:30 AM
[ QUOTE ]
Is this what you have been arguing for? One situation discounted while the other isn't?

[/ QUOTE ]

No. Everyone makes decisions with a discount factor, and it is completely rational to do so.

What I'm saying is that when we judge the drunk's actions as irrational, we have a tendency to judge his payoff of:

20*S-i + 345*S-L

ignoring the discount.

atrifix
05-30-2006, 01:23 AM
[ QUOTE ]
B has the complete upper hand in this problem because he is last to act.

[/ QUOTE ]

Being first to act carries an enormous advantage in this game. If both players acted simultaneously, there would be a large variety of equilibria, but 50/50 would be the most common. Because the first player can commit himself to a strategy while the second player cannot, the first player is at an advantage.