
View Full Version : Ockham's Razor


wazz
06-13-2007, 11:07 PM
I'm curious as to whether there's some sort of logical proof for this. While I'm reasonably mathematically and philosophically educated (to a small degree - two dropped-out ones, to be precise), I wouldn't have any idea as to how to go about the problem.

oe39
06-13-2007, 11:36 PM
[ QUOTE ]
I'm curious as to whether there's some sort of logical proof for this. While I'm reasonably mathematically and philosophically educated (to a small degree - two dropped-out ones, to be precise), I wouldn't have any idea as to how to go about the problem.

[/ QUOTE ]

how could there be a proof? it's not even really well-defined.

wazz
06-13-2007, 11:48 PM
Well, the way I understand it (I know it was originally defined differently) is that in the absence of other information, the simpler explanation for a phenomenon is more likely to be true. I've seen it explained that this is not a 'theory', in the sense that it is merely a rule to choose between theories, but this strikes me as wrong given that you could choose the alternative rule (given no more information) that the more complicated theory is more likely to be true, or even that the simplicity or complexity of a theory has no bearing on its truth-value.

Given that information can be quantified, would it not be possible to construct a continuous 'theory-space' whereby different theories are compared, then some prior probability criterion applied and compared to the results of applying Ockham's razor? I guess that would be an analytic way of doing it, and that may well be the only way, if logical methods are out of the window.

I'm sorry if this question is a little too silly/abstract or badly worded, or even absurd - I'm out of practice, please humour me.

Siegmund
06-14-2007, 12:03 AM
The only thing similar to it that I can think of is the likelihood ratio test to compare "full" and "reduced" models in statistics - adding an extra explanatory variable always gives you a better fit even if the extra variable is meaningless, so you have to prove the extra variable has improved the fit more than would be expected by chance.
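
(To make the full-versus-reduced comparison concrete, here is a minimal Python sketch of a likelihood ratio test on made-up data; numpy and scipy are assumed, and the chi-square cutoff is the standard asymptotic approximation rather than anything from this thread.)

[ CODE ]
import numpy as np
from scipy import stats

# Made-up data: y depends only on an intercept; the "extra" variable x is pure noise.
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 + rng.normal(size=n)

def max_gaussian_loglik(residuals):
    # Maximized Gaussian log-likelihood, plugging in the MLE of the error variance.
    sigma2 = np.mean(residuals ** 2)
    return -0.5 * len(residuals) * (np.log(2 * np.pi * sigma2) + 1)

# Reduced model: y ~ intercept.  Full model: y ~ intercept + x.
resid_reduced = y - y.mean()
resid_full = y - np.polyval(np.polyfit(x, y, 1), x)

# The full model always fits at least as well; the test asks whether the
# improvement is larger than chance would give for one extra parameter.
lr = 2 * (max_gaussian_loglik(resid_full) - max_gaussian_loglik(resid_reduced))
p_value = stats.chi2.sf(lr, df=1)
print(f"LR statistic = {lr:.3f}, p = {p_value:.3f}")
# A large p-value says: keep the simpler (reduced) model.
[/ CODE ]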

wazz
06-14-2007, 12:15 AM
I have no understanding of full and reduced models in statistics, but what you've said sounds both right and wrong, if you see what I mean - right in that it sounds analogous, wrong in that the extra information is making the proposition likelier rather than less likely.

jgodin
06-14-2007, 01:28 AM
What, if any, is the relationship between Ockham's Razor and Sklansky's "Coincidence Theory"?

I found the two to be quite similar in nature.

oe39
06-14-2007, 03:42 AM
[ QUOTE ]
Well the way I understand it (I know it was originally defined differently) was that in absence of other information, the simpler explanation for a phenomenon is more likely to be true. I've seen it explained that this is not a 'theory', in the sense that it is merely a rule to choose between theories, but this strikes me as wrong given you could choose the alternative theory (given no more information) that the more complicated theory is likely to be true, or even that the simplicity or complexity of a theory has no bearing on its truth-value.

Given that information can be quantified, would it not be possible to construct a continuous 'theory-space' whereby different theories are compared, then some prior probability criterion applied and compared to the results of applying ockham's razor? I guess that would be an analytic way of doing it, and that may well be the only way, if logical methods are out of the window.

I'm sorry if this question is a little too silly/abstract or badly worded, or even absurd - I'm out of practise, please humour me.

[/ QUOTE ]

i think this has gotten a little past the point of making sense... think about what you are asking for!

Neuge
06-14-2007, 03:53 AM
Ockham's Razor has for centuries been put into layman's terms as "the simpler explanation is the correct one," but that's not really what it's for. A more accurate scientific view of the postulate is "don't add any extraneous information." Basically (a largely exaggerated version), Newton comes up with his theory of gravitation. He finds that the force is proportional to the product of the two masses and inversely proportional to the square of the distance between them. It's a fine theory and fits empirical observation extremely well (until Einstein LDO).

Now what if he had postulated that the force is proportional to mass, yada yada yada... AND that this force was due to green aliens? That's obviously absurd, but such is the point of Ockham's Razor. If you have a working theory of an empirically observable phenomenon, it's not necessary to add anything to it. It's always theoretically possible to find something "simpler" with fewer variables, but that rarely happens in scientific practice.

pzhon
06-14-2007, 04:27 AM
Suppose there is only one correct theory, and theories correspond to finite strings of letters. There are only finitely many incorrect theories that are shorter than the correct theory, but there are infinitely many incorrect theories which are longer than the correct theory.

This doesn't prove Ockham's Razor, but it's a start.
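
(A tiny Python sketch of the counting point; the alphabet size and the length of the hypothetical correct theory are invented purely for illustration.)

[ CODE ]
# Over a finite alphabet there are only finitely many strings strictly shorter
# than the correct theory, but a string of every greater length exists, so the
# longer ones are unbounded in number.
ALPHABET_SIZE = 26
CORRECT_THEORY_LENGTH = 10

shorter = sum(ALPHABET_SIZE ** n for n in range(CORRECT_THEORY_LENGTH))
print("strings strictly shorter than the correct theory:", shorter)
print("strings strictly longer: at least one for every length "
      f"{CORRECT_THEORY_LENGTH + 1}, {CORRECT_THEORY_LENGTH + 2}, ... (unbounded)")
[/ CODE ]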

EnderIII
06-14-2007, 04:41 AM
Kevin Kelly's Research (http://www.hss.cmu.edu/philosophy/kelly/research.htm#ockham)

Here is a link to a guy who works on this stuff. It may well be too heavy on the math for casual consumption, but I just thought I'd toss it out there.

soon2bepro
06-14-2007, 05:45 AM
[ QUOTE ]
This doesn't prove Ockham's Razor, but it's a start.

[/ QUOTE ]

It's nowhere near anything resembling proof

kerowo
06-14-2007, 08:04 AM
I'm sure a large part of the appeal of Ockham's Razor lies in its focus on simplicity. Scientists and mathematicians are usually trying to find elegant solutions to problems, so a solution that has x-5 (or whatever) steps is considered better than a solution that has x steps.

This gets translated into other fields as things like K.I.S.S. and 'don't look for zebras when you hear hoof beats.' It also goes hand in hand with 'extraordinary theories need extraordinary proof' because it is much more likely that someone is experiencing a false memory or is hallucinating than that Elvis is an alien.

pzhon
06-14-2007, 02:47 PM
[ QUOTE ]
[ QUOTE ]

Suppose there is only one correct theory, and theories correspond to finite strings of letters. There are only finitely many incorrect theories that are shorter than the correct theory, but there are infinitely many incorrect theories which are longer than the correct theory.

This doesn't prove Ockham's Razor, but it's a start.

[/ QUOTE ]
It's nowhere near anything resembling proof

[/ QUOTE ]
As I stated, it is not a proof. It is the key idea behind many much longer justifications of Ockham's razor. Feel free to read those if you can't flesh out the argument from the above paragraph.

Borodog
06-14-2007, 04:29 PM
I use Occam's razor because it's far simpler than the alternative.

Andy Ross
06-14-2007, 07:23 PM
[ QUOTE ]
I use Occam's razor because it's far simpler than the alternative.

[/ QUOTE ] :)

Philo
06-14-2007, 07:24 PM
[ QUOTE ]
I'm curious as to whether there's some sort of logical proof for this.

[/ QUOTE ]

I don't think there is any such thing as a logical proof for Occam's razor, since it is generally understood as a heuristic principle or a principle of parsimony, and so is not the kind of thing that admits of logical proof.

There is this:

"Jerrold Katz has outlined a deductive justification of Occam's razor:

"If a hypothesis, H, explains the same evidence as a hypothesis G, but does so by postulating more entities than G, then, other things being equal, the evidence has to bear greater weight in the case of H than in the case of G, and hence the amount of support it gives H is proportionately less than it gives G."

From http://en.wikipedia.org/wiki/Occam's_Razor

oldbookguy
06-15-2007, 12:26 AM
Here is a simple, non-math answer.

Two gals are gossiping;

Gal 1: Mary went out with Bill last night.
Gal 2: Mary went out with Bill last night to make Bob mad.

Statement number 1 is fact; statement 2 is extraneous and adds nothing to making number 1 any more or less correct, only adding more gossip.

Another way of looking at it, a prosecutor need only prove a crime, not a crime and a motive.

Thus Ockham's Razor.

obg

tessarji
06-15-2007, 06:57 PM
The mutilated and incorrect statement of Ockham's Razor is that 'the simplest explanation is the correct one'. It should take all of about a minute to think of a counter-example.

The best statement of Ockham's Razor is simply, 'a smaller model is more useful than a larger model, if both make sufficiently accurate predictions for your purposes'.

This isn't really a mind-blowing insight, thus Ockham's Razor is hugely overrated.

Metric
06-15-2007, 08:07 PM
In the context of computer science, there are principles resembling Ockham's razor (simpler explanation which fits the data is best) which are mathematically well-defined.

http://en.wikipedia.org/wiki/Minimum_description_length
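
(A rough Python sketch of the two-part-code idea behind MDL, applied to choosing a polynomial degree; the data and the crude bit-counting formula are illustrative assumptions, not the full machinery described at that link.)

[ CODE ]
import numpy as np

# Made-up data: genuinely linear plus a little noise.
rng = np.random.default_rng(1)
n = 50
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=n)

def description_length(degree):
    # Crude two-part code length: bits for the residuals plus bits for the
    # parameters (both up to constants that cancel in the comparison).
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    data_bits = 0.5 * n * np.log2(rss / n)        # cost of encoding the misfit
    model_bits = 0.5 * (degree + 1) * np.log2(n)  # cost of encoding the coefficients
    return data_bits + model_bits

for degree in (1, 5):
    print(f"degree {degree}: score = {description_length(degree):.1f} bits")
# The quintic fits the noise slightly better, but its extra parameters cost more
# bits than they save, so the linear model gets the shorter total description.
[/ CODE ]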

MaxWeiss
06-15-2007, 10:49 PM
In fact it is somewhat of a proof. If I run over glass and then park, go somewhere, and come back to my car to find a glass shard in it (and it's flat), I could conclude an infinite number of possibilities, including one that involved David S. and Brandi following me and stabbing glass into my tire in order to send a message to 2+2ers at large. The evidence given to me certainly does not negate that possibility, but there is no reason to think that likely. Given the small amount of evidence I have, the most probable option is the simplest and easiest. As I get more data I can exclude more theories, although I can still come up with an infinite number. But mathematically, I approach the limit of just one theory, and with all available evidence, it is reasonable to assume that what the limit is approaching is the right choice, until I find other evidence that suggests another theory is more likely. When you average out all the unavailable evidence, the simplest theory is easily the most probable choice.

PairTheBoard
06-15-2007, 11:22 PM
[ QUOTE ]
In fact it is somewhat of a proof. If I run over glass and then park, go somewhere, and come back to my car to find a glass shard in it (and it's flat), I could conclude an infinite number of possibilities, including one that involved David S. and Brandi following me and stabbing glass into my tire in order to send a message to 2+2ers at large. The evidence given to me certainly does not negate that possibility, but there is no reason to think that likely. Given the small amount of evidence I have, the most probable option is the simplest and easiest. As I get more data I can exclude more theories, although I can still come up with an infinite number. But mathematically, I approach the limit of just one theory, and with all available evidence, it is reasonable to assume that what the limit is approaching is the right choice, until I find other evidence that suggests another theory is more likely. When you average out all the unavailable evidence, the simplest theory is easily the most probable choice.

[/ QUOTE ]

Right. It was obviously witchcraft.

PairTheBoard

Philo
06-16-2007, 12:36 AM
[ QUOTE ]
As I get more data I can exclude more theories, although I can still come up with an infinite number. But mathematically, I approach the limit of just one theory, and with all available evidence, it is reasonable to assume what the limit is approaching is the right choice, until I find other evidence that suggests another theory is more likely.

[/ QUOTE ]

I think this is wrong. The empirical justification of a theory does not work with mathematical precision. There are always an indefinite number of theories that are empirically adequate (i.e., that are consistent with all of the available evidence). There is no such thing as 'approaching the right choice' if by that you mean eliminating all but one theory based on the evidence.

Hence the need for Occam's razor. It is not an empirical principle but a heuristic one, which gives us some principled reason for choosing among equally empirically adequate theories.

wazz
06-16-2007, 01:11 PM
Thanks for all the input, guys. I'm understanding this a bit better.

[ QUOTE ]
The mutilated and incorrect statement of Ockham's Razor is that 'the simplest explanation is the correct one'. It should take all of about a minute to think of a counter-example.

The best statement of Ockham's Razor is simply, 'a smaller model is more useful than a larger model, if both make sufficiently accurate predictions for your purposes'.

This isn't a really a mind-blowing insight, thus Ockham's Razor is hugely overrated.

[/ QUOTE ]

Why isn't it a mind-blowing insight? I never suggested it was but am curious as to why you find this so trivial and don't find any use in Ockham's Razor.

Your first statement bears no relation to the rest of this thread, I feel.

Borodog
06-16-2007, 01:21 PM
[ QUOTE ]
The mutilated and incorrect statement of Ockham's Razor is that 'the simplest explanation is the correct one'. It should take all of about a minute to think of a counter-example.

The best statement of Ockham's Razor is simply, 'a smaller model is more useful than a larger model, if both make sufficiently accurate predictions for your purposes'.

[/ QUOTE ]

I don't think either one of these is a correct statement of Occam's Razor. A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."

[ QUOTE ]
This isn't a really a mind-blowing insight, thus Ockham's Razor is hugely overrated.

[/ QUOTE ]

I think it would be an incredibly mind-blowing insight for the people that don't actually understand it, which is the majority of people. Occam's razor informs my entire world view. In my opinion it's practically impossible to overrate it.

Philo
06-16-2007, 02:44 PM
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

vhawk01
06-16-2007, 03:17 PM
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

Yep, that's a better way to put it. In practice we don't really care how likely to be capital-T True the explanation is. That's probably entirely impossible to determine for anything you'd apply the razor to. We just care about selecting one of the infinite explanations that fits the bill. Since you can only go UP infinitely, and cannot go down past some real minimum, we prefer the 'simplest' as a convention.

And I'm with Boro, this is probably the most important concept that exists, for me, and I think it's really difficult to overrate it. It's amazing how the two most important, powerful concepts (this and evolution) both seem so ridiculously self-evident and simple, when you finally understand them.

Borodog
06-16-2007, 03:39 PM
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

luckyme
06-16-2007, 03:42 PM
[ QUOTE ]
And I'm with Boro, this is probably the most important concept that exists, for me, and I think its really difficult to overrate it. Its amazing how the two most important, powerful concepts (this and evolution) both seem so ridiculous self-evident and simple, when you finally understand them.

[/ QUOTE ]

Well, they are related, so getting one opens the door to the other.

luckyme

vhawk01
06-16-2007, 04:08 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

[/ QUOTE ]

I don't think that's quite right. There is really no reason to think any theory, X, is more likely to be correct than another theory, X+invisible blue goblins. It's just that there are an infinite number of more complicated theories, and we couldn't ever have ANY meaningful consensus or discussion about any theory if we just accepted any of the infinite as 'equally good.' They are still equally likely, I think, whatever that means. They just aren't as...easy to talk about? To think about, maybe.

Borodog
06-16-2007, 04:22 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

[/ QUOTE ]

I don't think thats quite right. There is really no reason to think any theory, X, is more likely to be correct than another theory, X+invisible blue goblins. Its just that there are an infinite number of more complicated theories, and we couldn't ever have ANY meaningful consensus or discussion about any theory if we just accepted any of the infinite as 'equally good.' They are still equally likely, I think, whatever that means. They just aren't as...easy to talk about? To think about, maybe.

[/ QUOTE ]

This is silly. They are certainly not all just as likely to be correct. That is the principle the razor embodies.

If I can't find my keys, and it were REALLY just as likely that invisible blue goblins stole them and altered my memory so that I don't remember where I left them as it is that I just forgot where I put them, and hence an infinite number of other theories, then it would LITERALLY be the case that the chances that I just forgot where I put them would be 0%, when obviously it is near 100%. This is patently ridiculous. The only way this is avoided is if the simple explanation is MORE likely than alternative explanations that invoke extraneous ad hoc hypotheticals.

I repeat, if the simpler explanation were not more likely to be correct, then parsimony would be a useless concept.

vhawk01
06-16-2007, 04:42 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

[/ QUOTE ]

I don't think thats quite right. There is really no reason to think any theory, X, is more likely to be correct than another theory, X+invisible blue goblins. Its just that there are an infinite number of more complicated theories, and we couldn't ever have ANY meaningful consensus or discussion about any theory if we just accepted any of the infinite as 'equally good.' They are still equally likely, I think, whatever that means. They just aren't as...easy to talk about? To think about, maybe.

[/ QUOTE ]

This is silly. They are certainly all not just as likely to be correct. That is the principle the razor embodies.

If I can't find my keys, and it were REALLY just as likely that invisible blue goblins stole them and altered my memory so that I don't remember where I left them as it is that I just forgot where I put them, and hence an infinite number of other theories, then it would LITERALLY be the case that the chances that I just forgot where I put them would be 0%, when obviously it is near 100%. This is patently ridiculous. The only way this is avoided is if the simple explanation is MORE likely than alternative explanations that invoke extraneous ad hoc hypotheticals.

I repeat, if the simpler explanation were not more likely to be correct, then parsimony would be a useless concept.

[/ QUOTE ]

That is a misapplication of the concept. To stick with your keys scenario, the two (or infinite) competing theories are:

You forgot where you put your keys
and
You forgot where you put your keys and invisible blue goblins watched you do it.

Both of these are equally likely. There is absolutely no difference in explanatory power. It's just that the second one involves a whole bunch of unnecessary information. Your scenario is different because there are real differences in outcome or explanatory power to the two theories.

kerowo
06-16-2007, 05:15 PM
It sounds like you think all theories are equally valid which is not true, despite what schools are teaching kids these days. Some theories are stupid and don't deserve the same weight as other theories. "Blue goblins" falls into that camp. A first pass at determining if a theory is stupid or not is given by OR.

vhawk01
06-16-2007, 05:31 PM
[ QUOTE ]
It sounds like you think all theories are equally valid which is not true, despite what schools are teaching kids these days. Some theories are stupid and don't deserve the same weight as other theories. "Blue goblins" falls into that camp. A first pass at determining if a theory is stupid or not is given by OR.

[/ QUOTE ]

Not equally valid, perhaps. But only because of things like OR. What possible measure can you use to tell me which of the two theories I mentioned above has a higher probability of being true? I guess you can say OR, but that's just question-begging.

Borodog
06-16-2007, 05:48 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

[/ QUOTE ]

I don't think thats quite right. There is really no reason to think any theory, X, is more likely to be correct than another theory, X+invisible blue goblins. Its just that there are an infinite number of more complicated theories, and we couldn't ever have ANY meaningful consensus or discussion about any theory if we just accepted any of the infinite as 'equally good.' They are still equally likely, I think, whatever that means. They just aren't as...easy to talk about? To think about, maybe.

[/ QUOTE ]

This is silly. They are certainly all not just as likely to be correct. That is the principle the razor embodies.

If I can't find my keys, and it were REALLY just as likely that invisible blue goblins stole them and altered my memory so that I don't remember where I left them as it is that I just forgot where I put them, and hence an infinite number of other theories, then it would LITERALLY be the case that the chances that I just forgot where I put them would be 0%, when obviously it is near 100%. This is patently ridiculous. The only way this is avoided is if the simple explanation is MORE likely than alternative explanations that invoke extraneous ad hoc hypotheticals.

I repeat, if the simpler explanation were not more likely to be correct, then parsimony would be a useless concept.

[/ QUOTE ]

That is a misapplication of the concept. To stick with your keys scenario, the two (or infinite) competing theories are:

You forgot where you put your keys
and
You forgot where you put your keys and invisible blue goblins watched you do it.

Both of these are equally likely.

[/ QUOTE ]

No, they aren't. You have totally sidestepped my entire argument. If this were actually true, then the chance that I simply forgot where I put my keys is literally 0%. The only way to avoid this farcical result is to conclude that the simplest explanation is literally more likely than the alternative explanations. That's the entire point of the principle of parsimony (Occam's Razor).

[ QUOTE ]
There is absolutely no difference in explanatory power.

[/ QUOTE ]

I would argue that yes, there is a difference in explanatory power; that unnecessary ad hoc hypotheticals reduce the explanatory power, even if the theory accounts for all the evidence in question.

[ QUOTE ]
Its just the second one involves a whole bunch of unnecessary information.

[/ QUOTE ]

Here's the crux: SO WHAT? If the one involving the unnecessary information is really just as likely, how is the unnecessary information "bad"? What exactly does it mean for the simplest explanation to be "better" in such a crazy world? That you have to type less to describe it, even though an infinite number of alternative theories are just as likely, and hence the odds of the simple theory being the correct one are 0%? Why think up any theory at all if it isn't more likely than any alternative? Such a premise would render science pointless, not to mention impossible.

[ QUOTE ]
Your scenario is different because there are real differences in outcome or explanatory power to the two theories.

[/ QUOTE ]

How so? :confused:

Borodog
06-16-2007, 05:52 PM
[ QUOTE ]
[ QUOTE ]
It sounds like you think all theories are equally valid which is not true, despite what schools are teaching kids these days. Some theories are stupid and don't deserve the same weight as other theories. "Blue goblins" falls into that camp. A first pass at determining if a theory is stupid or not is given by OR.

[/ QUOTE ]

Not equally valid, perhaps. But only because of things like OR. What possible measure can you use to tell me which of the two theories I mentioned above has a higher probability of being true?

[/ QUOTE ]

Probability?

The probability that I forgot where I put my keys is F, where F is less than 1. The probability that invisible blue goblins watched me is B, where B is less than 1. Hence BF < F. Hence F is more likely than BF.
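
(A quick Python check of the inequality being leaned on here: whatever the dependence between the two events, the conjunction can never be more probable than "I forgot" on its own. The numbers are invented.)

[ CODE ]
# P(forgot AND goblins) = P(forgot) * P(goblins | forgot) <= P(forgot),
# no matter what the conditional probability is.
p_forgot = 0.95
for p_goblins_given_forgot in (0.0, 0.1, 0.5, 1.0):
    p_both = p_forgot * p_goblins_given_forgot
    assert p_both <= p_forgot
    print(f"P(forgot) = {p_forgot}, P(forgot and goblins) = {p_both}")
[/ CODE ]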

Philo
06-16-2007, 06:00 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

[/ QUOTE ]

No. If one theory is more likely to be true given the evidence, we don't need a heuristic principle like Occam's Razor in order to choose among theories. We can just go by the evidence in that case.

We apply Occam's Razor when we have competing theories each of which is empirically adequate. Since you can't choose among theories in that case based solely on the evidence, we need some other principle to guide theory choice, like Occam's Razor. Occam's Razor is not an empirical principle.

Borodog
06-16-2007, 06:18 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]


A correct statement of Occam's razor is, "The simplest explanation that fits the data is the most likely to be correct."



[/ QUOTE ]

A number of respondents have said something like this. As I understand it, Occam's razor is not about which theory is more likely to be 'correct'. Occam's razor has nothing to do with how likely a theory is to be true, but instead is a principle that directs us how to choose among theories, on non-empirical grounds, that are equally likely to be true given the evidence.

[/ QUOTE ]

I couldn't disagree more. This doesn't even make sense. If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor.

[/ QUOTE ]

No. If one theory is more likely to be true given the evidence, we don't need a heuristic principle like Occam's Razor in order to choose among theories. We can just go by the evidence in that case.

We apply Occam's Razor when we have competing theories each of which is empirically adequate. Since you can't choose among theories in that case based solely on the evidence, we need some other principle to guide theory choice, like Occam's Razor. Occam's Razor is not an empirical principle.

[/ QUOTE ]

The bolded part is an incorrect interpolation by you of my statement. You make it seem like I am saying some piece of evidence exists which points to the simpler theory being more likely than the other. That's not what I'm saying. I'm saying that given the same evidence, a simpler theory that adequately explains all that evidence is LITERALLY more likely to be correct than less simple alternatives that adequately explain that same evidence. This is why parsimony and Occam's Razor are useful in the first place.

I repeat, if the simpler explanation is NOT more likely to be correct, WHAT IS THE JUSTIFICATION OF OCCAM'S RAZOR IN THE FIRST PLACE? The simpler theory requires fewer keystrokes?

:confused:

Philo
06-16-2007, 06:38 PM
[ QUOTE ]



The bolded part is an incorrect interpolation by you of my statement. You make it seem like I am saying some piece of evidence exists which points to the simpler theory being more likely than the other. That's not what I'm saying. I'm saying that given the same evidence, a simpler theory that adequately explains all that evidence is LITERALLY more likely to be correct than less simple alternatives that adequately explain that same evidence. This is why parsimony and Occam's Razor are useful in the first place.

I repeat, if the simpler explanation is NOT more likely to be correct, WHAT IS THE JUSTIFICATION OF OCCAM'S RAZOR IN THE FIRST PLACE? The simpler theory requires fewer keystrokes?

:confused:

[/ QUOTE ]

Perhaps this will help, from http://en.wikipedia.org/wiki/Occam's_Razor

"Empirical justification

One way a theory or a principle could be justified is empirically; that is to say, if simpler theories were to have a better record of turning out to be correct than more complex ones, that would corroborate Occam's razor. However, this type of justification has several complications.

First of all, even assuming that simpler theories have been more successful, this observation provides little insight into exactly why this is, and thus leaves open the possibility that the factor behind the success of these theories was not their simplicity but rather something that causally correlates with it (see Correlation vs. Causation). Second, Occam's Razor is not a theory; it is a heuristic maxim for choosing among theories, and attempting to choose between it and some alternative as if they were theories of the regular sort invokes circular logic. We rely on the razor when we justify induction; by attempting to in turn rely on induction when we justify the razor, we are begging the question.

There are many different ways of making inductive inferences from past data concerning the success of different theories throughout the history of science; inferring that "simpler theories are, other things being equal, generally better than more complex ones" is just one way of many, and only seems more plausible to us because we are already assuming the razor to be true (see e.g. Swinburne 1997). Inductive justification for Occam's razor being a dead-end game, we have the choice of either accepting it as an article of faith based on pragmatist considerations or attempting deductive justification."

I posted one such deductive justification earlier, here it is again, by Jerrold Katz:

"If a hypothesis, H, explains the same evidence as a hypothesis G, but does so by postulating more entities than G, then, other things being equal, the evidence has to bear greater weight in the case of H than in the case of G, and hence the amount of support it gives H is proportionately less than it gives G."

wazz
06-16-2007, 07:00 PM
[ QUOTE ]
[ QUOTE ]



The bolded part is an incorrect interpolation by you of my statement. You make it seem like I am saying some piece of evidence exists which points to the simpler theory being more likely than the other. That's not what I'm saying. I'm saying that given the same evidence, a simpler theory that adequately explains all that evidence is LITERALLY more likely to be correct than less simple alternatives that adequately explain that same evidence. This is why parsimony and Occam's Razor are useful in the first place.

I repeat, if the simpler explanation is NOT more likely to be correct, WHAT IS THE JUSTIFICATION OF OCCAM'S RAZOR IN THE FIRST PLACE? The simpler theory requires fewer keystrokes?

:confused:

[/ QUOTE ]

Perhaps this will help, from http://en.wikipedia.org/wiki/Occam's_Razor

"Empirical justification

One way a theory or a principle could be justified is empirically; that is to say, if simpler theories were to have a better record of turning out to be correct than more complex ones, that would corroborate Occam's razor. However, this type of justification has several complications.

First of all, even assuming that simpler theories have been more successful, this observation provides little insight into exactly why this is, and thus leaves open the possibility that the factor behind the success of these theories was not their simplicity but rather something that causally correlates with it (see Correlation vs. Causation). Second, Occam's Razor is not a theory; it is a heuristic maxim for choosing among theories, and attempting to choose between it and some alternative as if they were theories of the regular sort invokes circular logic. We rely on the razor when we justify induction; by attempting to in turn rely on induction when we justify the razor, we are begging the question.

There are many different ways of making inductive inferences from past data concerning the success of different theories throughout the history of science; inferring that "simpler theories are, other things being equal, generally better than more complex ones" is just one way of many, and only seems more plausible to us because we are already assuming the razor to be true (see e.g. Swinburne 1997). Inductive justification for Occam's razor being a dead-end game, we have the choice of either accepting it as an article of faith based on pragmatist considerations or attempting deductive justification."

I posted one such deductive justification earlier, here it is again, by Jerrold Katz:

"If a hypothesis, H, explains the same evidence as a hypothesis G, but does so by postulating more entities than G, then, other things being equal, the evidence has to bear greater weight in the case of H than in the case of G, and hence the amount of support it gives H is proportionately less than it gives G."

[/ QUOTE ]

This ALL seems wrong. Firstly, an empirical justification is of no use or interest to me, as per my OP; secondly, like I said earlier, one could posit the alternatives to OR and easily find examples that support each; thirdly, induction does not need justification by OR, it stands up (as far as induction will stand up) by itself, and I don't see how it would be possible to justify OR without induction; fourthly, with regards to Katz's deductive justification, the fact that we give more weighting to the evidence of a simpler theory does not mean that the simpler theory is more likely to be true.

Philo
06-16-2007, 07:19 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]



The bolded part is an incorrect interpolation by you of my statement. You make it seem like I am saying some piece of evidence exists which points to the simpler theory being more likely than the other. That's not what I'm saying. I'm saying that given the same evidence, a simpler theory that adequately explains all that evidence is LITERALLY more likely to be correct than less simple alternatives that adequately explain that same evidence. This is why parsimony and Occam's Razor are useful in the first place.

I repeat, if the simpler explanation is NOT more likely to be correct, WHAT IS THE JUSTIFICATION OF OCCAM'S RAZOR IN THE FIRST PLACE? The simpler theory requires fewer keystrokes?

:confused:

[/ QUOTE ]

Perhaps this will help, from http://en.wikipedia.org/wiki/Occam's_Razor

"Empirical justification

One way a theory or a principle could be justified is empirically; that is to say, if simpler theories were to have a better record of turning out to be correct than more complex ones, that would corroborate Occam's razor. However, this type of justification has several complications.

First of all, even assuming that simpler theories have been more successful, this observation provides little insight into exactly why this is, and thus leaves open the possibility that the factor behind the success of these theories was not their simplicity but rather something that causally correlates with it (see Correlation vs. Causation). Second, Occam's Razor is not a theory; it is a heuristic maxim for choosing among theories, and attempting to choose between it and some alternative as if they were theories of the regular sort invokes circular logic. We rely on the razor when we justify induction; by attempting to in turn rely on induction when we justify the razor, we are begging the question.

There are many different ways of making inductive inferences from past data concerning the success of different theories throughout the history of science; inferring that "simpler theories are, other things being equal, generally better than more complex ones" is just one way of many, and only seems more plausible to us because we are already assuming the razor to be true (see e.g. Swinburne 1997). Inductive justification for Occam's razor being a dead-end game, we have the choice of either accepting it as an article of faith based on pragmatist considerations or attempting deductive justification."

I posted one such deductive justification earlier, here it is again, by Jerrold Katz:

"If a hypothesis, H, explains the same evidence as a hypothesis G, but does so by postulating more entities than G, then, other things being equal, the evidence has to bear greater weight in the case of H than in the case of G, and hence the amount of support it gives H is proportionately less than it gives G."

[/ QUOTE ]

This ALL seems wrong. Firstly, an empirical justification is of no use or interest of me, as per my OP; secondly, like i said earlier, one could posit the alternatives to OR and easily find examples that support each; thirdly, induction does not need justification by OR, it stands up (as far as induction will stand up) by itself, and I don't see how it would be possible to justify OR without induction; fourthly, with regards to Katz's deductive justification, the fact that we give more weighting to the evidence of a simpler theory does not mean that the simpler theory is more likely to be true.

[/ QUOTE ]

Firstly, I was responding to Boro, not to your OP. I already responded to that. In fact, I responded with Katz's attempt at a deductive justification, which is probably the closest thing you can get to a 'proof' of OR.

Fourthly, Katz's deductive justification is not meant to show that the simpler theory is more likely to be true. That's the whole point in giving a deductive justification rather than an empirical one.

brandofo
06-16-2007, 07:33 PM
Does the razor have four or five blades?

Philo
06-16-2007, 08:19 PM
[ QUOTE ]
Does the razor have four or five blades?

[/ QUOTE ]

Just two actually. For an even more parsimonious shave try the new Ernst Mach III.

Metric
06-16-2007, 08:26 PM
You're missing Boro's (correct) point. Here's the prototype example:

Suppose you have a computer producing a string of output characters -- you don't know the program (input), but you do so far have the first 183,000 output characters. It just so happens that these 183,000 characters happen to be the first 183,000 digits of pi.

Now for the central question -- if you had to predict the next character (number 183,001), is it MORE LIKELY to be the next digit of pi, or a random character? What would you bet on, and why?

Keep in mind, there are plenty of possible programs that say "calculate the first 183,000 digits of pi, and then do something completely different," and given only the first 183,000 output characters, you can't distinguish between any of these and the much simpler program "calculate pi."

The fields of inductive inference, algorithmic probability, etc. were set up to answer this sort of question. And the answer turns out, not surprisingly, to be that "it is more probable that the next output character will be the next digit of pi." As such, they represent something of a rigorous justification for Ockham's razor, and make Borodog's point -- that simpler explanations that fit the data equally well are typically more likely to be true.
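
(A scaled-down Python sketch of this style of reasoning: a toy hypothesis class stands in for "all programs", each hypothesis weighted by 2 to the minus its description length. The hypotheses and their lengths are invented, so this is only a cartoon of algorithmic probability, not the real construction.)

[ CODE ]
PI_DIGITS = "31415926535897932384626433832795"
observed = PI_DIGITS[:20]   # stand-in for the 183,000 observed characters

def pi_forever(n):          # hypothesis: "print the digits of pi"
    return PI_DIGITS[n]

def pi_then_zeros(n):       # hypothesis: "print 20 digits of pi, then zeros"
    return PI_DIGITS[n] if n < 20 else "0"

# (invented description length in bits, generator) -- the "...then do something
# different" program has to spell out the extra rule, so it is longer.
hypotheses = [(40, pi_forever), (48, pi_then_zeros)]

# Keep the hypotheses consistent with the observed output, weight each by the
# 2^-length prior, and ask what they jointly say about the next character.
consistent = [(2.0 ** -bits, gen) for bits, gen in hypotheses
              if all(gen(i) == observed[i] for i in range(len(observed)))]
total = sum(w for w, _ in consistent)
p_next_is_pi = sum(w for w, gen in consistent
                   if gen(len(observed)) == PI_DIGITS[len(observed)]) / total
print(f"P(next character is the next digit of pi) = {p_next_is_pi:.4f}")  # ~0.996
[/ CODE ]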

Borodog
06-17-2007, 02:06 AM
[ QUOTE ]
You're missing Boro's (correct) point. Here's the prototype example:

Suppose you have a computer producing a string of output characters -- you don't know the program (input), but you do so far have the first 183,000 output characters. It just so happens that these 183,000 characters happen to be the first 183,000 digits of pi.

Now for the central question -- if you had to predict the next character (number 183,001), is it MORE LIKELY to be the next digit of pi, or a random character? What would you bet on, and why?

Keep in mind, there are plenty of possible programs that say "calculate the first 183,000 digits of pi, and then do something completely different," and given only the first 183,000 output characters, you can't distinguish between any of these and the much simpler program "calculate pi."

The fields of inductive inference, algorithmic probability, etc. were set up to answer this sort of question. And the answer turns out, not surprisingly, to be that "it is more probable that the next output character will be the next digit of pi." As such, they represent something of a rigorous justification for Ockham's razor, and makes Borodog's point -- that simpler explanations that fit the data equally well are typically more likely to be true.

[/ QUOTE ]

THANK YOU!

I cannot even understand how this point is in question.

Philo
06-17-2007, 05:00 AM
[ QUOTE ]
You're missing Boro's (correct) point. Here's the prototype example:

Suppose you have a computer producing a string of output characters -- you don't know the program (input), but you do so far have the first 183,000 output characters. It just so happens that these 183,000 characters happen to be the first 183,000 digits of pi.

Now for the central question -- if you had to predict the next character (number 183,001), is it MORE LIKELY to be the next digit of pi, or a random character? What would you bet on, and why?

Keep in mind, there are plenty of possible programs that say "calculate the first 183,000 digits of pi, and then do something completely different," and given only the first 183,000 output characters, you can't distinguish between any of these and the much simpler program "calculate pi."

The fields of inductive inference, algorithmic probability, etc. were set up to answer this sort of question. And the answer turns out, not surprisingly, to be that "it is more probable that the next output character will be the next digit of pi." As such, they represent something of a rigorous justification for Ockham's razor, and makes Borodog's point -- that simpler explanations that fit the data equally well are typically more likely to be true.

[/ QUOTE ]

There are two points of disagreement here. The first was, what is the correct interpretation of OR? Is it an empirical claim which says that theories that are more ontologically parsimonious are more likely to be true? Or is it a heuristic principle that says something like, given two or more theories all of which are on an equal par with respect to the evidence and with respect to their explanatory power, choose the simplest? The right answer here is, OR is a heuristic principle, not an empirical one. The empirical claim that I just mentioned is a much stronger claim than OR. That's why OR is an interesting topic in the Philosophy of Science. If it were simply an empirical claim, we could just test it against our actual results and see if it was right. That's of no special interest to philosophers. You can read about OR yourself to find out that this is true.

The second disagreement grew out of the first. The second one was, given two or more theories all of which are on an equal par with respect to the evidence and with respect to their explanatory power, is the simplest one more likely to be true? I say it's an open question whether or not ontological parsimony makes a theory more likely to be true. String theory is alive and well, despite requiring at least 10 space-time dimensions.

I think the analogy with the computer generated string of output characters is a poor one. It's an open question whether or not nature itself conforms to the principle of parsimony, such that the simplest theory is indeed more likely to be true, but even if it does your analogy would be a poor reason for thinking so. A computer program is written by human beings who have knowledge of pi, and if the string of digits through the first 183,000 matched pi, that would be the reason one would believe that the next digit is likely to be the next digit of pi. This says nothing about whether or not natural phenomena conform with the empirical claim that the more parsimonious theory is more likely to be true.

Metric
06-17-2007, 07:17 AM
[ QUOTE ]
I think the analogy with the computer generated string of output characters is a poor one.

[/ QUOTE ]
Of course you do. But that's only because it is, in fact, such a perfect example unpolluted with [censored] side issues. All you have are data, equally good (but differing in complexity) theories, and the ability to test them. I.e., a scenario where Ockham's razor is essentially the only relevant principle, and where it can be seen to be mathematically correct -- simple explanations will be more likely to be correct, all else being equal.

[ QUOTE ]
A computer program is written by human beings who have knowledge of pi, and if the string of digits through the first 183,000 matched pi, that would be the reason one would believe that the next digit is likely to be the next digit of pi.

[/ QUOTE ]
Sigh. Actually, the assumptions going into these arguments are that the input program is randomly generated -- i.e. specifically not written by a human with knowledge of pi. Inductive inference and algorithmic probability are not results in psychology -- they are general logical and probabilistic conclusions resting on general principles.

Next you will argue that the programming language matters, or the specific type of computer -- no, it doesn't. An invariance theorem exists that shows that these arguments have the same content on essentially any computer in any language.

After that, you will object that computers have little or nothing to do with the universe or the rest of science. But from a physics point of view it turns out that the laws of the universe can be thought of more or less completely in the language of computing if you want to (quantum computing, specifically).

After that, the thread will probably just die because you'll shift the argument to some subtly different question that no one really cares about, but one where you're not quite so obviously and dramatically wrong.

pzhon
06-17-2007, 07:52 AM
[ QUOTE ]
If one theory is more likely to be true given the evidence, we don't need a heuristic principle like Occam's Razor in order to choose among theories. We can just go by the evidence in that case.

[/ QUOTE ]
No, simplicity is not only a tie-breaker. It's a significant indication of the merit of a theory. Ockham's Razor indicates that you should sometimes choose the simpler theory even when the evidence fits a more complicated theory better.

The best quadratic approximation to a function is better than the best linear approximation (except in degenerate situations). However, you should require a substantial increase in accuracy in order to accept the increase in complexity from 2 parameters to 3.

22/7 is a better approximation to pi than 102985/32768, even though the latter is more accurate. 22/7 is surprisingly accurate, relative to its complexity. |Pi-22/7|*7^2 is small. 102985/32768 is more accurate, but it's not even the right choice of numerator for that denominator. If 22/7 isn't accurate enough for you, 355/113 is off by less than a millionth, and it's much less complicated than 102985/32768.
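
(A short Python check of these numbers; the figure of merit error * denominator^2 follows the post, and 102944/32768 is included only to show what the right numerator for that denominator would have been.)

[ CODE ]
from math import pi
from fractions import Fraction

for num, den in [(22, 7), (355, 113), (102985, 32768), (102944, 32768)]:
    err = abs(pi - Fraction(num, den))
    # err * den**2 is the figure of merit: small means the fraction is
    # unusually accurate for the size of its denominator.
    print(f"{num}/{den}: error = {err:.2e}, error * den^2 = {err * den ** 2:.3g}")

# 22/7 and 102985/32768 have nearly the same error (~1.3e-3), so the huge
# denominator buys essentially nothing; 102944/32768 is the best numerator for
# that denominator; 355/113 is off by less than a millionth.
[/ CODE ]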

There is a classic urban legend used to illustrate a sacrifice of accuracy to improve a model: When Copernicus proposed a model of the solar system centered about the Sun, his predictions were less accurate than the highly developed geocentric model accepted for thousands of years. However, he needed only about 30 circular motions rather than 80. (Epicycles were still needed because the true Newtonian motion is closer to an ellipse.) This story isn't literally true, but it spreads in part because we recognize that we should be willing to trade some accuracy for simplicity.

Piers
06-17-2007, 07:57 AM
Ockham's Razor is a useful rule of thumb for simplifying decision-making.

But however useful Ockham's razor might be in practice, it is instructive to observe that it is almost always wrong. There will always be a more complicated model that is more accurate, but which is likely lost amongst the countless plausible but incorrect more complicated models.

[ QUOTE ]
Suppose you have a computer producing a string of output characters -- you don't know the program (input), but you do so far have the first 183,000 output characters. It just so happens that these 183,000 characters happen to be the first 183,000 digits of pi.

Now for the central question -- if you had to predict the next character (number 183,001), is it MORE LIKELY to be the next digit of pi, or a random character? What would you bet on, and why?

Keep in mind, there are plenty of possible programs that say "calculate the first 183,000 digits of pi, and then do something completely different," and given only the first 183,000 output characters, you can't distinguish between any of these and the much simpler program "calculate pi."


[/ QUOTE ]

The model that says the computer, having output the first n digits of pi, will next output the (n+1)th digit is a good assumption to make in practice. However, it is also clearly false. At some point, with 100% certainty, the computer will not output the next digit of pi - either because it is not programmed to, or because of a power cut, a bug in the program, the expanding sun finally engulfing the earth and the computer, some hardware anomaly, or someone just turning the computer off.

wazz
06-17-2007, 08:03 AM
Or the simpler explanation: that the computer is neither capable of displaying pi to infinity nor programmed to do so. I have to say, I don't find that example very convincing at all.

Metric
06-17-2007, 08:24 AM
[ QUOTE ]
The model that says the computer, having output the first n digits of pi, will next output the (n+1)th digit is a good assumption to make in practice. However, it is also clearly false. At some point, with 100% certainty, the computer will not output the next digit of pi - either because it is not programmed to, or because of a power cut, a bug in the program, the expanding sun finally engulfing the earth and the computer, some hardware anomaly, or someone just turning the computer off.

[/ QUOTE ]
None of which affects the result that simpler programs are the more likely explanation, given access to a finite amount of the output (and the assumption of truly random input -- i.e. no cheating allowed).

"Your computer will eventually burn up" is not really a good objection to rigorous results concerning algorithmic probability.

PairTheBoard
06-17-2007, 09:13 AM
The thing is, for any randomly generated program that computes pi correctly for some digits, that program can have add-on lines of code that don't essentially change it. It can have infinitely many add-ons. I don't see any way you could reasonably Pronounce a probability measure on that space of unbounded finite randomly generated codes. I don't think the proper measure on such a space would even be a probability measure.

If this argument really has technical merit, somebody has written a peer-reviewed paper on it somewhere that you should be able to reference. I doubt you can do that.

PairTheBoard

luckyme
06-17-2007, 09:28 AM
[ QUOTE ]
The thing is, for any randomly generated program that computes pi correctly for some digits, that program can have add-on lines of code that don't essentially change it. It can have infinitely many add-ons. I don't see any way you could reasonably Pronounce a probability measure on that space of unbounded finite randomly generated codes. I don't think the proper measure on such a space would even be a probability measure.

If this argument really has technical merit, somebody has written a peer-reviewed paper on it somewhere that you should be able to reference. I doubt you can do that.

PairTheBoard

[/ QUOTE ]

Have you not mixed 'what it is doing' with 'what it is'?

luckyme

jason1990
06-17-2007, 09:43 AM
Here is an article on algorithmic probability: http://www.scholarpedia.org/article/Algorithmic_Probability . I do not know anything about it. But the article seems to be saying (near the beginning) that the choice of the prior distribution in algorithmic probability is justified, in part, by Occam's razor.

Piers
06-17-2007, 10:20 AM
[ QUOTE ]
None of which affects the result that simpler programs are the more likely explanation, given access to a finite amount of the output (and the assumption of truly random input -- i.e. no cheating allowed).

"Your computer will eventually burn up" is not really a good objection to rigorous results concerning algorithmic probability.

[/ QUOTE ]

The point I am making is that although Ockham's Razor is an extremely useful tool for practical applications, it is guaranteed to give an inaccurate model in any non-trivial situation.

You can restrict consideration of the program's output to purely algorithmic terms if you want, but reality respects no such bounds.

[ QUOTE ]
(and the assumption of truly random input -- i.e. no cheating allowed).


[/ QUOTE ]

You cannot reliably make such assumptions in a real situation, only in a toy model.

[ QUOTE ]
"Your computer will eventually burn up" is not really a good objection to rigorous results concerning algorithmic probability

[/ QUOTE ]

In your own model maybe, but a real computer does not exist inside your model. The program being run is a guide to what will happen next, not a guarantee. A real computer will not continue churning out digits of pi forever. Something will happen; if it's like my PC, the CPU will likely overheat at some point and the OS will crash :(

You can model the situation as a computer continually outputting digits of pi. This will usually give the correct result, but it is guaranteed to fail at some point, which is the usual fate of the consequences of Ockham's razor.

luckyme
06-17-2007, 10:31 AM
[ QUOTE ]
You cannot reliably make such assumptions in a real situation, only in a toy model.

[/ QUOTE ]

It looked like a bear.
It looked like it was running at me.
It sounded like it was running at me.

It fell in a hole.
Therefore it wasn't a bear running at me?

Ok, you've totally confused me. If you think you can clarify enough, please do.

luckyme

Piers
06-17-2007, 10:33 AM
Here is a reply I made when someone made the same post on RPG about five years ago.

[ QUOTE ]
Take a problem and consider the set of explanations. Order this set by complexity and goodness (how well each fits in with the rest of observed reality).

The number of 'bad' solutions increases dramatically with increasing complexity, hence if you pick two solutions, the simpler is almost always going to be better. This is the basis for Occam's Razor.

Paradoxically, it is also true that for any solution, every higher level of complexity contains a 'better' solution. This does not contradict Occam's Razor, it just highlights something different.

[/ QUOTE ]
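
To put a rough number on the first point, here is a minimal Python sketch (the underlying curve, the noise level, and the polynomial degrees are all arbitrary choices of mine, so treat it as an illustration rather than a demonstration). Polynomial fits of increasing degree stand in for explanations of increasing complexity: the complex fits match the observed data at least as well, but most of them do worse on points they haven't seen.

[ CODE ]
# Sketch: "explanations" of increasing complexity fitted to the same toy data.
# Polynomial degree stands in for complexity; the curve, noise, and degrees
# are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)

def true_curve(x):
    return 1.5 * x + 0.5                      # the simple reality behind the data

# A handful of noisy observations, plus unseen points (with a little
# extrapolation) to check how well each fit generalizes.
x_obs = np.linspace(0.0, 1.0, 8)
y_obs = true_curve(x_obs) + rng.normal(scale=0.1, size=x_obs.size)
x_new = np.linspace(0.0, 1.2, 200)
y_new = true_curve(x_new)

for degree in (1, 3, 5, 7):
    coeffs = np.polyfit(x_obs, y_obs, degree)             # fit an "explanation"
    fit_err = np.mean((np.polyval(coeffs, x_obs) - y_obs) ** 2)
    gen_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: error on the data {fit_err:.4f}, "
          f"error on unseen points {gen_err:.4f}")
[/ CODE ]

In most runs the low-degree fit tracks the unseen points best while the high-degree fits chase the noise -- which is the sense in which the 'bad' complex explanations swamp the occasional better one.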

luckyme
06-17-2007, 11:07 AM
I'm trying to follow this exchange and seem to be reading a mixture of two different concepts (or I'm just missing the point, as OR suggests :-)

Those familiar with Dennett's work may see the jumping between the design stance and the physical stance.

To use the computer-pi example: the output appears to be coming from a program "designed" to produce pi, and that is the OR position.
It doesn't matter whether the program is running on strung-together rice crispies or a Cray, or that it will one day be subject to mice gnawing on key parts and mucking up the output.
To mix the claim about the program's output, which is the level OR is dealing with in this example, with the physical makeup of what the program is running on is to shift to a different topic altogether.
That level would apply if OR were making a statement about the physical structure of the system producing the output.

that's the best I can do to explain my confusion,

hope there is help out there, luckyme

Borodog
06-17-2007, 12:04 PM
[ QUOTE ]
Ockham's Razor is a useful rule of thumb for simplifying decision-making.

But however useful Ockham's razor might be in practice, it is instructive to observe that it is almost always wrong. There will always be a more complicated model that is more accurate, but which is likely lost amongst the uncountable number of plausible but incorrect more complicated models.

[/ QUOTE ]

That doesn't make Occam's Razor wrong. Do you see why?

Piers
06-17-2007, 12:25 PM
[ QUOTE ]
it is instructive to observe that it is almost always wrong.

[/ QUOTE ]
Oops! In an attempt to be contentious I was being a little wayward in my use of language. What I meant, of course, was not that Ockham's razor itself was wrong, but that any theory it promotes is, in the sense that in any non-trivial scenario there will always be a more accurate theory.

Piers
06-17-2007, 12:32 PM
[ QUOTE ]
Suppose you have a computer producing a string of output characters -- you don't know the program (input), but you do so far have the first 183,000 output

[/ QUOTE ]

I took this to refer to a 'computer producing a string of output characters'; perhaps if the OP had referred instead to an 'algorithm producing a string of output characters' there would have been less opportunity for pedantry.

Due to our limited minds, when we consider the real world we have to simplify, put boundaries around things and ignore irrelevant information. While this works in practice most of the time, the flaws are obvious.

Say we use Ockham's razor to create a simplified model of some aspect of reality. To then create a problem within that model, use Ockham's razor to solve that problem accurately, and claim that this vindicates Ockham's razor feels to me a bit like circular reasoning.

aeest400
06-17-2007, 12:33 PM
[ QUOTE ]
[ QUOTE ]
it is instructive to observe that it is almost always wrong.

[/ QUOTE ]
Oops! In an attempt to be contentious I was being a little wayward in my use of language. What I meant, of course, was not that Ockham's razor itself was wrong, but that any theory it promotes is, in the sense that in any non-trivial scenario there will always be a more accurate theory.

[/ QUOTE ]


F=ma?

I think you're forgetting the need for empirical adequacy, which any application of Occam's Razor pretty much assumes.

Piers
06-17-2007, 12:42 PM
[ QUOTE ]
i.e. no cheating allowed).

[/ QUOTE ]

"Lets assume that no cheating is allowed.”

“Yes but what happens if someone cheats.”

“We are assuming that no on cheats.”

“Yes I know that, but what happens if nether the less someone cheats.”

… 10 years later…

… no one is allowed to cheat”

“I know that, but what happens if despite that assumption someone actually does….

….
… later…

… cheats”
“but…

….

Piers
06-17-2007, 12:51 PM
[ QUOTE ]
F=ma?

I think you're forgetting the need for empirical adequacy, which any application of Occam's Razor pretty much assumes.

[/ QUOTE ]

Am I?

F=ma: a useful approximation or absolute truth?

What's force, what's mass, what's acceleration? To define the mass of an object we put a boundary round it; is this how things really work?

Empirical adequacy, not empirical accuracy -- good, maybe you agree with me? F=ma is certainly adequate for most uses.

Philo
06-17-2007, 02:37 PM
[ QUOTE ]
[ QUOTE ]
I think the analogy with the computer generated string of output characters is a poor one.

[/ QUOTE ]
Of course you do. But that's only because it is, in fact, such a perfect example unpolluted with [censored] side issues. All you have are data, equally good (but differing in complexity) theories, and the ability to test them. I.E. a scenario where Ockham's razor is essentially the only relevant principle, and where it can be seen to be mathematically correct -- simple explanations will be more likely to be correct, all else being equal.

[ QUOTE ]
A computer program is written by human beings who have knowledge of pi, and if the string of digits through the first 183,000 matched pi, that would be the reason one would believe that the next digit is likely to be the next digit of pi.

[/ QUOTE ]
Sigh. Actually, the assumptions going into these arguments are that the input program is randomly generated -- i.e. specifically not written by a human with knowledge of pi. Inductive inference and algorithmic probability are not results in psychology -- they are general logical and probabilistic conclusions resting on general principles.

Next you will argue that the programming language matters, or the specific type of computer -- no, it doesn't. An invariance theorem exists that shows that these arguments have the same content on essentially any computer in any language.

After that, you will object that computers have little or nothing to do with the universe or the rest of science. But from a physics point of view it turns out that the laws of the universe can be thought of more or less completely in the language of computing if you want to (quantum computing, specifically).

After that, the thread will probably just die because you'll shift the argument to some subtly different question that no one really cares about, but one where you're not quite so obviously and dramatically wrong.

[/ QUOTE ]

The programming example might be useful in saying something about induction (and I do believe that the assumption that nature is uniform or pattern-like is acceptable). If we see a pattern in nature that conforms to some law-like principle, then it's ok to postulate a 'law of nature' that relies on some principle like that nature is uniform. But being pattern-like is not the same thing as being more ontologically parsimonious. Do you think justifying induction is the same thing as justifying OR?

If justifying OR is a straightforward empirical matter, why does it generate so much philosophical discussion, and why do philosophers offer various justifications that are non-empirical, like aesthetic, pragmatic, and deductive justifications? Are these philosophers all just thick-headed and ignorant, missing the easy-to-grasp point that OR is straightforwardly justified on empirical grounds?

KipBond
06-17-2007, 03:05 PM
[ QUOTE ]
It's amazing how the two most important, powerful concepts (this and evolution) both seem so ridiculously self-evident and simple, when you finally understand them.

[/ QUOTE ]

I would add Hume's maxim (http://en.wikipedia.org/wiki/User:Laurence_Boyce/Hume's_maxim) to this list:

[ QUOTE ]
The plain consequence is (and it is a general maxim worthy of our attention) that no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish: and even in that case, there is a mutual destruction of arguments, and the superior only gives us an assurance suitable to that degree of force, which remains, after deducting the inferior. When anyone tells me, that he saw a dead man restored to life, I immediately consider with myself, whether it be more probable, that this person should either deceive or be deceived, or that the fact he relates should really have happened. I weigh the one miracle against the other, and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle. If the falsehood of his testimony would be more miraculous, than the event, which he relates; then, and not till then, can he pretend to command my belief or opinion.

[/ QUOTE ]

Borodog
06-17-2007, 03:14 PM
[ QUOTE ]
If justifying OR is a straightforward empirical matter . . .

[/ QUOTE ]

Why do you keep saying this, when that isn't what the people you are arguing with are saying?

borisp
06-17-2007, 03:38 PM
According to the principles of this razor, it is more likely to be spelled "Occam" than "Ockham." So I agree with Borodog on whatever he says.

KipBond
06-17-2007, 03:40 PM
The people arguing that OR implies that certain theories are more likely 'true' or 'correct' have different definitions of those words than is meant in an empirical/scientific sense.

http://en.wikipedia.org/wiki/Occam's_Razor

[ QUOTE ]
Occam's razor (sometimes spelled Ockham's razor) is a principle attributed to the 14th-century English logician and Franciscan friar William of Ockham. The principle states that the explanation of any phenomenon should make as few assumptions as possible, eliminating, or "shaving off," those that make no difference in the observable predictions of the explanatory hypothesis or theory. The principle is often expressed in Latin as the lex parsimoniae ("law of parsimony" or "law of succinctness"):

[ QUOTE ]
entia non sunt multiplicanda praeter necessitatem,

[/ QUOTE ]

which translates to:

[ QUOTE ]
entities should not be multiplied beyond necessity.

[/ QUOTE ]

This is often paraphrased as "All things being equal, the simplest solution tends to be the best one." In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest hypothetical entities. It is in this sense that Occam's razor is usually understood.

[/ QUOTE ]

Borodog
06-17-2007, 03:44 PM
[ QUOTE ]
The people arguing that OR implies that certain theories are more likely 'true' or 'correct' have different definitions of those words than is meant in an empirical/scientific sense.

[/ QUOTE ]

Not really. To wit:

[ QUOTE ]
This is often paraphrased as "All things being equal, the simplest solution tends to be the best one." In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest hypothetical entities. It is in this sense that Occam's razor is usually understood.

[/ QUOTE ]

What definition of "best"? My contention is that the only definition of "best" that can possibly have any meaning is "the most likely to be correct."

Again, if the simplest explanation is not more likely to be true than alternatives containing unnecessary complications, then what benefit is there in choosing the simpler one? What is the justification? These are the essential questions that keep getting dodged.

KipBond
06-17-2007, 04:06 PM
[ QUOTE ]
Again, if the simplest explanation is not more likely to be true than alternatives containing unnecessary complications, then what benefit is there in choosing the simpler one? What is the justification? These are the essential questions that keep getting dodged.

[/ QUOTE ]

The answer is this:

OR says nothing about which equally reliable & explanatory theory is "more likely to be true".

You can read a lot about this here:

http://en.wikipedia.org/wiki/Occam's_Razor

We use the simplest of equally reliable & explanatory theories for various reasons: practicality, easier to learn, apply, remember, teach, etc. Once the more complicated model provides a bit more explanatory & reliable predictions, then simplicity takes a back seat when we need to be more accurate.

You can argue that, indeed, it is the case, that the simplest theory is "most likely to be true", and then you could create your own maxim for that. But, that's not Occam's Razor.

But, I think you'll be hard pressed to show that this is the case. It seems to me that more complicated models are usually more reliable & explanatory than simpler models. We still use the simpler models sometimes because they are easier and are 'good enough', but we know they are not "more likely to be true". There may be exceptions, but I think they are rare. Of course, at some point you are going to have to define & present a way to measure "simplest".

KipBond
06-17-2007, 04:17 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
it is instructive to observe that it is almost always wrong.

[/ QUOTE ]
Oops! In an attempt to be contentious I was being a little wayward in my use of language. What I meant, of course, was not that Ockham's razor itself was wrong, but that any theory it promotes is, in the sense that in any non-trivial scenario there will always be a more accurate theory.

[/ QUOTE ]

F=ma?

I think you're forgetting the need for empirical adequacy, which any application of Occam's Razor pretty much assumes.

[/ QUOTE ]

I think you mean that Newton's 2nd law is "empirically adequate", even though it is not as accurate as Einstein's theory of special relativity (because mass & time are dependent upon velocity).

But, the point should be noted that "F=ma" is not as accurate a model as Einstein's -- even though it is very simple, and still widely used (because it is "good enough"). In this sense, it is not "more likely to be true"; rather, Einstein's more complicated model is "more likely to be true".
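
To give a rough sense of "good enough" (a sketch only -- the example speeds are arbitrary picks, and I'm using the standard relativistic momentum p = gamma*m*v as the stand-in for Einstein's model):

[ CODE ]
# Sketch: how far the Newtonian expression p = m*v drifts from the
# relativistic p = gamma*m*v at a few example speeds (arbitrary picks).
import math

C = 299_792_458.0                     # speed of light in m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

mass = 1.0                            # kg; the relative error doesn't depend on it
for v in (30.0, 3.0e4, 3.0e7, 0.9 * C):   # a car, orbital speed, 10% of c, 90% of c
    p_newton = mass * v
    p_rel = gamma(v) * mass * v
    rel_error = (p_rel - p_newton) / p_rel
    print(f"v = {v:.3g} m/s: Newtonian momentum is off by {rel_error:.3e}")
[/ CODE ]

At everyday speeds the simpler model is indistinguishable from the more accurate one, which is why it stays in use; it only stops being "good enough" once v becomes a sizable fraction of c.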

Metric
06-17-2007, 05:12 PM
[ QUOTE ]
The thing is, for any randomly generated program that computes pi correctly for some digits, that program can have add-on lines of code that don't essentially change it. It can have infinitely many add-ons. I don't see any way you could reasonably pronounce a probability measure on that space of unbounded finite randomly generated codes. I don't think the proper measure on such a space would even be a probability measure.

If this argument really has technical merit, somebody has written a peer-reviewed paper on it somewhere that you should be able to reference. I doubt you can do that.

PairTheBoard

[/ QUOTE ]
If you generate your input program by, say, flips of a fair coin, you are far more likely to hit a relatively short self-delimiting program for output string S than an enormously long one padded with add-ons that essentially do nothing. In fact, the probability that your machine outputs S is dominated by the probability of randomly generating the simplest few programs that compute S.

Much has been written about this stuff -- here's a nice little intro that I just googled: http://www.research.ibm.com/journal/rd/214/ibmrd2104G.pdf

If you're only interested in the first peer-reviewed papers on the subject, look for the work of Solomonoff and Levin.
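
Here is a minimal Monte Carlo sketch of that coin-flip picture. The "machine" below is a made-up three-instruction toy, not a universal computer, and the target pattern is an arbitrary choice -- but it shows the effect: conditioned on a finite output prefix, nearly all of the probability mass generated by fair coin flips sits on the shortest consistent programs, so the prediction "the pattern continues" dominates.

[ CODE ]
# Toy illustration of algorithmic probability via coin-flip programs.
# Instruction set (an arbitrary invention, NOT a universal machine):
#   "0" b  -> output the bit b and keep reading
#   "10"   -> repeat the output produced so far, forever, and stop
#   "11"   -> halt with the output produced so far
# We condition on the first 10 output characters matching TARGET and ask how
# often the 11th character continues the alternating pattern.
import random

TARGET = "0101010101"
N_SAMPLES = 1_000_000

def run_random_program(rng, max_out):
    """Decode fair coin flips until a terminator (or max_out output bits);
    return the output truncated to max_out characters."""
    out = []
    while len(out) < max_out:
        if rng.random() < 0.5:                    # opcode "0": literal bit
            out.append("0" if rng.random() < 0.5 else "1")
        elif rng.random() < 0.5:                  # opcode "10": repeat forever
            if not out:
                break
            period = len(out)
            while len(out) < max_out:
                out.append(out[len(out) % period])
        else:                                     # opcode "11": halt
            break
    return "".join(out)

rng = random.Random(0)
consistent = continues = 0
for _ in range(N_SAMPLES):
    output = run_random_program(rng, len(TARGET) + 1)
    if len(output) > len(TARGET) and output.startswith(TARGET):
        consistent += 1
        if output[len(TARGET)] == "0":            # "0" continues 0101...
            continues += 1

print(f"programs consistent with the prefix: {consistent} of {N_SAMPLES}")
if consistent:
    print(f"estimated P(next character continues the pattern): "
          f"{continues / consistent:.4f}")
[/ CODE ]

The shortest consistent program here is only six coin flips long (output '0', output '1', repeat forever), while anything that breaks the pattern has to spell out all ten observed characters literally and then deviate, which costs over twenty flips -- so the estimate comes out at, or extremely close to, 1.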

Metric
06-17-2007, 05:40 PM
[ QUOTE ]
[ QUOTE ]
i.e. no cheating allowed).

[/ QUOTE ]

"Lets assume that no cheating is allowed.”

“Yes but what happens if someone cheats.”

“We are assuming that no on cheats.”

“Yes I know that, but what happens if nether the less someone cheats.”

… 10 years later…

… no one is allowed to cheat”

“I know that, but what happens if despite that assumption someone actually does….

….
… later…

… cheats”
“but…

….

[/ QUOTE ]
If you do cheat, that's fine -- it just means that I no longer care about the result from the point of view of Ockham's razor. Because then any statement about probabilities is directly related to how you cheated, and is no longer a general logical principle connecting generic cause (with no prior) to effect.

vhawk01
06-17-2007, 06:01 PM
[ QUOTE ]
[ QUOTE ]
The people arguing that OR implies that certain theories are more likely 'true' or 'correct' have different definitions of those words than is meant in an empirical/scientific sense.

[/ QUOTE ]

Not really. To wit:

[ QUOTE ]
This is often paraphrased as "All things being equal, the simplest solution tends to be the best one." In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest hypothetical entities. It is in this sense that Occam's razor is usually understood.

[/ QUOTE ]

What definition of "best"? My contention is that the only definition of "best" that can possibly have any meaning is "the most likely to be correct."

Again, if the simplest explanation is not more likely to be true than alternatives containing unnecessary complications, then what benefit is there in choosing the simpler one? What is the justification? These are the essential questions that keep getting dodged.

[/ QUOTE ]

I did my best to show the benefit, but I am at a loss to show the justification. I was always under the impression that the Razor is, in fact, unjustified. The benefit is simply that it is always easier if we can decide on one explanation or theory to talk about, EVEN IF WE REALIZE that there are an infinite number of acceptable ones. We could make any sort of arbitrary selection we wish, but the only one that ISN'T arbitrary is the Razor. It's my understanding that it is just as justified to instead choose the most complicated one, but that it is, in practice, impossible to do so. We can always find a more complicated explanation.
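
To make the "infinite number of acceptable ones" point concrete, here is a minimal Python sketch (the observations and the constant c are made-up values): given any finite set of data points, you can manufacture as many distinct curves as you like that fit them exactly, just by adding multiples of a polynomial that vanishes at every observed point.

[ CODE ]
# Sketch: infinitely many "explanations" that fit the same finite data exactly.
# The observations and the constants below are arbitrary made-up values.
import numpy as np

x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.array([1.0, 3.0, 5.0, 7.0])       # happens to lie on y = 2x + 1

def simple_theory(x):
    return 2.0 * x + 1.0                      # the simplest exact fit

def bumpier_theory(x, c):
    # Add c times a polynomial that is zero at every observed x, so the fit
    # to the data is unchanged no matter what c is.
    vanish = np.prod([x - xi for xi in x_obs], axis=0)
    return simple_theory(x) + c * vanish

for c in (0.0, 1.0, -5.0, 100.0):
    max_residual = np.max(np.abs(bumpier_theory(x_obs, c) - y_obs))
    prediction = bumpier_theory(np.array([4.0]), c)[0]
    print(f"c = {c:6.1f}: max error on the data = {max_residual:.1e}, "
          f"prediction at x = 4 -> {prediction:.1f}")
[/ CODE ]

Every one of these agrees perfectly on the observed data and disagrees wildly about the next point, and there is one of them for every value of c -- picking c = 0 is exactly the kind of non-arbitrary choice the Razor makes for us.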

vhawk01
06-17-2007, 06:03 PM
[ QUOTE ]
[ QUOTE ]
Again, if the simplest explanation is not more likely to be true than alternatives containing unnecessary complications, then what benefit is there in choosing the simpler one? What is the justification? These are the essential questions that keep getting dodged.

[/ QUOTE ]

The answer is this:

OR says nothing about which equally reliable & explanatory theory is "more likely to be true".

You can read a lot about this here:

http://en.wikipedia.org/wiki/Occam's_Razor

We use the simplest of equally reliable & explanatory theories for various reasons: practicality, easier to learn, apply, remember, teach, etc. Once the more complicated model provides a bit more explanatory & reliable predictions, then simplicity takes a back seat when we need to be more accurate.

You can argue that, indeed, it is the case, that the simplest theory is "most likely to be true", and then you could create your own maxim for that. But, that's not Occam's Razor.

But, I think you'll be hard pressed to show that this is the case. It seems to me that more complicated models are usually more reliable & explanatory than simpler models. We still use the simpler models sometimes because they are easier and are 'good enough', but we know they are not "more likely to be true". There may be exceptions, but I think they are rare. Of course, at some point you are going to have to define & present a way to measure "simplest".

[/ QUOTE ]

Yeah, this is better than my post.

vhawk01
06-17-2007, 06:04 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The people arguing that OR implies that certain theories are more likely 'true' or 'correct' have different definitions of those words than is meant in an empirical/scientific sense.

[/ QUOTE ]

Not really. To wit:

[ QUOTE ]
This is often paraphrased as "All things being equal, the simplest solution tends to be the best one." In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest hypothetical entities. It is in this sense that Occam's razor is usually understood.

[/ QUOTE ]

What definition of "best"? My contention is that the only definition of "best" that can possibly have any meaning is "the most likely to be correct."

Again, if the simplest explanation is not more likely to be true than alternatives containing unnecessary complications, then what benefit is there in choosing the simpler one? What is the justification? These are the essential questions that keep getting dodged.

[/ QUOTE ]

I did my best to show the benefit, but I am at a loss to show the justification. I was always under the impression that the Razor is, in fact, unjustified. The benefit is simply that it is always easier if we can decide on one explanation or theory to talk about, EVEN IF WE REALIZE that there are an infinite number of acceptable ones. We could make any sort of arbitrary selection we wish, but the only one that ISN'T arbitrary is the Razor. It's my understanding that it is just as justified to instead choose the most complicated one, but that it is, in practice, impossible to do so. We can always find a more complicated explanation.

[/ QUOTE ]

As a matter of fact, Boro, if you interpret the Razor the way you seem to be doing, then what can the phrase "all else being equal" possibly mean? How could any two theories ever be equal in all else, since obviously "probability of being correct" is just about the only relevant "all else" and you are deeming it exactly NOT equal?

IronUnkind
06-17-2007, 06:29 PM
[ QUOTE ]
We could make any sort of arbitrary selection we wish, but the only one that ISN'T arbitrary is the Razor. It's my understanding that it is just as justified to instead choose the most complicated one, but that it is, in practice, impossible to do so. We can always find a more complicated explanation.

[/ QUOTE ]

There are other non-arbitrary criteria which one might use. Cleverness, for example.

vhawk01
06-17-2007, 06:45 PM
[ QUOTE ]
[ QUOTE ]
We could make any sort of arbitrary selection we wish, but the only one that ISN'T arbitrary is the Razor. It's my understanding that it is just as justified to instead choose the most complicated one, but that it is, in practice, impossible to do so. We can always find a more complicated explanation.

[/ QUOTE ]

There are other non-arbitrary criteria which one might use. Cleverness, for example.

[/ QUOTE ]

That's the same one I'm using.

Philo
06-17-2007, 10:20 PM
[ QUOTE ]
[ QUOTE ]
If justifying OR is a straightforward empirical matter . . .

[/ QUOTE ]

Why do you keep saying this, when that isn't what the people you are arguing with are saying?

[/ QUOTE ]

To quote you, you said earlier, "If the simpler explanation were not more likely to be true, what is the justification for the razor at all? The very point is that they are NOT equally likely to be true given the evidence. Hence the razor."

If you think that OR says something like "the simplest theory is more likely to be true," then you think the justification for OR is empirical, rather than, say, aesthetic, pragmatic, or deductive, which are all forms of justification for OR that philosophers have offered.