PDA

View Full Version : Diminishing certainty of probabilistic judgements.


Aleo
11-19-2006, 01:37 AM
I'd like some opinions about how best to refute the Diminishing certainty of probabilistic judgements argument.

I am speaking of Hume's regress argument in 'A Treatise of Human Nature', Book I, Part IV, Section i - Of Scepticism with regard to reason.

This argument basically states that for any judgement I make of a probabilistic nature, I can make a further judgement about the methodology or reasoning employed in making that judgement. This second judgement will also be probabilistic in nature and will effectively refine my first judgement. I can then make a third judgement, and so on in infinitum. As each of these judgements is probabilistic, no matter how high my initial probability, it will be reduced to near-complete uncertainty.

Put another way, suppose I am 90% certain of x. I can ask myself how certain I am of this claim and may determine that I am 90% certain. I can then ask how certain I am of this second claim and may determine I am 90% certain. After only these 3 judgements my certainty of x has been reduced to (.9)(.9)(.9) = .729 and will continue to fall as I question my methodology.

In this way, Hume seems to conclude that if we do not have 100% certainty, or otherwise necessary, deductive knowledge of a claim, we have no certainty at all. Inductive reasoning approaches complete uncertainty no matter how high a probability we initially derive/assign.
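The compounding described here is easy to simulate (a quick sketch; the 0.9 figure is just the running example from the post):

```python
# Discount a belief by a 0.9 meta-judgement, over and over.
confidence = 1.0
history = []
for n in range(1, 51):
    confidence *= 0.9
    history.append(confidence)

print(history[2])    # after 3 judgements: 0.9^3 = 0.729
print(history[-1])   # after 50 judgements: effectively nothing left
```

With a constant 0.9 discount, the series is geometric and heads to 0, which is exactly the behaviour the argument trades on.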

In your opinion, where does this argument fail? Does it fail?

Regards
Brad S

thylacine
11-19-2006, 02:41 AM
What about the remaining 10% at each level?

Maybe at each level you are 10% sure that in the previous level you were 100% sure. Etcetera.

You can iterate probability distributions of probability distributions of probability distributions of probability distributions of probability distributions of probability distributions of probability distributions of .......... Etcetera. Etcetera. Etcetera.

There is nothing that forces the overall expected probability to go to zero.

Consider the 2 by 2 matrix below where p+q=1, p>0, q>0 and raise it to the n^th power and consider increasing values of n.

( p q )
( q p )

Got it?
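A quick numerical sketch of the iteration thylacine describes, with p = 0.9, q = 0.1 chosen as an example:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p, q = 0.9, 0.1  # p + q = 1, p > 0, q > 0
M = [[p, q], [q, p]]

# The eigenvalues of M are 1 and p - q, so M^n converges to
# [[0.5, 0.5], [0.5, 0.5]] instead of collapsing to zero.
P = M
for _ in range(199):   # compute M^200
    P = matmul(P, M)
print(P)
```

The mass spreads out but never vanishes, which is the point: iterating the uncertainty does not force everything to zero.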

arahant
11-19-2006, 03:17 AM
Well, my first answer would be 'just look at it...'
And bear in mind that I have precious little formal philosophy background. But......

I don't see the argument.
If I am 90% certain of X, I am 90% certain of X... period.
You can ask how certain you are that you are 90% certain, but the answer is always 100%. Given that you have decided on a belief, it is certain that you believe it. Any doubt about the certainty of the claim is included in your original estimate. A meta-estimate doesn't appear to me to have any meaning.

If you feel this is untrue, tell me why, and I'll provide another flaw.

Aleo
11-19-2006, 03:26 AM
[ QUOTE ]
What about the remaining 10% at each level.

Maybe at each level you are 10% sure that in the previous level you were 100% sure. Etcetera.

[/ QUOTE ]

I realize I may have somewhat misrepresented the argument. I'll quote David Owen here:

[ QUOTE ]

We must remember that each successive judgment is a judgment based on doubts about the reliability of our cognitive faculties, on our awareness of the mistakes which we and others have made in the past in making judgments or forming beliefs of just this sort. When we reflect on our fallibility, the appropriate response is to increase the margin of error concerning the belief which we are considering. Suppose the first judgment results in a belief that p, which we hold at a very high level, say 0.9. The second judgment might lead us to think that this judgment involves a margin of error so that in fact we ought to revise our confidence level to somewhere, exactly where we are not sure, between, say, 0.81 and 0.95. A third judgment, again based on consideration of the fallibility of our faculties, might lead us to revise it yet again to somewhere, again exactly where we are not sure, between 0.72 and 0.96. And so on, until the range in which our confidence level might fall is so great that it no longer makes sense to say that we have any confidence left in the belief at all. This is a total extinction of belief and evidence that results in a total suspension of belief.

[/ QUOTE ]

While I'm at it, I may as well quote Hume Himself so you can read the argument exactly as originally phrased:

[ QUOTE ]
In every judgment, which we can form concerning probability, as well as concerning knowledge, we ought always to correct the first judgment, derived from the nature of the object, by another judgment, derived from the nature of the understanding. It is certain a man of solid sense and long experience ought to have, and usually has, a greater assurance in his opinions, than one that is foolish and ignorant, and that our sentiments have different degrees of authority, even with ourselves, in proportion to the degrees of our reason and experience. In the man of the best sense and longest experience, this authority is never entire; since even such-a-one must be conscious of many errors in the past, and must still dread the like for the future. Here then arises a new species of probability to correct and regulate the first, and fix its just standard and proportion. As demonstration is subject to the controul of probability, so is probability liable to a new correction by a reflex act of the mind, wherein the nature of our understanding, and our reasoning from the first probability become our objects.

Having thus found in every probability, beside the original uncertainty inherent in the subject, a new uncertainty derived from the weakness of that faculty, which judges, and having adjusted these two together, we are obliged by our reason to add a new doubt derived from the possibility of error in the estimation we make of the truth and fidelity of our faculties. This is a doubt, which immediately occurs to us, and of which, if we would closely pursue our reason, we cannot avoid giving a decision. But this decision, though it should be favourable to our preceding judgment, being founded only on probability, must weaken still further our first evidence, and must itself be weakened by a fourth doubt of the same kind, and so on in infinitum: till at last there remain nothing of the original probability, however great we may suppose it to have been, and however small the diminution by every new uncertainty. No finite object can subsist under a decrease repeated IN INFINITUM; and even the vastest quantity, which can enter into human imagination, must in this manner be reduced to nothing. Let our first belief be never so strong, it must infallibly perish by passing through so many new examinations, of which each diminishes somewhat of its force and vigour. When I reflect on the natural fallibility of my judgment, I have less confidence in my opinions, than when I only consider the objects concerning which I reason; and when I proceed still farther, to turn the scrutiny against every successive estimation I make of my faculties, all the rules of logic require a continual diminution, and at last a total extinction of belief and evidence.

[/ QUOTE ]

arahant
11-19-2006, 03:38 AM
I refuse to read the original, but to address David Owen's synopsis (if that is what it is) of the argument...
'Confidence intervals' are abstract constructions which already represent a probability distribution, typically at the 95% 'confidence level'.

If we assume that these meta-judgements have some meaning (and I maintain they really don't) we still don't lose all information. We may end up with a distribution that is non-zero at all points between 0 and 1, but I don't see that it's the uniform distribution.

I still maintain that the original estimate is the end of the chain. People are computers. If I plug 2 numbers into excel and sum them, there is clearly potential error in the process. Still, my computer spits out a single answer. Just because we have the ability to observe our thoughts to some extent doesn't mean the answer changes.

Maybe the problem is just the definition of 'I' and 'certainty' though...that's why 'I' hate philosophy.

Aleo
11-19-2006, 03:39 AM
I see what you are saying, and this was my own first inclination, but it just seems like I can easily imagine scenarios where I can, in fact, make second order judgements.

For example, I might say I believe that a ten-sided die will roll a 1-9 with a probability of .9, but I can refine this probability based on the small chance that the die is not truly symmetrical, then further refine this based on the small chance it might be thrown unfairly, then further refine this based on the small chance that the faces are not marked correctly, etc...

Another point is that it seems like we DO make second-order judgements in probability when it comes to things like confidence of statistical predictions, or probabilities based on assumed factors that themselves have a degree of uncertainty.

Regards
Brad S

arahant
11-19-2006, 04:07 AM
The initial argument seems to presuppose that we have taken the external factors into account, and are only concerned with our own fallibility.

In the die example, you would have included those factors in your original estimate ('I believe it is .9 if fair, but because of X+Y+Z, I believe it is .85-.95). In the case of statistical analysis, I admit this process occurs, but it still stops with external factors ('My equipment tells me the distribution is X. This equipment has been shown to have error rate Y.'...published works do not continue with 'and I'm an idiot 5% of the time, so tack that on too...').

I guess Hume makes the argument that we are questioning our own accuracy as measuring devices... repeatedly... and that this leads to complete uncertainty. I think the result of this is not a complete lack of certainty, but a complete lack of an 'I' to express that certainty.

The first opinion is the output of a cognitive calculation, and the 'I' that expresses it and believes it, to the extent there is an 'I', is just the mediator that communicates the outcome of this calculation to the outside world. I don't even know where Hume stands on the nature of mind, but to me, there is simply no recursion. 'I' can question the original cognitive calculation, and express a new answer I suppose, but that is then not the same 'I'.

I can see how this is a head spinner. I'm sure if you googled this, you could find a number of useful counterarguments. I suspect they all boil down to semantic questions.

Edit: Evidently a lot of people with too much time on their hands have thought about this...here's some wikipedia info on the refutations (http://en.wikipedia.org/wiki/Evidentialism#The_infinite_regress_argument) . Frankly, I find even the refutations lacking. I suggest you take a zen attitude toward this. If you are personally interested, take up zen. If this is for a class, find a nice sturdy kyosaku and give your prof or teacher a good smack.

NotReady
11-19-2006, 04:09 AM
[ QUOTE ]

In your opinion, where does this argument fail? Does it fail?


[/ QUOTE ]

I googled this and found 2 constructions of Hume's argument. Sorry I lost the link and forgot the first construction (which was false) but remember the second, also false, which goes something like this:

Why does each succeeding judgment have to be weaker than the preceding? In other words, .99 * .999 * .9999 etc, approaches 1 rather than 0.

When I first read it, my opinion was it's a false regression anyway. It confuses the idea of mathematical probability with intuitive probability, i.e., an equivocal use of the word probable.

BTW, the paper I read said almost all philosophers and logicians reject this argument of Hume's. It also said Hume's meaning in this section is unclear, which is why the author made 2 reconstructions of Hume's wording.

arahant
11-19-2006, 04:20 AM
[ QUOTE ]
Why does each succeeding judgment have to be weaker than the preceding? In other words, .99 * .999 * .9999 etc, approaches 1 rather than 0.


[/ QUOTE ]
Doesn't this still approach 0? Certainly, it doesn't approach 1.

I thought at first there might be some series of pathological distributions that wouldn't approach 0, but it doesn't seem to me that that can be the case without breaking the basic rules of probability.

Edit: ummm, ok, I guess there are lots of them, and they aren't even pathological :)

NotReady
11-19-2006, 04:33 AM
Ok, found the link
here (http://www.humesociety.org/hs/issues/v11n2/dewitt/dewitt-v11n2.pdf)

I think my math was wrong (did this on the fly), but check out what he says; I think he's right about the logic.

NotReady
11-19-2006, 04:41 AM
P.S. - I'm not really that interested in the math here because I think it's wrong to put a percent probability on what is basically a subjective "feeling".

There's something in this that reminds me of Zeno's paradox proving that motion is impossible. It can probably be solved mathematically (I've seen one for Zeno), but I think it's flawed conceptually as well.

arahant
11-19-2006, 04:47 AM
[ QUOTE ]
Ok, found the link
here (http://www.humesociety.org/hs/issues/v11n2/dewitt/dewitt-v11n2.pdf)

I think my math was wrong, did this on the fly, but check out what he says, I think he's right about the logic.

[/ QUOTE ]
Thanks. Your math was only half-wrong (it doesn't approach 1). I guess it's pretty obvious you can make the sequence not approach 0... That's what I get for being up this late. That is sort of my favorite argument.

FWIW, his first reconstruction is basically a formal statement of my original argument about there being no meaningful regression at all. I feel smart for hitting that on the fly, and dumb for missing the second argument, even when you put it right in front of me... guess the night is a wash :)

Aleo
11-19-2006, 11:25 AM
[ QUOTE ]
Why does each succeeding judgment have to be weaker than the preceding? In other words, .99 * .999 * .9999 etc

[/ QUOTE ]

This is interesting. That paper you linked to is pretty good. Thanks

Even if such sequences of successive judgements exist where the resulting certainty does not approach zero, can we be justified in saying that this is the case when we actually reason?

What I'm considering now is the idea that if I allow these higher order judgements, I could possibly enumerate them in a way that orders them from greatest possible error to lowest. Of course this would still be an infinite enumeration but might look something like what is described in the paper. I will discount the idea presented in the paper that there is a highest possible meaningful certainty like .999999999999999999999 as we are dealing with infinite regress anyways, and it seems silly to limit 'possible certainty' but not limit possible judgements.

The point is, if I start with my greatest possible errors, then it might make sense to say that I can move to higher order judgements with greater certainty and only reduce my overall certainty by a small amount (which really only expresses our usual philosophical scepticism).

Is this a fair move? Or will there still be higher-order doubts that will infinitely regress to zero because our possible errors cannot be ordered from large to small? Could there be constant error possibilities that apply repeatedly and in a constant amount?

I don't think so. For example, going back to the dice case...

I might be .9 certain that the 10 sided die will roll a 1-9
I might refine this to .89-.91 considering the chance that the die is not symmetrical. I might refine this to .895 to .915 considering the chance that the die is unfairly thrown, etc...

With each successive refinement however, I will start to talk about more and more unlikely things. After a while I'll be refining things based on the chance that the die might quantum mechanically teleport across the universe, or that a meteor might crush the die before it stops rolling. In other words, things are getting more and more unlikely, and it starts to look like a sequence of higher-order judgements made with ever-increasing certainty.

Another thing I am thinking about... Does it really change my degree of belief to say

"I'm .9 certain of x"
"I'm .89 to .91 certain of x"
"I'm .87 to .93 certain of x"
etc...

In each case, if I had to gamble, I'd still pick .9, so it might not truly matter that my interval is widening. The only place it might matter is where the interval gets squashed near 1 or 0 or some other value, but there is probably still a meaningful criterion for picking one single value as the point at which we would gamble, and I suspect that this would still equal our initial probability.

So while these higher order judgements might widen the interval, my actions will still rationally correspond to the initial probability.

madnak
11-19-2006, 02:31 PM
I think the argument is valid if properly interpreted, but that's kind of a dodge. I don't really think it needs to be extended so far anyhow. If you believe there's a 90% chance of something happening, then you have to ask "based on what do I believe that," etc. You're ultimately going to come down to weaker and weaker assumptions, i.e. "the physical universe is real," "my observations are valid," "2+2=4" (though I doubt Hume would include that, it is a valid inclusion). Ultimately pure axiomatic assumption underlies any conclusion, but more than that, pure inductive assumptions underlie any empirical conclusion. Working from those pure assumptions toward the conclusion implies a series that approaches 0. There are an infinite number of intermediary steps, any one of which can be infinitely extended - for any 0 < k < 1, y = k^x approaches 0 as x approaches infinity.

tshort
11-20-2006, 11:39 AM
Aleo,

Regardless of how you order the probabilities, your certainty will approach zero when infinitely applied.

At some point, you have to have a base argument that is true and does not need justification from other evidence. If you are going to assume that all judgments or arguments must be justified from other evidence, then you can't refute the regress argument.

There has to be some base judgment that is assumed to be 100% correct and not based on other potentially uncertain evidence.

At some point arguments are assumed to be true. I would like to think that the following could be assumed to be 100% true:

The earth revolves around the sun
Dogs have 4 legs
I am living and conscious
etc...

Aleo
11-20-2006, 04:42 PM
[ QUOTE ]
I would like to think that the following could be assumed to be 100% true:

The earth revolves around the sun
Dogs have 4 legs
I am living and conscious
etc...

[/ QUOTE ]

But, of course, Hume will strongly disagree here, and makes a pretty good argument for it. I think he is right.

At very least, he is right about obviously empirical claims like the earth revolving around the sun. Of them we cannot be completely certain.

Dogs having 4 legs might be different. Perhaps we could consider this just a part of the meaning of 'Dog' and merely true by definition, or true analytically. Putting aside my hesitation to accept analytic truths at all, even if this is so, these kinds of truths are empty. They are true just because we choose to define them in such a way. They don't really tell us anything meaningful about the world and will probably not be useful in grounding our other judgements. Actually maybe in the case of mathematical truths or the probability calculus itself I could take this line... I'm gonna think about this some more although my intuition is that this will be a tricky line to take.

[ QUOTE ]
Regardless of how you order the probabilities, your certainty will approach zero when infinitely applied.


[/ QUOTE ]

No, this took me a second to grasp as well, but it's not necessarily the case.

If, for example, each succeeding judgement gives me one order of magnitude higher certainty, I don't regress to zero.

i.e. (.9)(.99)(.999)(.9999)(.99999)...

doesn't approach zero. It converges to roughly .89.

I actually like this idea a lot because it's just a small decrease and seems only to reflect our usual philosophical scepticism about things, without completely undermining rationality. Additionally it makes some sense as it seems like our increasingly higher order judgements in the ordering I described get sillier and sillier and thus, we are more and more certain each time.
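The convergence of (.9)(.99)(.999)... is easy to check numerically (a quick sketch):

```python
# Each successive judgement is an order of magnitude more certain
# than the last: multiply (1 - 10^-k) for k = 1, 2, 3, ...
certainty = 1.0
for k in range(1, 30):
    certainty *= 1 - 10.0 ** -k

print(certainty)  # settles near 0.8900, not 0
```

The product loses almost all of its ground in the first two or three factors and then barely moves, which matches the intuition that the later, sillier doubts cost almost nothing.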

The paper linked to in this thread has a brief discussion of this I think, but also doesn't consider it a refutation for other reasons.

Regards
Brad S

jogsxyz
11-20-2006, 05:57 PM
I was 90% certain judgment was spelled with only one 'e'. Was shocked to find that it could be spelled with one, judgment, or two, judgement 'e's.

Aleo
11-20-2006, 06:17 PM
[ QUOTE ]
I was 90% certain judgment was spelled with only one 'e'. Was shocked to find that it could be spelled with one, judgment, or two, judgement 'e's.

[/ QUOTE ]

See, diminishing certainty. F*^%ing Hume.

Regards
Brad S

madnak
11-20-2006, 09:52 PM
[ QUOTE ]
If, for example, each succeeding judgement gives me one order of magnitude higher certainty, I don't regress to zero.

ie (.9)(.99)(.999)(.9999)(.99999)....

[/ QUOTE ]

But you can't increase the certainty as you approach your fundamentally unjustified assumptions - the implication of that would be that those raw assumptions have 100% certainty. And as the steps aren't discrete and finite, no curve is "steep" enough to diminish the trend toward 0. Between any two "steps" are an infinite number of intermediary steps and therefore in order to make even the smallest step one must uncouple oneself from any kind of rational shielding.

tshort
11-20-2006, 10:30 PM
[ QUOTE ]
But, of course, Hume will strongly disagree here, and makes a pretty good argument for it. I think he is right.

At very least, he is right about obviously empirical claims like the earth revolving around the sun. Of them we cannot be completely certain.

[/ QUOTE ]

You are right that Hume would make a good argument against basic truths.

I think we have to assume that what we know is based on experiences of our perception. You wouldn't need justification for an experience.

Hume might argue that your perception is not 100% reliable. Within our perception, we can gain knowledge through experiences. However, we have no way to confirm that our perception is true. Or, we can't confirm that we know what we know.

I think regardless of how you argue for some set of "basic truths," Hume would have a good argument against it.

[ QUOTE ]
ie (.9)(.99)(.999)(.9999)(.99999)....

doesn't approach zero. It approaches (.89)

[/ QUOTE ]

Yes, I missed that it would approach .89.

Now here's the problem. For this to be true, your sequence of infinitely justifying probabilities is .9, .99, .999, and eventually .9999~ on to infinity.

Then, you would agree that .9999~ = 1. So, your probabilities converge to a probability that is 100% correct.

arahant
11-20-2006, 11:18 PM
Hmmm... that's kind of how I messed up... so the question is, is there a sequence of probabilities that doesn't converge to 1, and whose product doesn't converge to 0? There probably is, no?

tshort
11-21-2006, 03:31 AM
[ QUOTE ]
hmmm...that's kind of how i messed up...so the question is, is there a sequence of probabilities that doesn't converge to 1, and whose product doesn't converge to 0? There probably is, no?

[/ QUOTE ]

No, all other cases should approach 0.
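That "No" can be made concrete with a sketch: for the product to avoid 0, the certainties must tend to 1, and quickly enough (loosely, the doubts 1 - p_k have to shrink fast enough to add up to something finite):

```python
# Two sequences of certainties, both tending to 1:
#   1 - 1/k   rises too slowly: the running product still dies off
#   1 - 1/k^2 rises fast enough: the running product survives
slow = fast = 1.0
for k in range(2, 100001):
    slow *= 1 - 1 / k
    fast *= 1 - 1 / k ** 2

print(slow)  # telescopes to 1/100000, heading to 0
print(fast)  # approaching the exact limit 1/2
```

So a sequence whose terms don't converge to 1 always sends the product to 0, but convergence to 1 alone isn't sufficient either; the rate matters.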

Also, realize that Hume is referring to the certainty that our original assumption is correct. i.e.:

A 1 will not land 90% of the time on a 10-sided die.

I can't be 100% sure that the die is weighted correctly.

I can't be 100% sure, if I measure the weight, that my measurement methods are correct.

etc...

Our original assumption could be anything, not necessarily the probability of an event happening.

So, the certainty with which a belief is true diminishes to 0. Hume is basically claiming that we can't know anything with absolute certainty including that knowledge exists. It is all a matter of convention.


Aleo,

You could argue that while you can't prove an assumption due to a series of potentially uncertain justifications, you can't disprove the assumption either.

PairTheBoard
11-21-2006, 05:02 AM
Here's a thought I don't see mentioned. Suppose you have evidence E which implies 90% probability for proposition P. That means, equivalently, that E implies 10% probability for (Not P). If 10% doubt about evidence E automatically forces a downward revision in the 90% probability it implies for P, it should also force a downward revision in the 10% probability it implies for the proposition (Not P). But this is a contradiction.

To see the contradiction more easily, suppose E implies a 50% probability for proposition P - and therefore also a 50% probability for the proposition (Not P).


PairTheBoard

Aleo
11-21-2006, 01:03 PM
[ QUOTE ]
Here's a thought I don't see mentioned. Suppose you have evidence E which implies 90% probability for proposition P. That means, equivalently, that E implies 10% probability for (Not P). If 10% doubt about evidence E automatically forces a downward revision in the 90% probability it implies for P, it should also force a downward revision in the 10% probability it implies for the proposition (Not P). But this is a contradiction.


[/ QUOTE ]

This will not work. For us to hang onto certainty, all the judgements need to be correct, and hence, we must employ the product theorem

P(A & B) = P(A) x P(B)

But in the case of our uncertainty, we just need to be wrong once, and for 'or' events like this we use

P(A or B) = P(A) + P(B) - P(A) x P(B)

and in the end, no matter how many judgements are applied, our certainty + uncertainty = 1

Think of it like rolling dice. The more you roll, the odds of rolling a specific number go up if you only need to roll it once. On the other hand, they go way down if you have to roll that number every time.
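The two cases in the dice analogy can be sketched for a fair six-sided die:

```python
# 'Or' across rolls: a chosen face comes up at least once.
# 'And' across rolls: the chosen face comes up every single time.
for n in (1, 4, 20):
    at_least_once = 1 - (5 / 6) ** n   # climbs toward 1
    every_time = (1 / 6) ** n          # collapses toward 0
    print(n, at_least_once, every_time)
```

Needing to be wrong only once compounds upward; needing to be right every time compounds downward, which is the asymmetry the post is pointing at.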

Regards
Brad S

PairTheBoard
11-21-2006, 02:52 PM
[ QUOTE ]
[ QUOTE ]
Here's a thought I don't see mentioned. Suppose you have evidence E which implies 90% probability for proposition P. That means, equivalently, that E implies 10% probability for (Not P). If 10% doubt about evidence E automatically forces a downward revision in the 90% probability it implies for P, it should also force a downward revision in the 10% probability it implies for the proposition (Not P). But this is a contradiction.


[/ QUOTE ]

This will not work. For us to hang onto certainty, all the judgements need to be correct, and hence, we must employ the product theorem

P(A & B) = P(A) x P(B)

But in the case of our uncertainty, we just need to be wrong once, and for 'or' events like this we use

P(A or B) = P(A) + P(B) - P(A) x P(B)

and in the end, no matter how many judgements are applied, our certainty + uncertainty = 1

Think of it like rolling dice. The more you roll, the odds of rolling a specific number go up if you only need to roll it once. On the other hand, they go way down if you have to roll that number every time.

Regards
Brad S

[/ QUOTE ]

I don't see how what you're saying really addresses my point. An example with rolling a die would go more like this. The evidence E is that it's a fair six-sided die, which implies the numbers 1-5 will come up with probability 5/6. But then we have doubt cast on whether the die is fair. Maybe it's loaded. Does this doubt reduce the chance of rolling 1-5? Why? If it did, why doesn't it just as logically follow that the 1/6 chance of rolling a six must be reduced? You can't reduce both probability estimates.

The doubt about the die's fairness makes us less certain about the probability estimate of 5/6, but it doesn't change the estimate.

PairTheBoard