
View Full Version : Some Verification From Mathematicians Please


David Sklansky
05-26-2007, 05:52 PM
Pair The Board has been insisting that the way I approach some problems has no right to be called authoritatively mathematical because, while the arithmetic is right, the underlying assumptions are just my opinion. I maintain that it is only in the most nitpicking sense that one can maintain that any other assumption is worth considering. Sort of like why solipsism shouldn't be considered. But he says mathematicians are on his side, because one happened to chime in regarding a very specific point.

This general argument moved into the specific realm of jurors' opinions of a defendant's guilt. I thought it would be a good idea to make them think about this opinion in probability terms, and the following exchange ensued. I'm hoping some mathematicians and scientists out there will take my side, even if there are complex technical reasons why my position isn't logically flawless.

Quote:
--------------------------------------------------------------------------------


Quote:
--------------------------------------------------------------------------------

I gave a precise definition of the pseudo probability, as you call it. The juror is asked to imagine that there are 100 trials with the exact same evidence. How many of those defendants, IN HIS OWN PERSONAL OPINION, will be innocent? As it is, the juror is expected to give the answer "not many" before he convicts. So all I am saying is that it would be nice for jurors to be told what number should be considered not many.


--------------------------------------------------------------------------------



I cannot imagine how there could possibly be 100 different trials with the exact same evidence. That scenario is a figment of your imagination. When I imagine 100 copies of the one situation that is in front of me, all I can see are either 100 guilty defendants or 100 innocent ones. I just don't know which. That is a philosophical difference in how we look at it. I am not bound by your philosophical view nor by the imaginary figment you have conjured.

PairTheBoard


--------------------------------------------------------------------------------



You are simply wrong. The thought experiment, while contrived, is not unimaginable. And it has nothing to do with philosophy. To show this, imagine that there is a horse race picking contest based totally on the information in the Daily Racing Form. I'll say the races were already run, to avoid a nitpick about past events vs. future events.

There are one hundred DIFFERENT races with totally different horses, eight horses in each race. But amazingly, the past performances for horses one through eight are EXACTLY the same as far as what is in the racing form. In other words, all number ones look alike, all number twos look alike, etc. But they are NOT IDENTICAL horses. There are differences among them, as well as among the conditions of the races, some of them relevant, such as height and weight, or what the track bias was that day. But that information isn't available to you, just as all evidence is not available at the trial.

Anyway, you are now asked to pick the winner of each race. Say number three looks much the best, so you pick him. But that means you would pick number three in ALL races. Now your contention, if translated to this example, would be that number three will either win all races or lose all races. But this would obviously not be the case, even if they were running on a straightaway with no racing luck involved.

If you change all the races to two-horse races between Innocent and Guilty, I would hope that you would now understand that my thought experiment does not presuppose some debatable "philosophy".
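The hundred-races setup can be simulated directly. This is purely an illustrative sketch, assuming the hidden differences (height, weight, track bias) act like independent noise on top of identical printed form; under that assumption the horse with the best form wins some races and loses others, which is the whole point of the thought experiment.

```python
import random

random.seed(1)

def run_contest(n_races=100, n_horses=8, hidden_sd=1.0):
    """Count how many of n_races are won by horse 3 (index 2)."""
    wins_for_three = 0
    for _ in range(n_races):
        strengths = []
        for i in range(n_horses):
            base = 1.0 if i == 2 else 0.0          # identical printed form in every race
            hidden = random.gauss(0.0, hidden_sd)  # height, weight, track bias, ...
            strengths.append(base + hidden)
        if max(range(n_horses), key=lambda i: strengths[i]) == 2:
            wins_for_three += 1
    return wins_for_three

print(run_contest(), "of 100 races won by horse 3")  # some, but not all
```

The `hidden_sd` knob is an assumption of the sketch: the larger the unobserved differences relative to the visible form edge, the more often the "best-looking" horse loses.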

Shine
05-26-2007, 06:06 PM
Who cares about the "100 trials" argument? Is asking the juror his opinion of the probability of guilt really so hard? I would like to hear PTB on this.

Shine
05-26-2007, 06:14 PM
I take your side, DS, but I don't like the way that you've presented this.

As I read the above, what I think (I could be wrong) you two are saying is: DS thinks that in the hypothetical trials, unreported facts are independent and free to change, while PTB thinks the man is either guilty 100/100 or not guilty 100/100, because he thinks it is not reasonable to consider some things "independent" and that the world exists only in its current state.

PairTheBoard
05-26-2007, 07:06 PM
[ QUOTE ]
Anyway you are now asked to pick the winner of each race. Say number three looks much the best. So you pick him. But that means you would pick number three in ALL races. Now your contention, if translated to this example, would be that number three will either win all races or lose all races. But this would obviously not be the case even if they were running on a straightaway with no racing luck involved.



[/ QUOTE ]

I understand your method works well for picking horses. From what I've heard, you have perfected the method to such a degree that you are one of the rare successful handicappers. At least, I've heard you make bets on the horses, and I can't imagine you doing this over a long period of time if you weren't successful at it.

So I'm not really fundamentally opposed to the Bayesian approach to probability when it has good applications. The real question is whether the court case is a good application.

A big problem in applying the approach to court cases is the way evidence is gathered. It's not the kind of straightforward reporting of data you get on a horse racing form. Police tend to find suspects and focus in on them. They then look for circumstances that link the suspect to the crime. This means that not all evidence collected is independent; it's usually only the very initial evidence that is unbiased. What I can't imagine is how the correlation of evidence so produced can possibly be automatically modeled. The jury's human judgment of the relative strength of such evidence, taken as a whole, is vital in my opinion.

For example, in your Shoe Size thread you have people convinced that the appropriate model is 1 million identical cases with 800,000 guilty and 200,000 innocent based on the pre-shoe-size evidence. With, say, 1% of the general population matching the new shoe size evidence, you conclude the innocent percentage must now be 2,000/(2,000 + 800,000), or about one quarter of one percent. Why do we have the feeling that something is wrong here?
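The arithmetic being described can be spelled out directly, with the numbers taken from the post itself:

```python
# Sklansky's model: 1,000,000 identical cases, 800,000 guilty and 200,000
# innocent on the pre-shoe-size evidence; 1% of innocents match the shoe size.
guilty_matching = 800_000            # every guilty defendant matches by assumption
innocent_matching = 200_000 * 0.01   # 2,000 innocents match by coincidence

p_innocent = innocent_matching / (innocent_matching + guilty_matching)
print(f"{p_innocent:.4%}")  # ~0.2494%, i.e. about a quarter of one percent
```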

The reason is that it's not an accurate model. Consider the following model. The police investigate 1 billion similar cases. They find pre-shoe-size evidence in 1 million of those cases, whereby 800,000 suspects are guilty and 200,000 are innocent. Determined to nail the cases down, the police find additional circumstances that match all 1 million suspects to their crimes. The one and only trial we are in just happens to be the one where the matching circumstance is the shoe size.

Not only do you not get the same Bayesian conclusion for this model, but you really don't know how much of this kind of thing was happening for all the other evidence collected in the 1 billion cases. We're just assuming the police had 80% accuracy for that evidence. In reality, this correlation of evidence may be happening for a lot of the evidence.
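One way to phrase this objection in Bayesian terms (my gloss, not PairTheBoard's wording): if investigators are guaranteed to turn up *some* matching circumstance for every suspect, guilty or innocent, then the existence of a match has likelihood ratio 1 and moves the probability nowhere.

```python
def posterior_guilty(prior, p_match_if_guilty, p_match_if_innocent):
    """Bayes' rule: P(guilty | a match was found)."""
    num = prior * p_match_if_guilty
    return num / (num + (1 - prior) * p_match_if_innocent)

# Sklansky's reading: a match is automatic if guilty, a 1% coincidence if innocent.
print(posterior_guilty(0.8, 1.0, 0.01))   # ~0.9975

# The counter-model: police find *a* match for everyone, so observing a
# match is equally likely either way and the prior doesn't move.
print(posterior_guilty(0.8, 1.0, 1.0))    # 0.8 -- back to the prior
```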

Furthermore, we just happen to be able to see the flaw in your straightforward model for this situation. In general, the flaw may not be so apparent. But if we get the feeling something's wrong, it's probably a good idea to pay attention to it.

PairTheBoard

bunny
05-26-2007, 08:05 PM
I'm an ex-mathematician but with only a fleeting interest in probability. Your interpretation seems obvious and "standard".

I think what PairTheBoard is referring to is the difficulty of ascribing a probability to a one-off event. The guy's either guilty or he isn't. There is a valid philosophical question as to what it means to say "The probability that event X occurred is 40%", but I think he is making too much of it in this case.

Piers
05-26-2007, 09:38 PM
[ QUOTE ]
I gave a precise definition of the pseudo probability, as you call it. The juror is asked to imagine that there are 100 trials with the exact same evidence. How many of those defendants, IN HIS OWN PERSONAL OPINION, will be innocent? As it is, the juror is expected to give the answer "not many" before he convicts. So all I am saying is that it would be nice for jurors to be told what number should be considered not many.

[/ QUOTE ]

I think the idea you are trying to express is well formed; I am not sure whether it is a good idea in practice.

Firstly, a lot of people have problems with numbers. So using something fuzzy like “not many” is going to cause less confusion than a number like 5%; or rather, the people who would be confused by “not many” are likely intelligent enough to sort things out for themselves.

Secondly, using a number like 5% makes it clear to everyone that the legal system is designed so that 5% of the time there will be a miscarriage of justice. That’s likely to frighten a lot of people and risk the legal system losing the respect of the public. Using fuzzy language like “almost certain” allows a smokescreen to be put over the whole subject, and avoids alarming people too much.

[ QUOTE ]
I cannot imagine how there could possibly be 100 different trials with the exact same evidence. That scenario is a figment of your imagination. When I imagine 100 copies of the one situation that is in front of me, all I can see are either 100 guilty defendants or 100 innocent ones. I just don't know which. That is a philosophical difference in how we look at it. I am not bound by your philosophical view nor by the imaginary figment you have conjured.

[/ QUOTE ]

Possibly PairTheBoard is getting a little carried away with his contrariness, although I cannot see anything logically invalid in his statement.

For instance, I am sure everyone, including you, agrees that the “100 different trials with the same evidence” are a figment of your imagination.

How I love “the grass is green”, “no, you’re wrong, the sky is blue” arguments.

PairTheBoard
05-26-2007, 10:15 PM
[ QUOTE ]
[ QUOTE ]
I cannot imagine how there could possibly be 100 different trials with the exact same evidence. That scenario is a figment of your imagination. When I imagine 100 copies of the one situation that is in front of me, all I can see are either 100 guilty defendants or 100 innocent ones. I just don't know which. That is a philosophical difference in how we look at it. I am not bound by your philosophical view nor by the imaginary figment you have conjured.

[/ QUOTE ]


Possibly PairTheBoard is getting a little carried away with his contrariness, although I cannot see anything logically invalid in his statement.


[/ QUOTE ]

I'm not fundamentally opposed to this kind of imaginary conjuring. I realize it can work well for things like making betting odds on horse races. However, I don't know that it always applies so well. There's a Bayesian philosophy that says it always applies. That's the philosophy I'm not bound by. And I'm very dubious about its application to weighing evidence in a court case.

A problem I have trying to imagine the 100 court cases with identical evidence is that I can't imagine the larger pool of data from which that evidence was gathered in each of the 100 cases. How much of that data was looked at by police, how much was ignored, and how much was filtered to fit the suspect under investigation? How might the identical evidence in the 100 cases be correlated differently with respect to the larger pool of data? If we keep it simple and assume the total universe of data is identical in all 100 cases, then we are at my point where they're either all guilty or all innocent.

There's also some question in my mind whether we are misconceptualizing things at an even more basic level. I'm not sure measuring things on a scale of [0,1] is even correct. There's a weighing of evidence on the scales of justice. The defendant is presumed innocent at the beginning. Suppose evidence presented at the beginning is exculpatory. Is he now considered more innocent than innocent? If he starts at 0, is he now still at 0? If additional evidence is added for guilt, does he now go up the scale from 0 just as he would if there were no exculpatory evidence?

The whole Sklansky approach just looks very, very iffy to me. I'm not just being contrarian. I know the mathematical probability better than Sklansky does. If he would study a lot more math himself, he might come to realize how tricky it can sometimes be, and the kind of absurd results you can get if you're not careful. When I see as many warning lights flashing as I see in this situation, I'm certainly not going to allow myself to be rushed into rash conclusions based on misleading applications of dubious math models.

PairTheBoard

PLOlover
05-26-2007, 11:05 PM
[ QUOTE ]
There's also some question in my mind whether we are misconceptualizing things at an even more basic level. I'm not sure measuring things on a scale of [0,1] is even correct. There's a weighing of evidence on the scales of justice. The defendant is presumed innocent at the beginning. Suppose evidence presented at the beginning is exculpatory. Is he now considered more innocent than innocent? If he starts at 0, is he now still at 0? If additional evidence is added for guilt, does he now go up the scale from 0 just as he would if there were no exculpatory evidence?

[/ QUOTE ]

The first thing I thought of was the tension between:
a) presumption of innocence (100% not guilty - 0% guilty)
b) no preconceptions/mind not made up (50% not guilty - 50% guilty)

PLOlover
05-26-2007, 11:22 PM
As a non math guy let me just throw something out there.
2 data items:

1) jury - 80% sure guilty -> 80% of the time the guy is guilty, or, as in DS's post, 800k/1mill are guilty

2) in a model of 1 million defendants, 800k are guilty and have the shoe size; 200k are innocent, of whom 2k have the shoe size

-----------------------
opinion: even though it is natural to think 1) and 2) are talking about the same data set, I wonder if this natural assumption is true.

I'm not really sure how to word it, sorry. It seems to me there's some crossover between 1) and 2) that kind of begs the question or something.

----

ok how's this. let me reword stuff to accentuate my point.

1) the jury will pick correctly 8/10 times.
2) in 1,000,000 trials a person is found guilty 800k times.

Now, given my new 1) and 2), does that change the problem outcome?
Notice that now some guilty men are found not guilty
and
some innocent men are found guilty.

----------------------------------
another point maybe here is what I was trying to think of.
Let's assume that the jury's guilt percentage is not a linear function.
I realize in the original DS post it was defined, so I'm not sure
how relevant this is.
For example, at
99% 9.5/10 are actually guilty
95% 9/10 are actually guilty
90% 8/10 are actually guilty
80% 5/10 are actually guilty
70% 2/10 are actually guilty

-------------------------------
I guess I'm rambling, sorry, just throwing stuff out there.

jason1990
05-26-2007, 11:55 PM
Like it or not, you are sitting smack dab in the middle of what may be one of the most contentious philosophical debates of the 20th century. bunny is right that much of this centers around assigning probabilities to single events. The frequency philosophy says that probability only makes sense in the context of a long sequence of independent trials. For example, its adherents contend that it makes no sense to talk about the probability that Hillary Clinton will be the next president, since it is a one time event. On the other hand, Bayesian philosophy says that objective probabilities do not exist. According to them, the probability that Hillary Clinton will be the next president does not exist in any objective sense. You can talk about it, but only by asserting your own subjective view of the matter. All subjective views are equally valid. If you think the brand new quarter in my pocket has probability 1/3 of landing heads, then that is your view. Who am I to argue? In Bayesian philosophy, all that matters is that your subjective views are consistent, so that a Dutch book cannot be made against you. This debate relates to the jury discussion because an individual murder is a one-time event. Frequentists say probability concepts do not apply. Bayesians say they apply, but it is all subjective. Any consistent opinion on the matter is as valid as any other.

In my opinion, both philosophies are mostly useless. The Bayesian philosophy, taken literally, is absurd. For instance, if there is physical symmetry in a system, such as the rolling of a symmetric die, I am convinced that, at least approximately, all sides are equally likely and that this is an objective statement. The symmetry of the fair coin tells us, objectively, that the probability of heads is (at least approximately) 1/2. A Bayesian who says otherwise is deluded.

Frequentists can be just as ridiculous. Imagine this: I have a deformed coin. I am about to flip it twice. I will destroy it after two flips. I claim that the probability of flipping heads first, tails second is the same as the probability of flipping tails first, heads second. This claim is based on symmetry in time and I consider it an objective statement of fact. But a frequentist would tell me my claim is meaningless. Since the coin will be destroyed, there is no long run sequence, so the concept of probability does not apply.
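jason1990's time-symmetry claim about the deformed coin is pure algebra, not a long-run statement: for any bias p, P(heads then tails) = p(1−p) = (1−p)p = P(tails then heads). A quick check:

```python
# Whatever the coin's unknown bias p, the two orderings are equally likely:
# P(H then T) = p*(1-p) and P(T then H) = (1-p)*p are the same product,
# so the claim holds before a single flip, and survives destroying the coin.
for p in (0.1, 0.37, 0.5, 0.93):
    assert p * (1 - p) == (1 - p) * p
    print(p, p * (1 - p))
```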

Regarding the horses, I am imagining a scenario in which you come to me with the racing form and you ask me, "what is the probability horse 3 will win?" I would probably try to build a model and come up with a number for you. If pressed, I would freely acknowledge that this number represents my subjective opinion. But I would try to argue that it is a "good" opinion by appealing to whatever facts I uncovered and incorporated in my model.

On the other hand, I might just answer you by saying, "I don't know." I think this is a legitimate answer and, in this case, is probably the only completely objective answer. If I was intent on remaining 100% objective, I would simply refuse to answer your question. You might then ask, "do you believe, beyond a reasonable doubt, that horse 3 will win?" I think I could answer that question without deciding on a specific numeric probability. You might even ask, "do you think there is a preponderance of evidence indicating that horse 3 will win?" I think I could also answer that question without deciding on a specific numeric probability.

In other words, if I met a juror who refused to assign a numeric value to the probability of guilt, and justified it by claiming a desire to remain as objective as possible, then I would consider that a rational stance and I would not be concerned that this stance, in and of itself, would prevent him from doing his job as a juror.

But if I met someone who refused to assign a numeric value to the probability that I win my next bet at the roulette wheel, and justified it by claiming a desire to remain as objective as possible, then I would consider that person to be in denial about the practical reality of probability.

luckyme
05-27-2007, 02:01 AM
[ QUOTE ]
You might even ask, "do you think there is a preponderance of evidence indicating that horse 3 will win?" I think I could also answer that question without deciding on a specific numeric probability.

[/ QUOTE ]

I was wondering what the answer would be if the situation was -
There are 3 races today, you must bet a years salary on one randomly drawn horse in one of them at the morning line, your choice of which one after the draw. Would a frequentist just shug and pick one blind or cave in and call DS?

luckyme

David Sklansky
05-27-2007, 02:21 AM
I don't think you are getting what I am arguing for. It is not that there is a good way to assign probabilities to a one-time event. It is that there is a good way to clarify what an individual means if he chooses to specify a probability, or even if he chooses to say, more vaguely, that a one-time outcome is likely or not likely to be true. Which is that if this weren't a one-time event, and identical evidence were presented every time, different outcomes would occur with a certain frequency in that person's mind.

It is similar to the idea that if someone lays you 6-5 on one coin flip he has given you 50 cents. You don't need repeated trials for that to be true.
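The 6-5 arithmetic is a single-flip expected value (stakes in dollars; "laying 6-5" means he risks $6 against your $5):

```python
# Someone lays you 6-5 on a fair coin flip: you risk $5 to win $6.
p_win = 0.5
ev = p_win * 6 - (1 - p_win) * 5   # expected profit of the one flip
print(ev)  # 0.5 -- he has handed you fifty cents, no repeated trials required
```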

luckyme
05-27-2007, 02:30 AM
[ QUOTE ]
According to them, the probability that Hillary Clinton will be the next president does not exist in any objective sense. You can talk about it, but only by asserting your own subjective view of the matter. All subjective views are equally valid.

[/ QUOTE ]

I raised half-in with AA today in a cash game, and was raised all-in by 10-3 suited. True.
I now realize I was likely up against a frequentist, and I no longer think of his choice as ridiculous but merely as an expression of a profound philosophical position. I hope he has tenure... I need the money.

So, he'll bet me even money (equally valid) on any of the candidates for the parties right now... pm me.

luckyme

David Sklansky
05-27-2007, 02:50 AM
Do you mind telling me your profession and your academic background, including the names of the schools? I need to see how many rusty cylinders in my brain I need to dust off before continuing the debate.

PairTheBoard
05-27-2007, 02:55 AM
I'm not sure the Sklansky Model can even handle "Presumed Innocence". It's been said that the shoe size evidence might be enough to convict by itself. I'm not sure the Sklansky Model implies this. In fact, I think that from a starting guilt probability of 0, the Sklansky Model would give no weight to the evidence at all, much less move the line to 99% guilty.

Suppose we presume just a little guilt. Say we assume the defendant is from the same city of 1 million people as the murder. Then our imaginary group of defendants is the 1 million people in that city, and the defendant is judged to have 1 chance in a million of being the murderer. Now the shoe size evidence is presented. That cuts the population of the imaginary model down to 10,000, which includes the defendant and the murderer. That means the line has been moved from 1 chance in a million to 1 chance in 10,000 of guilt. Not even close to 99%.

But that is not a real presumption of innocence. A real presumption of innocence should start out with 0% guilt, not 1 in a million. So let's move it closer to 0%. Let's presume the defendant is from the same continent of 1 billion people as the murderer. After the shoe size evidence he then has 1 chance in 10 million of guilt. Practically nothing. In fact, as we move the starting level for guilt closer and closer to the 0% where it should be, the post-shoe-size-evidence guilt also goes to 0%.
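The scaling being described here can be checked directly. This sketch assumes the setup in the post: a uniform prior over a suspect pool, and shoe size evidence that eliminates everyone but the 1% of the pool who happen to match, one of whom is the murderer.

```python
def guilt_after_shoe(pool_size, match_rate=0.01):
    """Uniform prior over pool_size suspects; the shoe size evidence
    retains only the matching fraction, and the defendant is one of them."""
    matching = pool_size * match_rate
    return 1 / matching

print(guilt_after_shoe(1_000_000))       # 1 in 10,000 for the city
print(guilt_after_shoe(1_000_000_000))   # 1 in 10 million for the continent
```

As the pool grows toward "everyone who can't be logically eliminated", the posterior shrinks toward zero, which is exactly the objection.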

Presumption of innocence under Sklansky's Model would mean that no one could ever be convicted on circumstantial evidence. Even if you had a chain of such evidence, which Sklansky claims would parlay guilt up to 100%, if we start with a presumed innocence level of 0%, none of the circumstantial evidence would move the line.

PairTheBoard

jason1990
05-27-2007, 11:42 AM
[ QUOTE ]
I don't think you are getting what I am arguing for. It is not that there is a good way to assign probabilities to a one time event. It is that there is a good way clarify what an individual means if he chooses to specify a probability. Or even if he chooses to more vaguely say a one time outcome is likely or not likely to be true. Which is that if this wasn't a one time event, and identical evidence is presented every time, different outcomes would occur with a certain frequencey in that person's mind.

[/ QUOTE ]
So suppose I choose to specify a probability: the defendant is guilty with probability 20%. If all you are saying is that there is a good way to clarify what I mean by that, then of course you are correct. The standard method is to use wagering. My probability statement means that if I were to bet on this, I would need at least a 4:1 payoff before I would consider it a favorable bet.

Of course, my opinions should be consistent with the laws of probability. One of those is the law of large numbers. So if you asked me about a hypothetical sequence of independent and identically distributed trials just like this one, I would have to tell you that I thought 20% of those trials involved a guilty defendant. This would typically be regarded as a consequence of my belief. But it is clearly equivalent, at least mathematically.
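The equivalence between a stated probability and a minimum acceptable payoff is just the break-even condition p·x − (1−p) = 0 per unit staked. A sketch (my formulation of the wagering standard, not jason1990's):

```python
def breakeven_payoff(p):
    """Payoff-to-stake ratio x at which a bet on an event of probability p
    breaks even: p*x - (1 - p)*1 = 0, so x = (1 - p) / p."""
    return (1 - p) / p

print(breakeven_payoff(0.20))  # 4.0 -- "guilty with probability 20%" means
                               # needing at least 4:1 before the bet is favorable
```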

Non-mathematically, though, I prefer the current wagering standard to the hypothetical long-run standard you are proposing. This is because hypothetical long runs can sometimes be difficult to imagine. Suppose you asked me what I think the probability of life on Mars is. Am I supposed to imagine a sequence of alternate universes? These universes should clearly contain all the same factual elements that I know the real universe contains. But how should I fill in the unknown elements? Clearly, this is up to me to do subjectively, and it is exactly what you are asking me to do. But it is so much easier for me to just tell you how much I would be willing to bet, rather than to try to cook up an infinite sequence of alternate imaginary universes.

David Sklansky
05-27-2007, 12:29 PM
"Presumption of innocence under Sklansky's Model would mean that no one could ever be convicted on circumstantial evidence. Even if you had a chain of such evidence, which Sklansky claims would parlay guilt up to 100%, if we start with a presumed innocence level of 0%, none of the circumstantial evidence would move the line."

No evidence, eyewitness or otherwise, could move it. If there were a googolplex people on the planet who could not be logically eliminated, virtually no amount of evidence should convict. So what all that means is that the presumption of innocence shouldn't be defined as zero chance of guilt. Obviously. Because zero chance of guilt MEANS guilt is IMPOSSIBLE, which would mean evidence indeed could never convict anybody.
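The point about a literal-zero prior falls straight out of Bayes' rule: posterior odds are prior odds times a likelihood ratio, and zero times anything is zero. A sketch (the 100:1 likelihood ratios are arbitrary illustrative numbers):

```python
def update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio, expressed as a probability."""
    num = prior * likelihood_ratio
    return num / (num + (1 - prior))

p = 0.0
for _ in range(10):        # pile on ten pieces of 100:1 evidence
    p = update(p, 100.0)
print(p)                   # 0.0 -- a literal-zero prior never moves

q = 1e-9                   # a tiny but nonzero presumption of innocence
for _ in range(10):
    q = update(q, 100.0)
print(q)                   # climbs essentially to 1 as evidence accumulates
```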

PairTheBoard
05-27-2007, 12:46 PM
[ QUOTE ]
"Presumption of innocence under Sklansky's Model would mean that no one could ever be convicted on circumstantial evidence. Even if you had a chain of such evidence, which Sklansky claims would parlay guilt up to 100%, if we start with a presumed innocence level of 0%, none of the circumstantial evidence would move the line."

No evidence, eyewitness or otherwise, could move it. If there were a googolplex people on the planet who could not be logically eliminated, virtually no amount of evidence should convict. So what all that means is that the presumption of innocence shouldn't be defined as zero chance of guilt. Obviously. Because zero chance of guilt MEANS guilt is IMPOSSIBLE, which would mean evidence indeed could never convict anybody.

[/ QUOTE ]

Of course that's true. Which means that in order to apply your model, we would have to agree on the correct initial nonzero probability of guilt based on the presumption of innocence. I see problems with that. The results of all your Bayesian calculations for circumstantial evidence will be contingent on getting that initial probability right.

PairTheBoard

jason1990
05-27-2007, 01:06 PM
I am not sure which thread to reply in, but this thread seems as good as any. I think even a pure Bayesian would acknowledge that probabilities of 0 and 1 are qualitatively different from all others. They are potentially falsifiable, so they cannot be completely subjective. But your point is well taken and I think it raises an important question. If we want to use one all-encompassing probability model in which to do our deliberations, where should we start the line? At the beginning, in the absence of any evidence, what value should we assign to the probability of guilt? Personally, I think it is possible to presume someone is innocent while still acknowledging that there is some chance they are guilty. So I do not think presumption of innocence would require us to start our model at 0%.

Here is a definition of "presumption of innocence" from http://www.lectlaw.com/def/i047.htm :

[ QUOTE ]
INNOCENCE, PRESUMPTION OF - The indictment or formal charge against any person is not evidence of guilt. Indeed, the person is presumed by the law to be innocent. The law does not require a person to prove his innocence or produce any evidence at all. The Government has the burden of proving a person guilty beyond a reasonable doubt, and if it fails to do so the person is (so far as the law is concerned) not guilty.

[/ QUOTE ]

To me, this means two things. First, it means the obvious, which is that the default verdict, in the absence of a sufficient proof by the Government, is not guilty. But second, it means that the trial starts with "no information." We cannot consider the indictment itself to be information. It does not matter, for example, if 90% of all indictments in this particular court and with this particular prosecutor have historically resulted in conviction. We are not allowed to factor this information into our deliberations. So the trial really starts with a blank slate.

So the question is, if we want to use this global Bayesian model, then how do we translate "no information" into a probability of guilt? My answer is: we don't. "No information" really means no information. You cannot form conclusions without premises. If you do, you are just guessing. Granted, at some point, the jurors may have to do some "guessing," but I think they should avoid unnecessary guessing and this initial step is unnecessary in my opinion.

luckyme
05-27-2007, 01:49 PM
[ QUOTE ]
So I do not think presumption of innocence would require us to start our model at 0%.

[/ QUOTE ]

Before you mathematicians get too far down the track for me: if we grab a random person in the USA and charge him, wouldn't our starting point be 1/300 million? He is being treated as just as innocent as the other 299,999,999, which seems to be what 'presumed innocent' must mean. Surely he's not less innocent than they are to start with?

Does this break down in a small pool? If there is a murder in a closed system (say a prison cell) of 4 people, and the only evidence is that the murderer has a size 10 shoe, that is conclusive evidence if only one of the 4 has it. If the population of New York were the original suspects, it would be minuscule evidence. Somehow the original possibility does seem to influence the worthiness of other evidence. At least, the model seems to work, even if others would also.

Is the point PTB is making related to it seeming unfair that a cell defendant doesn't start with the same level of 'presumed innocent' as a New York resident? Wouldn't 'present in the vicinity' be actual evidence against him that we're just accepting before it's been entered on the stand in these examples (since it's in the premise, and 'will be testified to' applies)?

luckyme

PairTheBoard
05-27-2007, 02:27 PM
[ QUOTE ]
Before you mathematicians get too far down the track for me: if we grab a random person in the USA and charge him, wouldn't our starting point be 1/300 million? He is being treated as just as innocent as the other 299,999,999, which seems to be what 'presumed innocent' must mean. Surely he's not less innocent than they are to start with?


[/ QUOTE ]

The point I'm making is how this relates to our treatment of the first Bayesian calculation in our model. Suppose 10% of the population is black and our first evidence is a witness saying a black person did it. The defendant is black. Does that go into our model as a 10% probability of guilt? If we parcel out these questions to jurors, that's likely how they will think. But if we do a Bayesian calculation on an initial guilt probability of 1 in 300 million, our revised probability of guilt is 1 in 30 million. That's a huge difference in how the model is going to be working.

And I think it's clear that, based on no other information, the murderer is as likely to be any other of the 30 million black people in the country as the defendant. The 10% probability of Guilt is totally bogus. Yet in the minds of the Jurors they see evidence which has some kind of 10% weight. Asking them to parcel out the totality of evidence and assign Probability Numbers to subsets of it is going to be a disaster.

And that doesn't even include the further complication that the witness might be mistaken.
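(Editorial aside: the 1-in-300-million to 1-in-30-million update described above is a straight application of Bayes' rule. Here is a minimal sketch of that arithmetic, using only the numbers given in the post.)

```python
# Bayes' rule: P(guilty | evidence) =
#   P(evidence | guilty) * P(guilty) / P(evidence)
prior = 1 / 300_000_000      # initial guilt probability: 1 in 300 million
p_e_given_guilty = 1.0       # the defendant is black, matching the witness
p_e_given_innocent = 0.1     # 10% of the population is black

posterior = (p_e_given_guilty * prior) / (
    p_e_given_guilty * prior + p_e_given_innocent * (1 - prior)
)
print(posterior)  # ~3.33e-08, i.e. about 1 in 30 million -- nowhere near 10%
```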

PairTheBoard

PLOlover
05-27-2007, 03:50 PM
[ QUOTE ]
Presumption of Innocence under Sklansky's Model

[/ QUOTE ]

Yet the stipulation that jurors be unbiased toward either side points to starting more at 50-50, and if the prosecutor can't get it up to 95% or so then they acquit.

luckyme
05-27-2007, 03:57 PM
[ QUOTE ]
[ QUOTE ]
Before you mathematicians get too far down the track for me: if we grab a random person in the USA and charge him, wouldn't our starting point be 1/300 million? He is being treated as just as innocent as the other 299,999,999, which seems to be what 'presumed innocent' must mean. Surely he's not less innocent than they are to start with?


[/ QUOTE ]

The point I'm making is how this relates to our treatment of the first Bayesian calculation in our Model. Suppose 10% of the population is black and our first piece of evidence is the Witness saying a Black person did it. The defendant is black. Does that go into our model as a 10% probability of guilt? If we parcel out these questions to Jurors, that's likely how they will think. But if we do a Bayesian Calculation on an initial Guilt Probability of 1 in 300 million, our revised Probability of Guilt is 1 in 30 million. That's a huge difference in how the model is going to work.

And I think it's clear that, based on no other information, the murderer is as likely to be any other of the 30 million black people in the country as the defendant. The 10% probability of Guilt is totally bogus. Yet in the minds of the Jurors they see evidence which has some kind of 10% weight. Asking them to parcel out the totality of evidence and assign Probability Numbers to subsets of it is going to be a disaster.

And that doesn't even include the further complication that the witness might be mistaken.

PairTheBoard

[/ QUOTE ]

I don't have a problem with you assuming incredibly stupid jurors; you're likely not far off the mark. It's your reliance on there not being a defense attorney in attendance that concerns me.

luckyme

LA_Price
05-27-2007, 04:03 PM
Am I the only one who read this thread and pictured Sklansky about to threaten PairTheBoard with a poker?

PairTheBoard
05-27-2007, 04:06 PM
I think it's interesting to compare this idea of the kind of bet you would be willing to make to the instruction by the court on reasonable doubt. Notice the "bet" they are alluding to is probably not one that Jurors will interpret as monetary. It's more of a "bet your life" kind of bet, which I think is more than appropriate for the situation where the defendant's life is on the line.



[ QUOTE ]
Any doubt which would make a reasonable person hesitate in the most important of his or her affairs.

[/ QUOTE ]

The Juror is going to understand that according to a wide array of his own personal experiences in living. He would not "hesitate" to cross a bridge that's known to be safe. He would hesitate if there was reasonable doubt as to its safety. He has a variety of things like this to relate it to.

That's a good thing. It relates to the situation the defendant is in much better than Mandating Numeric Odds the Juror would be willing to lay when betting, say, $100. A lot is bound to get lost in that translation.

Expecting the Juror to come up with his own Numeric estimates for subsets of the evidence is even more problematic.

PairTheBoard

David Sklansky
05-27-2007, 04:07 PM
"The point I'm making is how this relates to our treatment of the first Bayesian calculation in our Model. Suppose 10% of the population is black and our first piece of evidence is the Witness saying a Black person did it. The defendant is black. Does that go into our model as a 10% probability of guilt? If we parcel out these questions to Jurors, that's likely how they will think."

What? Like you say, it is a one in 30 million chance, if all of those people had equal opportunity to do the crime. Who would think this information translated to a 10% chance of guilt? Perhaps some would. And you think they should be allowed to be jurors in whodunit crimes?

PairTheBoard
05-27-2007, 04:13 PM
[ QUOTE ]
[ QUOTE ]
Presumption of Innocence under Sklansky's Model

[/ QUOTE ]

Yet the stipulation that jurors be unbiased toward either side points to starting more at 50-50, and if the prosecutor can't get it up to 95% or so then they acquit.

[/ QUOTE ]

No. That would make no sense. That would say that presumption of innocence equates to a coin flip that he's guilty. The fact that you would say that just shows how iffy this whole idea of equating levels of credence with numbers that work like probabilities would be in this situation.

PairTheBoard

Phil153
05-27-2007, 04:20 PM
[ QUOTE ]
Of course that's true. Which means that in order to apply your model we would have to agree on the correct initial nonzero probability of guilt based on presumption of innocence. I see problems with that. The result of all your Bayesian calculations for circumstantial evidence will be contingent on getting that initial probability right.

PairTheBoard

[/ QUOTE ]
The presumption of innocence relates to the mindset necessary to avoid confirmation bias. It is in no way mathematical.

PairTheBoard
05-27-2007, 04:34 PM
[ QUOTE ]
"The point I'm making is how this relates to our treatment of the first Bayesian calculation in our Model. Suppose 10% of the population is black and our first piece of evidence is the Witness saying a Black person did it. The defendant is black. Does that go into our model as a 10% probability of guilt? If we parcel out these questions to Jurors, that's likely how they will think."

What? Like you say, it is a one in 30 million chance, if all of those people had equal opportunity to do the crime. Who would think this information translated to a 10% chance of guilt? Perhaps some would. And you think they should be allowed to be jurors in whodunit crimes?

[/ QUOTE ]

Not if they're going to be forced to work with the Sklansky Model. But they do just fine working on this problem normally, waiting to judge the totality of evidence to see if it's something that would cause them to hesitate in the most important affairs of their lives, and then making a yes/no decision.

I have my doubts that professional evidence evaluators plugging numbers for each piece of evidence into a machine and then instructing the machine on the correct way to correlate the numbers would come anywhere near the Jury of my Peers in finding justice.

Could they do a better job if they knew a little conditional probability theory? Maybe. Although I'm starting to have my doubts even about that for this Jury Deliberation situation. If knowing a little conditional probability theory is going to produce sophomoric misapplications of it, the Jurors might be better off without it.

PairTheBoard

Phil153
05-27-2007, 04:39 PM
[ QUOTE ]
I have my doubts that professional evidence evaluators plugging numbers for each piece of evidence into a machine and then instructing the machine on the correct way to correlate the numbers would come anywhere near the Jury of my Peers in finding justice.

[/ QUOTE ]
This triggers a memory of something I read once. I'm pretty sure they did just that, and found that they do a better job than juries.

Does anyone remember this?

PairTheBoard
05-27-2007, 04:40 PM
[ QUOTE ]
[ QUOTE ]
Of course that's true. Which means that in order to apply your model we would have to agree on the correct initial nonzero probability of guilt based on presumption of innocence. I see problems with that. The result of all your Bayesian calculations for circumstantial evidence will be contingent on getting that initial probability right.

PairTheBoard

[/ QUOTE ]
The presumption of innocence relates to the effect of confirmation bias. It is in no way mathematical.

[/ QUOTE ]

I agree. But you have to convert it into a Number to even get the Sklansky Model started. If you can't convert that into a number, how can you expect the Jurors to convert a subset of the evidence into a number? Or even the totality of evidence for that matter.

PairTheBoard

PairTheBoard
05-27-2007, 04:45 PM
[ QUOTE ]
Am I the only who read this thread and pictured Sklansky about to threaten PairTheBoard with a poker?

[/ QUOTE ]

I don't know. But it looks like he's now threatening to bore jason1990 to death with a debate.

PairTheBoard

Divad Yksnal
05-27-2007, 05:15 PM
Verification from mathematicians? This can be worked out from pure thought. I could have done it at about 8 years old, well before I fully grasped general relativity.

If you guys can't solve this, I don't know what to say.

DY

PLOlover
05-28-2007, 12:45 AM
[ QUOTE ]
No. That would make no sense. That would say that presumption of innocence equates to a coin flip that he's guilty. The fact that you would say that just shows how iffy this whole idea of equating levels of credence with numbers that work like probabilities would be in this situation.

PairTheBoard

[/ QUOTE ]

Well, what I mean is that suppose you come to a fork in the road and person A wants you to come his way and person B wants you to come his way. In order to be fair you can't be predisposed one way or the other about which way you want to go. That's what I mean by 50-50. I mean, it is an adversarial system.

Note that this does not mean you're gonna find the guy guilty 50% of the time, although in civil cases it might.

Piers
05-28-2007, 09:08 AM
There are two ways you can approach probability: one as a pure maths theory with only a vague awareness that the stuff could be used in anything, the other as a tool to model real situations.

If you are using probability to model a real world situation, the only real measure of success is how well your model works in practise.

Creating a useful probabilistic model can be difficult, especially if the data to base the model on is small or singular. It is possible to argue about the best way of doing this, which is what you guys seem to be doing. The problem appears to be that you are implying you are doing something more fundamental than that, implying depth where there is none.

DS’s “The juror is asked to imagine that there are 100 trials with the exact same evidence” is just a mental technique for understanding the process better; DS could just as easily have said, “The juror is asked to read a book on basic applicable probability.” In fact that would be a better idea: when someone gets picked for jury service, have them do a course in probability theory and only allow the ones who pass to sit on whodunit trials. Might as well add law, psychology, plus anything else that looks useful. Although just using expert systems instead of jurors would be much ‘cheaper’.

Now back to David’s little problem.

[ QUOTE ]
Suppose a man is on trial for murder and the jury is on the verge of acquitting him in spite of their strong suspicion of guilt because the evidence leaves room for reasonable doubt. But at the last minute a footprint is uncovered at the murder scene. It is definitely the murderer's. And it is the same size as the defendant's. If it wasn't, it's instant acquittal. But since it is, the jury is now contemplating a conviction.

[/ QUOTE ]

I think any reasonable model of the situation will conclude that the rarer the shoe size the more likely the suspect is guilty. Remember a model is not a real thing in any sense, just a tool to help us understand the situation.

By how much will the chance the suspect is guilty increase? Well, one can model the situation by considering the proportion of people in the population with that shoe size, and assuming that the previous evidence in the trial and the suspect’s shoe size are independent, or making some guess at the level of dependence from a casual inspection of the evidence so far. That should be good enough for a rough order-of-magnitude estimate.
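(Editorial aside: Piers’s order-of-magnitude recipe is easiest to state in odds form: multiply the prior odds of guilt by a likelihood ratio of roughly 1 / (shoe-size frequency), under his independence assumption. The sketch below is not from the thread, and the specific numbers — 2:1 prior odds, an 8% shoe-size frequency — are invented for illustration.)

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds in favor to a probability."""
    return odds / (1 + odds)

# Assumed numbers: the jury's odds of guilt before the footprint, and the
# population frequency of the shoe size (treated as independent of the
# earlier evidence, per Piers's simplifying assumption).
prior_odds = 2.0    # jury leans 2:1 toward guilt but would still acquit
shoe_freq = 0.08    # 8% of the population wears this shoe size

# The culprit matches the size for sure; an innocent matches 8% of the time,
# so the likelihood ratio is 1 / 0.08 = 12.5.
posterior_odds = update_odds(prior_odds, 1 / shoe_freq)
print(odds_to_prob(posterior_odds))  # 25:1 in favor, i.e. ~0.96
```

The rarer the shoe size, the bigger the likelihood ratio and the larger the jump, which matches Piers’s point that any reasonable model makes a rarer size stronger evidence.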

I think (it’s possible I am confused here) a lot of PairTheBoard’s complaints are about DS not taking the dependency between various pieces of evidence in a trial seriously enough? A subject you would need to study very carefully before designing an expert system to replace jurors, but probably beyond the mental capabilities of most jurors.

PairTheBoard
05-28-2007, 03:45 PM
It's not just the disconnect between the model and reality that I'm worried about. It's the attempt to standardize the concept of Reasonable Doubt by way of a Number. Look at the Human terms the court puts it in.

Reasonable Doubt -
"Any doubt which would make a reasonable person hesitate in the most important of his or her affairs."

Now look at the responses people gave when Sklansky asked people to translate that into a probability of Guilt. He did this in another thread. The Numbers are all over the place. The best Number, given by a couple of posters, ZERO, was mostly scoffed at. Yet numbers like 95% were considered reasonable. Why not just leave it in the human terms above that people understand?

If I am getting ready to cross a bridge - an important matter for me if it's unsafe - and I know that the bridge is going to fail and I'm going to die 1 time in 20 that I cross it, do you think I just MIGHT hesitate to cross it?

That's the kind of thing the court wants people to think about when they have the Life of the Defendant on the line. That's the kind of thing I want the Jurors to think about if they have My Life in their hands. Not some highfalutin theoretical Number.
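(Editorial aside: the 1-in-20 bridge arithmetic is the flip side of the 95% threshold discussed earlier in the thread. If jurors' numbers were taken at face value and were perfectly well calibrated — a large assumption, and one PairTheBoard is disputing — then convicting at exactly a 95% probability of guilt would mean roughly one innocent defendant in every twenty such convictions. A trivial sketch:)

```python
threshold = 0.95   # proposed "beyond reasonable doubt" probability of guilt
trials = 1000      # imagined convictions made right at that threshold

# Expected number of innocent defendants among those convictions, assuming
# the jurors' 95% figures are perfectly calibrated probabilities.
wrongful = trials * (1 - threshold)
print(f"expected innocent among {trials} convictions: {wrongful:.0f}")  # 50
```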

PairTheBoard

PLOlover
05-28-2007, 04:32 PM
[ QUOTE ]
Yet numbers like 95% were considered reasonable.

[/ QUOTE ]

I think it is very, very difficult for people to differentiate between 1/20 and 1/50; heck, for some people 1/10 and 1/100 seem pretty much the same.

I think most people would agree that reasonable doubt is in the 90-99 range somewhere, maybe mostly because in real human terms people can only tell the difference between 80-99 and 90-99.

Divad Yksnal
05-29-2007, 12:31 PM
Ok, none of you have proven capable of this. David Sklansky requires a "mathematician" to clarify it for him. I'll point you in the right direction, for now. Herein lies the crux of the problem, quoting jason1990. You don't need to be a math guy to solve it. If you did, that would mean it isn't a thinking problem but a math nerd problem. Why eliminate the smartest? But you will need to be able to think. Something few of the posters on this forum seem capable of doing.

Good luck. I don't expect any of you to get this.

"Regarding the horses, I am imagining a scenario in which you come to me with the racing form and you ask me, "what is the probability horse 3 will win?" I would probably try to build a model and come up with a number for you. If pressed, I would freely acknowledge that this number represents my subjective opinion. But I would try to argue that it is a "good" opinion by appealing to whatever facts I uncovered and incorporated in my model."

"But if I met someone who refused to assign a numeric value to the probability that I win my next bet at the roulette wheel, and justified it by claiming a desire to remain as objective as possible, then I would consider that person to be in denial about the practical reality of probability."

PairTheBoard
05-29-2007, 03:10 PM
[ QUOTE ]
Ok, none of you have proven capable of this. David Sklansky requires a "mathematician" to clarify it for him. I'll point you in the right direction, for now. Herein lies the crux of the problem, quoting jason1990. You don't need to be a math guy to solve it. If you did, that would mean it isn't a thinking problem but a math nerd problem. Why eliminate the smartest? But you will need to be able to think. Something few of the posters on this forum seem capable of doing.

Good luck. I don't expect any of you to get this.

"Regarding the horses, I am imagining a scenario in which you come to me with the racing form and you ask me, "what is the probability horse 3 will win?" I would probably try to build a model and come up with a number for you. If pressed, I would freely acknowledge that this number represents my subjective opinion. But I would try to argue that it is a "good" opinion by appealing to whatever facts I uncovered and incorporated in my model."

"But if I met someone who refused to assign a numeric value to the probability that I win my next bet at the roulette wheel, and justified it by claiming a desire to remain as objective as possible, then I would consider that person to be in denial about the practical reality of probability."

[/ QUOTE ]

I think you underestimate the ability of some of us here to understand. And it looks like you have missed the point. Our objections are to the Model. Does it fit reality? Can it be tested? What are the reasons given to justify it? Oftentimes, Sklansky omits mention of the model entirely. We are supposed to accept his conclusions with no chance to investigate the model. When pressed to present us with one he often can't. When he tries, his reasons for why it should be accepted are debatable. Then he starts another thread and claims we misunderstood him.

PairTheBoard

hasugopher
05-29-2007, 03:38 PM
David, the correct answer is so blatantly obvious. You know this.

note: only read the OP

vhawk01
05-29-2007, 06:14 PM
[ QUOTE ]
Verification from mathematicians? This can be worked out from pure thought. I could have done it at about 8 years old, well before I fully grasped general relativity.

If you guys can't solve this, I don't know what to say.

DY

[/ QUOTE ]
SAR?