PDA

View Full Version : What millennium problem will be solved next?


MelchyBeau
12-19-2006, 08:43 PM
http://www.claymath.org/millennium/

And which do you think will have the greatest value?

Last semester we studied the P=NP problem in depth, and while I don't think it will be solved anytime soon or be the next one to be solved, I believe it has the most potential to be beneficial to society. If we were able to show P=NP, then we could solve so many optimization problems accurately and quickly; I'm sure UPS and FedEx would bend over backwards to have that solution.

But I think the next one to be solved is the Navier-Stokes problem. There has been a fair amount of headway on it, and we may be nearing a solution.

Utah
12-19-2006, 09:20 PM
[ QUOTE ]

Last semester we studied the P=NP problem in depth, and while I don't think it will be solved anytime soon or be the next one to be solved, I believe it has the most potential to be beneficial to society. If we were able to show P=NP, then we could solve so many optimization problems accurately and quickly; I'm sure UPS and FedEx would bend over backwards to have that solution.

[/ QUOTE ]

This is the first time I have heard of the P vs NP problem, so I did a quick cram course on the topic. I just have a question: can human beings solve NP problems that computers today can't? Can you give me a real-world case of an NP problem that a human can solve but a computer cannot?

MelchyBeau
12-19-2006, 09:38 PM
Solving the problem isn't the issue. If a problem is in NP, then we can solve it algorithmically; however, that does not mean we can solve it quickly, i.e. in polynomial time.

I guess I need to define what polynomial time means. Generally, to measure the time taken to solve a problem, we count the number of operations it needs. A quick, simple example: do these numbers sum up to some integer K?

It will take approximately n steps to do this, i.e. add each number up, then check the sum. This problem would be in P.

But there may be a problem that takes on the order of 2^n steps to solve, and that would be exponential time.

A problem is in NP if, given an instance of the problem and a possible solution to it, we can check that solution in polynomial time.

There are several problems called NP-complete: if any NP-complete problem is also in P, then P=NP.

An easy-to-understand problem that is NP-complete is

SUBSET SUM
Q: Given a set of integers S and an integer k, is there a subset of S whose elements sum to k?
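To make the P-versus-NP distinction concrete, here is a minimal Python sketch (my own illustration, not part of the original post): checking a proposed certificate takes polynomial time, while the naive search tries all 2^n subsets.

```python
from itertools import combinations

def verify(candidate, S, k):
    """Polynomial-time check of a certificate: is `candidate`
    a sub(multi)set of S whose elements sum to k?"""
    pool = list(S)
    for x in candidate:
        if x in pool:
            pool.remove(x)          # respect multiplicities
        else:
            return False
    return sum(candidate) == k

def subset_sum(S, k):
    """Naive search: try all 2^n subsets -- exponential time."""
    for r in range(len(S) + 1):
        for combo in combinations(S, r):
            if sum(combo) == k:
                return list(combo)
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> [4, 5]
```

Verification scales polynomially in the input size, but the search loop examines up to 2^n candidates; that gap is exactly what the P=NP question is about.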

Not all problems can be solved in any amount of time, though. A good example of this is the HALTING problem: essentially, does this algorithm ever stop? There is no general algorithm to decide this, so without some sort of intelligence checking it, we may never know whether a given program stops.

thylacine
12-19-2006, 11:05 PM
[ QUOTE ]
http://www.claymath.org/millennium/

And which do you think will have the greatest value?

Last semester we studied the P=NP problem in depth, and while I don't think it will be solved anytime soon or be the next one to be solved, I believe it has the most potential to be beneficial to society. If we were able to show P=NP, then we could solve so many optimization problems accurately and quickly; I'm sure UPS and FedEx would bend over backwards to have that solution.

But I think the next one to be solved is the Navier-Stokes problem. There has been a fair amount of headway on it, and we may be nearing a solution.

[/ QUOTE ]

Is it known for sure that any of them have been solved?

thylacine
12-19-2006, 11:13 PM
[ QUOTE ]
[ QUOTE ]

Last semester we studied the P=NP problem in depth, and while I don't think it will be solved anytime soon or be the next one to be solved, I believe it has the most potential to be beneficial to society. If we were able to show P=NP, then we could solve so many optimization problems accurately and quickly; I'm sure UPS and FedEx would bend over backwards to have that solution.

[/ QUOTE ]

This is the first time I have heard of the P vs NP problem, so I did a quick cram course on the topic. I just have a question: can human beings solve NP problems that computers today can't? Can you give me a real-world case of an NP problem that a human can solve but a computer cannot?

[/ QUOTE ]

Utah?! I thought you worked on AI, and had expertise in this area. How could you not know about P and NP? Just curious, I am surprised by what you say. Do people study computer science without learning this?

MelchyBeau
12-19-2006, 11:15 PM
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

Utah
12-19-2006, 11:41 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]

Last semester we studied the P=NP problem in depth, and while I don't think it will be solved anytime soon or be the next one to be solved, I believe it has the most potential to be beneficial to society. If we were able to show P=NP, then we could solve so many optimization problems accurately and quickly; I'm sure UPS and FedEx would bend over backwards to have that solution.

[/ QUOTE ]

This is the first time I have heard of the P vs NP problem, so I did a quick cram course on the topic. I just have a question: can human beings solve NP problems that computers today can't? Can you give me a real-world case of an NP problem that a human can solve but a computer cannot?

[/ QUOTE ]

Utah?! I thought you worked on AI, and had expertise in this area. How could you not know about P and NP? Just curious, I am surprised by what you say. Do people study computer science without learning this?

[/ QUOTE ]

I simply write practical models to solve specific hard problems. I have absolutely zero formal training in computer science and I have never taken a class in computer science or neural nets (although I have read a ton on nets). I learned neural nets about 2 years ago because my company was about to crash and burn: the mathematician we hired spent months and failed miserably to solve the problem we needed solved. I ramp up quickly, and the prospect of losing your funding focuses the mind :)

That being said, the P=NP problem struck me immediately when I saw it in this post, because I spend a hell of a lot of time on these types of problems and I have lots of really cool ideas on how to solve them that I haven't read about anywhere else (in fact we had to hack the code of the leading neural net software product and extend it directly, because the product couldn't handle what we wanted to do). When I was cramming on this I came across a comment about how all the computers in the world couldn't solve some problems. That struck me as 100% relevant, because I have some problems where, if all the computational power in the world had started calculating at the big bang, the calculation would only be .00000000000000000000000000000000000000001% done. I needed to develop some shortcuts because I don't have that long to wait :)

I think I know how to solve it (given a set of integers S and an integer k, is there a subset of those integers whose sum equals k?) even with complex sets, but I need to test.

I am curious as to what proof is needed exactly, and I need to study this more because it is one of the most fascinating things I have come across. I can solve some very hard problems that seem to fit this model, but that does nothing toward proving a general case.

thylacine
12-19-2006, 11:45 PM
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

Duke
12-19-2006, 11:49 PM
I spent time working on P vs NP for a few months about 5 years ago. I ended up giving up on it, and made no real progress.

I think this has a good chance of being solved at some point, but not by me.

Utah
12-19-2006, 11:57 PM
[ QUOTE ]
I spent time working on P vs NP for a few months about 5 years ago. I ended up giving up on it, and made no real progress.

I think this has a good chance of being solved at some point, but not by me.

[/ QUOTE ]

What about work done in the area of practical application? I mean, maybe I can't solve the proof, but I could devise something that was fast or accurate 80% of the time. That would still be invaluable.

thylacine
12-20-2006, 12:07 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]

Last semester we studied the P=NP problem in depth, and while I don't think it will be solved anytime soon or be the next one to be solved, I believe it has the most potential to be beneficial to society. If we were able to show P=NP, then we could solve so many optimization problems accurately and quickly; I'm sure UPS and FedEx would bend over backwards to have that solution.

[/ QUOTE ]

This is the first time I have heard of the P vs NP problem, so I did a quick cram course on the topic. I just have a question: can human beings solve NP problems that computers today can't? Can you give me a real-world case of an NP problem that a human can solve but a computer cannot?

[/ QUOTE ]

Utah?! I thought you worked on AI, and had expertise in this area. How could you not know about P and NP? Just curious, I am surprised by what you say. Do people study computer science without learning this?

[/ QUOTE ]

I simply write practical models to solve specific hard problems. I have absolutely zero formal training in computer science and I have never taken a class in computer science or neural nets (although I have read a ton on nets). I learned neural nets about 2 years ago because my company was about to crash and burn: the mathematician we hired spent months and failed miserably to solve the problem we needed solved. I ramp up quickly, and the prospect of losing your funding focuses the mind :)

That being said, the P=NP problem struck me immediately when I saw it in this post, because I spend a hell of a lot of time on these types of problems and I have lots of really cool ideas on how to solve them that I haven't read about anywhere else (in fact we had to hack the code of the leading neural net software product and extend it directly, because the product couldn't handle what we wanted to do). When I was cramming on this I came across a comment about how all the computers in the world couldn't solve some problems. That struck me as 100% relevant, because I have some problems where, if all the computational power in the world had started calculating at the big bang, the calculation would only be .00000000000000000000000000000000000000001% done. I needed to develop some shortcuts because I don't have that long to wait :)

I think I know how to solve it (given a set of integers S and an integer k, is there a subset of those integers whose sum equals k?) even with complex sets, but I need to test.

I am curious as to what proof is needed exactly, and I need to study this more because it is one of the most fascinating things I have come across. I can solve some very hard problems that seem to fit this model, but that does nothing toward proving a general case.

[/ QUOTE ]

Okay, I understand your situation now (and good luck with your work; it sounds dire). The classic book on this topic (computational complexity) is the one by Garey and Johnson, but I can't remember other details such as the title. The importance of knowing this stuff is recognizing when you are almost certainly beating your head against a brick wall (or not).

Utah
12-20-2006, 12:13 AM
[ QUOTE ]
Okay, I understand your situation now (and good luck with your work; it sounds dire). The classic book on this topic (computational complexity) is the one by Garey and Johnson, but I can't remember other details such as the title. The importance of knowing this stuff is recognizing when you are almost certainly beating your head against a brick wall (or not).

[/ QUOTE ]

The odd thing is that everything I have seen on solving these types of problems just seems completely wrong and likely a dead end. I think there may be some advantages to never having formally studied a subject: your thinking is a heck of a lot freer and you are more likely to stumble onto a different path, whereas formal training is more likely to lead you to try to extend current models, i.e. your thinking gets corrupted. I posted that idea once in SMP and pretty much everyone disagreed with me :)

[ QUOTE ]
it sounds dire

[/ QUOTE ]
Nah. I just solve them and move on to the next problem. I hate being told something can't be done :)

MelchyBeau
12-20-2006, 01:38 AM
If you have an algorithm that runs in polynomial time but is not 100% accurate, then while useful, it would not be a solution to the question: does P=NP?

You have to count the number of operations it will take. For the SUBSET SUM problem, let's say n is the number of integers in your set. If your algorithm takes on the order of n^2, or n log n, or even n^10000000 operations for all n, then it settles P=NP; but if it takes 2^n operations to solve, then it is not a solution. Most graph theorists and computer scientists believe that P does not equal NP.
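It may be worth noting that SUBSET SUM has a well-known dynamic-programming algorithm whose O(n*k) running time looks polynomial but is only "pseudo-polynomial": k's magnitude can be exponential in the number of bits needed to write k down, so this does not settle P=NP. A sketch, assuming non-negative integers:

```python
def subset_sum_dp(S, k):
    """Decide SUBSET SUM in O(n*k) time for non-negative integers.
    Pseudo-polynomial: the bound depends on k's magnitude, not its bit-length."""
    reachable = {0}                                     # sums achievable so far
    for x in S:
        reachable |= {s + x for s in reachable if s + x <= k}
    return k in reachable
```

Doubling the number of bits in k squares the magnitude of k, so the "polynomial-looking" bound blows up exponentially in the input size.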

I would suggest 2 books: Combinatorial Optimization by Papadimitriou, and Graphs, Networks and Algorithms by Jungnickel.

Utah
12-20-2006, 02:25 AM
[ QUOTE ]
If you have an algorithm that runs in polynomial time but is not 100% accurate, then while useful, it would not be a solution to the question: does P=NP?

You have to count the number of operations it will take. For the SUBSET SUM problem, let's say n is the number of integers in your set. If your algorithm takes on the order of n^2, or n log n, or even n^10000000 operations for all n, then it settles P=NP; but if it takes 2^n operations to solve, then it is not a solution. Most graph theorists and computer scientists believe that P does not equal NP.

I would suggest 2 books: Combinatorial Optimization by Papadimitriou, and Graphs, Networks and Algorithms by Jungnickel.

[/ QUOTE ]

Well, my thinking can't be too screwy, because my approach to this problem would be to use parallel genetic NN algorithms. I found this paper from Papadimitriou link (http://72.14.203.104/search?q=cache:dHCjMySa0goJ:www.ifi.unizh.ch/groups/ailab/people/nitschke/refs/GA%27s/GA-rev-rep.pdf+Combinatorial+Optimization+by+papadimitriu&hl=en&gl=us&ct=clnk&cd=38&client=firefox-a) where he discusses such possibilities. In fact, the area of parallel genetic algorithms is where we needed to extend the current market software. However, he really doesn't discuss the classification problem except for a quick discussion of hybrid genetic algorithms. I think you would need multiple parallel genetic algorithms at different levels of granularity and/or at discrete problem classifications. For example, you could run the following parallel GAs: more/less, broad scale, finite scale, hit/miss, miss by x amount, variable X, variable Y, etc. Then you could layer on top additional genetic algorithms that read the results of the other algorithms. Maybe it is 3+ levels. Maybe somehow there is 2-way communication between the levels. I don't know. But it seems to me that there is a classification problem to be solved first. There is no way the problem can be solved just by using faster and more efficient algorithms. It seems the problem should be morphed into different problems that are highly valuable in the ultimate solution.

I am going to see if I can solve this for a simple 3-parameter A+B+C with 100 random digits under 1000, and with a random variable such that A+B+C=K. Then, if that works, I will try to expand to 20 variables. I want to design it in such a way that it is not highly specialized to the problem, so it can be easily adapted to different equations, such as combinations of plus and minus. Maybe this is easy peasy and it has been done over and over. But since I haven't tried it in a generalized multi-layered framework, I assume it will be quite a challenge :)
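For what a toy version of the genetic-algorithm idea might look like, here is a hypothetical minimal sketch of my own (a single, non-parallel GA for SUBSET SUM, not the multi-layered scheme described above): it evolves include/exclude bitmasks toward a subset summing to k, and being a heuristic it can fail.

```python
import random

def ga_subset_sum(S, k, pop_size=60, generations=300, seed=0):
    """Toy genetic algorithm for SUBSET SUM. Heuristic only:
    may return None even when a solution exists."""
    rng = random.Random(seed)
    n = len(S)

    def fitness(mask):                 # distance from target; 0 = solved
        return abs(sum(x for x, b in zip(S, mask) if b) - k)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:       # best individual hits k exactly
            return [x for x, b in zip(S, pop[0]) if b]
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1   # point mutation
            children.append(child)
        pop = survivors + children
    return None
```

Even when something like this finds answers quickly in practice, a heuristic success rate says nothing about worst-case polynomial time, which is what P=NP demands.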

Actually, I have no idea if this actually helps with P=NP or not. But it seems quite interesting, so I will try.

I apologize if I am sounding completely ignorant on the subject, but it is only a few hours old to me.

MelchyBeau
12-20-2006, 02:29 AM
the subset can be any size btw

Utah
12-20-2006, 02:33 AM
[ QUOTE ]
the subset can be any size btw

[/ QUOTE ]
Please explain.

MelchyBeau
12-20-2006, 02:59 AM
You have a set of n integers given to you; the subset can be of size 1 through n. So in our problem the size of the subset is not fixed; otherwise it would be fairly quick to do. It would have a run time of approximately n choose m, where m is the size of your subset. You could shorten the run time with a few combinatorial tricks, but that's not really important.
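To put rough numbers on that difference (my own illustration): fixing the subset size m caps the search at n choose m candidates, while allowing every size sums to 2^n.

```python
from math import comb

n = 30
fixed = comb(n, 5)                              # size exactly 5: C(30, 5)
every = sum(comb(n, m) for m in range(n + 1))   # all sizes: 2^30
print(fixed)   # 142506
print(every)   # 1073741824
```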

This problem is a well researched problem, and honestly I seriously doubt you have a solution to this.

Utah
12-20-2006, 03:14 AM
[ QUOTE ]
This problem is a well researched problem, and honestly I seriously doubt you have a solution to this.

[/ QUOTE ]
I would assume that it is highly unlikely that I could solve it or even offer new insight (I don't even know the problem well enough yet). However, I will have to prove that to myself. To be honest, I am more interested in the practical solution to real-world problems, for which I may have a shot, since I have done it before and have patents pending. I have the capability of thinking up wacky ideas that are way out in left field but which occasionally work :)

Utah
12-20-2006, 04:44 AM
[ QUOTE ]
You have a set of n integers given to you; the subset can be of size 1 through n. So in our problem the size of the subset is not fixed; otherwise it would be fairly quick to do. It would have a run time of approximately n choose m, where m is the size of your subset. You could shorten the run time with a few combinatorial tricks, but that's not really important.

This problem is a well researched problem, and honestly I seriously doubt you have a solution to this.

[/ QUOTE ]

I spent the last several hours researching, and there are some interesting papers on the use of neural networks. My guess from what I read is that neural nets may be great at approximating, but they cannot help a heck of a lot here. While this may be great for what I am doing, it says nothing about P=NP. But my eyes are going to fall out of my head if I read any more, and it is 3am, so I am going to sleep. I found 2 papers, but they are 15 years old. I need to find something more recent.

http://citeseer.ist.psu.edu/cache/papers...ao92finding.pdf (http://citeseer.ist.psu.edu/cache/papers/cs/172/ftp:zSzzSzwww.cs.adfa.oz.auzSzpubzSzxinzSzipl.pdf/yao92finding.pdf)
http://citeseer.ist.psu.edu/cache/papers...m92training.pdf (http://citeseer.ist.psu.edu/cache/papers/cs/2652/http:zSzzSzclasses.cec.wustl.eduzSz~cs582zSzpapers zSzNN-NP-hard.pdf/blum92training.pdf)

One more - this one speaks more directly to what I was thinking. However, it is 17 years old. Did people just stop thinking about this problem 15 years ago? lol
http://72.14.203.104/search?q=cache:WK4f...lient=firefox-a (http://72.14.203.104/search?q=cache:WK4fedozxyQJ:www.cs.uwyo.edu/~wspears/papers/ijcnn90.ps.gz+goldberg+np-complete&hl=en&gl=us&ct=clnk&cd=8&client=firefox-a)

FortunaMaximus
12-20-2006, 08:05 AM
<mutters something about a Pachinko machine>

Panning for gold with a double pan...

I think for the specific problem linked, the key point would be the sizing of the holes, the filters rather, that would allow for an efficient reorganization of the data into narrower groupings.

The sizing of the problem as astronomical may be artificial. Um, the obvious mismatched pairs should be filtered out really quickly, narrowing the possible pairs enough that a brute-force application should easily finish the rest of the problem.

So you'd have to focus basically on developing an algorithm that would be able to efficiently get you through the first part.

Drop a few dozen marbles in a warehouse full of empty beer bottles, the easiest way to locate those marbles would be to use an EMP then map it and look for the few dozen spikes, right?

So set the criteria for the correct pairs, then use a large-scale flash approach to detect anomalies which would show up. Take them out of the set, flash again, and so forth, until you have your hundred. 50 reps, right?

Hmm. I don't know. I stopped coding back when QBASIC was like, the new FORTRAN.

Utah
12-20-2006, 12:40 PM
[ QUOTE ]
I think for the specific problem linked, the key point would be the sizing of the holes, the filters rather, that would allow for an efficient reorganization of the data into narrower groupings.

The sizing of the problem as astronomical may be artificial. Um, the obvious mismatched pairs should be filtered out really quickly, narrowing the possible pairs enough that a brute-force application should easily finish the rest of the problem.

So you'd have to focus basically on developing an algorithm that would be able to efficiently get you through the first part.

[/ QUOTE ]

A trick is getting a computer to develop the filtering algorithms; it is no fun if you tell the computer what they are. For example, in the traveling salesman problem link (http://en.wikipedia.org/wiki/Traveling_salesman_problem), how does a computer (or even a person) figure out the narrowing classification?

Also, a strict narrowing approach may not work, because you may find that you are going down a dead end and that the solution set resides outside of the narrowed range.

Ultimately, though, you are talking about multiple-level classification systems. And for these types of problems, I think they are ideal. The trick, it seems, is finding a way past the problem of what and how to classify. I think there may be an approach where a computer can recognize an approximately infinite number of classes by breaking all classes into fundamental forms (shape, more/less, location, etc.).

MelchyBeau
12-20-2006, 03:56 PM
There are a ton of NP-Complete problems, Subset sum was just one of the easiest to explain. Many people who want to show P=NP look at different problems, another problem might be 'easier' to solve, I don't know

FortunaMaximus
12-20-2006, 07:20 PM
[ QUOTE ]
[ QUOTE ]
I think for the specific problem linked, the key point would be the sizing of the holes, the filters rather, that would allow for an efficient reorganization of the data into narrower groupings.

The sizing of the problem as astronomical may be artificial. Um, the obvious mismatched pairs should be filtered out really quickly, narrowing the possible pairs enough that a brute-force application should easily finish the rest of the problem.

So you'd have to focus basically on developing an algorithm that would be able to efficiently get you through the first part.

[/ QUOTE ]

A trick is getting a computer to develop the filtering algorithms; it is no fun if you tell the computer what they are. For example, in the traveling salesman problem link (http://en.wikipedia.org/wiki/Traveling_salesman_problem), how does a computer (or even a person) figure out the narrowing classification?

Also, a strict narrowing approach may not work, because you may find that you are going down a dead end and that the solution set resides outside of the narrowed range.

Ultimately, though, you are talking about multiple-level classification systems. And for these types of problems, I think they are ideal. The trick, it seems, is finding a way past the problem of what and how to classify. I think there may be an approach where a computer can recognize an approximately infinite number of classes by breaking all classes into fundamental forms (shape, more/less, location, etc.).

[/ QUOTE ]

Well, I don't think you'd be able to tell the computer how to do that. I agree that it removes the fun. Such an approach would require you to suggest a solution set that already exists, and that might only work under the following conditions:

That this first initial problem/solution has already been defined, and can be used to let the recursive AI come up with more creative, faster, more intuitive approaches to problems that are bigger in scale and possible solutions, after finding a faster method to a predetermined solution for the initial problem.

Basically, let it run with zero expectation of a solution. I think the value comes from what it tries to come up with rather than from it actually solving the problems, initially, at least for the first few thousand generations.

It's not very cost-effective considering the consumption and distribution of allocated computing power, which is, unfortunately, at this point a very small, finite resource.

I've thought about AI and possible/probable development paths for as long as I can remember, probably since I was in grade school. Along with infinity.

So I guess that develops a weird form of abstract thought that treats formal logic as a fallback of sorts.

As for figuring out the narrowing classifications, well, that's going to require a lot of multiprocessing time. A multipronged attack with better and better self-defined guesses, probably.

A very fast approach, theoretically, would require some form of quantum computing. But I think the sheer mass of dataflow going through the AI's analytical processes would simply overwhelm it if it wasn't malleable enough to anticipate the onslaught. The probability of an irrecoverable self-collapse would rise the more restrictions you put on its ability to define criteria for a solution set.

I don't know. It truly is a Pandora's Box, but in the sense that infinity is contained inside, and the box itself is a finite restriction. And in opening it, you may very well turn the whole thing completely inside out. As for the repercussions, well... Instinct tells me this isn't without its elements of danger. So be it. If it happens, causality would self-determine that this has already happened and we're just approaching it anyway.

And that may be the crux of the problem... THEN triggers the IF, but it may be a Schrödinger tiger determining it is the one to set its own criteria to exist, not Schrödinger himself. And I'm going to leave the issue here before I start rambling.

MelchyBeau
12-20-2006, 08:49 PM
The thing with quantum computing is that you only have some probability of getting the correct solution.

FortunaMaximus
12-20-2006, 09:38 PM
[ QUOTE ]
The thing with quantum computing is that you only have some probability of getting the correct solution.

[/ QUOTE ]

Well, yeah. But the arrival at a solution has a higher probability of occurring within the lifespan of the Universe.

Whereas with near-infinite permutations, a linear approach will get it done with near-certain probability, but you might be racing the Universe. At least where time is concerned.

The probability might very well increase with the depth/breadth of quantum staggering (layering?)... In any event, it might be far too soon to tell whether there is actually an upper limit to how big the overall information set can get.

If it's actually infinite, then the questions that are yet to be asked on most subjects, including undiscovered ones... Suddenly increase by an order of magnitude or two... And the validity of absolutes and static frames of reference suddenly seem to just define conditions for a single Universe.

Ridiculous leaps to be making, asking questions when the seemingly solvable ones have yet to be defined completely, never mind solved.

But that's how I roll, I guess. Turning redwoods into bonsai, looking at planets as baseballs and pebbles to be tossed around. So, yeah, I still dunno. But I'm sure having fun with the concept.

thylacine
12-20-2006, 10:42 PM
;) Cliff notes:

[ QUOTE ]
......... And I'm going to leave the issue here before I start rambling.

[/ QUOTE ]

FortunaMaximus
12-20-2006, 10:51 PM
[ QUOTE ]
;) Cliff notes:

[ QUOTE ]
......... And I'm going to leave the issue here before I start rambling.

[/ QUOTE ]

[/ QUOTE ]

<derisive snort> Meow, will ya? :P

arahant
12-21-2006, 01:00 AM
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

[/ QUOTE ]

Cliff Notes: Perelman wasn't the most detailed fellow... a Chinese mathematician took his proof and filled in some steps. All the non-Chinese experts think Perelman's proof was complete; the Chinese guy thinks he deserves the credit.

Details from New Yorker (http://www.newyorker.com/fact/content/articles/060828fa_fact2) .
An excellent article

thylacine
12-21-2006, 07:27 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

[/ QUOTE ]

Cliff Notes: Perelman wasn't the most detailed fellow... a Chinese mathematician took his proof and filled in some steps. All the non-Chinese experts think Perelman's proof was complete; the Chinese guy thinks he deserves the credit.

Details from New Yorker (http://www.newyorker.com/fact/content/articles/060828fa_fact2) .
An excellent article

[/ QUOTE ]

Wow!! Very interesting article. I would say slightly different "cliff notes". It sounds like (assuming the article is accurate) Perelman proved it fair and square, the Chinese guys Yau/Zhu/Cao are trying to rip him off of his result, ring-leader Yau has done it before (with Givental's result), and most people in the know realize this. It seems that Yau's ambition and inappropriate sense of entitlement have driven him to act in this way. (Did he really expect Chern to resign from being famous?!)

I fully sympathize with Perelman. I have personally experienced being sponged off and being ripped off, and I no longer collaborate with anyone or tell anyone what research I am doing. Mathematicians are generally more ethical than most other professions, but there are still problems with individuals who are blinded by ambition, and whose sense of entitlement makes them feel preordained to be the one to prove a result, even when they are legitimately beaten to it by someone else.


By the way, if anyone can post a readable reference to the statement of Thurston’s geometrization conjecture (now theorem), I would appreciate it. I know what a (triangulation of a) 3-manifold is. I guess I just want to know what the eight pieces are and how they are joined.

Utah
12-22-2006, 02:41 AM
I don't think quantum computers will get you anywhere unless they are on the order of 10^200 times faster than today's computers.

Here is a simple diagram I created in Powerpoint:

http://img226.imageshack.us/img226/6376/circlerl9.jpg

If I asked you to start at the middle and find the shortest path that touches all nodes at least once, it would probably take you under 10 seconds, and you would find it very uninteresting. However, a computer would have a terrible time trying to solve it, as the iterations are endless (since there is no requirement that a node cannot be visited twice). Now, the computer may solve it if I give it rules, but then the human is still solving it. The trick is getting the computer to find and use the rules.
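For comparison, one family of "rules" a computer can apply without human hints is simple greedy heuristics. This is a hypothetical sketch of my own (not the AI approach described here, and with no optimality guarantee): a nearest-neighbor walk over 2-D points, which often roughly matches what a human eyeballs on easy instances.

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy heuristic: from the start node, repeatedly
    step to the nearest not-yet-visited node."""
    unvisited = set(range(len(points))) - {start}
    tour, cur = [start], start
    while unvisited:
        cur = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        unvisited.discard(cur)
        tour.append(cur)
    return tour

print(nearest_neighbor_tour([(0, 0), (1, 0), (2, 0), (3, 0)]))  # -> [0, 1, 2, 3]
```

On adversarial instances a greedy walk can be far from optimal, which is exactly why heuristics like this say nothing about P=NP.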

I know how I would solve this with AI without using any human-supplied rules, and I think I could solve it quickly. However, I have never seen the approach used, so I may be missing something. I am curious what algorithmic models currently exist for solving this. Any ideas?

I am sorry for diverging from the original question but I am fixated on this.

FortunaMaximus
12-22-2006, 03:18 AM
Find the radius, establish the circumference, then find a path that is equal to the circumference + radius?

Looking at it, yes, it's simple in human terms. In mathematical terms, it should be simple also.

FortunaMaximus
12-22-2006, 03:35 AM
[ QUOTE ]
I dont think quantum computers will get you anywhere unless they are in the order of 10^200 times faster than today's computers.

[/ QUOTE ]

[ QUOTE ]
The probability might very well increase with the depth/breadth of quantum staggering (layering?)... In any event, it might be far too soon to tell whether there is actually an upper limit to how big the overall information set can get.

[/ QUOTE ]

It's difficult to describe what I mean by quantum stagger/layering...

Theoretically, it should take an infinitesimal amount of time, perhaps even negative time, to find an infinite set of solutions. The problem is then finding where those solutions are located, and it's not a question of speed but of capacity.

Which would imply we would have to use the Universe itself as carrying capacity. While this is possible in theory (which I am nowhere close to being able to describe fully yet, as this would imply that there is a proof of hierarchical infinite sets...)

It is not a problem of speed, let me stress that. It is a problem of having enough location foci to map out a coherent solution set that can be accessed by an observer without the censorship that comes from being within more than one of the sets.

The observer is limited by perception that is essentially finite. And I do not know how to violate this successfully (at least, term it such that it can be palatable and understood; I feel the solution, but I can't describe it), and all I have done thus far is define and redefine that problem until it becomes clearer.

As you have your fixation, Utah, so do I mine. And this happens to be it.

srjunkacct
12-22-2006, 04:11 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

[/ QUOTE ]

Cliff Notes: Perelman wasn't the most detailed fellow...a Chinese guy took his proof and filled in some steps. All the non-chinese think Perelman's proof was complete, the Chinese guy thinks he deserves the credit.

Details from New Yorker (http://www.newyorker.com/fact/content/articles/060828fa_fact2) .
An excellent article

[/ QUOTE ]

Wow!! Very interesting article. I would say slightly different "cliff notes". It sounds like (assuming the article is accurate) that Perelman proved it fair and square and the Chinese guys Yau/Zhu/Cao are trying to rip him off of his result, and that ring-leader Yau has done it before (with Givental's result), and that most people in the know realize this. It seems that Yau's ambition and inappropriate sense of entitlement has driven him to act in this way. (Did he really expect Chern to resign from being famous?!)

I fully sympathize with Perelman. I have personally experienced being sponged off and being ripped off, and I no longer collaborate with anyone or tell anyone what research I am doing. Mathematicians are generally more ethical than most other professions, but there are still problems with individuals who are blinded by ambition, and whose sense of entitlement makes them feel preordained to be the one to prove a result, even when they are legitimately beaten to it by someone else.


By the way, if anyone can post a readable reference to the statement of Thurston’s geometrization conjecture (now theorem), I would appreciate it. I know what a (triangulation of a) 3-manifold is. I guess I just want to know what the eight pieces are and how they are joined.

[/ QUOTE ]

You can read various rebuttals by Yau, his collaborators and his friends here. (http://www.doctoryau.com/)

If an author publishes a proof that has gaps in it, it is fair game for the vultures. It's not that Perelman and Givental don't deserve a lot of credit for their work (Perelman much more so than Givental imho), but you can't say that further work on their results is off-limits just because they published first.

Utah
12-22-2006, 11:50 AM
[ QUOTE ]
Find the radius, establish the circumference, then find a path that that is equal to the circumference + radius?

Looking at it, yes, it's simple in human terms. In mathematical terms, it should be simple also.

[/ QUOTE ]

You have still imparted human rules. You have looked at the problem and said, "Hey, circle! Hey, uniformity! This is an easy formula! Apply....." But the computer needs to be the one to say that. The reason is that the next problem may be different, with a different set of rules, and the algorithm may no longer apply. If I recall from my meager understanding of "No Free Lunch", it would tell us that we are in some serious trouble here if we define fixed rules.

Let's say we imparted 2 simple rules:
1) Do not visit any node twice if it can be helped.
2) Always take the shortest path when possible.

Let's change the diagram to:

http://img223.imageshack.us/img223/5489/untitled1jt4.jpg

You can see the first rule is violated at node A as it is clear that the shortest path is to cross through node A multiple times.

You can see that the second rule is violated at node B, as it makes sense to branch to the longer leg first.

Again, this would probably take under a minute for a human to solve. Yet it is incredibly hard for a computer. You can't simply keep throwing rules at it, because the next model will break the rules. Therefore, you need the computer to "think", and the best way of doing this is through a multi-level transient classification system starting with learning the most rudimentary classifications. For example, the computer should learn "more/less" as a fundamental building block before it can tackle this problem. All problems should build on the previous knowledge base, and the work must stop being "throw away" for each problem.

I think I can get neural nets to solve the second diagram fairly easily without the fundamental building blocks and without human rules. But, it will be inefficient and incapable of solving other more difficult problems.
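Utah's two rules can be watched failing on an even smaller toy than the second diagram (my own hypothetical 1-D instance, not the actual figure): a walker starts at x = 5 and must visit x = 4, 7 and 0. Greedily taking the shortest step each time, rule 2, loses to branching to the longer leg first:

```python
from itertools import permutations

start = 5
targets = [4, 7, 0]   # hypothetical positions on a line

def path_len(order):
    # Total distance walked when visiting the targets in this order.
    pos, total = start, 0
    for t in order:
        total += abs(t - pos)
        pos = t
    return total

def nearest_neighbor():
    # Rule 2 from the post: always take the shortest step available.
    left, pos, order = list(targets), start, []
    while left:
        nxt = min(left, key=lambda t: abs(t - pos))
        order.append(nxt)
        left.remove(nxt)
        pos = nxt
    return order

greedy = path_len(nearest_neighbor())                    # 5->4->7->0: 11
best = min(path_len(p) for p in permutations(targets))   # 5->7->4->0: 9
```

The greedy walk covers 11 units while the optimum is 9; the same trap, scaled up, is what breaks any fixed rule set.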

Utah
12-22-2006, 12:11 PM
[ QUOTE ]
Which would imply we would have to use the Universe itself as carrying capacity. While this is possible in theory (which I am nowhere close to being able to describe fully yet, as this would imply that there is a proof of hierarchial infinite sets...)

[/ QUOTE ]

Let's talk capacity for a second. I am out of my realm here and I am not sure I fully understand what you are saying. However, isn't it possible that a solution set would not fit within the universe?

For example, let's say I am playing a really cool game of global domination against 2 opponents, and before I play I want to map out my strategy set and use game theory to determine optimal play.

Let's say there are 3000 possible scenarios I can encounter to start, and for each scenario I can execute one of 55 courses of action. My opponents have the same scenarios and courses of action. Therefore, we each have the following number of possible strategies: 55^3000. This is an incomprehensibly large number, far more than a googol. Let's say you overcame any computational issues. You are still left with the problem of storing a densely populated cube with 55^3000 x 55^3000 x 55^3000 storage locations. I don't think the universe could store that, could it? You could keep dividing the storage space into smaller and smaller pieces, but at some point you will reach a limit of size (e.g., something like the Planck length). I also don't believe that you could use a classification system to minimize the storage need.

Is this a real constraint? Am I missing something here?

btw - these are real-world type problems with applicability today, and there are much, much harder ones to solve. The one above is actually quite simple.
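The storage worry above checks out on the back of an envelope; the figure of roughly 10^185 Planck volumes in the observable universe is a commonly quoted rough estimate, used here only for scale:

```python
import math

# Decimal digits in one player's strategy count, 55^3000.
digits = 3000 * math.log10(55)
print(round(digits))
# 55^3000 is a number with about 5221 digits. Even one storage cell per
# Planck volume (~10^185 of them, a rough commonly quoted estimate) falls
# absurdly short of 55^3000 cells, let alone the cube of that.
```

So yes: the densely populated cube cannot be written down inside this universe, whatever the encoding.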

MelchyBeau
12-22-2006, 12:46 PM
Utah,

what you are trying to solve is the metric traveling salesman problem. There are some decent approximation algorithms; however, the problem is still NP-complete.
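One of those "decent approximation algorithms" can be sketched in a few lines: the classic MST double-tree heuristic, whose tour is provably at most twice optimal on metric instances (Christofides sharpens the factor to 1.5). The five points below are my own toy instance, not anything from the thread:

```python
import math
from itertools import permutations

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 2)]   # hypothetical instance

def d(i, j):
    return math.dist(pts[i], pts[j])

def mst_preorder(n):
    # Prim's algorithm from node 0, then a preorder walk of the tree,
    # shortcutting repeated vertices (valid because distances are metric).
    in_tree, parent = {0}, {}
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree
                    for b in range(n) if b not in in_tree),
                   key=lambda e: d(*e))
        parent[j] = i
        in_tree.add(j)
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(reversed(children.get(v, [])))
    return order

def tour_len(order):
    return sum(d(order[k], order[(k + 1) % len(order)])
               for k in range(len(order)))

approx = tour_len(mst_preorder(len(pts)))
opt = min(tour_len((0,) + p) for p in permutations(range(1, len(pts))))
```

A brute-force check on this instance confirms the tour lands within the factor-2 guarantee; the heuristic itself runs in polynomial time, which is exactly what the exact problem is believed not to allow.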

Utah
12-22-2006, 01:18 PM
[ QUOTE ]
Utah,

what you are trying to solve is Metric Traveling salesman problem. There are some decent approximation algorithms, however this problem is still NP-Complete

[/ QUOTE ]

No, that is not what I am trying to solve. I really don't care about the TSP itself. Rather, I want to teach a computer to learn any problem.

I don't want to sound like an ass about the traveling salesman problem, because I haven't spent a ton of time working on it, and there has clearly been a lot of thinking by the smartest people in the world on this problem, so who am I to say anything. However, everything I have read so far is taking the wrong approach, and no one that I can see is thinking about the problem correctly. There are all these crazy complicated algorithms out there, using math I do not care to even try to understand.

Yet the problem is simple, and a person can solve it instantly. Why? When you look at the second diagram, what is your thought process for solving it? You certainly are not using crazy math. Heck, you aren't using any math (outside of more/less), because I haven't given you any numbers. Rather, you are using a very simple classification system that you apply to a million problems a day. You just look at the diagram and you know. Why? I believe it is because you can instantly classify appropriately, so as to make the solution "duh" easy.

My thinking is that there is zero reason a computer can't do that. But you need to get the computer to get rid of the math. It needs to learn to solve on a much simpler and more fundamental basis. Therefore, anyone who is trying to teach a computer to solve the TSP using crazy math-based algorithms is either trying to solve a discrete problem (which is fine) or missing the point. Or, of course, the other option is that I am an idiot :)

FortunaMaximus
12-22-2006, 01:20 PM
[ QUOTE ]
Lets talk capacity for a second. I am out of my realm here and I am not sure I fully understand what you are saying. However, isn't it possible that a solution set would not fit with the universe?

[/ QUOTE ]

You're correct. It can't stay within the universe, so that reduces its usefulness to carrying, processing and retaining smaller solution sets, at least compared to a global solution set.

[ QUOTE ]
Is this a real constraint? Am I missing something here?

[/ QUOTE ]

I don't think so. You homed in on the flaw quite accurately. The model has to allow for elasticity of carrying capacity, as efficient processing of solution sets would require that a single location focus have multiple uses, and its data would take different values and interpretations based on the overall problem/solution.

Say you have a location that defines pi; once you've done that, that's the static part of its value. As it has multiple uses, it would be part of many solution sets.

As I recall from multiple forays into traditional programming, both pre-GUI and OOP, it is possible to reduce a program's overall memory use by not repeating the code for a specific instruction set.

The premises seem simple, but a formal theory would demand an overall information density so great that the formal understanding of capacity as a rigid x/y/z location, with an access path for storage and retrieval, would have to be waived.

Or the existence of multiple Universes with the same overall shared conditions, a common language that allows for sharing of information, and unfettered access between Universes to come to a finite solution set in one Universe.

Another avenue of exploration is to treat capacity as x/y/z and t, in effect turning this Universe into a nearly infinite progression of Universes, each copy distinguished and differentiated by each Planck interval. Echoes of itself, rippling and altering as motion within changes the overall solution set.

Such an interpretation would require that there be change in each Planck interval, so each iteration would hold a different solution set. The problem of backwards accessibility now seems to be the domain of only two things: human memory and human storage.

Whether such conditions can be accessed by any other means is debatable. The energy constraints involved in backwards temporal violations suggest, at least to me, that such a concept is possible, but that this is an inviolable principle necessary for the preservation of information.

That shouldn't prevent access to the information, or using it as a resource to develop finite solution sets. In a broader quantum perspective, usage of backwards-accessed information may not violate the principle that the observer alters the situation by observing; when he or she observes and uses this information, all he or she does is alter the consequences of observing and using it, in a forward fashion.

Not that this has anything to do with millennium problems. Hmm. Apologies for the hijack.

MelchyBeau
12-22-2006, 01:46 PM
no problem with the hijack,

I am glad to see a thread in SMP about something other than religion

thylacine
12-22-2006, 07:01 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

[/ QUOTE ]

Cliff Notes: Perelman wasn't the most detailed fellow...a Chinese guy took his proof and filled in some steps. All the non-chinese think Perelman's proof was complete, the Chinese guy thinks he deserves the credit.

Details from New Yorker (http://www.newyorker.com/fact/content/articles/060828fa_fact2) .
An excellent article

[/ QUOTE ]

Wow!! Very interesting article. I would say slightly different "cliff notes". It sounds like (assuming the article is accurate) that Perelman proved it fair and square and the Chinese guys Yau/Zhu/Cao are trying to rip him off of his result, and that ring-leader Yau has done it before (with Givental's result), and that most people in the know realize this. It seems that Yau's ambition and inappropriate sense of entitlement has driven him to act in this way. (Did he really expect Chern to resign from being famous?!)

I fully sympathize with Perelman. I have personally experienced being sponged off and being ripped off, and I no longer collaborate with anyone or tell anyone what research I am doing. Mathematicians are generally more ethical than most other professions, but there are still problems with individuals who are blinded by ambition, and whose sense of entitlement makes them feel preordained to be the one to prove a result, even when they are legitimately beaten to it by someone else.


By the way, if anyone can post a readable reference to the statement of Thurston’s geometrization conjecture (now theorem), I would appreciate it. I know what a (triangulation of a) 3-manifold is. I guess I just want to know what the eight pieces are and how they are joined.

[/ QUOTE ]

You can read various rebuttals by Yau, his collaborators and his friends here. (http://www.doctoryau.com/)

If an author publishes a proof that has gaps in it, it is fair game for the vultures. It's not that Perelman and Givental don't deserve a lot of credit for their work (Perelman much more so than Givental imho), but you can't say that further work on their results is off-limits just because they published first.

[/ QUOTE ]

If an author publishes a proof that has NO gaps in it, then it is NOT fair game for the vultures ... to claim that it does have gaps and that they are filling the gaps, making them the true victors in the race for glory.

I have seen too much contradictory evidence to figure out what the real truth is here. It certainly seems that many experts are giving Perelman credit for providing a complete and correct proof. It seems that Yau has said things about others that are at least as bad as anything said about him. It seems that some of those quoted are having second thoughts now that their private opinions of Yau have become public.

Yau/Zhu/Cao seem to make two types of statements. On the one hand it seems they are saying that Perelman's work has gaps and that they are filling the gaps and providing the first complete proof, for which they deserve credit. On the other hand it seems they are saying that they are just filling in the details and are not claiming credit for anything, other than providing a more accessible exposition. So what are they actually claiming?

There is clearly a major ongoing controversy, and it is not the fault of any journalist. Consider this from the website of the 2007 Joint Mathematics Meetings (113th Annual Meeting of the American Mathematical Society (AMS) and 90th Meeting of the Mathematical Association of America (MAA)):

http://www.ams.org/amsmtgs/2098_intro.html

including the following:

[ QUOTE ]
Special Event on the Poincaré Conjecture and Geometrization Theorem CANCELLED

We regret that the special event on the Poincaré Conjecture and Geometrization Theorem has been canceled. It became apparent that the continuing controversy was undermining this special event.

[/ QUOTE ]

srjunkacct
12-22-2006, 09:20 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

[/ QUOTE ]

Cliff Notes: Perelman wasn't the most detailed fellow...a Chinese guy took his proof and filled in some steps. All the non-chinese think Perelman's proof was complete, the Chinese guy thinks he deserves the credit.

Details from New Yorker (http://www.newyorker.com/fact/content/articles/060828fa_fact2) .
An excellent article

[/ QUOTE ]

Wow!! Very interesting article. I would say slightly different "cliff notes". It sounds like (assuming the article is accurate) that Perelman proved it fair and square and the Chinese guys Yau/Zhu/Cao are trying to rip him off of his result, and that ring-leader Yau has done it before (with Givental's result), and that most people in the know realize this. It seems that Yau's ambition and inappropriate sense of entitlement has driven him to act in this way. (Did he really expect Chern to resign from being famous?!)

I fully sympathize with Perelman. I have personally experienced being sponged off and being ripped off, and I no longer collaborate with anyone or tell anyone what research I am doing. Mathematicians are generally more ethical than most other professions, but there are still problems with individuals who are blinded by ambition, and whose sense of entitlement makes them feel preordained to be the one to prove a result, even when they are legitimately beaten to it by someone else.


By the way, if anyone can post a readable reference to the statement of Thurston’s geometrization conjecture (now theorem), I would appreciate it. I know what a (triangulation of a) 3-manifold is. I guess I just want to know what the eight pieces are and how they are joined.

[/ QUOTE ]

You can read various rebuttals by Yau, his collaborators and his friends here. (http://www.doctoryau.com/)

If an author publishes a proof that has gaps in it, it is fair game for the vultures. It's not that Perelman and Givental don't deserve a lot of credit for their work (Perelman much more so than Givental imho), but you can't say that further work on their results is off-limits just because they published first.

[/ QUOTE ]

If an author publishes a proof that has NO gaps in it, then it is NOT fair game for the vultures ... to claim that it does have gaps and that they are filling the gaps making them the true to the race for glory.

I have seen too much contradictory evidence to figure out what the real truth is here. It certainly seems that many experts are giving Perelman credit for providing a complete and correct proof. It seems that Yau has said things about others that are at least as bad as anything said about him. It seems that some quotees are having second thoughts about their private thoughts about Yau becoming public.

Yau/Zhu/Cao seem to make two types of statements. On the one hand it seems they are saying that Perelman's work has gaps and that they are filling the gaps and providing the first complete proof, for which they deserve credit. On the other hand it seems they are saying that they are just filling in the details and are not claiming credit for anything, other than providing a more accessible exposition. So what are they actually claiming?


[/ QUOTE ]

As I understand it, Yau is claiming that Zhu and Cao deserve credit for filling in technical details of proofs (and rewriting them in some cases). In the field of geometric analysis/PDEs, filling in missing "technical details" for arguments is often a nontrivial task. Coming up with proper estimates/bounds for solutions to PDEs takes work and can often be the subject of papers on their own. It is, of course, quite possible that Perelman knew how to fill in the missing details and was just being lazy.

John Morgan and Gang Tian are one of the teams of experts who are quoted in Nasar's article as claiming that Perelman's result is complete. I don't understand how they can say that when their book (http://xxx.lanl.gov/PS_cache/math/pdf/0607/0607607.pdf) claims otherwise and credits themselves and a few other authors for filling in technical details in their own way. (See sections 0.8 and 0.9.)

thylacine
12-23-2006, 04:26 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the fields medal

[/ QUOTE ]

I had vaguely heard there was some controversy (on top of what you mentioned). Do you know of it?

[/ QUOTE ]

Cliff Notes: Perelman wasn't the most detailed fellow...a Chinese guy took his proof and filled in some steps. All the non-chinese think Perelman's proof was complete, the Chinese guy thinks he deserves the credit.

Details from New Yorker (http://www.newyorker.com/fact/content/articles/060828fa_fact2) .
An excellent article

[/ QUOTE ]

Wow!! Very interesting article. I would say slightly different "cliff notes". It sounds like (assuming the article is accurate) that Perelman proved it fair and square and the Chinese guys Yau/Zhu/Cao are trying to rip him off of his result, and that ring-leader Yau has done it before (with Givental's result), and that most people in the know realize this. It seems that Yau's ambition and inappropriate sense of entitlement has driven him to act in this way. (Did he really expect Chern to resign from being famous?!)

I fully sympathize with Perelman. I have personally experienced being sponged off and being ripped off, and I no longer collaborate with anyone or tell anyone what research I am doing. Mathematicians are generally more ethical than most other professions, but there are still problems with individuals who are blinded by ambition, and whose sense of entitlement makes them feel preordained to be the one to prove a result, even when they are legitimately beaten to it by someone else.


By the way, if anyone can post a readable reference to the statement of Thurston’s geometrization conjecture (now theorem), I would appreciate it. I know what a (triangulation of a) 3-manifold is. I guess I just want to know what the eight pieces are and how they are joined.

[/ QUOTE ]

You can read various rebuttals by Yau, his collaborators and his friends here. (http://www.doctoryau.com/)

If an author publishes a proof that has gaps in it, it is fair game for the vultures. It's not that Perelman and Givental don't deserve a lot of credit for their work (Perelman much more so than Givental imho), but you can't say that further work on their results is off-limits just because they published first.

[/ QUOTE ]

If an author publishes a proof that has NO gaps in it, then it is NOT fair game for the vultures ... to claim that it does have gaps and that they are filling the gaps making them the true to the race for glory.

I have seen too much contradictory evidence to figure out what the real truth is here. It certainly seems that many experts are giving Perelman credit for providing a complete and correct proof. It seems that Yau has said things about others that are at least as bad as anything said about him. It seems that some quotees are having second thoughts about their private thoughts about Yau becoming public.

Yau/Zhu/Cao seem to make two types of statements. On the one hand it seems they are saying that Perelman's work has gaps and that they are filling the gaps and providing the first complete proof, for which they deserve credit. On the other hand it seems they are saying that they are just filling in the details and are not claiming credit for anything, other than providing a more accessible exposition. So what are they actually claiming?


[/ QUOTE ]

As I understand it, Yau is claiming that Zhu and Cao deserve credit for filling in technical details of proofs (and rewriting them in some cases). In the field of geometric analysis/PDEs, filling in missing "technical details" for arguments is often a nontrivial task. Coming up with proper estimates/bounds for solutions to PDEs takes work and can often be the subject of papers on their own. It is, of course, quite possible that Perelman knew how to fill in the missing details and was just being lazy.

John Morgan and Gang Tian are one of the teams of experts who are quoted in Nasar's article as claiming that Perelman's result is complete. I don't understand how they can say that when their book (http://xxx.lanl.gov/PS_cache/math/pdf/0607/0607607.pdf) claims otherwise and credits themselves and a few other authors for filling in technical details in their own way. (See sections 0.8 and 0.9.)

[/ QUOTE ]

Okay, the plot thickens. I wonder if Perelman was leaving these "technical details" gaps deliberately, to kindly offer a piece of the action to others.

Also, when you say 'In the field of geometric analysis/PDEs, filling in missing "technical details" for arguments is often a nontrivial task. Coming up with proper estimates/bounds for solutions to PDEs takes work and can often be the subject of papers on their own,' I find this quite disturbing. I am a mathematician (about as far from this area as a mathematician can be - I NEVER use derivatives) and what you are saying sounds to me like there is a lack of rigor in proofs in this area. I don't expect physicists to prove theorems, but mathematicians definitely should. Does anyone in this area regard this as a crisis that warrants a reform in standards? (I understand there was a similar situation in topology several decades ago. True?)

You (srjunkacct) sound like an expert. Could you give a simple example to me (a mathematician (who NEVER uses derivatives)) of what it means to "come up with proper estimates/bounds for solutions to PDEs" and what its significance is?


Also, do you know anything about the following?

[ QUOTE ]

There is clearly a major onging controversy, and it is not the fault of any journalist. Consider this from the website of the 2007 Joint Mathematics Meetings (113th Annual Meeting of the American Mathematical Society (AMS) and 90th Meeting of theMathematical Association of America (MAA))

http://www.ams.org/amsmtgs/2098_intro.html

including the following:

[ QUOTE ]
Special Event on the Poincaré Conjecture and Geometrization Theorem CANCELLED

We regret that the special event on the Poincaré Conjecture and Geometrization Theorem has been canceled. It became apparent that the continuing controversy was undermining this special event.

[/ QUOTE ]

[/ QUOTE ]

MelchyBeau
12-23-2006, 04:38 PM
Thylacine,

I am curious as to what mathematics you study. When you mention that you never use derivatives, I have to think graph theory, maybe combinatorics.

thylacine
12-23-2006, 09:28 PM
[ QUOTE ]
Thylacine,

I am curious as to what mathematics you study. When you mention that you never use derivatives, I have to think graph theory, maybe combinatorics.

[/ QUOTE ]

Discrete Mathematics. Algebra.

I would view a 3-manifold as a 4-regular 4-edge-colored graph, satisfying certain conditions, modulo certain local modifications.

Enrique
12-24-2006, 02:10 PM
Some notes on the Millennium problems:

1) Perelman controversy:
Perelman solved it. I wouldn't trust the New Yorker article too much, because it seems to have a strong bias against Yau. Yau actually intended to sue the New Yorker over it. You can read the letter threatening suit here (http://doctoryau.com/).
The proof of Poincare was named Breakthrough of the Year by Science magazine; you can read some news here (http://www.sciencemag.org/cgi/reprint/sci;314/5807/1848.pdf) and here (http://www.msnbc.msn.com/id/16305185/).

2) The OP mentioned that he thought most enterprises would love P = NP to be true. I don't think this is true. The security of our email, our banks, and every online money transaction is based on P != NP. Cryptography relies on numbers being hard to factor; if P = NP, then numbers would be easy to factor and every security system in the world would crack. Actually, even if you don't rely on numbers being hard to factor, you would have to rely on some algorithm, and if P = NP, then cryptography would be dead, and hence it would be impossible to secure private transactions.
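The point above rests on the gap between finding and checking: verifying a claimed factorisation is one multiplication, while finding the factors (here by naive trial division, with toy numbers nothing like real key sizes) is the hard direction that a constructive P = NP would collapse:

```python
# Naive trial division: the "finding" side of factoring.
def trial_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n   # n is prime

n = 10403                 # 101 * 103, a toy stand-in for an RSA modulus
p = trial_factor(n)
q = n // p
# The "verifying" side is a single multiplication:
assert p * q == n
```

At real key sizes the loop above would run for longer than the age of the universe, which is precisely the asymmetry the security argument depends on.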

3) The only Millennium problem I understand besides those two is the Riemann Hypothesis. I have no understanding of the other four.
The Riemann hypothesis is the one that interests me the most, since number theory is what I am studying in graduate school. Much progress has been made: the first 10^13 zeros of the zeta function have been shown to lie on the critical line Re(s) = 1/2, and it has been proved that the proportion of zeros on the line is over 1/3 (I think this has been improved, but I am not finding the link at this moment). The Riemann hypothesis has also been estimated, via a chi-square test, to be reliable to 99.99999999999995%. It seems like it will be true. The big advance toward proving it hasn't come yet, but it might come at any moment (just like it did with Poincare).
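For reference, the standard statement behind those numerical checks (the zeta sum below converges only for Re(s) > 1 and is extended to the rest of the plane by analytic continuation):

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re}(s) > 1), \qquad
\zeta(\rho) = 0,\ \ 0 < \operatorname{Re}(\rho) < 1
\ \Longrightarrow\ \operatorname{Re}(\rho) = \tfrac{1}{2}.
```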

srjunkacct
12-24-2006, 06:21 PM
[ QUOTE ]

Okay the plot thickens. I wonder if Perelman was offering these `"technical-details"-gaps' to kindly offer a piece of the action to others.


[/ QUOTE ]

This is very possible. He really doesn't seem to care whose name goes on the final proof, just that the result is proven. I would say that the credit that Lott/Kleiner, Morgan/Tian and Zhu/Cao deserve for cleaning up his mess is around 5-10 percent total and shouldn't preclude Perelman from being awarded the Fields medal or the million bucks. (Of course he declined the former; I wonder if he will decline the latter.)

[ QUOTE ]

Also, when you say `In the field of geometric analysis/PDEs, filling in missing "technical details" for arguments is often a nontrivial task. Coming up with proper estimates/bounds for solutions to PDEs takes work and can often be the subject of papers on their own.' I find this quite disturbing. I am a mathematician (about as far from this area as a mathematician can be - I NEVER use derivatives) and what you are saying sounds to me like there is a lack of rigor in proofs in this area. I don't expect physicists to prove theorems, but mathematicians definitely should. Does anyone in this area regard this as a crisis that warrants a reform in standards. (I understand there was a similar situation in topology several decades ago. True?)

You (srjunkacct) sound like an expert. Could you give a simple example to me (a mathematician (who NEVER uses derivatives)) of what it means to "come up with proper estimates/bounds for solutions to PDEs" and what its significance is?


[/ QUOTE ]

I'd say that some of the results whose proofs are omitted or sketched in Perelman's papers are roughly of the same magnitude as the Rellich and Garding lemmas used in the proof of the Hodge decomposition theorem. Both lemmas address the behavior of functions (actually sections of vector bundles) on manifolds. The Rellich lemma says that for s > r, a bounded sequence of sections in the Sobolev s-norm (where s derivatives of the section are controlled) contains a convergent subsequence in the Sobolev r-norm (i.e. inclusion is a compact operator). The Garding inequality says that the Sobolev 1-norm is dominated by another norm, the Dirichlet norm. The proof of the Rellich lemma is about a page; the proof of the Garding lemma is several pages. Both involve some delicate manipulations of inequalities and limits.
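In symbols, the two statements look roughly like this (this is the standard Hodge-theory formulation; constants and smoothness hypotheses are suppressed, so treat it as a sketch rather than a precise statement):

```latex
% Rellich lemma: for s > r, the inclusion of Sobolev spaces
%   H^s(M) \hookrightarrow H^r(M)
% is a compact operator, i.e. every sequence bounded in the s-norm
% has a subsequence converging in the r-norm.

% Garding inequality: the Sobolev 1-norm is dominated by the
% Dirichlet norm D, up to a constant c > 0:
\| \varphi \|_1^2 \;\le\; c \, D(\varphi, \varphi),
\qquad
D(\varphi, \psi) = (\varphi, \psi) + (d\varphi, d\psi) + (\delta\varphi, \delta\psi),
% where d is the exterior derivative and \delta its formal adjoint.
```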

If you want an example from Perelman's papers, the worst offender is probably Proposition 11.2 from his first paper (http://xxx.lanl.gov/PS_cache/math/pdf/0211/0211159.pdf). Compare it with Theorem 6.2.1 in the Cao/Zhu paper. (http://xxx.lanl.gov/PS_cache/math/pdf/0612/0612069.pdf) Decide for yourself if that constitutes a "complete and correct" proof of the claimed lemma.

I don't think that any significantly new ideas are required to complete Perelman's proof, but if you were to put my name on his papers and submit them to the Annals or Inventiones, they'd probably go into the trash. Mathematicians are lowering their standards in labeling Perelman's papers a "complete and correct proof" just because they're so anxious to attach somebody's name to the final result, imho.

[ QUOTE ]

Also do you know anything about the following.
[ QUOTE ]

There is clearly a major ongoing controversy, and it is not the fault of any journalist. Consider this from the website of the 2007 Joint Mathematics Meetings (113th Annual Meeting of the American Mathematical Society (AMS) and 90th Meeting of the Mathematical Association of America (MAA))

http://www.ams.org/amsmtgs/2098_intro.html

including the following:

[ QUOTE ]
Special Event on the Poincaré Conjecture and Geometrization Theorem CANCELLED

We regret that the special event on the Poincaré Conjecture and Geometrization Theorem has been canceled. It became apparent that the continuing controversy was undermining this special event.

[/ QUOTE ]

[/ QUOTE ]

[/ QUOTE ]

I don't have any inside information on the cancellation of the event. It is pretty sad.

thylacine
12-27-2006, 08:45 PM
[ QUOTE ]

I'd say that some of the results whose proofs are omitted or sketched in Perelman's papers are roughly of the same magnitude as the Rellich and Garding lemmas used in the proof of the Hodge decomposition theorem. Both lemmas address the behavior of functions (actually sections of vector bundles) on manifolds. The Rellich lemma says that for s > r, a bounded sequence of sections in the Sobolev s-norm (where s derivatives of the section are controlled) contains a convergent subsequence in the Sobolev r-norm (i.e. inclusion is a compact operator). The Garding inequality says that the Sobolev 1-norm is dominated by another norm, the Dirichlet norm. The proof of the Rellich lemma is about a page; the proof of the Garding lemma is several pages. Both involve some delicate manipulations of inequalities and limits.

If you want an example from Perelman's papers, the worst offender is probably Proposition 11.2 from his first paper (http://xxx.lanl.gov/PS_cache/math/pdf/0211/0211159.pdf). Compare it with Theorem 6.2.1 in the Cao/Zhu paper. (http://xxx.lanl.gov/PS_cache/math/pdf/0612/0612069.pdf) Decide for yourself if that constitutes a "complete and correct" proof of the claimed lemma.

I don't think that any significantly new ideas are required to complete Perelman's proof, but if you were to put my name on his papers and submit them to the Annals or Inventiones, they'd probably go into the trash. Mathematicians are lowering their standards in labeling Perelman's papers a "complete and correct proof" just because they're so anxious to attach somebody's name to the final result, imho.


[/ QUOTE ]

Thanks. I don't understand the technical details, but you've given me a fair picture of the situation. Is it true that Perelman has never actually submitted his papers to a journal in the usual way?

By the way, do you know where I can find a statement of whatever Thurston's geometrization `conjecture' says? Is it a statement that does not (necessarily) involve differential geometry? The article says that 3-manifolds can be decomposed into 8 types of pieces, so I am basically wondering what the pieces are, and how they are joined. I am vaguely aware of how 3-manifolds can be described by triangulations, or by joining two handlebodies, or by joining a solid torus to a knot complement. Any suggested references for a combinatorics/algebra type person?

Ben Young
12-27-2006, 10:11 PM
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the Fields Medal

[/ QUOTE ]
I'm pretty sure he hasn't been offered the Millennium Prize yet?

thylacine
12-27-2006, 11:10 PM
[ QUOTE ]
[ QUOTE ]
Yes,

Perelman solved the Poincare conjecture, but declined the prize and the Fields Medal

[/ QUOTE ]
I'm pretty sure he hasn't been offered the Millennium Prize yet?

[/ QUOTE ]

Has he actually submitted it for publication? If not, how will they apply the rule requiring two years of general acceptance after publication?

MelchyBeau
12-27-2006, 11:49 PM
it was 'published' via arXiv http://arxiv.org/

thylacine
12-28-2006, 12:10 AM
[ QUOTE ]
it was 'published' via arXiv http://arxiv.org/

[/ QUOTE ]

Does this count?

Ben Young
12-28-2006, 02:30 PM
yes

prosellis
12-28-2006, 03:40 PM
I recently read (can't remember the source, still looking) that the review was under way. The Chinese group is greatly complicating matters and Perelman isn't helping at all. It's interesting to read some of Perelman's past comments on things like the Fields and Millennium Prizes. That guy has a major chip on his shoulder.

srjunkacct
12-29-2006, 03:21 AM
I'm not sure if there's a good source for the Thurston geometrization conjecture for a general math audience. The Wikipedia/Mathworld pages might be as good as anything out there.

The solution to a Millennium problem must be published in a refereed mathematics publication of worldwide repute. (http://www.claymath.org/millennium/Rules_etc/) It is possible to divide the prize among multiple recipients, and the rule doesn't say that the result must be published by all or any of the prize candidates.

The arXiv doesn't count because it is not refereed.

Morgan/Tian have written a monograph detailing a complete proof of the Poincare conjecture. It is going to be refereed before it is published. It's not completely clear to me whether the language in the prize rules means that the solution has to be published in a journal rather than as a standalone publication.

Cao/Zhu published their article in the Asian Journal of Mathematics, which is normally a reputable refereed journal. However, Yau pulled some shenanigans to get it published before Morgan/Tian's book -- he essentially pushed it over the heads of the AJM's reviewers, so the prize committee would be quite justified in disqualifying them from consideration.

That being said, I'm pretty sure that Morgan/Tian are not seeking to be named Millennium Prize co-recipients. I'm a little less sure of Cao/Zhu. They certainly have sought to receive some sort of credit for publishing a clean proof.

thylacine
01-01-2007, 02:08 PM
[ QUOTE ]
I'm not sure if there's a good source for the Thurston geometrization conjecture for a general math audience. The Wikipedia/Mathworld pages might be as good as anything out there.

The solution to a Millennium problem must be published in a refereed mathematics publication of worldwide repute. (http://www.claymath.org/millennium/Rules_etc/) It is possible to divide the prize among multiple recipients, and the rule doesn't say that the result must be published by all or any of the prize candidates.

The arXiv doesn't count because it is not refereed.

Morgan/Tian have written a monograph detailing a complete proof of the Poincare conjecture. It is going to be refereed before it is published. It's not completely clear to me whether the language in the prize rules means that the solution has to be published in a journal rather than as a standalone publication.

Cao/Zhu published their article in the Asian Journal of Mathematics, which is normally a reputable refereed journal. However, Yau pulled some shenanigans to get it published before Morgan/Tian's book -- he essentially pushed it over the heads of the AJM's reviewers, so the prize committee would be quite justified in disqualifying them from consideration.

That being said, I'm pretty sure that Morgan/Tian are not seeking to be named Millennium Prize co-recipients. I'm a little less sure of Cao/Zhu. They certainly have sought to receive some sort of credit for publishing a clean proof.

[/ QUOTE ]

I looked at Wikipedia (the other day) e.g.

http://en.wikipedia.org/wiki/Geometrization_conjecture

and several related links. It says

[ QUOTE ]
Here is a statement of Thurston's conjecture:

Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume.

[/ QUOTE ]

Here are some questions. You can treat them as rhetorical. The meta-question is: did the Wiki article fail to make it clear, or is it just me that's not getting it?

Can the cuts along tori be done arbitrarily (subject to each piece having a geometric structure with finite volume)?

What does it mean for a piece to have a geometric structure with finite volume anyway?

And how do these relate to the eight Thurston geometries?

FWIW I think I at least vaguely understand what these eight Thurston geometries are, and what their quotients by discrete subgroup actions are. But I don't see where the tori come into it.