Can Humans Truly "Think"?


Utah
12-15-2006, 04:46 PM
I am interested in people's thoughts on this, especially rduke55's, as he has expertise in this area. I apologize in advance if this is a bit scattered, but I have been up for almost 24 hours thinking about it.

Yesterday, I got into a valuable but heated argument with my head of technical development. I had made some very cool new AI improvements to a neural-net-based expert system, and I was talking a bit of smack to him, as we are very competitive. He then floored me with a simple challenge: "Make a neural net learn addition." I said, "piece of cake," and I made a quick net that took about 2 seconds to derive the simple function. But then I realized that it hadn't learned addition at all and had no "concept" of addition. Worse, it was assuming it already knew the number system, and it was using addition itself to learn addition. I had accomplished nothing.

So, we started digging into it and realized we couldn't do addition until we first learned to count. But we couldn't learn to count until we learned what an object was. We played with it a bit and came up with the first concept being "greater/less than". We decided to start there: we are going to teach it first to recognize greater/less than using images of irregular dots only, then teach it to count, and then teach it addition using nothing but images with no numbers.
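For concreteness, here is a minimal sketch of that first step, assuming nothing beyond raw pixels: a single perceptron is shown pairs of random "dot images" and trained to say which side has more dots. The grid size and all names are made up for illustration; this is not our production system.

```python
import random

random.seed(0)

GRID = 16  # each "image" is 16 binary pixels (a flattened 4x4 grid)

def dot_image(n):
    """A random irregular image containing exactly n dots."""
    pixels = [1] * n + [0] * (GRID - n)
    random.shuffle(pixels)
    return pixels

# A single perceptron over the raw pixels of both images.
# Target: 1 if the left image has more dots than the right one.
w = [0.0] * (2 * GRID)
b = 0.0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(2000):  # plain perceptron updates on random pairs
    left, right = random.randrange(GRID + 1), random.randrange(GRID + 1)
    if left == right:
        continue
    x = dot_image(left) + dot_image(right)
    err = (1 if left > right else 0) - predict(x)
    w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
    b += 0.1 * err

# Evaluate on fresh pairs the net has never seen.
hits = trials = 0
for _ in range(500):
    left, right = random.randrange(GRID + 1), random.randrange(GRID + 1)
    if left == right:
        continue
    trials += 1
    hits += predict(dot_image(left) + dot_image(right)) == (1 if left > right else 0)

accuracy = hits / trials
print(accuracy)  # typically well above 0.9: "greater than" on dot counts is linearly separable
```

Of course, this has the same problem as my addition net: the weights just converge toward "+ on left pixels, - on right pixels," so you could argue it has learned a sum, not a concept of "more."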

We realized that everything we were talking about is simply classification. That fact is really Neural Net 101 stuff. But I never really thought how far that concept extended. Formulas are merely classification. Numbers are classifications. The problem is that greater/less than is also merely classification. So, it must be possible to reduce the problem further.

After thinking about it almost non-stop, I think the answer is symmetry. We are, at birth, simply given a main imperative to find symmetry, as symmetry can define any discrete situation - i.e., once you understand it, it looks the same from all directions. Basically, this means that once you know the rules of the situation, they will always yield the same results. Examples: E=mc^2, solving a puzzle, playing poker, etc.

So, if this is correct, the mind doesn't truly think or create anything. Rather, it simply classifies the environment within which it lives. Thus, while we have more power to classify situations than a computer, we are still just as "dumb", as we are simply executing a BIOS command - classify everything you can.

Where is the flaw in my thinking? If it is flawed, how does a human learn starting from ground zero? How does a baby go from the womb to counting to 10?

Is there anything worth reading on this, as it may all have been thought through before?

almostbusto
12-15-2006, 04:55 PM
Creating a classification is creation.
Classifying something is thinking.

I 'think' you have a very strange definition of 'truly thinking', and I can't really comment on your post until you flesh out what you mean by that.

arahant
12-15-2006, 04:59 PM
[ QUOTE ]
So, if this is correct, the mind doesn't truly think or create anything. Rather, it simply classifies the environment within which it lives.

[/ QUOTE ]

I think you need to clarify what you mean by 'think'. I consider 'thinking' precisely what the mind does, pretty much by definition.

What is it, precisely, that you believe the mind ISN'T doing, that we previously would have thought it WAS doing?

yukoncpa
12-15-2006, 05:27 PM
Hi
Michio Kaku, in his book Visions has a chapter on Machines that Think. He talks about how things such as human emotions are just sophisticated methods of classification.
Link to one page of book found at Amazon.com (http://www.amazon.com/gp/reader/0385484992/ref=sib_dp_pt/103-6222814-5623020#reader-link)

edit - well, my link doesn't go to the page. Not surprising. But you can get an idea of where he is coming from by looking at page 91: do a "search inside this book" and type "Machines that Think".

Utah
12-15-2006, 05:40 PM
[ QUOTE ]
[ QUOTE ]
So, if this is correct, the mind doesn't truly think or create anything. Rather, it simply classifies the environment within which it lives.

[/ QUOTE ]

I think you need to clarify what you mean by 'think'. I consider 'thinking' precisely what the mind does, pretty much by definition.

What is it, precisely, that you believe the mind ISN'T doing, that we previously would have thought it WAS doing?

[/ QUOTE ] Creativity. Thinking of ideas that are unique and invented from whole cloth. Being able to extrapolate correct responses from incomplete or new sets of parameters without using hardcoded rules. We are simply a biocomputer drone with an instruction set and sophisticated input/output capabilities.

We are no different from a computer that can read a single-digit input. It can read that input and execute some output. Given that it knows 1 and 2, it may be able to understand 1.5, because it can use previous classifications. However, the computer has no concept of ice cream.

The human operates the same way. The human mind has no ability to conceptualize anything that is not gathered from inputs - sight, smell, touch, etc. - or is originally hardcoded. In fact, a human can't conceptualize at all. It can merely recall classifications from memory. We have no more ability to break out of our environment than the computer does.

Note: I may be 100% wrong. I just don't see the flaws in the argument yet. But there may be lots of them :)

Rduke55
12-15-2006, 05:45 PM
Unfortunately, I just got back from a wine lunch (holiday season and all :D ), so I'm fairly worthless.
But that's a great question. A lot of people debate that.
If I had to give a one-word answer, though, it would be "gestalt". We have this huge web of ideas, associations, and relationships for a term or object that goes beyond that object's properties or where we classify it - and we classify it one way in one situation and another way in another.
I think people often separate what you are talking about from "thinking" as a difference between process thinking and gestalt thinking. Some of the people working on computer vision are beating their heads against the wall because they are having huge problems getting computers to see the forest instead of the trees.
As to your classification idea, I think Russell and Whitehead had a proposal in which the number n is the set of all possible sets with n members. It eventually collapsed for a number of reasons.
It's a great discussion, though. For some reason we have a subjective, unified whole of an idea of what "nine" is.
I'm rambling. Sorry.

Rduke55
12-15-2006, 05:49 PM
[ QUOTE ]
The human operates the same way. The human mind has no ability to conceptualize anything that is not gathered from inputs - sight, smell, touch, etc. - or is originally hardcoded. In fact, a human can't conceptualize at all.

[/ QUOTE ]

I disagree with that. Why do you say this?

Also, there was a thread that started talking about Kurzweil in SMP that may be relevant.

I'm late for a party so I'll be back later (probably less rational though)

Utah
12-16-2006, 12:43 PM
[ QUOTE ]
The human operates the same way. The human mind has no ability to conceptualize anything that is not gathered from inputs - sight, smell, touch, etc. - or is originally hardcoded. In fact, a human can't conceptualize at all.

I disagree with that. Why do you say this?

[/ QUOTE ]

I say this because I couldn't come up with a concept that wasn't a derivative of known classifications gathered through observation. Sure, I can mix and match and combine and mold. But, nothing new is created.

I can't come up with a concept that is akin to a computer understanding ice-cream.

I could be wrong though. But, I can't think I am wrong until I can see an example of an idea out of whole cloth that cannot be broken down and reduced to earlier classifications.

Can you give some examples?

madnak
12-16-2006, 02:17 PM
You really should read through the Kurzweil thread.

Utah
12-16-2006, 02:39 PM
[ QUOTE ]
You really should read through the Kurzweil thread.

[/ QUOTE ] I just did a quick read, and it is very interesting. After more thought and a deeper reading, I am sure I will have comments. There seem to be some big misconceptions about AI in that thread. Additionally, a new technical computer architecture is needed to solve the CPU/complexity problem. The CPU model is a huge current hurdle, and my company runs into it every day.

However, the thread does not answer my fundamental question of this thread.

Utah
12-16-2006, 02:46 PM
[ QUOTE ]
The human operates the same way. The human mind has no ability to conceptualize anything that is not gathered from inputs - sight, smell, touch, etc. - or is originally hardcoded. In fact, a human can't conceptualize at all.

I disagree with that. Why do you say this?

[/ QUOTE ]

I thought of an example that may shed some light. Not sure.

If my theory is correct, one needs simply to shut off one of the senses and see if a person can invent anything in that realm without prior classification.

So, my question is this: can a person, born 100% deaf, truly conceive of sound? They may be able to read on the topic, and they may see others using sound as a mechanism to communicate. But could they conceive of it? Let's say the person lived in an isolation cell from birth. Would their mind ever conceive of sound on its own? Could they play music in their head and write a composition?

The same type of experiment could be done with eyesight as well.

There are a few problems with the experiment - for instance, the ideas could be encoded in our initial BIOS. However, I think it illustrates the point. Additionally, there must be observational evidence available.

Skidoo
12-16-2006, 02:54 PM
Just because a concept is expressible after the fact in terms of preexisting classifications doesn't mean it was implicit in them beforehand in a way that was discoverable without novel conceptualization.

Utah
12-16-2006, 03:01 PM
[ QUOTE ]
Just because a concept is expressible after the fact in terms of preexisting classifications doesn't mean it was implicit in them beforehand in a way that was discoverable without novel conceptualization.

[/ QUOTE ]

I think that is an interesting point: the classification itself is a discovery, and it is thus thinking. However, I still cannot come up with an example. To qualify and destroy my argument, a classification method must be conceived of whole cloth. It must be equivalent to "sight" or "sound", and the classification cannot come from input sensors.

Please provide example.

Skidoo
12-16-2006, 05:01 PM
It's not so easy to produce an obvious example of a discrete step, because that sort of novelty is more an effect of process, but I'll try.

When calculus was invented, it used the same basic symbol set as algebra with trivial additions, yet the determination of slope at a point, etc., was made precise through new concepts - including the infinitesimal as they used it - which were themselves defined using algebra.

Rduke55
12-16-2006, 06:51 PM
[ QUOTE ]
[ QUOTE ]
Just because a concept is expressible after the fact in terms of preexisting classifications doesn't mean it was implicit in them beforehand in a way that was discoverable without novel conceptualization.

[/ QUOTE ]

I think that is an interesting point: the classification itself is a discovery, and it is thus thinking. However, I still cannot come up with an example. To qualify and destroy my argument, a classification method must be conceived of whole cloth. It must be equivalent to "sight" or "sound", and the classification cannot come from input sensors.

Please provide example.

[/ QUOTE ]

I think you are using criteria of only the senses, which is bound to fail, along with using a "classification method".
I still think concepts, and the relationships between those concepts, are what we are getting at here. Initially, some of the concept might have been due to input. For example, my wife: I see her, etc., but then I also have a concept of her that is not only sensory data but also composed of relationships of internal states - emotions, memories, other people associated with her, and the emotions and memories associated with them - which combine to give a gestalt of my wife.

Also, what about generating language? I'm not talking about the sensory or motor components but the actual creation of abstract concepts in the brain.

Hopefully someone will come along that can talk about this more.

madnak
12-16-2006, 07:25 PM
Because the mind is so networked and interdependent, I don't know if it's possible to consider the subject without "raising a brain in a jar" or something. I don't believe we can know now.

To clarify and follow up on what RDuke said, our concepts seem almost to be like "webs" through our brains. And if there were one web completely independent of sensory input, and another completely dependent on it, the two webs would quickly intertwine and tangle together. If you had a fully functioning, coherent brain in the absence of sensory input, exposure to sensory input would immediately result in its incorporation throughout.

Nielsio
12-17-2006, 05:44 AM
Thought is an extremely complex system, based on extremely simple basic operations. Does that make us dumb or smart? Neither - we remain the same as ever. But don't expect some magic in the workings of the brain. When you break things down, it gets simpler and simpler.

Think about the human body as a system. Pretty impressive, huh? But it's still a collection of pumps, drains, sensors, etc.


In my experience, understanding how nature operates on the small scale while 'accomplishing' these powerful, wonderful things on the big scale is really beautiful.

Utah
12-17-2006, 11:07 AM
[ QUOTE ]
It's not so easy to produce an obvious example of a discrete step, because that sort of novelty is more an effect of process, but I'll try.

When calculus was invented, it used the same basic symbol set as algebra with trivial additions, yet the determination of slope at a point etc was made precise through new concepts, including the infinitesimal as they used it, which were themselves defined using algebra.

[/ QUOTE ]

I don't think this qualifies, but I am not sure yet. First, the idea of trying to determine something such as the area of space under a curve is merely an exercise in classification using previously known classifications based on sensory input. Second, I believe a "dumb" computer could theoretically "invent" calculus.

I still want to find something that fits the form:

Ice cream is to computers as {blank} is to humans. I want the human to be able to conceptualize {blank}.

Utah
12-17-2006, 11:37 AM
[ QUOTE ]
I think you are using criteria of only the senses, which is bound to fail, along with using a "classification method".
I still think concepts, and the relationships between those concepts, are what we are getting at here. Initially, some of the concept might have been due to input. For example, my wife: I see her, etc., but then I also have a concept of her that is not only sensory data but also composed of relationships of internal states - emotions, memories, other people associated with her, and the emotions and memories associated with them - which combine to give a gestalt of my wife.

Also, what about generating language? I'm not talking about the sensory or motor components but the actual creation of abstract concepts in the brain.

Hopefully someone will come along that can talk about this more.

[/ QUOTE ]

The language question is interesting. Of course, language in and of itself is a classification system. However, the concept of communication is a unique concept devoid of classification. My MIT brother-in-law is very interested in languages, and this weekend we were talking about this "thinking" problem and he commented:

....you might be interested to know (or might enjoy reflecting on the fact) that in most languages, the word for one part in three is the same as the word for number three in a list ("third" in English), but the word for one part in two is not the same as the word for number two in a list ("half" vs. "second"). I read something that suggested that this difference tells us about the evolution of the concepts of "two," "three," and order/division, with two predating order, but three not...

The gestalt comment is also interesting, but why can't a computer have a gestalt of your wife? Is not a gestalt simply a classification of the current situation using new and old classification values and nothing more? Is your reaction to these classifications, i.e., your emotions, simply akin to a computer output... {if X=14, Y=5, Z=3 then emotion = "annoyed" and action = "go to the bar with friends to get away from nagging"}? You may be mixing and matching all kinds of classifications, and some of them will seem abstract to bring together. But that fact doesn't really change the model (I don't think it does, anyway).
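To be explicit about the model I mean, here is my braced pseudo-rule written out literally: "emotion" as nothing but a lookup on already-classified state values. The names and numbers are made up for illustration - this is not a real model of emotion, just the shape of the claim.

```python
# "Emotion" as a pure lookup on classified state values.
# Hypothetical values only - the point is the structure, not the numbers.
def react(x, y, z):
    if (x, y, z) == (14, 5, 3):
        return "annoyed", "go to the bar with friends to get away from nagging"
    return "neutral", "carry on"

emotion, action = react(14, 5, 3)
print(emotion, "->", action)
```

If the mind is just classification, the only difference between this and us is the number of rules and where they came from.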

We think we can "conceptualize" and we think we are self-aware. But is that true? Is it possible we are simply classifying ourselves, since our sensory mechanisms allow us to do so (e.g., we see object=us in the mirror)?

Is a computer today not self-aware? If I ask one of my 5th-generation ProLiant servers what it is, it will tell me, "I am a super-fast machine with x processors and x amount of memory and components x, y, z." Let's say I were able to attach an eye to the computer so that it could see itself. Could the computer not recognize itself when it views a location (given it could see itself from the placement of the eye)? I think a computer could very quickly learn to become self-aware, to the degree that we are self-aware, given the proper sensory input mechanisms.

I think a computer brain could be developed fairly easily using the concept of a "net computer" (as opposed to a CPU-based computer). However, I think the big problem is a matter of sensory input.

madnak
12-17-2006, 11:48 AM
Self-awareness is another subject.

But the brain really isn't an I/O machine, or at least it's not useful to describe it as such. Of course it's possible to represent it that way, which I think is a major source of confusion. But it's possible to represent any system as an I/O machine - take any region of space and you can represent it as a "black box" processing its "inputs" and turning them into "outputs" according to some system. But while the computer functionally corresponds very strictly to that model, a brain doesn't correspond to it any more than a hurricane or a honeybee.

And virtually no mechanism of the brain is "simple" as Nielsio describes. This is because no mechanism of the brain functions independently (but also because even a single protein can be very complex despite an apparently simple function).

The interconnectedness of the brain is one of the big reasons why it doesn't function like a computer. In a computer, areas of memory are by definition bounded, and program instructions are by definition discrete. You can call a computer "interconnected" in the sense that its discrete elements are linked to one another in various patterns, but in the brain there are no discrete elements in the first place! That's why the only way to fully understand the brain is through either a gestalt perspective or a perspective of fundamental particles: beyond the level of molecules it becomes impossible to divide the brain into discrete units - molecules are the smallest "component parts" - and moreover, the molecules themselves aren't quite as "stable" as they may seem, particularly polypeptide molecules.

BenzeneBird
12-17-2006, 11:52 AM
[ QUOTE ]
I still think concepts, and the relationships between those concepts, are what we are getting at here. Initially, some of the concept might have been due to input. For example, my wife: I see her, etc., but then I also have a concept of her that is not only sensory data but also composed of relationships of internal states - emotions, memories, other people associated with her, and the emotions and memories associated with them - which combine to give a gestalt of my wife.

[/ QUOTE ]

I can't imagine that your internalised recollection of senses in relation to your wife - such as your memory of emotions associated with her, or your inference of other people's - comes to exist as some independent greater whole (or gestalt). Does your concept of her exist in any form other than those of the senses, whether emotional or otherwise?

Rduke55
12-17-2006, 11:52 AM
[ QUOTE ]
Of course, language in and of itself is a classification system. However, the concept of communication is a unique concept devoid of classification.

[/ QUOTE ]

I think language and communication may be the best example for you.

[ QUOTE ]
Is not a gestalt simply a classification of the current situation using new and old classification values and nothing more?

[/ QUOTE ]

I would say not. While the things you have a gestalt of can be classified, sometimes the gestalt itself defies classification.

[ QUOTE ]
Is your reaction to these classifications, ie, your emotions, simply akin to a computer output....{if X=14, Y=5, Z=3 then emotion = "annoyed" and action to take equals "go to the bar with friends to get away from nagging"}.

[/ QUOTE ]

That would be the action that results from the emotion - not the emotion itself.

Also, what about internal drives or motivations?

[ QUOTE ]
Let's say I were able to attach an eye to the computer so that it could see itself. Could the computer not recognize itself when it views a location (given it could see itself from the placement of the eye)? I think a computer could very quickly learn to become self-aware, to the degree that we are self-aware, given the proper sensory input mechanisms.

[/ QUOTE ]

I think you could get it to react to itself in some way, but I think it's a pretty big jump from there to "self".

[ QUOTE ]
I think the big problem is a matter of sensory input.

[/ QUOTE ]

It's not the input part that's the problem - it's the processing part.

Rduke55
12-17-2006, 11:56 AM
[ QUOTE ]
I can't imagine that your internalised recollection of senses in relation to your wife, such as your memory of emotions associated with her or the inference of other people's, come to exist as some independent greater whole (or gestalt). Does your concept of her exist in any other form but those of senses, whether emotional or otherwise?

[/ QUOTE ]

I think it may be very difficult to imagine or describe something you have a gestalt of in other terms, but I think that's kind of the point: it defies classification in those terms.
I think the concept does exist in other terms, but it was originally dependent on sensory input to some degree (you can't communicate without hearing, etc.).

BenzeneBird
12-17-2006, 12:03 PM
Is there a purpose for a gestalt if you can only conceive of it in its basic components, as I do when I think of someone?

I suppose I may not be completely self-aware in my own recollections or thoughts. Perhaps this idea of gestalt also alludes to the structure of the brain itself, previously mentioned as being undefined. Although that seems a bad assumption - maybe you should have stopped reading this.

Brainwalter
12-17-2006, 12:29 PM
[ QUOTE ]
You can call a computer "interconnected" in the sense that its discrete elements are linked to one another in various patterns, but in the brain there are no discrete elements in the first place!

[/ QUOTE ]

I'm no expert on neurology: is the neuron not discrete? I know in CS they do algorithmic neural modelling sometimes, with the "neurons" as discrete processing centers. Obviously that is an abstraction/simplification, and I don't think human neurons follow an algorithmic process that we could encode given the present knowledge of neurology.

Can you show that the neuron is not a discrete processing unit which could (eventually/theoretically) be represented as a Turing machine?
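For concreteness, the CS abstraction I mean is something like this - a "neuron" as a discrete weighted-sum-and-threshold unit (the classic McCulloch-Pitts simplification, not a claim about real neurons):

```python
def unit(inputs, weights, bias):
    """One abstract 'neuron': a weighted sum followed by a hard threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

# Even a logic gate falls out of a single unit, e.g. AND:
def and_gate(a, b):
    return unit([a, b], [1.0, 1.0], -1.5)
```

Each unit is perfectly discrete and trivially Turing-representable; the question is whether a real neuron is.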

madnak
12-17-2006, 12:40 PM
It could in theory, just as any region of space could in theory, but it would be impractical. See the post in the other thread where I talked about cell membranes. There is really no line for where the neuron begins or ends - most people would want to use the membrane as that line, which is why I talked about why that doesn't work.

Also, the interactions of a neuron with its environment aren't always simple, and sometimes they involve both the internal and external environments as well as other factors. Finally, the brain isn't just a connection of neurons - glial cells have been shown to influence things like neurotransmitter concentrations, for example, and a recent study performed by undergrads at my very school showed that the vision of some insects is affected by the removal of "white matter." And there's stuff like circulation, etc.

Treating neurons as discrete units doesn't strike me as a very useful approach in terms of simulating them - it's useful conceptually, of course, but computationally? I doubt it.

Zygote
12-17-2006, 12:52 PM
[ QUOTE ]
Some of the people working with computer vision are beating there heads againt the wall because they are having huge problems getting computers to see the forest instead of the trees.

[/ QUOTE ]

Not Numenta's (http://numenta.com/) technology. Hierarchical temporal memory with inference algorithms can easily accomplish these tasks.

Their programs are based on the findings and formulations of the Redwood Center for Theoretical Neuroscience (http://redwood.berkeley.edu/).

Their mission and research: http://redwood.berkeley.edu/wiki/Mission_and_Research

According to Tony Bell, they should have a full unified theory of the brain within 5 years.

Utah
12-17-2006, 01:58 PM
[ QUOTE ]
[ QUOTE ]
Some of the people working with computer vision are beating there heads againt the wall because they are having huge problems getting computers to see the forest instead of the trees.

[/ QUOTE ]


Not Numenta's (http://numenta.com/) technology. Hierarchical temporal memory with inference algorithms can easily accomplish these tasks.

Their programs are based on the findings and formulations of the Redwood Center for Theoretical Neuroscience (http://redwood.berkeley.edu/).

Their mission and research: http://redwood.berkeley.edu/wiki/Mission_and_Research

According to Tony Bell, they should have a full unified theory of the brain within 5 years.

[/ QUOTE ]

Interesting stuff. I thought their comments on unsupervised learning were interesting. It seems to me that most learning is supervised, so intuitively I would disagree with them. However, since I am guessing they have spent more than 5 minutes thinking about the issue, there must be some validity to their approach :)

Rduke55
12-18-2006, 12:33 PM
[ QUOTE ]
Hierarchical temporal memory with inference algorithms can easily accomplish these tasks.

[/ QUOTE ]

I think "easily accomplished" is way off.
Last I saw, they were still having big problems with that stuff: combinatorial problems, problems with learning and memory scaling; it still classifies images based on small details, and it has trouble IDing a rotated or relocated image as one of the originals, etc.

Maybe someone more knowledgeable about these problems could weigh in on this.

(Although I do need the disclaimer that I really like these people's approach)

[ QUOTE ]
According to Tony Bell, they should have a full unified theory for the brain within 5 yeras.

[/ QUOTE ]

Good for him. I look forward to it :P

P.S. I meant to get back to you on this in the Kurzweil thread - sorry.

Utah
12-18-2006, 01:23 PM
[ QUOTE ]
it still classifies images based on small details, and it has trouble IDing a rotated or different location image as one of the originals, etc.

[/ QUOTE ]

I know nothing about the real work in these areas. However, just playing around with it the last couple of days and running some experiments with my wife, I keep coming back to the idea that everything must start with object identification. Is that correct?

I am looking at my lamp now, and I can easily classify it in about 50 ways. But I don't think it is discrete classification. Rather, it seems to me that I am constantly switching what I view the lamp as, based on the need I have for the classification. In fact, the lamp can simply be a component of other objects - the table it is on, the room it is in, the center of the room, etc. So, trying to classify the lamp seems wrong, as the object is transient.

Also, all my classifications break down to simple classification systems - size, location, color, shape, etc. Given that, I think a computer could very easily "paint" a room with all possible classifications if it simply avoided trying to define objects. There are merely states, so to speak. Thus, rotating the object to see it from the other side becomes quite easy, because I am not rotating "lamp" - I am rotating "circle".
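Something like this is what I mean by painting the room with states instead of objects - a toy sketch, with all values made up; "lamp" is never stored anywhere:

```python
# The scene is stored only as bundles of primitive states - no "lamp" label.
scene = [
    {"shape": "cylinder", "color": "red", "size": 3, "location": (2, 5)},
    {"shape": "box", "color": "brown", "size": 8, "location": (2, 4)},
    {"shape": "circle", "color": "red", "size": 1, "location": (0, 0)},
]

def classify(scene, **criteria):
    """Pick out whatever matches a transient, need-driven combination of states."""
    return [e for e in scene if all(e.get(k) == v for k, v in criteria.items())]

red_things = classify(scene, color="red")
small_circles = classify(scene, shape="circle", size=1)
print(len(red_things), len(small_circles))
```

The "object" is just whichever combination of states I happen to query at the moment, which is why rotation stops being special.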

I can think of some cool experiments to run with object identification and neural nets using pictures of my lamp, if I can find some time. However, I could run into huge performance issues: a while back I wrote some simple face-recognition nets that could identify my daughters, but when I expanded the scope even a little bit, it crashed my computer. I have some very powerful computers now that I didn't have then, but if the problem is exponential they may not help.

Again, I am way out of my realm here and I have never really thought of this problem before. So, I may be treading old ground or missing something obvious.

Rduke55
12-18-2006, 02:22 PM
[ QUOTE ]

I know nothing about the real work in these areas. However, just playing around with it the last couple of days and running some experiments with the wife, I keep coming back to the idea that everything must start with object identification. Is that correct?

[/ QUOTE ]

I would say sometimes. That's another tough question.

[ QUOTE ]
I am looking at my lamp now, and I can easily classify it in about 50 ways. But I don't think it is discrete classification. Rather, it seems to me that I am constantly switching what I view the lamp as, based on the need I have for the classification. In fact, the lamp can simply be a component of other objects - the table it is on, the room it is in, the center of the room, etc. So, trying to classify the lamp seems wrong, as the object is transient.

[/ QUOTE ]

I think this is getting to what we were trying to flesh out earlier.

[ QUOTE ]
Also, all my classifications break down to simple classification systems - size, location, color, shape, etc.

[/ QUOTE ]

But then, here's the problem. That's what computers have been doing, and that's not just what the brain does (although it gathers that information).

[ QUOTE ]
Given that, I think a computer could very easily "paint" a room with all possible classifications if it simply avoided trying to define objects. There are merely states so to speak. Thus, rotating the object to see from the other side becomes quite easy because I am not rotating "lamp" I am rotating "circle"

[/ QUOTE ]

But the problem the computer vision folk have been having is that it can rotate the object fine, but then it doesn't recognize it as, say, a lamp.

[ QUOTE ]
I can think of some cool experiments to run with object identification and neural nets using pictures of my lamp if I can find some time. However, I could run into huge performance issues as a while back I wrote some simple face recognition nets that could identify my daughters but when I expanded scope even a little bit it crashed my computer.

[/ QUOTE ]

Faces are the holy grail of this research. I'm not surprised that that crashed your system.

[ QUOTE ]
However, I have some very powerful computers now that I didn't have then. But if the problem is exponential they may not help.

[/ QUOTE ]

While the problem may be exponential, a big problem is not just processing power, but connectivity. The same problem is studied in brain evolution. After all, for bigger brains, you can't just slap a bunch of tissue up there and expect it to work. You have to look at things like absolute and proportional connectivity, etc.
These ideas are very much why brains are the way they are (with the subdivisions of the cortex for example). And I think this is the scaling problem some of these programs have.
I don't remember who said it, but someone discussing AI and processing power put it something to the effect of "you're not making smarter computers, you're just making dumb computers faster."
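The connectivity point above can be made concrete with a little arithmetic (all numbers here are illustrative, not taken from any real brain or system): in a fully connected net the number of connections grows quadratically with the number of units, while a modular design with sparse links between modules grows far more slowly, which is one reason "just add more units" stops working.

```python
# Sketch: connection counts for a flat fully connected net vs. a modular one.
# Purely illustrative numbers; no claim about real neural tissue.

def full_connections(n_units: int) -> int:
    """Every unit connects to every other unit: O(n^2) directed connections."""
    return n_units * (n_units - 1)

def modular_connections(n_modules: int, units_per_module: int,
                        links_between_modules: int) -> int:
    """Dense wiring inside each module, sparse wiring between modules."""
    within = n_modules * full_connections(units_per_module)
    between = full_connections(n_modules) * links_between_modules
    return within + between

n = 10_000
flat = full_connections(n)                   # 99,990,000 connections
modular = modular_connections(100, 100, 10)  # same 10,000 units: 1,089,000
print(flat, modular, flat // modular)
```

Same unit count, roughly two orders of magnitude fewer wires; something like this is presumably part of why cortex is subdivided rather than uniformly connected.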

Utah
12-18-2006, 04:19 PM
Just some stream of thought rambling....

[ QUOTE ]
But then, here's the problem. That's what computers have been doing and that's not just what the brain does (although it gathers that information)

[/ QUOTE ]

Do we know what the brain does? How does the brain learn and handle non-continuous functions? I figured out a way to handle this in AI, but the brain must have a mechanism as well. I keep thinking the brain must have some parallel learning methods. Or, oddly, it may have discrete learning units that are not directly connected in the learning process - i.e., it learns "red" discretely from "blue" and the two are not connected.

Here is one of the simple experiments I ran on my wife. I tapped her and said “2” I tapped her a little harder and said “4”. I tapped her much harder and said, “10”. I then said “6” and I asked her what she expected. She said a tap between 4 and 10. I then tapped her at a 6 level.

I then did it with colors. I tapped her lightly and said, “green”. I tapped her harder, at about the “10” strength from before, and said, “red”. I then said “purple” and asked her what she expected.

In the second instance, she cannot value purple directly because it is not continuous. Knowing “green” and “red” does not help her at all. What she can do is reduce the problem to a different classification form, which is what she did: she said she expected a hit in the middle, since that is what I did with the numbers. I tapped her in between the two previous strengths.

Now, if I repeated the first experiment and finished with “15”, she would likely be scared because she might think she was going to be tapped really, really hard. She would disregard the fact that the last two times the final tap was in between. Or, she may be confused because she has conflicting classifications – one based on a number scale and one based on pattern.

2 questions:

1) Do we know how the brain operates when confronted with the above situation?
2) How does the brain handle non-continuous functions? How does it learn when confronted with them? It cannot borrow directly from other values, because a scale is impossible, although it can reduce and make assumptions (such as setting general expectations, ranges, etc.). It is in these types of problems that I think the brain shines and where the power lies.

Strange thought out of the blue – do people of other races all look alike to us because we don't know how to classify them as well, so we reduce their form in our minds?

If I broke the learning up into millions of discrete/dynamic neural nets and set them up in some wickedly cool hierarchical or interconnected pattern, then I think I could blow away anything done in AI today. While we focus on technical data-yield problems and not theoretical AI issues, my little company can already run 300,000+ simultaneous discrete neural nets (separate data sets, net architectures, etc.) in real time. However, we don't interconnect them, as there is no need for our problem. Still, it would be really cool to assign each one some small learning component of the lamp problem (color, location relationship, shape, size, etc.) and then to run nets on top of nets with some sort of instruction set.
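The "nets on top of nets" architecture can be sketched structurally (every name and rule here is a stub I made up for illustration; in the real setup each entry would be a separately trained net): a bottom layer of independent attribute classifiers, and a combiner that only ever sees their outputs, never the raw input.

```python
# Structural sketch of "nets on top of nets": many small independent
# classifiers, one per attribute, feeding a higher-level combiner.
# All classifiers are hand-written stubs standing in for trained nets.

from typing import Callable, Dict

Scene = Dict[str, float]  # toy stand-in for raw sensor input

# Bottom layer: each net learns one attribute in isolation
# ("red" is learned discretely from "blue", as in the post).
attribute_nets: Dict[str, Callable[[Scene], float]] = {
    "color":    lambda s: s["hue"],
    "size":     lambda s: s["height"] * s["width"],
    "location": lambda s: s["x"],
}

def top_level_net(features: Dict[str, float]) -> str:
    """Combiner: classifies from attribute outputs, never raw input."""
    # Stub rule standing in for a trained top-level net.
    if features["size"] > 1.0 and features["color"] < 0.2:
        return "lamp"
    return "not-lamp"

scene = {"hue": 0.1, "height": 2.0, "width": 0.8, "x": 3.0}
features = {name: net(scene) for name, net in attribute_nets.items()}
print(top_level_net(features))  # lamp
```

The design choice worth noting is the interface: because the top net sees only a small dictionary of attribute outputs, the bottom nets can be trained, replaced, or scaled independently, which is what makes running hundreds of thousands of discrete nets plausible at all.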

[ QUOTE ]
But the problem the computer vision folk have been having is that it can rotate it fine but then it doesn't recognize it as a lamp, say.

[/ QUOTE ]

But isn't it possible that the problem is in the rotation itself? Rotating assumes continuity, and to assume continuity I believe I need scalable functions to keep the lamp “in focus”. But if I drop that need for continuity, I have a lot more flexibility. The lamp seen from the other side is simply a different classification problem.

Now, that leads to the question of being able to identify “the same lamp”. But that is heavily tied to the environment: if I drive to my friend’s house right now and see an identical lamp, I will not assume that it is the lamp I just looked at.
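The split proposed above can be sketched as two separate mechanisms (all names and stub rules here are invented for illustration): each viewpoint gets its own independent classifier with no shared 3-D model, and "same object" becomes a question about context rather than about vision.

```python
# Sketch: drop the continuity assumption. Each viewpoint is its own
# classification problem, and object identity lives in the environment.
# Stub classifiers; an image is modeled as a set of detected features.

view_classifiers = {
    "front": lambda image: "lamp" if "shade" in image else "unknown",
    "back":  lambda image: "lamp" if "cord" in image else "unknown",
}

def classify(view: str, image: set) -> str:
    """No rotation step: just pick the classifier for this viewpoint."""
    return view_classifiers[view](image)

def same_object(label_a: str, place_a: str, label_b: str, place_b: str) -> bool:
    """Identity is label plus context, not appearance alone."""
    return label_a == label_b and place_a == place_b

print(classify("front", {"shade", "base"}))  # lamp
# An identical lamp at a friend's house is not "the same lamp":
print(same_object("lamp", "my room", "lamp", "friend's house"))  # False
```

This matches the claim in the post: recognition never needs to rotate anything, and the "same lamp" question is answered by where the lamp is, not by what it looks like.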

One last thought – there seems to be an interesting problem in the idea of “the same object”. I have a log sitting next to me on the fireplace; it has a definite shape and size, and I can identify it very easily among all the other objects in the room. Yet, if someone took it out and placed it somewhere in the woodpile, I could no longer identify it, even if I were able to pick up every single piece in the pile and examine it. Even though I have a very strong concept of the log now, something is lost when it is placed in the woodpile. Sure, all the other logs look somewhat similar. However, why do I lose the one that I can so clearly see and conceptualize now?