
View Full Version : Futurists/Ray Kurzweil


CrayZee
11-29-2006, 07:38 PM
So I bumped into an online lecture (http://www.ted.com/tedtalks/) by Ray Kurzweil. Seemed interesting enough.

Tell me what you think of this guy. Is his stuff worth reading or is he kooky?

While the robot chickens probably won't dominate the earth by 2203, I still have a hard time believing the short time frame in which he thinks people will be able to achieve much longer life spans. Perhaps this is a misunderstanding/underappreciation of the exponential gains in technology on my part.

Rduke55
11-29-2006, 08:01 PM
He's pretty kooky.
He's making HUGE uninformed assumptions about a lot of things because of his knowledge of technology. He's pretty weak on how the brain actually works.
Actually it seems his knowledge of biology as a whole is shaky.

I think there's a bunch of criticism on the net on a bunch of his ideas from some experts in various fields.

Metric
11-29-2006, 08:32 PM
Some of his specific predictions are a bit "out there", but I think he has an excellent point about the exponential growth of technology -- it is virtually guaranteed to have profound effects on humanity. I agree though -- some of the specific timetables he outlines may be a bit, shall we say, "optimistic."

Phil153
11-29-2006, 08:50 PM
Only the speed and storage of computer technology have been growing exponentially, and at some point in the not-too-distant future they will reach fundamental physical barriers.

Computer technology today is identical in principle to the first big vacuum-tube machines of the 1950s, just faster and smaller. There has been virtually no progress in a computer's ability to think or learn, just to solve certain very narrow computational problems more rapidly.

Ray Kurzweil (and all the singularists, who crack me up) are way off track.

CrayZee
11-29-2006, 09:31 PM
[ QUOTE ]

He's making HUGE uninformed assumptions about a lot of things because of his knowledge of technology. He's pretty weak on how the brain actually works.
Actually it seems his knowledge of biology as a whole is shaky.

[/ QUOTE ]

I'm supremely weak when it comes to chemistry, biology, etc., so I could be a sucker for unwittingly agreeing with this stuff. I mean his lecture seemed logical, but yeah, bad assumptions lead to garbage in, garbage out problems.

Speaking of which, I wonder if a lot of technology guys have this black box-type of thinking bias that extends to areas they know less about. It is tempting to infer things when you don't look inside the box...

vhawk01
11-29-2006, 10:38 PM
[ QUOTE ]
Only the speed and storage of computer technology have been growing exponentially, and at some point in the not-too-distant future they will reach fundamental physical barriers.

Computer technology today is identical in principle to the first big vacuum-tube machines of the 1950s, just faster and smaller. There has been virtually no progress in a computer's ability to think or learn, just to solve certain very narrow computational problems more rapidly.

Ray Kurzweil (and all the singularists, who crack me up) are way off track.

[/ QUOTE ]

He does make a good point in his book along these lines, although I don't know for sure if it's what you are saying exactly. What type of thing would a computer have to be able to do for you to consider it some fundamental leap? And if some computer was able to do this, how likely do you think it would be that people would say what you just said about THAT accomplishment and move the bar further?

I don't really know enough about the way the mind works to have any useful opinion on this. But it seems like raw computing power increases may, at least theoretically, be enough.

Metric
11-29-2006, 10:40 PM
Actually, if you plot human technological milestones on a logarithmic graph (almost independent of your choice of milestones, within reason), you find that the exponential trend goes back much farther than computer technology alone. This, I believe, is probably Kurzweil's most convincing and important point.

madnak
11-29-2006, 11:03 PM
[ QUOTE ]
He's pretty kooky.

[/ QUOTE ]

And "technological progress" can't be quantified, nor does it fit the curve his proponents suggest.

John Feeney
11-30-2006, 12:40 AM
[ QUOTE ]
Actually, if you plot human technological milestones on a logarithmic graph (almost independent of your choice of milestones, within reason), you find that the exponential trend goes back much farther than computer technology alone. This, I believe, is probably Kurzweil's most convincing and important point.

[/ QUOTE ]

Metric -- What makes the trend "exponential"? I mean, it's easy enough to see how populations and some other things do grow exponentially, but how do we define it for technology? Or does he just mean big, impressive leaps and progress?

vhawk01
11-30-2006, 12:47 AM
I am pretty sure Kurzweil measures computations/s versus time in most of his graphs.

However, for the ones I think you guys are thinking of, he basically (unscientifically, to say the least) polled a bunch of experts on paradigm shifts, and plotted them versus time. So while it's definitely not like cps, he was trying to show that the rate of groundbreaking discoveries is increasing exponentially.

Metric
11-30-2006, 06:26 AM
The exponential graphs referred to are basically the rate of achieving big technological milestones vs. time. So you might think this depends a bit on what you select as the most important technological milestones to include in your graph -- but apparently Kurzweil has taken something like a dozen different lists (from different authors) of "most important technological milestones" and they all fit on the same (slightly fattened) exponential curve.
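
A rough sketch of the kind of plot being described, in Python; the milestone dates below are made-up placeholders rather than Kurzweil's actual lists, and the only point illustrated is that successive gaps shrinking roughly geometrically shows up as a roughly straight line on a log scale:

    import math

    # Hypothetical milestone dates, in years before present (placeholders only).
    milestones_bp = [100_000, 40_000, 10_000, 3_000, 500, 150, 60, 20, 5]

    # Gap between successive milestones; the claim on these charts is that
    # the gaps shrink roughly geometrically over time.
    gaps = [earlier - later for earlier, later in zip(milestones_bp, milestones_bp[1:])]
    for gap in gaps:
        print(f"{gap:>8} years   log10 = {math.log10(gap):.2f}")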

Phil153
11-30-2006, 06:33 AM
[ QUOTE ]
What type of thing would a computer have to be able to do for you to consider it some fundamental leap? And if some computer was able to do this, how likely do you think it would be that people would say what you just said about THAT accomplishment and move the bar further?

[/ QUOTE ]
A computer would have to be made of a material that can form a self-organizing system, building up layers of meaningful abstraction from some input. At the moment computers are a bunch of switches that people can flick on and off. Nothing more. There is no innate capacity for intelligence or learning and there never can be with this structure. A capacity for learning needs to be tied to the material itself to become possible.

Building this material will likely prove to be as difficult as designing a bird's brain from carbon, oxygen, nitrogen and hydrogen. We are centuries away from the level of complex understanding required to build such a structure, let alone the engineering skills.

As soon as a computer can satisfy the above, a singularity becomes theoretically possible. Though it may well be limited by other constraints.

Metric
11-30-2006, 07:53 AM
I'm still not quite clear on why you believe this:

[ QUOTE ]
At the moment computers are a bunch of switches that people can flick on and off. Nothing more. There is no innate capacity for intelligence or learning and there never can be with this structure. A capacity for learning needs to be tied to the material itself to become possible.

[/ QUOTE ]

If a "bunch of switches" computer can simulate and reproduce all the relevant functions of a single neuron, why shouldn't a sufficiently large computer be able to simulate all the emergent and hard to understand functions of a human brain? Is it your position that a single neuron does something inherently non-computable?

John21
11-30-2006, 01:44 PM
Robot Discovers Itself, Adapts to Injury:
So Cornell researchers have built a robot that works out its own model of itself and can revise the model to adapt to injury. First, it teaches itself to walk. Then, when damaged, it teaches itself to limp.

Article (http://www.physorg.com/news82910066.html)

vhawk01
11-30-2006, 02:52 PM
[ QUOTE ]
Robot Discovers Itself, Adapts to Injury:
So Cornell researchers have built a robot that works out its own model of itself and can revise the model to adapt to injury. First, it teaches itself to walk. Then, when damaged, it teaches itself to limp.

Article (http://www.physorg.com/news82910066.html)

[/ QUOTE ]
Fascinating, thank you John.

Rduke55
11-30-2006, 05:55 PM
[ QUOTE ]
If a "bunch of switches" computer can simulate and reproduce all the relevant functions of a single neuron, why shouldn't a sufficiently large computer be able to simulate all the emergent and hard to understand functions of a human brain? Is it your position that a single neuron does something inherently non-computable?

[/ QUOTE ]

I think the issue is that the people doing the models (or anyone, really) don't really know all the details. The word "relevant" is the big problem here.
Neurons often process information in ways very different from computers (among other gigundous differences). I think it would make an interesting discussion whether or not computers can simulate them closely enough for some of these problems, but right now they can't IMO.
I think this also relates to vhawk's earlier processing-power post in this thread.

Rduke55
11-30-2006, 05:57 PM
[ QUOTE ]
Speaking of which, I wonder if a lot of technology guys have this black box-type of thinking bias that extends to areas they know less about. It is tempting to infer things when you don't look inside the box...

[/ QUOTE ]

I think this is a very good statement.

oneeye13
11-30-2006, 05:58 PM
if the predicted exponential growth in world population had kept its pace, there would be more people to read his nonsense

Metric
11-30-2006, 06:08 PM
I'm not arguing that "all the details" or "every relevant function" is known. I'm merely arguing that whatever a neuron does, it ought to be simulatable on a computer composed of a "bunch of switches," assuming that nothing inherently non-computable is going on. Is this what you are objecting to?

Rduke55
11-30-2006, 06:14 PM
[ QUOTE ]
I'm not arguing that "all the details" or "every relevant function" is known. I'm merely arguing that whatever a neuron does, it ought to be simulatable on a computer composed of a "bunch of switches," assuming that nothing inherently non-computable is going on. Is this what you are objecting to?

[/ QUOTE ]

Yes, because people seem to think that since the action potential is digital in nature, a neuron's input and output must be digital too. It's waaaaay more than that. Neuron-to-neuron communication has a lot of other stuff going on that can't be broken down to these switch analogies IMO.
You'd need a kind of analog computer maybe.

Rduke55
11-30-2006, 06:20 PM
Separately, I'd imagine that the high level of connectivity in the brain is a physical barrier to silicon-based computers.

Skidoo
11-30-2006, 06:24 PM
[ QUOTE ]
Robot Discovers Itself, Adapts to Injury

[/ QUOTE ]

What a robot hasn't come close to doing is producing a creative output that can't be accounted for in terms of its inputs. As far as possessing anything like a mind, these machines are scarcely more advanced than a pile of rocks.

soon2bepro
11-30-2006, 06:25 PM
[ QUOTE ]
Only the speed and storage of computer technology have been growing exponentially, and at some point in the not-too-distant future they will reach fundamental physical barriers.

Computer technology today is identical in principle to the first big vacuum-tube machines of the 1950s, just faster and smaller. There has been virtually no progress in a computer's ability to think or learn, just to solve certain very narrow computational problems more rapidly.

Ray Kurzweil (and all the singularists, who crack me up) are way off track.

[/ QUOTE ]

You don't understand.

Exponential growth happens because knowledge builds on knowledge, just as technology builds on technology. We wouldn't be able to do some of the latest research in any field if it wasn't for the accumulated knowledge in those fields. Plus the average scientist becomes smarter and more capable.

etc, etc.

Certainly there can be a peak point, and there will be periods with more advances and periods with less, but there's no reason to believe we will reach the peak anytime soon. There's just so much about the universe we can't even begin to understand.

Open your mind. Understand that what SEEMS impossible today may be possible tomorrow. Think about what someone from the 19th century would say if you mentioned cellphones, internet, QM and whatnot.

Metric
11-30-2006, 06:26 PM
So you should probably mention what these processes are that you believe to be non-simulatable on a digital computer. I'm well aware of the fact that a neuron does more than just depolarize, but I see no reason to believe that what it does is inherently non-predictable by a digital computer.

Borodog
11-30-2006, 06:31 PM
The idea that computers will not be able to achieve the level of complexity of the human brain is silly on the face of it. The brain itself is prima facie evidence that it is possible.

Rduke55
11-30-2006, 06:38 PM
[ QUOTE ]
So you should probably mention what these processes are that you believe to be non-simulatable on a digital computer. I'm well aware of the fact that a neuron does more than just depolarize, but I see no reason to believe that what it does is inherently non-predictable by a digital computer.

[/ QUOTE ]

OK, I'm starting to go afield of my expertise, but one snag at that level is that things like the strengths of connections between neurons are dynamic and would have to be represented by continuous real numbers.
Also, the brain has amazing network processing that hasn't been modelled (not for lack of trying). While the brain does have dedicated processing centers for some aspects of - say - visual information, it's the blending and integration of really diverse information in the networks that parallel processing, etc. in modern computers can't do. There's some literature on these kinds of problems, both hypothetical and experimental, in the vision-processing modelling literature. I can't recommend the best papers in that field, but searching for some modelling-of-perception stuff may get you there.

Skidoo
11-30-2006, 06:39 PM
[ QUOTE ]
The idea that computers will not be able to achieve the level of complexity of the human brain is silly on the face of it. The brain itself is prima facie evidence that it is possible.

[/ QUOTE ]

The existence of a brain is not evidence that a brain can create a brain.

Rduke55
11-30-2006, 06:39 PM
[ QUOTE ]
The idea that computers will not be able to achieve the level of complexity of the human brain is silly on the face of it. The brain itself is prima facie evidence that it is possible.

[/ QUOTE ]

But we're talking about digital computers here.

P.S. Logic. (Pffft)

Borodog
11-30-2006, 06:48 PM
[ QUOTE ]
[ QUOTE ]
The idea that computers will not be able to achieve the level of complexity of the human brain is silly on the face of it. The brain itself is prima facie evidence that it is possible.

[/ QUOTE ]

But we're talking about digital computers here.

[/ QUOTE ]

No, we're not. We're simply talking about artificial constructs. We don't need to limit it to any particular technology. You could claim that artificial constructs could never fly if you limited the discussion to things built out of bricks and mortar.

Edit: Sorry, I should add that *I'm* not restricting my thinking about artificial brains to digital computers; I can't speak for anybody else in the discussion that I stepped into the *middle* of.

But I'd also like to add that I see no reason whatsoever that a digital computer would be somehow inherently incapable of simulating the workings of a neuron, or the connection between neurons. Basically, if the workings of something can be described, they can be simulated.

Metric
11-30-2006, 06:50 PM
Continuous numbers are no problem to model to whatever accuracy level is required -- it's just a matter of sheer computing power. As for the rest, you are referring again to emergent behavior and telling me that we don't understand exactly how it is emergent. That's not an argument that it can't be simulated on a digital computer simply because it's a digital computer.
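
As a minimal sketch of the "whatever accuracy is required" claim: a digital (Euler) simulation of a continuous quantity, here exponential decay with assumed parameters, whose error against the exact continuous solution shrinks as more computation is spent on smaller time steps:

    import math

    def simulate_decay(v0=1.0, tau=10.0, t_end=50.0, dt=1.0):
        # Euler integration of the continuous equation dv/dt = -v/tau
        v = v0
        for _ in range(int(round(t_end / dt))):
            v += dt * (-v / tau)
        return v

    exact = math.exp(-50.0 / 10.0)
    for dt in (1.0, 0.1, 0.01, 0.001):
        print(f"dt={dt:<6} error={abs(simulate_decay(dt=dt) - exact):.2e}")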

vhawk01
11-30-2006, 06:55 PM
[ QUOTE ]
[ QUOTE ]
Robot Discovers Itself, Adapts to Injury

[/ QUOTE ]

What a robot hasn't come close to doing is producing a creative output that can't be accounted for in terms of its inputs. As far as possessing anything like a mind, these machines are scarcely more advanced than a pile of rocks.

[/ QUOTE ]

Show me that a human has.

Rduke55
11-30-2006, 06:56 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The idea that computers will not be able to achieve the level of complexity of the human brain is silly on the face of it. The brain itself is prima facie evidence that it is possible.

[/ QUOTE ]

But we're talking about digital computers here.

[/ QUOTE ]

No, we're not. We're simply talking about artificial constructs. We don't need to limit it to any particular technology. You could claim that artificial constructs could never fly if you limited the discussion to things built out of bricks and mortar.

[/ QUOTE ]

From Metric's post that you responded to:

[ QUOTE ]
So you should probably mention what these processes are that you believe to be non-simulatable on a digital computer.

[/ QUOTE ]

Borodog
11-30-2006, 06:57 PM
Sorry; I fixed my post in the edit.

Rduke55
11-30-2006, 07:01 PM
[ QUOTE ]
Continuous numbers are no problem to model to whatever accuracy level is required -- it's just a matter of sheer computing power. As for the rest, you are referring again to emergent behavior and telling me that we don't understand exactly how it is emergent. That's not an argument that it can't be simulated on a digital computer simply because it's a digital computer.

[/ QUOTE ]

I think what I'm trying to say (unfortunately I'm not being clear) is that if we want to simulate a human brain, we'll need something other than a digital computer because of the differences in types of processing. I'm not explaining that well. I will try and find some papers for you. (holy crap Borodog, I did forget about those papers I promised you).
Although I will think about the digital one with unlimited processing power that you mentioned.
Until I get the literature, can we at least agree that it is an enormous hurdle? Much bigger than most people are making it out to be?

Rduke55
11-30-2006, 07:02 PM
[ QUOTE ]
Sorry; I fixed my post in the edit.

[/ QUOTE ]

np, end of day error.

Borodog
11-30-2006, 07:08 PM
Rduke55,

I would still appreciate those sources.

Metric
11-30-2006, 07:10 PM
I have no problem admitting to enormous hurdles -- my argument is one of principle. Kurzweil's argument, on the other hand, is that enormous hurdles don't look quite so enormous when you take into account exponential growth of understanding and technology. But I'm also very aware of the concept that an exponential trend shouldn't be relied upon to remain exponential forever.

Metric
11-30-2006, 07:24 PM
[ QUOTE ]
But I'd also like to add that I see no reason whatsoever that a digital computer would be somehow inherently incapable of simulating the workings of a neuron, or the connection between neurons. Basically, if the workings of something can be described, they can be simulated.

[/ QUOTE ]
Hopefully this goes without saying (given my other posts to the thread), but this is precisely my position, and what I'm hoping to see a counter-argument to from the people that appear to believe differently.

Skidoo
11-30-2006, 07:43 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Robot Discovers Itself, Adapts to Injury

[/ QUOTE ]

What a robot hasn't come close to doing is producing a creative output that can't be accounted for in terms of its inputs. As far as possessing anything like a mind, these machines are scarcely more advanced than a pile of rocks.

[/ QUOTE ]

Show me that a human has.

[/ QUOTE ]

Nothing, or very close to it, that is peculiar to humans has been described by functions on inputs.

vhawk01
11-30-2006, 08:04 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Robot Discovers Itself, Adapts to Injury

[/ QUOTE ]

What a robot hasn't come close to doing is producing a creative output that can't be accounted for in terms of its inputs. As far as possessing anything like a mind, these machines are scarcely more advanced than a pile of rocks.

[/ QUOTE ]

Show me that a human has.

[/ QUOTE ]

Nothing, or very close to it, that is peculiar to humans has been described by functions on inputs.

[/ QUOTE ]

...yet, and it hasn't been described in any other way either, has it?

madnak
11-30-2006, 08:14 PM
[ QUOTE ]
Exponential growth happens because knowledge builds on knowledge, just as technology builds on technology.

[/ QUOTE ]

And just as when you add another book to a stack, it adds to the height of the stack? Wow, we can build a stack of books all the way to the Andromeda Galaxy!

[ QUOTE ]
We wouldn't be able to do some of the latest research in any field if it wasn't for the accumulated knowledge in those fields. Plus the average scientist becomes smarter and more capable.

[/ QUOTE ]

And the tasks we attempt become more and more difficult. Each new innovation is harder and harder to come by - so far it does seem that human ingenuity has more than kept up with this exponential "learning curve," but a peak isn't necessary for that trend to reverse.

[ QUOTE ]
Certainly there can be a peak point, and there will be periods with more advances and periods with less

[/ QUOTE ]

Then to suggest that technological progress will continue in any specific path, or at a constant exponential rate, is illogical.

[ QUOTE ]
but there's no reason to believe we will reach the peak anytime soon.

[/ QUOTE ]

Right, there's no reason to believe we'll OMGOMGTECHSINGULARITYWHOA anytime soon. I totally agree.

[ QUOTE ]
There's just so much about the universe we can't even begin to understand.

[/ QUOTE ]

Then it's really impressive Kurzweil is able to make so many grandiose predictions about it.

[ QUOTE ]
Open your mind. Understand that what SEEMS impossible today may be possible tomorrow.

[/ QUOTE ]

And it may not be, and it may take until next Wednesday.

[ QUOTE ]
Think about what someone from the 19th century would say if you mentioned cellphones, internet, QM and whatnot.

[/ QUOTE ]

Are you aware of the predictions made by the Kurzweils of the world since as far back as rationalism? To clue you in, very few of them have actually come true, even today. And certainly, in the middle of the twentieth century, particularly right after the moon landing, plenty of people made predictions about our time that never came close to being true.

In fact, the technologies you've referenced were almost never predicted in the past. Throughout history, nobody has been able to predict the progress of technology, especially the overzealous and the overoptimistic. And yet, you're asking us now to accept the predictions of the most overzealous and overoptimistic thinker of our time as irrefutable fact? Honestly. Maybe you like getting swept up in the moment, and maybe you really cling to fantasies of immortality, but there's absolutely no logic in that.

madnak
11-30-2006, 08:23 PM
[ QUOTE ]
Contiuous numbers are no problem to model to whatever accuracy level is required -- it's just a matter of sheer computing power.

[/ QUOTE ]

So I guess we don't need calculus - we can just take power series to an arbitrary degree of precision?

The ability to approximate something closely may be very far from the ability to model it accurately. 3.141592 is no closer to pi than 9*10^99,999,999,999. The fact that "close enough" works for most human applications is beside the point. Newtonian mechanics works for most human applications; does that mean QM is useless?

Of course, I personally think it's unlikely that a high degree of fidelity can be achieved digitally. It's certainly mathematically possible that this isn't the case, that the continuous mechanics of the brain are essential. But it seems unlikely to me. And other than the uncanny valley, what do we have to fear?

The problem is that computers and brains work in very different ways. Simulating a brain would be like simulating a weather system. In fact, it's entirely possible you'd need to run a full simulation at the molecular level - weather system? Those are a piece of cake in comparison.

Even assuming that computing power breaks all the physical bounds that it seems to be approaching (and it has never faced such a challenge in the past, which is largely why Moore's law has held true), assume it even goes orders of magnitude beyond that. Assume complex circuits smaller than basic particles.

It would still take a computer so large it might not fit in the state of Texas, and so slow that a hundred years of simulation wouldn't match a minute of "real-time" for the simulated entity. So yeah, we might iron the bugs out of a prototype within, hmm, a few hundred millennia?

Heuristics is probably a better approach - don't try to simulate something inherently "uncomputer," instead try to create a new kind of intelligence. The problem is that heuristics is murky. We don't have any way to predict where it will go, or how fast, except to say that right now it seems to be a bit paralyzed. At any rate, predicting a fully-functioning AI within any of our lifetimes is unjustified.

soon2bepro
11-30-2006, 08:49 PM
You're really funny madnak /images/graemlins/smile.gif

Anyway, I didn't say Kurzweil's predictions are correct, just that all those things don't seem so far-fetched. Maybe the time needed to reach that point of advancement would be much longer, but I'm not so sure; it may even be much less. It depends on many factors. Especially, you have to consider that just because we possess the ability to produce a certain technology doesn't mean we'll be applying it, or that everyone will have access to it.

Phil153
11-30-2006, 09:08 PM
[ QUOTE ]
I'm still not quite clear on why you believe this:

[ QUOTE ]
At the moment computers are a bunch of switches that people can flick on and off. Nothing more. There is no innate capacity for intelligence or learning and there never can be with this structure. A capacity for learning needs to be tied to the material itself to become possible.

[/ QUOTE ]

If a "bunch of switches" computer can simulate and reproduce all the relevant functions of a single neuron, why shouldn't a sufficiently large computer be able to simulate all the emergent and hard to understand functions of a human brain? Is it your position that a single neuron does something inherently non-computable?

[/ QUOTE ]

Disclaimer: I'm not an expert in this field; this is just what common sense tells me.

The problem is one of complexity. To create an AI that can act as a singularity (i.e. a self improving intelligence superior to a human mind), you have to simultaneously solve the problems of

- Time complexity
- Space complexity
- Circuit complexity

With computers being made of a large number of unchangeable on/off switches, this is not possible. A computer such as this could not solve a large polynomial hard problem, for example, in a reasonable timespan. The structure of the processing units themselves would have to change dynamically as solutions to previous problems are found, or you'd eventually run into exponentially (or greater) decreasing levels of efficiency due to issues such as round trip travel times.

The structure of a brain appears to solve this problem well, because it actually changes structure and function in response to the needs of the user and interaction with the outside world. Optimizations are built into the circuitry as needed.

This is why I think rigid switching devices are incapable of producing a singularity, or an intelligence sufficient to design a singularity capable device.

Metric
11-30-2006, 09:18 PM
You're great at hyperbole. So we agree that it's just a matter of computing power. I also tend to think that the process could be simplified a great deal -- i.e. not 100% of the information contained in the statistical microstate describing the brain at any given moment is directly essential for describing what is going on.

vhawk01
11-30-2006, 09:24 PM
[ QUOTE ]
[ QUOTE ]
I'm still not quite clear on why you believe this:

[ QUOTE ]
At the moment computers are a bunch of switches that people can flick on and off. Nothing more. There is no innate capacity for intelligence or learning and there never can be with this structure. A capacity for learning needs to be tied to the material itself to become possible.

[/ QUOTE ]

If a "bunch of switches" computer can simulate and reproduce all the relevant functions of a single neuron, why shouldn't a sufficiently large computer be able to simulate all the emergent and hard to understand functions of a human brain? Is it your position that a single neuron does something inherently non-computable?

[/ QUOTE ]

Disclaimer: I'm not an expert in this field; this is just what common sense tells me.

The problem is one of complexity. To create an AI that can act as a singularity (i.e. a self improving intelligence superior to a human mind), you have to simultaneously solve the problems of

- Time complexity
- Space complexity
- Circuit complexity

With computers being made of a large number of unchangeable on/off switches, this is not possible. A computer such as this could not solve a large polynomial hard problem, for example, in a reasonable timespan. The structure of the processing units themselves would have to change dynamically as solutions to previous problems are found, or you'd eventually run into exponentially (or greater) decreasing levels of efficiency due to issues such as round trip travel times.

The structure of a brain appears to solve this problem well, because it actually changes structure and function in response to the needs of the user and interaction with the outside world. Optimizations are built into the circuitry as needed.

This is why I think rigid switching devices are incapable of producing a singularity, or an intelligence sufficient to design a singularity capable device.

[/ QUOTE ]
But isn't this really just the creationist argument of "I cannot believe that all of this beauty and complexity could come about without God!"?

Phil153
11-30-2006, 09:36 PM
[ QUOTE ]
So we agree that it's just a matter of computing power.

[/ QUOTE ]
No, we do not. You have not understood my point.

Phil153
11-30-2006, 09:38 PM
[ QUOTE ]
But isn't this really just the creationist argument of "I cannot believe that all of this beauty and complexity could come about without God!"?

[/ QUOTE ]
Care to elaborate?

Borodog
11-30-2006, 09:54 PM
[ QUOTE ]
[ QUOTE ]
Continuous numbers are no problem to model to whatever accuracy level is required -- it's just a matter of sheer computing power.

[/ QUOTE ]

So I guess we don't need calculus - we can just take power series to an arbitrary degree of precision?

[/ QUOTE ]

Calculus is a great thing. But when it gets too complex, we turn to digital calculation. Trust me, I know. /images/graemlins/wink.gif

[ QUOTE ]
The ability to approximate something closely may be very far from the ability to model it accurately. 3.141592 is no closer to pi than 9*10^99,999,999,999.

[/ QUOTE ]

I can't imagine in what way you mean this statement, because it seems clearly incorrect. The % error between 3.141592 and pi is approximately 2x10^-5%. The percent difference between pi and 9*10^99,999,999,999 is approximately 2.9*10^100,000,000,001%. Clearly 3.141592 is closer to pi.
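
A quick check of that arithmetic, assuming only Python's standard math module (the huge number is handled in log10 form rather than computed directly):

    import math

    # relative error of the 7-digit approximation, as a percentage
    print(f"{abs(3.141592 - math.pi) / math.pi * 100:.1e} %")   # about 2.1e-05 %

    # percent difference between 9*10^99,999,999,999 and pi, in log10 form:
    # log10(9e99,999,999,999 / pi) = 99,999,999,999 + log10(9/pi); add 2 for percent
    exponent = 99_999_999_999 + math.log10(9 / math.pi) + 2
    print(f"about 10^{exponent:.2f} %")    # roughly 10^100,000,000,001.5 %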

[ QUOTE ]
The fact that "close enough" works for most human applications is beside the point. Newtonian mechanics works for most human applications; does that mean QM is useless?

[/ QUOTE ]

This isn't really an apt analogy. Newtonian mechanics is an approximation to QM (and relativity too, which is sort of scary, because QM and relativity are currently mutually exclusive). We use Newtonian mechanics when it is "good enough" because it is. In those regimes when it is not good enough, we use more accurate models. What you are essentially claiming, seemingly completely without justification, is that the brain may have no level of approximation which is "good enough", which simply seems unjustifiable. For example, there is almost certainly nothing about the brain that depends on the fact that protons and neutrons are composite particles made of quarks rather than being fundamental. Hence any simulation of a brain that went down to the level of individual particles, but neglected the quark substructure of protons and neutrons, would almost certainly be "good enough." Clearly there is some level of simulation that would be "good enough". My guess is that level would be quite high. In fact, I daresay that if the complex structure of a neuron could not be "reduced" to some (relatively) simple functional model, it wouldn't be any good for what it does. In other words, if neurons cannot "count" on other neurons behaving in some sense "predictably", like some sort of algorithmic black box, they would be of no use to each other or themselves for their jobs.

[ QUOTE ]
Of course, I personally think it's unlikely that a high degree of fidelity can be achieved digitally. It's certainly mathematically possible that this isn't the case, that the continuous mechanics of the brain are essential. But it seems unlikely to me. And other than the uncanny valley, what do we have to fear?

[/ QUOTE ]

I'm not sure I understand this part.

[ QUOTE ]
The problem is that computers and brains work in very different ways. Simulating a brain would be like simulating a weather system. In fact, it's entirely possible you'd need to run a full simulation at the molecular level - weather system? Those are a piece of cake in comparison.

[/ QUOTE ]

This is what I'm talking about. We can simulate weather systems using the mechanics of continuous media, and not the molecular level, because the mechanics of the molecular level reduce to the equations of continuum mechanics. Similarly, a neuron is a vast and huge thing compared to the molecular level, and it is almost certain that the neuron doesn't "care" what happens at the molecular level; it only "cares" what that reduces to at its level, which is almost certainly a (relatively) simpler functional set.

[ QUOTE ]
Even assuming that computing power breaks all the physical bounds that it seems to be approaching (and it has never faced such a challenge in the past, which is largely why Moore's law has held true), assume it even goes orders of magnitude beyond that. Assume complex circuits smaller than basic particles.

[/ QUOTE ]

But don't you see this is silly? We already have computers that can simulate brains in the same space as a brain; they're called brains! If "digital" computers prove limiting, we'll switch to some other technology. We know such a technology can exist because it already does.

[ QUOTE ]
It would still take a computer so large it might not fit in the state of Texas, and so slow that a hundred years of simulation wouldn't match a minute of "real-time" for the simulated entity. So yeah, we might iron the bugs out of a prototype within, hmm, a few hundred millennia?

[/ QUOTE ]

Argument from incredulity?

[ QUOTE ]
Heuristics is probably a better approach - don't try to simulate something inherently "uncomputer," instead try to create a new kind of intelligence. The problem is that heuristics is murky. We don't have any way to predict where it will go, or how fast, except to say that right now it seems to be a bit paralyzed. At any rate, predicting a fully-functioning AI within any of our lifetimes is unjustified.

[/ QUOTE ]

Pooh-poohing it is just as unjustified, and counterproductive.

Think about it this way.

A) We know computers as complex and sophisticated as the human brain can exist because they do; they're called human brains.

B) Most of the complexity of the component parts of the human brain has absolutely nothing to do with its function, thinking. The vast majority of the complexity of the subcomponents comes from the fact that the brain has to be built out of the same thing everything else is built out of: living cells. Living cells that have to be complex enough, in each and every one, to serve the function of every other kind of cell, and all the complexity that entails, including metabolizing, waste removal, reproduction, etc, etc, etc.

The task as I see it is to design and build component parts that do away with all that baggage, while reproducing what my gut tells me is the relatively simple functional set of any single component, and then assembling those components.

I see no fundamental technological difficulty in such a process. Whether or not it can be done "in our lifetimes" will only be settled with time, but my estimate on the over-under is 30 years.

Borodog
11-30-2006, 09:56 PM
[ QUOTE ]
You're great at hyperbole. So we agree that it's just a matter of computing power. I also tend to think that the process could be simplified a great deal -- i.e. not 100% of the information contained in the statistical microstate describing the brain at any given moment is directly essential for describing what is going on.

[/ QUOTE ]

Bingo.

Borodog
11-30-2006, 09:57 PM
[ QUOTE ]
[ QUOTE ]
So we agree that it's just a matter of computing power.

[/ QUOTE ]
No, we do not. You have not understood my point.

[/ QUOTE ]

He wasn't talking to you.

vhawk01
11-30-2006, 10:06 PM
[ QUOTE ]
[ QUOTE ]
But isn't this really just the creationist argument of "I cannot believe that all of this beauty and complexity could come about without God!"?

[/ QUOTE ]
Care to elaborate?

[/ QUOTE ]

I don't want to put words in your mouth. But the way I read your post, it seems to just be an argument from incredulity: there are many complex things, and I can't imagine any computer being able to accomplish all of these complex things, therefore it won't happen.

Rduke55
11-30-2006, 10:20 PM
[ QUOTE ]
[ QUOTE ]
You're great at hyperbole. So we agree that it's just a matter of computing power. I also tend to think that the process could be simplified a great deal -- i.e. not 100% of the information contained in the statistical microstate describing the brain at any given moment is directly essential for describing what is going on.

[/ QUOTE ]

Bingo.

[/ QUOTE ]

Another problem I have is that neurons aren't the only players here. Glial cells (the nonneuronal cells in the brain) are also heavily involved in the brain's processing. Not only do they outnumber neurons (granted, many of those are strictly support, etc.), but they don't have action potentials, which bumps up the information content and processing problems immensely.

(I will concede that if you have a digital computer with monstrous processing power - and I don't know if many of the people in this thread understand how big I mean - with near-perfect information on how each cell in the brain interacts with others, and the rules for how they change when they interact with others, then it is at least hypothetically possible for a digital computer to simulate a brain.)

This may be for another thread, but what about the other side of this equation? How do you get all that information? The human brain has on the order of 100 billion neurons (with anywhere up to a few thousand connections per neuron - and these connections are anything but static), and glia in excess of that. How do you determine how all those cells interact with each other?
Scientists spend their entire careers trying to figure these relationships out for tiny numbers of cells. In my research I record signals (electrical and chemical) and try to determine how neurons react to inputs, etc., and I can't imagine how you could get even the roughest approximation to feed into the computer. I think Madnak's heuristics idea is spot on because of this.
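
A back-of-envelope sketch of the bookkeeping problem being described, using the figures from the post above and an assumed (made-up) per-connection storage cost; it ignores glia, dynamics, and all molecular state, which is exactly the point:

    neurons = 100e9               # ~100 billion neurons
    synapses_per_neuron = 1_000   # low end of "up to a few thousand" connections each
    bytes_per_synapse = 8         # assumption: one double-precision weight, nothing else

    connections = neurons * synapses_per_neuron
    storage_pb = connections * bytes_per_synapse / 1e15
    print(f"{connections:.0e} connections, ~{storage_pb:.1f} PB just for static weights")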

Borodog
11-30-2006, 10:30 PM
Rduke55,

I have no idea how to do it. But I know the vast majority of the "statistical microstate of the brain", to use Metric's excellent phrase, is completely irrelevant to the functions we are interested in. And I also know that the brain does it, so it can't be an insurmountable problem, can it?

Phil153
11-30-2006, 10:40 PM
OK. What I'm saying is that I believe the complexity of calculations (is that even the right word?) required by a singularity-type entity (i.e. a self-improving computer capable of greater-than-human thought) cannot be achieved on a rigid architecture, due to the costs and difficulties associated with increasing complexity on that architecture. Many of the problems computers need to solve to become "singular" appear to be of greater than exponential difficulty.

The only way to deal with this level of rapidly increasing complexity (required for self improvement) is a system similar to the brain - where the actual processing structure itself changes in response to stimuli. Otherwise, either space, time, or circuitry considerations will always bottleneck self improvement for the kinds of problems computers would need to solve (due to problems such as exponentially increasing round trip travelling times in a fixed architecture).

Metric
11-30-2006, 10:57 PM
FWIW, I'm not arguing that clever engineering/hardware advances won't make things many orders of magnitude easier on the road to this kind of stuff. I'm simply trying to say that our current theoretical framework for machine computation (individual bits being flipped, etc) is adequate to form the building blocks for a better-than-human intelligence, since it appears to be adequate for simulating any physical process.

Borodog
11-30-2006, 11:42 PM
[ QUOTE ]
FWIW, I'm not arguing that clever engineering/hardware advances won't make things many orders of magnitude easier on the road to this kind of stuff. I'm simply trying to say that our current theoretical framework for machine computation (individual bits being flipped, etc) is adequate to form the building blocks for a better-than-human intelligence, since it appears to be adequate for simulating any physical process.

[/ QUOTE ]

QFT.

Phil153
11-30-2006, 11:50 PM
Borodog, weren't you the same guy who said there'd probably be a singularity within 30 years?

madnak
11-30-2006, 11:52 PM
[ QUOTE ]
You're really funny madnak /images/graemlins/smile.gif

Anyway, I didn't say Kurzweil's predictions are correct, just that all those things don't seem so far-fetched. Maybe the time needed to reach that point of advancement would be much longer, but I'm not so sure; it may even be much less. It depends on many factors. Especially, you have to consider that just because we possess the ability to produce a certain technology doesn't mean we'll be applying it, or that everyone will have access to it.

[/ QUOTE ]

I don't disagree with any of that, but it's the difference between saying "I don't believe in God" and saying "God definitely doesn't exist in any form." One is reasonable, the other isn't.

But there are other things to take into consideration. For example, perhaps the human lifespan will continue to increase, but the rate of that increase may never outpace aging itself. That is, we might increase the human lifespan at a constant rate of 80 years for every century. In that case, nobody would ever achieve "effective immortality." Now, I believe we will achieve that, or rather, I believe humans will live long enough that it would take an "accident" to kill them, but we can't be certain of such a thing, and we have no way to make clear predictions about when it will happen.

Also I think seeing it as an inevitability undermines the efforts of those who, in the world as it is, in immediate reality, are working their asses off to make it happen. Because if those people didn't work their asses off, then technology wouldn't keep progressing. Can it be viewed as a process? Perhaps. But the leading edge of that process is the individual scientists and engineers of today.

Borodog
11-30-2006, 11:52 PM
[ QUOTE ]
Borodog, weren't you the same guy who said there'd probably be a singularity within 30 years?

[/ QUOTE ]

Yes.

Phil153
11-30-2006, 11:57 PM
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

Borodog
12-01-2006, 12:01 AM
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

madnak
12-01-2006, 12:10 AM
[ QUOTE ]
You're great at hyperbole.

[/ QUOTE ]

No. I'm great at understatement. Seriously, the processing power required would be... Indescribable.

[ QUOTE ]
So we agree that it's just a matter of computing power.

[/ QUOTE ]

Most likely, but not necessarily. But again, this is similar to the lifespan issue. Yes, if computers had some kind of crazy (http://en.wikipedia.org/wiki/Knuth's_up-arrow_notation) processing power, they could do it. Probably. And hey, if we have that we can manage the analog computing if need be. But again, saying we'll have that anytime soon is even more of a leap than saying we'll live to be a million years old. And no, this isn't hyperbole, this is an attempt to express "really frickin' big numbers" in a way that at least sounds understandable.

In fact, assuming a true exponential curve, it would probably still be at least a hundred years before we got this kind of processing power. Because it's not that exponential. And if technology doubles every 20 years, that's still just Le^((ln(2)/20)t) where L is the current level of technology. That graph isn't exactly the fastest thing on earth. It means over the next 100 years we get our level of technology multiplied by 2^5 or 32 times. So, in 2100 we'll be 32 times further than we are now - that is nowhere near enough to reach these technologies.

Now, you're probably not talking about a strictly exponential function, but virtually no function consistent with what we've seen fits. Kurzweil is talking about a function shaped like -ln(-x), where the function approaches infinity as x approaches 0 (or a certain point, because you'd multiply by a constant and add another one and blah blah, but you get the idea). The problem is the data doesn't support this kind of logarithmic function.
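
The doubling arithmetic in that paragraph, as a quick sketch (the 20-year doubling time is the post's assumption, not a measured figure):

    import math

    doubling_years = 20.0
    rate = math.log(2) / doubling_years        # the ln(2)/20 in L*e^((ln(2)/20)*t)

    for years in (20, 100, 200, 500):
        multiple = math.exp(rate * years)      # how many times the current level L
        print(f"after {years:>3} years: x{multiple:,.0f}")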

[ QUOTE ]
I also tend to think that the process could be simplified a great deal -- i.e. not 100% of the information contained in the statistical microstate describing the brain at any given moment is directly essential for describing what is going on.

[/ QUOTE ]

We don't know how much of it is, but if it's even a relatively small amount, that's huge. Just keeping track of all the neurons, their relationships, the sodium and potassium ions, and the neurotransmitters would be overwhelming. And that's leaving out quite a lot - it's very unlikely we could simulate a brain with only that much to work with.

madnak
12-01-2006, 12:11 AM
[ QUOTE ]
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

[/ QUOTE ]

Please see my post. How does an exponential function reach anything resembling a singularity? You must be thinking of a logarithmic function, but how does that fit the data?

Phil153
12-01-2006, 12:14 AM
[ QUOTE ]
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

[/ QUOTE ]
I have a math and physics degree, so yes. Care to answer my question? Has the growth in AI sophistication over the last 20 years been "exponential"? In fact, has the growth of anything except processor power and storage been exponential?

vhawk01
12-01-2006, 12:28 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

[/ QUOTE ]
I have a math and physics degree, so yes. Care to answer my question? Has the growth in AI sophistication over the last 20 years been "exponential"? In fact, has the growth of anything except processor power and storage been exponential?

[/ QUOTE ]

Bear in mind this is from Kurzweil's book, but yes. The resolution of brain-scanning technology has been increasing exponentially for 35 years. Power usage per cps for computer hardware has decreased exponentially over the last 40 years. The things that are impacted by the simpler, more accepted exponential trends (Moore's Law) seem to use that exponential growth to spur their own. There are more examples.

madnak
12-01-2006, 12:43 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Continuous numbers are no problem to model to whatever accuracy level is required -- it's just a matter of sheer computing power.

[/ QUOTE ]

So I guess we don't need calculus - we can just take power series to an arbitrary degree of precision?

[/ QUOTE ]

Calculus is a great thing. But when it gets too complex, we turn to digital calculation. Trust me, I know. /images/graemlins/wink.gif

[/ QUOTE ]

I've heard there's no elementary antiderivative for the bell curve, so you can't get the area under it with textbook calculus. Bemusing, really. But can we really predict the results of a quantum system using approximation? Can't even a "very" tiny change result in a significant effect on the system as a whole? And aren't there quantum effects that are basically unquantifiable according to how we "do things?"

[ QUOTE ]
[ QUOTE ]
The ability to approximate something closely may be very far from the ability to model it accurately. 3.141592 is no closer to pi than 9*10^99,999,999,999.

[/ QUOTE ]

I can't imagine in what way you mean this statement, because it seems clearly incorrect. The % error between 3.141592 and pi is approximately 2x10^-5%. The percent difference between pi and 9*10^99,999,999,999 is approximately 2.9*10^100,000,000,001%. Clearly 3.141592 is closer to pi.

[/ QUOTE ]

I'm building up a tolerance for coffee. Sorry. What I mean is that in terms of pure math and number theory, while one of the two numbers is a much greater distance from pi than the other, neither is really... Let me put it into words. Ah yes. Neither comes any closer to being a perfect expression of pi. I don't like using the concept of digits here but I'm not sure how else to illustrate it. Let's drop the super-large number and just use 3 and 3.141592 (should be 93 I suppose, but oh well). 3 approximates pi to 1 digit. 3.141592 approximates pi to 7 digits. Pi approximates pi perfectly. Let's take a number and call it p. Let's let p approximate pi to 50 digits. Now the difference between how well 3 and 3.141592 approximate pi, relative to the difference between how well 3 and p approximate pi, is 6/49. Make p match 100 digits and that becomes 6/99. Etc. Now, the limit of the difference between how well 3 approximates pi and how well 3.141592 approximates pi, relative to the difference between how well 3 approximates pi and how well p approximates pi, as p approaches infinity, is 0. That is, taking pi as a whole, 3 and 3.141592 both approximate pi to an infinitesimal, i.e. nonexistent (in the reals) degree. Thus, 3 and 3.141592 approximate pi to the same degree of fidelity in a pure math context. Did that make sense?
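
That limit argument, sketched numerically (purely illustrative; the digit-counting is taken at face value):

    # If 3 matches pi to 1 digit, 3.141592 to 7 digits, and p_n to n digits, then the
    # gain of the 7-digit approximation over the 1-digit one, measured against p_n,
    # is (7 - 1) / (n - 1).
    for n in (50, 100, 1_000, 1_000_000):
        print(n, (7 - 1) / (n - 1))
    # The ratio 6/(n-1) tends to 0 as n grows, which is the sense in which both
    # approximations are equally (in)adequate compared to pi itself.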

[ QUOTE ]
[ QUOTE ]
The fact that "close enough" works for most human applications is beside the point. Newtonian mechanics works for most human applications; does that mean QM is useless?

[/ QUOTE ]

This isn't really an apt analogy. Newtonian mechanics is an approximation to QM (and relativity too, which is sort of scary, because QM and relativity are currently mutually exclusive). We use Newtonian mechanics when it is "good enough" because it is. In those regimes when it is not good enough, we use more accurate models. What you are essentially claiming, seemingly completely without justification, is that the brain may have no level of approximation which is "good enough", which simply seems unjustifiable.

[/ QUOTE ]

This is what I'm claiming, and it needs no justification. You're saying the brain must have such a level of approximation, I'm saying the brain may not have such a level. The burden of proof is on you here. I believe uncertainty is the "default" position. I can get into that if you like, but I imagine you ordinarily consider it the default position yourself. I can't fathom why you don't in this case.

[ QUOTE ]
For example, there is almost certainly nothing about the brain that depends on the fact that protons and neutrons are composite particles made of quarks rather than being fundamental. Hence any simulation of a brain that went down to the level of individual particles, but neglected the quark substructure of protons and neutrons, would almost certainly be "good enough." Clearly there is some level of simulation that would be "good enough". My guess is that level would be quite high. In fact, I daresay that if the complex structure of a neuron could not be "reduced" to some (relatively) simple functional model, it wouldn't be any good for what it does. In other words, if neurons cannot "count" on other neurons behaving in some sense "predictably", like some sort of algorithmic black box, they would be of no use to each other or themselves for their jobs.

[/ QUOTE ]

Almost certainly isn't certainly. Perhaps we're splitting hairs. To me any nonzero probability is relevant - even the probability that I'm about to spontaneously sprout donkey ears. I'm not comfortable merely assuming that won't happen, but like to say the probability of that happening is so low that it's not worth the energy to consider it. I certainly would never zealously state that I'll never grow donkey ears, and it's the zeal here that I'm primarily opposed to. Zeal has no place in rational discussion, or in science, IMO.

[ QUOTE ]
[ QUOTE ]
Of course, I personally think it's unlikely that a high degree of fidelity can be achieved digitally. It's certainly mathematically possible that this isn't the case, that the continuous mechanics of the brain are essential. But it seems unlikely to me. And other than the uncanny valley, what do we have to fear?

[/ QUOTE ]

I'm not sure I understand this part.

[/ QUOTE ]

Coffee, coffee. I made a couple typos.

I'm saying two things. First, that there is a nonzero, but small, probability that brain function is based on specifically continuous mechanisms. Second, that if this is true the implications may be rather scary (philosophical zombies, for example. The concept reminded me of the Uncanny Valley (http://en.wikipedia.org/wiki/Uncanny_valley), though in retrospect the analogy wasn't apt).

[ QUOTE ]
[ QUOTE ]
The problem is that computers and brains work in very different ways. Simulating a brain would be like simulating a weather system. In fact, it's entirely possible you'd need to run a full simulation at the molecular level - weather system? Those are a piece of cake in comparison.

[/ QUOTE ]

This is what I'm talking about. We can simulate weather systems using the mechanics of continuous media, and not the molecular level, because the mechanics of the molecular level reduce to the equations of continuum mechanics. Similarly, a neuron is a vast and huge thing compared to the molecular level, and it is almost certain that the neuron doesn't "care" what happens at the molecular level; it only "cares" what that reduces to at its level, which is almost certainly a (relatively) simpler functional set.

[/ QUOTE ]

Discrete, perhaps. And there may be... I forget what those "humps" are called in chaos theory. But a single ion or molecule of neurotransmitter can make a difference. Butterfly effect, that's it; wikipedia's nice. As a result, I don't see how we can functionally represent the brain without taking the molecular level into account (at least to some degree), that is, without something "slipping through the cracks." In weather prediction, it's perfectly fine for that to happen - no simulation needs perfect fidelity, we don't need to predict the specific wind speed and angle at an exact point somewhere. If we did, we'd have to take the molecules into account - including those of obstacles in the way, particles of dirt, etc. Now, if we're just talking about general temperature there's no trouble. But with the brain, we (presumably) want everything to add up. We need to simulate that guy's hat blowing off his head, flipping through the air three times, and getting stuck in the gutter. Not just the general conditions over a broad area.

[ QUOTE ]
[ QUOTE ]
Even assuming that computing power breaks all the physical bounds that it seems to be approaching (and it has never faced such a challenge in the past, which is largely why Moore's law has held true), assume it even goes orders of magnitude beyond that. Assume complex circuits smaller than basic particles.

[/ QUOTE ]

But don't you see this is silly? We already have computers that can simulate brains in the same space as a brain; they're called brains! If "digital" computers prove limiting, we'll switch to some other technology. We know such a technology can exist because it already does.

[/ QUOTE ]

Brains aren't computers. A computer simulation of a brain is almost by definition so much less efficient than the brain itself it's hard to describe. It's like writing a 5-billion-page proof that 2+2=4, using all the other theorems ever developed. This is exactly what I'm saying - either we should use "some other technology" (at least to some degree) in simulating a human brain, or we should worry ourselves less about a perfect duplication of human intelligence and focus on a type of intelligence that would be more efficient in a machine like a computer (and heuristics seems most promising to me there).

[ QUOTE ]
[ QUOTE ]
It would still take a computer so large it might not fit in the state of Texas, and so slow that a hundred years of simulation wouldn't match a minute of "real-time" for the simulated entity. So yeah, we might iron the bugs out of a prototype within, hmm, a few hundred millennia?

[/ QUOTE ]

Argument from incredulity?

[/ QUOTE ]

See my discussion of exponential functions. It could still take many millennia to reach such an enormous processing power, even if the amount of time it takes for that processing power to double decreases continuously. Just so long as you aren't postulating an infinity, nothing consistent with what we've seen so far could possibly indicate progression that fast.

[ QUOTE ]
A) We know computers as complex and sophisticated as the human brain can exist because they do; they're called human brains.

[/ QUOTE ]

Human brains aren't computers.

[ QUOTE ]
B) Most of the complexity of the component parts of the human brain has absolutely nothing to do with its function, thinking.

[/ QUOTE ]

Rduke, is this true? My impression was to the contrary.

[ QUOTE ]
The vast majority of the complexity of the subcomponents comes from the fact that the brain has to be built out of the same thing everything else is built out of: living cells. Living cells that have to be complex enough, in each and every one, to serve the function of every other kind of cell, and all the complexity that entails, including metabolizing, waste removal, reproduction, etc, etc, etc.

[/ QUOTE ]

From what I know, nerve cells don't contain most of the superfluous elements. A lot of it has to do with special proteins that deal with neurotransmitters, and with sodium and potassium gradients. Stuff unique to nerve cells. But I've only covered the very basics, it will probably be a couple of years before I cover this stuff in really heavy detail in school, and I don't know if I'll have time for independent study. Again, Rduke? And vhawk? Are the molecular interactions of the neurons really irrelevant? And aren't they also how dendrites grow? And about the glia, I actually was looking at a study about that just today - someone at CCNY determined that the glia were somehow relevant in how fruit flies see, I think. At any rate, it's definitely not like a computer program and computer memory. Not even close.

[ QUOTE ]
The task as I see it is to design and build component parts that do away with all that baggage, while reproducing what my gut tells me is the relatively simple functional set of any single component, and then assembling those components.

[/ QUOTE ]

Baggage, no no. You don't know how streamlined biological systems are if you say that. There's some baggage at the "higher level," but at the molecular level it's literally scary how perfectly efficient everything is. That's what it's all about. I'm beginning to think you haven't studied biology in great depth, and that you would really find it interesting to do so. But then again, my knowledge is only slightly above that of other undergrad bio students, so maybe you know what I mean. Still, I really think you're taking the wrong approach - the "circuitry" of our brain is way way way more efficient than the circuitry of our computers.

[ QUOTE ]
I see no fundamental technological difficulty in such a process. Whether or not it can be done "in our lifetimes" will only be settled with time, but my estimate on the over-under is 30 years.

[/ QUOTE ]

And I say that's an irrational estimate that can't be justified.

madnak
12-01-2006, 12:47 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

[/ QUOTE ]
I have a math and physics degree, so yes. Care to answer my question? Has the growth in AI sophistication over the last 20 years been "exponential"? In fact, has the growth of anything except processor power and storage been exponential?

[/ QUOTE ]

Bear in mind this is from Kurzweil's book, but yes. The resolution of brain scanning technology has been increasing exponentially for 35 years. Power usage per cps for computer hardware has decreased exponentially over the last 40 years. The things that are impacted by the simpler, more accepted exponential trends (Moore's Law) seem to use that exponential growth to spur their own. There are more examples.

[/ QUOTE ]

How does that answer my question? Exponential growth doesn't go to a singularity. Somewhere you must be "slipping" the inverse into it and making a logarithmic function.

Phil153
12-01-2006, 12:48 AM
I mean with regard to AI and computers. The computations/second for simple, linear algorithms will increase exponentially, but the models themselves do not. For example, graphics processing has improved exponentially, making things like 3D games and GUIs possible, but handwriting recognition technology (a far more complex field) has barely improved in 20 years, despite us now having the processing power to do it. And this is multiple multiple multiple orders of magnitude less complex than designing intelligence. Another example is chess AI, which has been largely solved and is a fairly simple algorithm requiring nothing but processing power, as compared to a computer's ability to pass a Turing test for example. One has skyrocketed with processing power, the other has barely moved in ten years.

Our solutions to complex computing problems have not increased exponentially. Which is one of the reasons why Borodog is hilariously mistaken to think we'll have a singularity within 30 years.

Borodog
12-01-2006, 12:49 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

[/ QUOTE ]
I have a math and physics degree, so yes. Care to answer my question? Has the growth in AI sophistication over the last 20 years been "exponential"? In fact, has the growth of anything except processor power and storage been exponential?

[/ QUOTE ]

Yes. Lots of things. Like the number of scientists working on problems related to these things, the number of journal articles published about such things, and the amount of resources devoted to researching such things.

vhawk01
12-01-2006, 12:58 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Do you have any idea of the complexity of human thought? Do you know how much AI algorithms have progressed in the last 20 years? The last 5 years?

[/ QUOTE ]

Do you know what "exponential" means?

[/ QUOTE ]
I have a math and physics degree, so yes. Care to answer my question? Has the growth in AI sophistication over the last 20 years been "exponential"? In fact, has the growth of anything except processor power and storage been exponential?

[/ QUOTE ]

Bear in mind this is from Kurzweil's book, but yes. The resolution of brain scanning technology has been increasing exponentially for 35 years. Power usage per cps for computer hardware has decreased exponentially over the last 40 years. The things that are impacted by the simpler, more accepted exponential trends (Moore's Law) seem to use that exponential growth to spur their own. There are more examples.

[/ QUOTE ]

How does that answer my question? Exponential growth doesn't go to a singularity. Somewhere you must be "slipping" the inverse into it and making a logarithmic function.

[/ QUOTE ]

Madnak,

I definitely wasn't answering your question. I was trying to answer the one Phil asked.

vhawk01
12-01-2006, 12:59 AM
[ QUOTE ]
I mean with regard to AI and computers. The computations/second for simple, linear algorithms will increase exponentially, but the models themselves do not. For example, graphics processing has improved exponentially, making things like 3D games and GUIs possible, but handwriting recognition technology (a far more complex field) has barely improved in 20 years, despite us now having the processing power to do it. And this is multiple multiple multiple orders of magnitude less complex than designing intelligence. Another example is chess AI, which has been largely solved and is a fairly simple algorithm requiring nothing but processing power, as compared to a computer's ability to pass a Turing test for example. One has skyrocketed with processing power, the other has barely moved in ten years.

Our solutions to complex computing problems have not increased exponentially. Which is one of the reasons why Borodog is hilariously mistaken to think we'll have a singularity within 30 years.

[/ QUOTE ]

Well, I did include the wattage required/cps, but I agree that's sort of tangentially related.

EDIT: Along the same lines, price-performance is increasing exponentially as well.

Rduke55
12-01-2006, 01:27 AM
[ QUOTE ]
Rduke55,

I have no idea how to do it. But I think the vast majority of the "statistical microstate of the brain", to use Metric's excellent phrase, is completely irrelevant to the functions we are interested in. And I also know that the brain does it, so it can't be an insurmountable problem, can it?

[/ QUOTE ]

What percentage would you say it is then?

I'm talking now about how we collect that information, not about how to utilize it. I think you're still thinking about the latter.

Rduke55
12-01-2006, 01:29 AM
[ QUOTE ]
[ QUOTE ]
Borodog, weren't you the same guy who said there'd probably be a singularity within 30 years?

[/ QUOTE ]

Yes.

[/ QUOTE ]

Holy crap, I forgot about that.
Just to be clear, Boro, what are you saying will happen within 30 years?

Rduke55
12-01-2006, 01:30 AM
[ QUOTE ]
The resolution of brain scanning technology has been increasing exponentially for 35 years.

[/ QUOTE ]

It's just been invented though!!!!

Rduke55
12-01-2006, 01:36 AM
[ QUOTE ]
But a single ion or molecule of neurotransmitter can make a difference. Butterfly effect, that's it, wikipedia's nice.

[/ QUOTE ]

Actually, some aspects of brain functions are fantastic at dealing with noise.

[ QUOTE ]
Are the molecular interactions of the neurons really irrelevant?

[/ QUOTE ]

Hell no. While one ion, etc. does not make a difference I think Boro and Metric are minimizing some of the non-electrical aspects of neural communication.

Rduke55
12-01-2006, 01:41 AM
[ QUOTE ]
[ QUOTE ]
B) Most of the complexity of the component parts of the human brain has absolutely nothing to do with its function, thinking.

[/ QUOTE ]

Rduke, is this true? My impression was to the contrary.

[/ QUOTE ]

I'm not sure what he is saying.

But I'd be inclined to disagree with that statement based on encephalization quotients, etc.
What makes our brain unique and more complicated than any other is almost certainly the goo that's involved in "thinking."

Borodog
12-01-2006, 02:15 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
Continuous numbers are no problem to model to whatever accuracy level is required -- it's just a matter of sheer computing power.

[/ QUOTE ]

So I guess we don't need calculus - we can just take power series to an arbitrary degree of precision?

[/ QUOTE ]

Calculus is a great thing. But when it gets too complex, we turn to digital calculation. Trust me, I know. /images/graemlins/wink.gif

[/ QUOTE ]

I've heard you can't write down the area under a bell curve in closed form using calculus - you have to approximate it. Bemusing, really. But can we really predict the results of a quantum system using approximation?

[/ QUOTE ]

Probabilistically, yes. That's what quantum mechanics is all about.

[ QUOTE ]
Can't even a "very" tiny change result in a significant effect on the system as a whole? And aren't there quantum effects that are basically unquantifiable according to how we "do things?"

[/ QUOTE ]

I really don't see how it is relevant to the problem. A neuron or a glial cell or anything else is not in the quantum regime. The de Broglie wavelength of a neuron is probably smaller than the nucleus of an atom. It just isn't relevant.

[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The ability to approximate something closely may be very far from the ability to model it accurately. 3.141592 is no closer to pi than 9*10^99,999,999,999.

[/ QUOTE ]

I can't imagine in what way you mean this statement, because it seems clearly incorrect. The % error between 3.141592 and pi is approximately 2x10^-5%. The percent difference between pi and 9*10^99,999,999,999 is approximately 2.9*10^100,000,000,001%. Clearly 3.141592 is closer to pi.

[/ QUOTE ]

I'm building up a tolerance for coffee. Sorry. What I mean is that in terms of pure math and number theory, while the two numbers are a greater distance relative to each other, neither is really... Let me put it into words. Ah yes. Neither comes any closer to being a perfect expression of pi. I don't like using the concept of digits here but I'm not sure how else to illustrate it. Let's drop the super-large number and just use 3 and 3.141592 (should be 93 I suppose, but oh well). 3 approximates pi to 1 digit. 3.141592 approximates pi to 7 digits. Pi approximates pi perfectly. Let's take a number and call it p. Let's let p approximate pi to 50 digits. Now the difference between how well 3 and 3.141592 approximate pi and the difference between how well 3 and p approximate pi is 6/49. Make p 100, that becomes 6/99. Etc. Now, the limit of the difference between how well 3 approximates pi and how well 3.141592 approximates pi relative to the difference between how well 3 approximates pi and how well p approximates pi, as p approaches infinity, is 0. That is, taking pi as a whole, 3 and 3.141592 both approximate pi to an infinitesimal, ie nonexistent (in the reals) degree. Thus, 3 and 3.141592 approximate pi to the same degree of fidelity according to a pure math context. Did that make sense?

[/ QUOTE ]

In short, no, it doesn't make any sense. Everything we observe about the world is observed approximately, and some approximations are better than others, or else nothing we did based on these observations would be effective.
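(For what it's worth, the percent-error arithmetic above is a one-liner to check - a throwaway Python illustration, nothing more:)

import math

approx = 3.141592
percent_error = abs(approx - math.pi) / math.pi * 100
print(percent_error)   # ~2.1e-05 percent
# The same formula applied to 9*10^99,999,999,999 gives an error of
# roughly 2.9*10^100,000,000,001 percent. "Closer" is perfectly well defined.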

[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The fact that "close enough" works for most human applications is beside the point. Newtonian mechanics works for most human application, does that mean QM is useless?

[/ QUOTE ]

This isn't really an apt analogy. Newtonian mechanics is an approximation to QM (and relativity too, which is sort of scary, because QM and relativity are currently mutually exclusive). We use Newtonian mechanics when it is "good enough" because it is. In those regimes when it is not good enough, we use more accurate models. What you are essentially claiming, seemingly completely without justification, is that the brain may have no level of approximation which is "good enough", which simply seems unjustifiable.

[/ QUOTE ]

This is what I'm claiming, and it needs no justification. You're saying the brain must have such a level of approximation, I'm saying the brain may not have such a level. The burden of proof is on you here. I believe uncertainty is the "default" position. I can get into that if you like, but I imagine you ordinarily consider it the default position yourself. I can't fathom why you don't in this case.

[/ QUOTE ]

Because it seems transparently obvious to me.

[ QUOTE ]
[ QUOTE ]
For example, there is almost certainly nothing about the brain that depends on the fact that protons and neutrons are composite particles made of quarks and not fundamental. Hence any simulation of a brain that could simulate it down to the level of all fundamental particles, but neglected the substructure of protons and neutrons, would almost certainly be "good enough." Clearly there is some level of simulation that would be "good enough". My guess is that level would be quite high. In fact, I daresay that if the complex structure of a neuron could not be "reduced" to some (relatively) simple functional model, it wouldn't be any good for what it does. In other words, if neurons cannot "count" on other neurons behaving in some sense "predictably", like some sort of algorithmic black box, they would be of no use to each other or themselves for their jobs.

[/ QUOTE ]

Almost certainly isn't certainly. Perhaps we're splitting hairs. To me any nonzero probability is relevant - even the probability that I'm about to spontaneously sprout donkey ears. I'm not comfortable merely assuming that won't happen, but like to say the probability of that happening is so low that it's not worth the energy to consider it. I certainly would never zealously state that I'll never grow donkey ears, and it's the zeal here that I'm primarily opposed to. Zeal has no place in rational discussion, or in science, IMO.

[/ QUOTE ]

You have an unnecessarily bleak view of zeal. I will zealously claim that I will never sprout donkey ears, and I will also zealously say that neurons do not "care" about quarks, in the sense that the functioning of a neuron, even down to the atomic level, would be completely identical if neutrons and protons did not have substructure. I will also state that I zealously doubt that quantum effects have any significance in the workings of the brain for the functions I am interested in, that is thinking. I am sure that quantum effects are important at the atomic and molecular level in the machinery of cells.

[ QUOTE ]


[ QUOTE ]
[ QUOTE ]
Of course, I personally think it's unlikely that a high degree of fidelity can be achieved digitally. It's certainly mathematically possible that this isn't the case, that the continuous mechanics of the brain are essential. But it seems unlikely to me. And other than the uncanny valley, what do we have to fear?

[/ QUOTE ]

I'm not sure I understand this part.

[/ QUOTE ]

Coffee, coffee. I made a couple typos.

I'm saying two things. First, that there is a nonzero, but small, probability that brain function is based on specifically continuous mechanisms.

[/ QUOTE ]

I could razz you here and remind you that there are no continuous things; there are only mind-bogglingly large quantum numbers.

[ QUOTE ]
Second, that if this is true the implications may be rather scary (philosophical zombies, for example. The concept reminded me of the Uncanny Valley (http://en.wikipedia.org/wiki/Uncanny_valley), though in retrospect the analogy wasn't apt).

[/ QUOTE ]

OK. Never heard of it anyway.

[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The problem is that computers and brains work in very different ways. Simulating a brain would be like simulating a weather system. In fact, it's entirely possible you'd need to run a full simulation at the molecular level - weather system? Those are a piece of cake in comparison.

[/ QUOTE ]

This is what I'm talking about. We can simulate weather systems using the mechanics of continuous media, and not the molecular level, because the mechanics of the molecular level reduce to the equations of continuous mechanics. Similarly, a neuron is a vast and huge thing compared to the molecular level, and it is almost certain that the neuron doesn't "care" what happens at the molecular level, it only "cares" what that reduces to at its level, which is almost certainly a (relatively) simpler functional set.

[/ QUOTE ]

Discrete perhaps. And there may be - I forget what those "humps" are called in chaos theory. But a single ion or molecule of neurotransmitter can make a difference. Butterfly effect, that's it; Wikipedia's nice. As a result, I don't see how we can functionally represent the brain without taking the molecular level into account (at least to some degree), that is, without something "slipping through the cracks." In weather prediction, it's perfectly fine for that to happen - no simulation needs perfect fidelity, and we don't need to predict the specific wind speed and angle at an exact point somewhere. If we did, we'd have to take the molecules into account - including those of obstacles in the way, particles of dirt, etc. Now, if we're just talking about general temperature there's no trouble. But with the brain, we (presumably) want everything to add up. We need to simulate that guy's hat blowing off his head, flipping through the air three times, and getting stuck in the gutter. Not just the general conditions over a broad area.

[/ QUOTE ]

You're missing the point. That a system is chaotic (which I doubt the brain is) does not mean that it cannot be simulated. The simulated result might not reproduce what you get when you run the actual experiment, but here's the thing: the actual system will not produce the same result twice either, because it is chaotic!
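To make the chaos point concrete, here's a toy illustration in Python - just the logistic map, nothing brain-specific, and the starting values are made up. Two runs that start 10^-12 apart look identical for a while and then diverge; that's all the "butterfly effect" means, and it's as true of the real system as of a simulation of it:

def logistic_trajectory(x0, r=3.9, steps=60):
    # iterate the chaotic logistic map x -> r*x*(1-x)
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000000)
b = logistic_trajectory(0.400000000001)   # perturbed by 1e-12

for step in (0, 10, 20, 30, 40, 50, 60):
    print(step, abs(a[step] - b[step]))
# The gap grows by many orders of magnitude within a few dozen steps: a
# tiny perturbation swamps the exact trajectory, but the overall behavior
# of the map is unchanged.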

[ QUOTE ]


[ QUOTE ]
[ QUOTE ]
Even assuming that computing power breaks all the physical bounds that it seems to be approaching (and it has never faced such a challenge in the past, which is largely why Moore's law has held true), assume it even goes orders of magnitude beyond that. Assume complex circuits smaller than basic particles.

[/ QUOTE ]

But don't you see this is silly? We already have computers that can simulate brains in the same space as a brain; they're called brains! If "digital" computers prove limiting, we'll switch to some other technology. We know such a technology can exist because it already does.

[/ QUOTE ]

Brains aren't computers.

[/ QUOTE ]

Yes, they are. In every meaningful sense. And denying that the brain is a computer begs the question. To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes, and velocities and accelerations from rates of change of angular size, shading, etc. Complex differential equations must be solved. That you don't understand how your brain does these calculations does not mean that your brain is not doing them. There's no such thing as a free lunch; if you want to place your hand in the right place to catch a ball, you must calculate ahead of time where it will be.

[ QUOTE ]
A computer simulation of a brain is almost by definition so much less efficient than the brain itself it's hard to describe.

[/ QUOTE ]

I don't think you understand what I'm saying. I'm saying that a perfectly adequate simulation of a brain is a set of components that behave like brain cells, shaped like a brain, and interconnected like a brain. In what way is such a system NOT a brain? /images/graemlins/confused.gif My other point is that such components almost certainly do not need all of the cellular baggage that goes along with the fact that our brains are stuck with being built out of cells, which are the only building blocks our genes had to work with.

[ QUOTE ]
It's like writing a 5-billion-page proof that 2+2=4, using all the other theorems ever developed. This is exactly what I'm saying - either we should use "some other technology" (at least to some degree) in simulating a human brain, or we should worry ourselves less about a perfect duplication of human intelligence and focus on a type of intelligence that would be more efficient in a machine like a computer (and heuristics seems most promising to me there).

[/ QUOTE ]

Maybe we're arguing over nothing then. I personally don't imagine artificial brains to be simulations running on some kind of digital Intel chip. The brain is inherently a massively parallel architecture, and that will be the way to create "AI" in my opinion. Although, more traditional computer technology will play a huge part, as it already has, in IA (Intelligence Amplification). Any doofus with a computer and the internet can now ace many intelligence tests.

[ QUOTE ]
[ QUOTE ]
It would still take a computer so large it might not fit in the state of Texas, and so slow that a hundred years of simulation wouldn't match a minute of "real-time" for the simulated entity. So yeah, we might iron the bugs out of a prototype within, hmm, a few hundred millennia?

[/ QUOTE ]

Argument from incredulity?

[/ QUOTE ]

See my discussion of exponential functions. It could still take many millennia to reach such an enormous processing power, even if the amount of time it takes for that processing power to double decreases continuously. Just so long as you aren't postulating an infinity, nothing consistent with what we've seen so far could possibly indicate progression that fast.

[/ QUOTE ]

I disagree. The number of human brains working on the problem is increasing exponentially. The technology with which they are investigating the problem is increasing exponentially. It's not any one thing that is increasing exponentially. Look at the actual data on Moore's Law. Moore's law is not simply exponential, it's beyond exponential. The time it takes for processor power to double is decreasing. If anything, it's that rate of decrease that is increasing exponentially. It's a feedback effect; human beings now use IA to help them design better computers in shorter and shorter amounts of time. That is the recipe for the technological singularity.

I'm not saying that digital computers are the answer to AI. But the hyper-exponential increase in the power of digital computers, combined with the exponentially increasing amount of human brain power being applied to these problems, is.
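A back-of-the-envelope sketch of what I mean, with completely made-up numbers: if each doubling of capability takes, say, 10% less time than the one before it, the total time for any number of doublings is a bounded geometric sum - that's the sense in which a shrinking doubling time, unlike a plain exponential with a fixed doubling time, piles up unbounded change inside a finite window:

# Made-up numbers: the first doubling takes 2 years, and each later
# doubling takes 10% less time than the one before it.
first_doubling = 2.0
shrink = 0.9

elapsed = 0.0
for n in range(1, 201):
    elapsed += first_doubling * shrink ** (n - 1)
    # after n doublings, capability has grown by a factor of 2**n

print(elapsed)                           # approaches 2 / (1 - 0.9) = 20 years
print(first_doubling / (1 - shrink))     # the geometric-series limit
# With a fixed doubling time (a plain exponential), n doublings always take
# n * doubling_time - there is no finite horizon that they all fit inside.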


[ QUOTE ]


[ QUOTE ]
A) We know computers as complex and sophisticated as the human brain can exist because they do; they're called human brains.

[/ QUOTE ]

Human brains aren't computers.

[/ QUOTE ]

Yes, they are.

[ QUOTE ]
[ QUOTE ]
B) Most of the complexity of the component parts of the human brain has absolutely nothing to do with its function, thinking.

[/ QUOTE ]

Rduke, is this true? My impression was to the contrary.

[/ QUOTE ]

I'd love to hear Rduke's answer, but I'm fairly certain that most of the cellular machinery is irrelevant, because it is identical between brain cells and, for example, bone cells.

[ QUOTE ]


[ QUOTE ]
The vast majority of the complexity of the subcomponents comes from the fact that the brain has to be built out of the same thing everything else is built out of: living cells. Living cells that have to be complex enough, in each and every one, to serve the function of every other kind of cell, and all the complexity that entails, including metabolizing, waste removal, reproduction, etc, etc, etc.

[/ QUOTE ]

From what I know, nerve cells don't contain most of the superfluous elements. A lot of it has to do with special proteins that deal with neurotransmitters, and with sodium and potassium gradients. Stuff unique to nerve cells. But I've only covered the very basics, it will probably be a couple of years before I cover this stuff in really heavy detail in school, and I don't know if I'll have time for independent study. Again, Rduke? And vhawk? Are the molecular interactions of the neurons really irrelevant? And aren't they also how dendrites grow? And about the glia, I actually was looking at a study about that just today - someone at CCNY determined that the glia were somehow relevant in how fruit flies see, I think. At any rate, it's definitely not like a computer program and computer memory. Not even close.

[/ QUOTE ]

Ugh. You're focussing on how brain hardware is different from digital computer hardware. It is. I don't care. It doesn't matter. A computer isn't a fluid, but I can simulate fluids on a computer. I don't need a neuron if I have a device that behaves in every way like a neuron. Or a glial cell. Or an axon. Or a synapse. Or whatever it is.

One thing is for certain, that no matter how complex a neuron is, it is a far simpler thing than the brain as a whole. The complex emergent properties of the brain arise from the workings of far simpler parts. Those parts can be understood, and if they can be understood, they can be simulated or replicated artificially.
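A toy example of what I mean by simple parts giving rise to function that none of the parts has on its own - three idealized threshold "neurons" in Python, each one trivial, together computing XOR. (This is strictly an illustration of the principle, obviously not a model of real neurons:)

def unit(inputs, weights, threshold):
    # an idealized threshold "neuron": fire iff the weighted sum reaches threshold
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], 1)       # fires if a OR b
    h2 = unit([a, b], [1, 1], 2)       # fires only if a AND b
    return unit([h1, h2], [1, -1], 1)  # fires if OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
# None of the three units can compute XOR alone; the wiring is what does it.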

[ QUOTE ]
[ QUOTE ]
The task as I see it is to design and build component parts that do away with all that baggage, while reproducing what my gut tells me is the relatively simple functional set of any single component, and then assembling those components.

[/ QUOTE ]

Baggage, no no. You don't know how streamlined biological systems are if you say that. There's some baggage at the "higher level," but at the molecular level it's literally scary how perfectly efficient everything is. That's what it's all about. I'm beginning to think you haven't studied biology in great depth, and that you would really find it interesting to do so. But then again, my knowledge is only slightly above that of other undergrad bio students, so maybe you know what I mean. Still, I really think you're taking the wrong approach - the "circuitry" of our brain is way way way more efficient than the circuitry of our computers.

[/ QUOTE ]

I'm beginning to think you are not understanding me. The construction of a human brain is of course fantastically efficient. But that doesn't change the fact that brain cells are stuck with the fact that they have to be manufactured from a modular unit that must also serve to manufacture every other kind of cell, as well as metabolize its own energy, eject its own biological waste products, etc. There is no reason that you can't build a device that behaves like a neuron but doesn't have to contain all of the stuff that is irrelevant to its function as a neuron.

[ QUOTE ]
[ QUOTE ]
I see no fundamental technological difficulty in such a process. Whether or not it can be done "in our lifetimes" will only be settled with time, but my estimate on the over-under is 30 years.

[/ QUOTE ]

And I say that's an irrational estimate that can't be justified.

[/ QUOTE ]

I don't see how thousands of years is any better of an estimate.

Borodog
12-01-2006, 02:19 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
B) Most of the complexity of the component parts of the human brain has absolutely nothing to do with its function, thinking.

[/ QUOTE ]

Rduke, is this true? My impression was to the contrary.

[/ QUOTE ]

I'm not sure what he is saying.

But I'd be inclined to disagree with that statement based on encephalization quotients, etc.
What makes our brain unique and more complicated than any other is almost certainly the goo that's involved in "thinking."

[/ QUOTE ]

I'm saying that the workings of a giant thing like a neuron do not depend on the workings of things like DNA, mitochondria, etc. That's not to say that the workings of all that stuff are irrelevant to building the device; clearly they're not. But once you have a device that acts like a neuron, does it matter what's inside of it?

Borodog
12-01-2006, 02:27 AM
[ QUOTE ]
[ QUOTE ]
But a single ion or molecule of neurotransmitter can make a difference. Butterfly effect, that's it, wikipedia's nice.

[/ QUOTE ]

Actually, some aspects of brain functions are fantastic at dealing with noise.

[ QUOTE ]
Are the molecular interactions of the neurons really irrelevant?

[/ QUOTE ]

Hell no. While one ion, etc. does not make a difference I think Boro and Metric are minimizing some of the non-electrical aspects of neural communication.

[/ QUOTE ]

No, I'm not, and I certainly did not say that "molecular interactions are irrelevant." Madnak is misunderstanding my point and putting words in my mouth.

All I'm saying is that I think that people like Rduke55 are smart enough to figure this stuff out and explain it. There are exponentially more people working on the problem every year using exponentially high technology tools and techniques to research and analyze them. And if it can be understood and explained, it can be simulated or artificially replicated.

Phil153
12-01-2006, 02:31 AM
[ QUOTE ]
There are exponentially more people working on the problem every year using exponentially high technology tools and techniques to research and analyze them.

[/ QUOTE ]
This assertion is false. Do you see why?

vhawk01
12-01-2006, 02:32 AM
[ QUOTE ]
[ QUOTE ]
The resolution of brain scanning technology has been increasing exponentially for 35 years.

[/ QUOTE ]

It's just been invented though!!!!

[/ QUOTE ]

Hard to say that x-rays gave NO information about the human brain.

Borodog
12-01-2006, 02:37 AM
[ QUOTE ]
[ QUOTE ]
There are exponentially more people working on the problem every year using exponentially high technology tools and techniques to research and analyze them.

[/ QUOTE ]
This assertion is false. Do you see why?

[/ QUOTE ]

No, because it isn't. Do you see why?

Metric
12-01-2006, 05:21 AM
[ QUOTE ]

Please see my post. How does an exponential function reach anything resembling a singularity? You must be thinking of a logarithmic function, but how does that fit the data?

[/ QUOTE ]
The word "singularity" in the term "technological singularity" was unfortunately co-opted from mathematics without retaining the same meaning. No, an exponential is not a singular function. The meaning is that within a fairly short length of time compared to a human lifespan, technological paradigm shifts will go from a slow, steady, and predictable rate to something so fast that no unenhanced human mind will be able to follow and comprehend -- something that can be achieved by an exponential function.

madnak
12-01-2006, 10:44 AM
Have a paper to finish and class. I'll get back to this tonight.

Rduke55
12-01-2006, 11:44 AM
[ QUOTE ]
All I'm saying is that I think that people like Rduke55 are smart enough to figure this stuff out and explain it.

[/ QUOTE ]

To figure what out?

[ QUOTE ]
There are exponentially more people working on the problem every year

[/ QUOTE ]

Why do you think this trend will continue? How many more scientists are there resources, etc. for?

[ QUOTE ]
And if it can be understood and explained

[/ QUOTE ]

I think our quibble is that I'm not sure you have a strong grasp of how far away we are from solving some of these problems.

Do you know what has been one of the best models for studying decision making, integration, and processing at the neural level? Something bunches of brilliant minds have worked on around the clock for their entire careers?
Eye movements. Whether or not to look at something, and how to do it. Pretty important to many primates, but a simple problem when you compare it to many others in the brain.
When you get to this level of analysis the problems are ridiculously difficult to examine. That's why they take something simple like eye movements. Research on this has been going on for over 30 years. Huge steps have been made but a lot still remains to be learned.

P.S. I'm not taking anything away from these researchers. I used to be one, and they are - without a doubt - some of the most brilliant and dedicated scientists around. It pisses me off when I see a popular science book talking about all these cobbled together, hand-waving ideas about neural processing and integration in all these complex things (consciousness being the foremost) when these people are making real progress here.

Rduke55
12-01-2006, 11:47 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The resolution of brain scanning technology has been increasing exponentially for 35 years.

[/ QUOTE ]

It's just been invented though!!!!

[/ QUOTE ]

Hard to say that x-rays gave NO information about the human brain.

[/ QUOTE ]

Sigh. OK, Mr. Nitpickypants. How about you add "physiological" imaging to my post then?
Hard to figure out what the brain is doing with X-rays.

Rduke55
12-01-2006, 12:12 PM
[ QUOTE ]
To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes, and velocities and accelerations from rates of change of angular size, shading, etc. Complex differential equations must be solved. That you don't understand how your brain does these calculations does not mean that your brain is not doing them. There's no such thing as a free lunch; if you want to place your hand in the right place to catch a ball, you must calculate ahead of time where it will be.

[/ QUOTE ]

I'm curious as to where or why you read or think this.
Neuro types work on this and it's most certainly NOT how the brain catches a fly ball.

Models include some variation of visual trajectory tracking cues. Usually something on keeping a constant angle of gaze and/or keeping a constant horizontal alignment with the ball which then has a changing tangent. Then there was a paper in Science in 1995 by McBeath et al. (268(5210):569-73.) that proposed a new variation called the Linear Optical Trajectory model where the fielder runs a curved path to keep the apparent trajectory of the ball a straight line.
There's been a lively debate since then on these models and the papers do use the complex math (and complex math terms) you are talking about to analyze it, but it's pretty accepted that the brain isn't doing differential equations to catch a ball.
These are also involved in prey capture stuff too. (and Mcbeath did a "How do dogs catch frisbees" paper more recently)
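If you want the flavor of these gaze-angle models without any of the real math, here's a toy Python sketch - strictly my own illustration with made-up numbers, a 2-D ball hit straight toward the fielder, not the actual model from any of those papers:

g, vx, vy = 9.8, 20.0, 20.0             # made-up launch parameters (m/s^2, m/s)
T = 2 * vy / g                          # time until the ball lands
landing_x = vx * T                      # where it actually lands

def tan_gaze(fielder_x, t):
    # tangent of the fielder's gaze elevation angle to the ball at time t
    x = vx * t
    y = vy * t - 0.5 * g * t * t
    return y / (fielder_x - x)

for fielder_x in (landing_x - 15, landing_x, landing_x + 15):
    samples = [round(tan_gaze(fielder_x, f * T / 5), 3) for f in range(1, 5)]
    print(round(fielder_x, 1), samples)
# Standing short of the landing spot, the ball's image accelerates upward;
# standing too deep, its rise slows down; only at the landing spot does it
# climb at a constant rate. "Run so the image rises steadily" parks the
# fielder in the right place with no trajectory ever computed.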

soon2bepro
12-01-2006, 12:15 PM
[ QUOTE ]
For example, perhaps the human lifespan will continue to increase, but the rate of that increase may never get above the rate of aging itself.

[/ QUOTE ]

I was thinking more in terms of extending life long enough for effective cloning/transplants/electronics to develop to a point where no more progress is needed in order for us to avoid death from natural causes.

Borodog
12-01-2006, 12:44 PM
[ QUOTE ]
[ QUOTE ]
To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes, and velocities and accelerations from rates of change of angular size, shading, etc. Complex differential equations must be solved. That you don't understand how your brain does these calculations does not mean that your brain is not doing them. There's no such thing as a free lunch; if you want to place your hand in the right place to catch a ball, you must calculate ahead of time where it will be.

[/ QUOTE ]

I'm curious as to where or why you read or think this.
Neuro types work on this and it's most certainly NOT how the brain catches a fly ball.

Models include some variation of visual trajectory tracking cues. Usually something on keeping a constant angle of gaze and/or keeping a constant horizontal alignment with the ball which then has a changing tangent. Then there was a paper in Science in 1995 by McBeath et al. (268(5210):569-73.) that proposed a new variation called the Linear Optical Trajectory model where the fielder runs a curved path to keep the apparent trajectory of the ball a straight line.
There's been a lively debate since then on these models and the papers do use the complex math (and complex math terms) you are talking about to analyze it, but it's pretty accepted that the brain isn't doing differential equations to catch a ball.
These are also involved in prey capture stuff too. (and Mcbeath did a "How do dogs catch frisbees" paper more recently)

[/ QUOTE ]

I don't think you're understanding my point. I'm not saying that whatever technique the brain uses to catch a fly ball would resemble differential calculus, but it must resemble some method that represents a very good approximation to the results thereof, or you would not be able to catch the ball. Like I said, there is no such thing as a free lunch. If you want to catch a ball, you must predict where it will be and when, which is a calculation, which makes the brain a computer by any reasonable definition. The same logic applies to everything the brain does. It's all calculation. It doesn't make any sense to deny that the brain is a computer, because that's what it does, compute. The fact that you can't see the computations or understand the methods the brain uses to make them, even in your own head, does not magically make them not computations.

Rduke55
12-01-2006, 01:05 PM
[ QUOTE ]

I don't think you're understanding my point. I'm not saying that whatever technique the brain uses to catch a fly ball would resemble differential calculus, but it must resemble some method that represents a very good approximation to the results thereof, or you would not be able to catch the ball. Like I said, there is no such thing as a free lunch. If you want to catch a ball, you must predict where it will be and when, which is a calculation, which makes the brain a computer by any reasonable definition. The same logic applies to everything the brain does. It's all calculation. It doesn't make any sense to deny that the brain is a computer, because that's what it does, compute. The fact that you can't see the computations or understand the methods the brain uses to make them, even in your own head, does not magically make them not computations.

[/ QUOTE ]

But my point is that most of the time the brain is NOT predicting where the ball will land.

Quit thinking like a physicist with all your hi-falutin' logic.
You're using your knowledge of how computers work to inform you on how the brain works. That's a useful analogy in some cases only.

Also, I may want to quibble with your contention about computations and calculations. Something about closures and specification of behavior, but one of us got trashed last night and has a bunch of stuff to brute force my way through today so I'll have to wait on thinking on that.

Borodog
12-01-2006, 01:37 PM
[ QUOTE ]
[ QUOTE ]
All I'm saying is that I think that people like Rduke55 are smart enough to figure this stuff out and explain it.

[/ QUOTE ]

To figure what out?

[/ QUOTE ]

How brain bits work.

[ QUOTE ]
[ QUOTE ]
There are exponentially more people working on the problem every year

[/ QUOTE ]

Why do you think this trend will continue? How many more scientists are there resources, etc. for?

[/ QUOTE ]

Suffice it to say that I don't think the trend will abate for at least the next 30 years.

[ QUOTE ]
[ QUOTE ]
And if it can be understood and explained

[/ QUOTE ]

I think our quibble is that I'm not sure you have a strong grasp of how far away we are from solving some of these problems.

Do you know what has been one of the best models for studying decision making, integration, and processing at the neural level? Something bunches of brilliant minds have worked on around the clock for their entire careers?
Eye movements. Whether or not to look at something, and how to do it. Pretty important to many primates, but a simple problem when you compare it to many others in the brain.
When you get to this level of analysis the problems are ridiculously difficult to examine. That's why they take something simple like eye movements. Research on this has been going on for over 30 years. Huge steps have been made but a lot still remains to be learned.

[/ QUOTE ]

Believe me, I understand. The brain is incredibly complex. The human brain is easily the most complex device in the known universe. However, I also know that complex devices are made of simpler components that each have relatively simple jobs to do. Trying to figure out how entire subsystems of the brain operate is bound to be a daunting task. But my suspicion is that, once the relevant microtechnologies have matured, which is at most a decade away, the workings of the individual component parts should be far, far simpler to understand. And I think that's where the real progress in creating artificial brains will come from, building them from the bottom up from simple component parts, rather than designing them from the top down. Because really, that's how the body does it.

[ QUOTE ]
P.S. I'm not taking anything away from these researchers. I used to be one, and they are - without a doubt - some of the most brilliant and dedicated scientists around. It pisses me off when I see a popular science book talking about all these cobbled together, hand-waving ideas about neural processing and integration in all these complex things (consciousness being the foremost) when these people are making real progress here.

[/ QUOTE ]

One thing that I'd like to make clear is that I've never read anything by this Kurzweil guy. I have nothing but handwaving assertions, because obviously I cannot predict the course of neuroscience for the next decades. I could be totally, completely, utterly wrong. Only time will tell. But my real, honest, best guess is 30 years.

You may have read this, but here (http://mindstalk.net/vinge/vinge-sing.html) is a view of the "singularity" that does not come from this Kurzweil guy. I'd read Vinge for years before I discovered he had written extensively on the topic, and found that there was a whole literature on a topic that I had been thinking about for some time. Vinge's estimate is fairly in line with mine. Writing in 1993 he said he would be surprised if the singularity happened after 2030. I'd be surprised if it happened before 2020 or after 2050.

Borodog
12-01-2006, 01:43 PM
[ QUOTE ]
[ QUOTE ]

I don't think you're understanding my point. I'm not saying that whatever technique the brain uses to catch a fly ball would resemble differential calculus, but it must resemble some method that represents a very good approximation to the results thereof, or you would not be able to catch the ball. Like I said, there is no such thing as a free lunch. If you want to catch a ball, you must predict where it will be and when, which is a calculation, which makes the brain a computer by any reasonable definition. The same logic applies to everything the brain does. It's all calculation. It doesn't make any sense to deny that the brain is a computer, because that's what it does, compute. The fact that you can't see the computations or understand the methods the brain uses to make them, even in your own head, does not magically make them not computations.

[/ QUOTE ]

But my point is that most of the time the brain is NOT predicting where the ball will land.

[/ QUOTE ]

I don't see in what sense this can be meaningful. Of course the brain predicts where the ball will be, because it then arranges to put the hand there. You can't put the hand there if you don't know where the ball is going to be, and where the ball is going to be is a calculation, or at the very least, functionally equivalent to a calculation. All of modern physical science is based on that principle; you've got to give me that much!

[ QUOTE ]
Quit thinking like a physicist with all your hi-falutin' logic.
You're using your knowledge of how computers work to inform you on how the brain works. That's a useful analogy in some cases only.

[/ QUOTE ]

No, I'm simply saying that the brain makes calculations. I'm not saying how, because I have no idea.

[ QUOTE ]


Also, I may want to quibble with your contention about computations and calculations. Something about closures and specification of behavior, but one of us got trashed last night and has a bunch of stuff to brute force my way through today so I'll have to wait on thinking on that.

[/ QUOTE ]

Okie doakie.

vhawk01
12-01-2006, 01:59 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
The resolution of brain scanning technology has been increasing exponentially for 35 years.

[/ QUOTE ]

It's just been invented though!!!!

[/ QUOTE ]

Hard to say that x-rays gave NO information about the human brain.

[/ QUOTE ]

Sigh. OK, Mr. Nitpickypants. How about you add "physiological" imaging to my post then?
Hard to figure out what the brain is doing with X-rays.

[/ QUOTE ]

So will you only accept PET imaging, or do MRIs count? MRI image resolutions have increased exponentially since the technology was first used in the 1980s.

Bear in mind I was just trying to snatch some relevant examples for Phil of other related trends that are increasing exponentially. I gave a disclaimer then and will do so again that these are pulled from his book.

Rduke55
12-01-2006, 02:10 PM
[ QUOTE ]
Suffice it to say that I don't think the trend will abate for at least the next 30 years.

[/ QUOTE ]

You think we'll keep adding scientists at an exponential rate for the next 30 years? NIH will be HUGE. Every village will have a university!

[ QUOTE ]
How brain bits work.

[/ QUOTE ]

[ QUOTE ]
Believe me, I understand. The brain is incredibly complex. The human brain is easily the most complex device in the known universe.

[/ QUOTE ]

I think my point is centering on the ridiculous difficulty of understanding much of this in order to model it at a fairly accurate level. And while the computer stuff is not my field of expertise, I think I have a pretty good understanding of how fast we are going and have been going in this regard, as well as the obstacles facing us.

[ QUOTE ]
Trying to figure out how entire subsystems of the brain operate is bound to be a daunting task. But my suspicion is that, once the relevant microtechnologies have matured, which is at most a decade away, the workings of the individual component parts should be far, far simpler to understand.

[/ QUOTE ]

I think this may be a major part of our disagreement.

[ QUOTE ]
One thing that I'd like to make clear is that I've never read anything by this Kurzweil guy. I have nothing but handwaving assertions, because obviously I cannot predict the course of neuroscience for the next decades. I could be totally, completely, utterly wrong. Only time will tell. But my real, honest, best guess is 30 years.

[/ QUOTE ]

Depending on the specifics we are talking about I'm saying it's way past that. Can we get better sorting programs, etc. based on some kind of weak AI by then? Sure. Can we get some basic distributed coding stuff or a new type of circuit or computer that's been informed by the brain? Sure.

But simulation of the human brain, much less some more complex intelligence? Hell no.

Rduke55
12-01-2006, 02:13 PM
[ QUOTE ]
Of course the brain predicts where the ball will be, because it then arranges to put the hand there.

[/ QUOTE ]

Often by tracking, not the type of computation you're talking about.

Plus, think about breaking up the examples you may be thinking of into "near-field" and "far-field" cases (probably a poor choice of terms there).

Rduke55
12-01-2006, 02:18 PM
We're on different time scales. I'm saying the 80's is just invented and I have a problem when people don't take that into account. Of course we're in a period of rapid growth, we're right at the beginning of this technology!

vhawk01
12-01-2006, 02:29 PM
[ QUOTE ]
We're on different time scales. I'm saying the 80's is just invented and I have a problem when people don't take that into account. Of course we're in a period of rapid growth, we're right at the beginning of this technology!

[/ QUOTE ]
I agree it's recent compared to, say, human civilization or evolution. But you gotta understand that the relevant timescales for technology are drastically reduced. Computers are less than a hundred years old yet we have no trouble tracking trends year-by-year. The same can be true of medical technology. This is fundamentally part of the point Boro (and Kurzweil) are trying to make, I think.

Phil153
12-01-2006, 03:17 PM
RDuke,

how many people would you say work in your field of research (the structure and direct functioning of the brain)? How many ten years ago? When did the field start? Are student numbers rapidly increasing?

I wish to disabuse Borodog of his notion that the number of scientists researching this field has been increasing exponentially, and hard numbers are the only way to get through his stubborn skull.

Rduke55
12-01-2006, 03:47 PM
The best I can do is give you the attendance numbers for the annual Society for Neuroscience meeting.
It's the biggest meeting dedicated to neuroscience and, while not everyone goes to it, the majority of neuroscientists I know go to it.

http://www.sfn.org/index.cfm?pagename=annualMeeting_statistics

Look after the graph for older years.

Rduke55
12-01-2006, 03:49 PM
[ QUOTE ]
[ QUOTE ]
We're on different time scales. I'm saying the 80's is just invented and I have a problem when people don't take that into account. Of course we're in a period of rapid growth, we're right at the beginning of this technology!

[/ QUOTE ]
I agree it's recent compared to, say, human civilization or evolution. But you gotta understand that the relevant timescales for technology are drastically reduced. Computers are less than a hundred years old yet we have no trouble tracking trends year-by-year. The same can be true of medical technology. This is fundamentally part of the point Boro (and Kurzweil) are trying to make, I think.

[/ QUOTE ]

Why mention evolution? I'm comparing it to other life science technology (microscopy, electrophys, staining techniques, etc.)

vhawk01
12-01-2006, 03:55 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
We're on different time scales. I'm saying the 80's is just invented and I have a problem when people don't take that into account. Of course we're in a period of rapid growth, we're right at the beginning of this technology!

[/ QUOTE ]
I agree it's recent compared to, say, human civilization or evolution. But you gotta understand that the relevant timescales for technology are drastically reduced. Computers are less than a hundred years old yet we have no trouble tracking trends year-by-year. The same can be true of medical technology. This is fundamentally part of the point Boro (and Kurzweil) are trying to make, I think.

[/ QUOTE ]

Why mention evolution? I'm comparing it to other life science technology (microscopy, electrophys, staining techniques, etc.)

[/ QUOTE ]

Ok. I was just trying to address your point that MRI and other imaging techniques are recent by pointing out that 'recent' is pretty arbitrary. I'd be curious to see if resolving power for microscopy increased at an exponential rate over the history of microscopy. Intuitively I would say that it did, just from what I know of the history of the technique.

But microscopy is a point in your direction, Rduke. While it certainly may (and probably did) increase exponentially, that growth HAS to slow down and eventually halt because of the diffraction limit - you can't resolve features much smaller than the wavelength you're imaging with. If we want better resolving power we need a new technique or else it's impossible. Do you know of any such limit imposed on these other technological advances? And do you think we would reach these limits before accomplishing what Boro anticipates?

Rduke55
12-01-2006, 03:58 PM
Also, I do think Boro and Metric's point about some of the smaller issues with the brain is valid.
When I first introduce the subject of circuits, distributed coding, integration, etc. to class, before we get to the nuts and bolts lectures on those subjects, there's often a young know-it-all (usually from some physics, chemistry, or other such boring science /images/graemlins/grin.gif) who asks some version of "Why is this important? If we know what the biochemistry that makes up the brain is we should be able to predict its behavior."

My standard answer is "OK, smash your computer with a hammer, put that into a blender, analyze the chemical makeup, and tell me how Windows works."

Rduke55
12-01-2006, 04:08 PM
[ QUOTE ]

Ok. I was just trying to address your point that MRI and other imaging techniques are recent by pointing out that 'recent' is pretty arbitrary. I'd be curious to see if resolving power for microscopy increased at an exponential rate over the history of microscopy. Intuitively I would say that it did, just from what I know of the history of the technique.

[/ QUOTE ]

Leeuwenhoek's microscope could do 270x or so. Modern light microscopes top out at about 1,250x under ordinary light, or a couple of thousand under blue light. So no big jump there.
However, electron microscopes were a huge jump. EMs can magnify 500,000 or a million times if I remember correctly.

[ QUOTE ]
But microscopy is a point in your direction, Rduke. While it certainly may (and probably did) increase exponentially, that growth HAS to slow down and eventually halt because of the diffraction limit - you can't resolve features much smaller than the wavelength you're imaging with. If we want better resolving power we need a new technique or else it's impossible. Do you know of any such limit imposed on these other technological advances? And do you think we would reach these limits before accomplishing what Boro anticipates?

[/ QUOTE ]

I think you just hit what I've been thrashing around and not getting to. I'll have to think about that, but I'm kind of thinking there's a technological limit to how much we can find out about the human brain (should we start using the term "mind" to distinguish the human brain's qualities from other, general brain qualities, or is that too loaded a term?) within the foreseeable future, because of obstacles inherent to the brain.
I have to think about this rather than pop off.

Rduke55
12-01-2006, 04:14 PM
As an aside, this is exactly the kind of thread I wish SMP had more of. Interesting subject, great discussion, everyone's civil.

Phil153
12-01-2006, 04:54 PM
[ QUOTE ]
The best I can do is give you the attendance numbers for the annual Society for Neuroscience meeting.

[/ QUOTE ]
Thanks, very helpful.

For borodog:

http://img116.imageshack.us/img116/2373/nocheesehj2.gif
(the red lines are mine).

The growth in this field was initially exponential then became linear (as common sense would suggest). Note also that new scientists up until today are largely coming from a time of exponential population growth (which has since plateaued out). So there is definitely not an exponential growth in the number of people working on this problem, as you asserted. Do you see why?

Please stop making false assertions, then I won't have to waste time debunking them.

hmkpoker
12-01-2006, 05:14 PM
[ QUOTE ]

Models include some variation of visual trajectory tracking cues. Usually something on keeping a constant angle of gaze and/or keeping a constant horizontal alignment with the ball which then has a changing tangent. Then there was a paper in Science in 1995 by McBeath et al. (268(5210):569-73.) that proposed a new variation called the Linear Optical Trajectory model where the fielder runs a curved path to keep the apparent trajectory of the ball a straight line.

[/ QUOTE ]

[ QUOTE ]
[ QUOTE ]
[ QUOTE ]

I don't think you're understanding my point. I'm not saying that whatever technique the brain uses to catch a fly ball would resemble differential calculus, but it must resemble some method that represents a very good approximation to the results thereof, or you would not be able to catch the ball. Like I said, there is no such thing as a free lunch. If you want to catch a ball, you must predict where it will be and when, which is a calculation, which makes the brain a computer by any reasonable definition. The same logic applies to everything the brain does. It's all calculation. It doesn't make any sense to deny that the brain is a computer, because that's what it does, compute. The fact that you can't see the computations or understand the methods the brain uses to make them, even in your own head, does not magically make them not computations.

[/ QUOTE ]

But my point is that most of the time the brain is NOT predicting where the ball will land.

[/ QUOTE ]

I don't see in what sense this can be meaningful. Of course the brain predicts where the ball will be, because it then arranges to put the hand there. You can't put the hand there if you don't know where the ball is going to be, and where the ball is going to be is a calculation, or at the very least, functionally equivalent to a calculation. All of modern physical science is based on that principle; you've got to give me that much!

[/ QUOTE ]


What this suggests is that if someone hits a fly ball into the outfield, and the presumably fast outfielder believes he can cover a lot of ground, then by maintaining a straight-line apparent trajectory for the ball he can spend the entire run readjusting to new trajectory information and eventually catch the ball without ever having initially predicted its landing point to any reasonable degree of accuracy. So if the fielder stands in the middle of the field, knowing only that the ball will land somewhere on it, and catches the ball by constantly adjusting to the trajectory, this algorithm will still lead him to the ball.

The problem is a matter of degrees with respect to the term "calculation." I think Rduke is interpreting this to mean a single, complex mathematical process, while boro is interpreting it to mean any quantitative process (or group of processes). The algorithm that Rduke described basically suggests that the runner is not performing one elegant "calculation," but an extremely large number of basic comparative compensations. E.g., if at t=1 the ball's apparent trajectory drifts to the right, the player moves right to re-center it; if at t=1.5 it drifts left, the player moves that way, and he continues making these simple adjustments until the ball reaches his hand.

Because this act is so continuous with time, the nature of the prediction must be too. For example, the moment the ball is launched the player has no idea where on the field it will land. However, the moment he moves to visually align the ball's trajectory to the right, he has ruled out the possibility that the ball will land on the left side of the field, and refines his prediction to the right side of the field, increasing the accuracy of his prediction by a factor of two. As he continues to do this, the prediction eventually narrows to the area of his mitt. It doesn't have to be a differential equation, or even a single equation, at all.

Do these constant simple adjustments count as a "calculation?" At this point it's just semantic.
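
For what it's worth, here's a toy sketch of that kind of rule in code. It's purely illustrative: the heuristic (run so that the tangent of the ball's elevation angle keeps rising at a steady rate, i.e. null its "optical acceleration") is one of the strategies discussed in that literature if I have it right, but the 1-D setup, the numbers, and the gain are my own assumptions, not the model from the McBeath paper.

import math

def catch_demo(v0=25.0, launch_deg=50.0, fielder_start=75.0,
               dt=0.02, gain=150.0, max_speed=8.0):
    # 1-D toy: the batter is at x = 0 and the fielder starts beyond the
    # landing point, so the ball stays in front of him the whole time.
    # He never computes a landing point; he just watches tan(elevation
    # angle) and runs so that it keeps rising at a steady rate.
    g = 9.8
    vx = v0 * math.cos(math.radians(launch_deg))
    vz = v0 * math.sin(math.radians(launch_deg))
    ball_x = ball_z = 0.0
    fielder = fielder_start
    seen = []                                   # recent tan(elevation) samples
    while True:
        ball_x += vx * dt                       # advance the ball one time step
        ball_z += vz * dt
        vz -= g * dt
        sep = fielder - ball_x                  # horizontal gap, ball in front
        if ball_z <= 0.0 or sep < 0.5:
            break
        seen.append(ball_z / sep)
        if len(seen) >= 3:
            # discrete 2nd derivative of tan(elevation): the "optical acceleration"
            accel = (seen[-1] - 2.0 * seen[-2] + seen[-3]) / (dt * dt)
            # image accelerating upward -> ball will carry past you: back up;
            # image decelerating -> it will drop in front of you: run in
            run = max(-max_speed, min(max_speed, gain * accel))
            fielder += run * dt
    return ball_x, fielder

landing, final_pos = catch_demo()
print("ball comes down near %.1f m; fielder ends up at %.1f m" % (landing, final_pos))

Nothing in that loop ever solves for a landing point; the fielder only reacts to how the ball's image is behaving, which is all I mean by "constant simple adjustments."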

vhawk01
12-01-2006, 05:50 PM
hmk,

Does it matter that in reality the player is NOT continually adjusting to the ball in flight, or at the very least doesn't need to be? A baseball player first picks up the ball, starts moving in a general direction, and then PREDICTS where the ball will land. He can still make adjustments and corrections to his prediction, but he can also stop looking at the ball and simply run to the spot where it will land with impressive accuracy. Intuitively, it seems he really is plotting out some sort of parabolic path for the ball.

madnak
12-01-2006, 05:54 PM
I've caught up on reading the thread, I'm not sure if I'll ever catch up on responses. I'll probably just jump to the front of the line. But a few things...

[ QUOTE ]
You're missing the point. That a system is chaotic (which I doubt the brain is) does not mean that it cannot be simulated. The simulated result might not reproduce the result if you run the actual experiment, but here's the thing: the actual system will not produce the same result either, because it is chaotic!

[/ QUOTE ]

Given the exact same initial conditions it will. But Rduke seems to agree that the brain isn't chaotic, so that's the end of that.


[ QUOTE ]
Yes, they are. In every meaningful sense. And denying that
the brain is a computer begs the question. To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes, and velocities and accelerations from rates of change of angular size, shading, etc. Complex differential equations must be solved. That you don't understand how your brain does these calculations does not mean that your brain is not doing them. There's no such thing as a free lunch; if you want to place your hand in the right place to catch a ball, you must calculate ahead of time where it will be.

[/ QUOTE ]

First, the brain doesn't do that. And it doesn't have to. Having done some work as a programmer, I'm almost shocked you could suggest such a thing. I could probably even program something to catch a ball without going beyond algebra, assuming some developed motor functions were in place. And it's been shown many times that the brain loves to use "tricks," especially in the interpretation of visual stimuli. Studies relating to animal migration, etc., also indicate that the brain isn't literally doing the math. And more and more studies are showing that "brain math" is nothing like symbolic math - here's (http://www.sciencedaily.com/releases/2006/04/060425015333.htm) a recent one I found googling for a post on this site.

But the major issue is the terminology. You're using "computer" in two different ways, and only one of them has anything to do with the accepted definition (http://dictionary.reference.com/browse/computer). At the most basic level, "computer" means "something that follows programs." It does not mean "something that computes." That's what the word sounds like, but it hasn't meant that for a long time. In the definition I linked, the relevant phrase is "perform prescribed mathematical and logical operations," and the relevant word is "prescribed." The same concept is present in virtually every definition and description of the computer.

The brain doesn't follow programs, it's indescribably dynamic. I suppose there are certain brain functions that might meet the definition technically, but if so the brain only "acts like" a computer in very specific situations - most of the functions of the brain are not computational, and certainly most functions of the brain don't follow a program. You seem to have a stubborn assumption that the brain "must" follow some program. I don't care if it seems transparently obvious to you, it's not how things work.

[ QUOTE ]
My other point is that such components almost certainly do not need all of the cellular baggage that goes along with the fact that our brains are stuck with being built out of cells, which is the only building blocks our genes had to work with.

[/ QUOTE ]

I don't think you understand the efficiency and genius of brain mechanisms. You're talking about the most streamlined, efficient machine in the known universe, and talking about "baggage." And suggesting that another kind of machine, a machine which, in its current incarnation, is a hulking monstrosity of clumsy and wasteful processes... And you're saying that the giant lumbering thing will be more efficient than the slick dynamo. It's like saying we'll make a computer smaller than the very smallest fundamental particle. Is it possible? Maybe. But there are some huge hurdles to go through before it can happen.

[ QUOTE ]
I disagree. The number of human brains working on the problem is increasing exponentially. The technology with which they are investigating the problem is increasing exponentially. It's not any one thing that is increasing exponentially. Look at the actual data on Moore's Law. Moore's law is not simply exponential, it's beyond exponential. The time it takes for processor power to double is decreasing. If anything, it's that rate of decrease that is increasing exponentially. It's a feedback effect; human beings now use IA to help them design better computers in shorter and shorter amounts of time. That is the recipe for the technological singularity.

I'm not saying that digital computers are the answer to AI. But the hyper-exponential increase in the power of digital computers, combined with the exponentially increasing amount of human brain power that is being applied to these problems, is.

[/ QUOTE ]

So you have e^(x^3) instead of e^x? Again, I don't think you recognize just how big the gap is. Everything we've accomplished so far isn't even a speck of dust on a football field compared to the gulf you're talking about. You're saying we'll go many, many orders of magnitude farther in the next 30 years than we have in the last 50,000.

Anyhow, Moore's law takes these factors into account. This is why processor power has been doubling every two years. Maybe it's slightly more than that, sure - but if you control for this growth (which is probably not exponential and almost certainly not "very" exponential, but others are covering that) you'll find that Moore's law no longer holds. Moore's law describes the cumulative effects of these processes.

[ QUOTE ]
I'd love to hear Rduke's answer, but I'm fairly certain that most of the cellular machinery is irrelevant, because it is identical between brain cells and, for example, bone cells.

[/ QUOTE ]

I believe nerve cells are actually some of the most differentiated cells in existence. They're tailored so well to their tasks it's hard to imagine anything better. Tell me if I'm wrong, Rduke...

[ QUOTE ]
Ugh. You're focussing on how brain hardware is different from digital computer hardware. It is. I don't care. It doesn't matter. A computer isn't a fluid, but I can simulate fluids on a computer. I don't need a neuron if I have a device that behaves in every way like a neuron. Or a glial cell. Or an axon. Or a synapse. Or whatever it is.

[/ QUOTE ]

A supercomputer could probably simulate a neuron. So a hundred billion of them. And then you'd need more for the relationships between neurons - quadrillions of them - and you'd have to have self-contained power sources (more efficient than mitochondria?), you'd need to control for damage and repairs, you'd need cell division (because yes, neurons do sometimes divide)... And we're just getting started, and ignoring the glia. Again, knock off an order of magnitude or three, it's still a much greater accomplishment than humans have ever come close to.

[ QUOTE ]
I'm beginning to think you are not understanding me. The construction of a human brain is of course fantastically efficient. But that doesn't change the fact that brain cells are stuck with the fact that they have to be manufactured from a modular unit that must also serve to manufacture every other kind of cell, as well as metabolize its own energy, eject its own biological waste products, etc. There is no reason that you can't build a device that behaves like a neuron but doesn't have to contain all of the stuff that is irrelevant to its function as a neuron.

[/ QUOTE ]

Neurons have made all that stuff essential to their function as neurons - that's a big part of why they're so awesome. They use some of the same mechanisms that exist in other cells to do things that other cells don't do. And of course, some of the mechanisms - power, repair, etc - apply equally to man-made equipment. (But to repair as well as an enzyme? To provide localized power better than mitochondria? To regenerate components better than ribosomes and RNA? Those alone are monumental tasks.)

[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
I see no fundamental technological difficulty in such a process. Whether or not it can be done "in our lifetimes" will only be settled with time, but my estimate on the over-under is 30 years.

[/ QUOTE ]

And I say that's an irrational estimate that can't be justified.

[/ QUOTE ]

I don't see how thousands of years is any better of an estimate.

[/ QUOTE ]

Try to make it work, mathematically. Use Kurzweil's most optimistic version of the history of technology, and develop some function that fits that curve. Project that function across the next 30 years and see how far it goes. Then we can talk about just how much we'd need to multiply our current level of technology in order to reach these stages you're talking about. My best guess would be over a million, but it's at least a hundred. Can you show, even with the optimistic data, a hundredfold increase in our level of technology over the next 30 years? (Much less a millionfold)

Your estimate strikes me as arbitrary. And so is mine, yeah. But "thousands" is pretty broad, and I allow a large margin of error.
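
If you want to actually run the exercise, here's the crude version I have in mind. The doubling times and the size of the gap are stand-in assumptions, and "processing power doubles" is a much narrower claim than "our level of technology doubles," which is exactly what's in dispute.

import math

def multiple_after(years, doubling_time):
    # overall improvement factor after `years` if capability doubles every `doubling_time` years
    return 2.0 ** (years / doubling_time)

def years_to_reach(multiple, doubling_time):
    # years needed to multiply capability by `multiple` at that doubling rate
    return doubling_time * math.log(multiple, 2)

print("30 years at a 2-year doubling time: about %.0fx" % multiple_after(30, 2.0))
print("years to close a 100x gap: about %.0f" % years_to_reach(1e2, 2.0))
print("years to close a millionfold gap: about %.0f" % years_to_reach(1e6, 2.0))

Whether anything other than transistor counts actually doubles on that kind of schedule is, of course, the real question.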

vhawk01
12-01-2006, 06:04 PM
[ QUOTE ]

A supercomputer could probably simulate a neuron. So a hundred billion of them. And then you'd need more for the relationships between neurons - quadrillions of them - and you'd have to have self-contained power sources (more efficient than mitochondria?), you'd need to control for damage and repairs, you'd need cell division (because yes, neurons do sometimes divide)... And we're just getting started, and ignoring the glia. Again, knock off an order of magnitude or three, it's still a much greater accomplishment than humans have ever come close to.



[/ QUOTE ]

I think this point here goes to some of the frustration. Ten years ago, instead of saying a hundred billion supercomputers, you would have said either it's impossible or it would require 10^12 or 10^15 supercomputers. Now you are allowing it may only require a hundred billion, and in ten more years it might require a billion or a hundred million. In 30 years it might require a hundred. It's amazingly easy to underestimate the amazing growth that is going on. I think in order to make your point valid you MUST posit some sort of restrictive or absolute limit to the continued expansion of computing power, or else it's essentially a certainty that this will be accomplished within our lifetimes.

Rduke55
12-01-2006, 06:20 PM
[ QUOTE ]
[ QUOTE ]

A supercomputer could probably simulate a neuron. So a hundred billion of them. And then you'd need more for the relationships between neurons - quadrillions of them - and you'd have to have self-contained power sources (more efficient than mitochondria?), you'd need to control for damage and repairs, you'd need cell division (because yes, neurons do sometimes divide)... And we're just getting started, and ignoring the glia. Again, knock off an order of magnitude or three, it's still a much greater accomplishment than humans have ever come close to.



[/ QUOTE ]

I think this point here goes to some of the frustration. Ten years ago, instead of saying a hundred billion supercomputers, you would have said either it's impossible or it would require 10^12 or 10^15 supercomputers. Now you are allowing it may only require a hundred billion, and in ten more years it might require a billion or a hundred million. In 30 years it might require a hundred. It's amazingly easy to underestimate the amazing growth that is going on. I think in order to make your point valid you MUST posit some sort of restrictive or absolute limit to the continued expansion of computing power, or else it's essentially a certainty that this will be accomplished within our lifetimes.

[/ QUOTE ]

I think the thrust of my thoughts regarding that is that, because of the incomprehensible amount of digital computing power needed to simulate the brain, combined with the ginormous gaps in our knowledge of the mechanisms behind higher brain function, any reasonable AI - even if its development is informed by neuroscience - will be very different from the brain and very likely not close to it in capability, much less "superior," unless we have a huge revolution not in processing power (which is where all these singularity types are spending most of their time) but in how computers work.

vhawk01
12-01-2006, 06:27 PM
Is it incomprehensible though? I fully admit I have no idea how much computing power it would take to simulate a brain. But does that mean that NO ONE does? Is it at least possible to put upper and lower bounds on an estimate?

I'm not accusing you of this, certainly, but for the majority of people "a whole lot" and "an infinite amount" are essentially the same thing.

Rduke55
12-01-2006, 07:00 PM
I couldn't put a number on it either. And I would be skeptical of anyone who says they could. But it would have to be friggin' huge. Like, "we should be starting to wonder about your point on physical limitations" big.
Maybe incomprehensible is not the best word choice. But to use a digital computer to imitate what the brain does, with its peculiar and not so peculiar processing, would require an amount of processing power that is beyond most people's reckoning (even people like Kurzweil IMO). For one, we have to represent all that real-number analog stuff with digital increments.
Maybe we could call it "staggering"?

vhawk01
12-01-2006, 07:06 PM
[ QUOTE ]
I couldn't put a number on it either. And I would be skeptical of anyone who says they could. But it would have to be friggin' huge. Like, "we should be starting to wonder about your point on physical limitations" big.
Maybe incomprehensible is not the best word choice. But to use a digital computer to imitate what the brain does, with its peculiar and not so peculiar processing, would require an amount of processing power that is beyond most people's reckoning (even people like Kurzweil IMO). For one, we have to represent all that real-number analog stuff with digital increments.
Maybe we could call it "staggering"?

[/ QUOTE ]

Ok, quick side question because you'd know better than anyone else I can ask:

Are brain processes really analog? How can they be? I read this all the time: because there are varying strengths of interactions, it's an analog process rather than an 'all-or-nothing' digital one. But how can that be? Isn't any real-life analog process really just an illusion built out of smaller digital processes?

Rduke55
12-01-2006, 07:19 PM
[ QUOTE ]
[ QUOTE ]
I couldn't put a number on it either. And I would be skeptical of anyone who says they could. But it would have to be friggin' huge. Like, "we should be starting to wonder about your point on physical limitations" big.
Maybe incomprehensible is not the best word choice. But to use a digital computer to imitate what the brain does, with its peculiar and not so peculiar processing, would require an amount of processing power that is beyond most people's reckoning (even people like Kurzweil IMO). For one, we have to represent all that real-number analog stuff with digital increments.
Maybe we could call it "staggering"?

[/ QUOTE ]

Ok, quick side question because you'd know better than anyone else I can ask:

Are brain processes really analog? How can they be? I read this all the time: because there are varying strengths of interactions, it's an analog process rather than an 'all-or-nothing' digital one. But how can that be? Isn't any real-life analog process really just an illusion built out of smaller digital processes?

[/ QUOTE ]

I think that's a great (and hard) question. I'm looking forward to some others' thoughts on this as it pertains to things outside of neuroscience too.
People often think these things when they learn about the action potential's all-or-none aspects and the quantal nature of much of neurotransmitter release, but I'd think no. Many of the proteins, etc. involved in the process have multiple states of activation and I'm not sure how you could say the chemical communication or the non-action potential electrical activity is not analog - to name a few.

Borodog
12-01-2006, 07:28 PM
[ QUOTE ]
[ QUOTE ]
Suffice it to say that I don't think the trend will abate for at least the next 30 years.

[/ QUOTE ]

You think we'll keep adding scientists at an exponential rate for the next 30 years? NIH will be HUGE. Every village will have a university!

[/ QUOTE ]

A growth rate of 4% per annum is only an increase of 224% after 30 years. What's the problem?
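
(Checking the arithmetic, since the number looks surprisingly small: 1.04^30 is about 3.24, i.e. roughly 3.2 times today's headcount, which is the 224% increase.)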

[ QUOTE ]
[ QUOTE ]
How brain bits work.

[/ QUOTE ]

[ QUOTE ]
Believe me, I understand. The brain is incredibly complex. The human brain is easily the most complex device in the known universe.

[/ QUOTE ]

I think my point is centering on the ridiculous difficulty of understanding much of this in order to model it at a fairly accurate level. And while I'm not in my field of expertise with the computer stuff I think I have a pretty good understanding of how fast we are going and have been going in this regard as well as the obstacles facing us.

[/ QUOTE ]

Can you explain to me what is so complex about the workings of, say, a single neuron, that indicates that the functions of a single neuron cannot be modelled? And I don't mean the inner workings, or the ways by which it does what it does. I mean the mapping of inputs to outputs. Even if a single neuron has tens of thousands of IO channels, that's not all that much data to keep track of. If, say, I can create a neural net that mimics the workings of a single neuron, in what way have I not simulated a neuron?
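
Just to make the "mimic the I/O mapping" idea concrete, here's a toy sketch. The "neuron" below is a made-up smooth function standing in for a measured input-output relation, which is of course exactly the simplification being argued about; nothing here is meant as a model of a real cell.

import numpy as np

# Stand-in for a measured input-to-output mapping of a "neuron": two input
# drive levels in, one firing-rate-like number out. The function is made up;
# in the scenario above it would come from recordings, not from a formula.
def fake_neuron(x):
    return np.tanh(1.5 * x[:, 0] - 0.8 * x[:, 1] + 0.3 * x[:, 0] * x[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = fake_neuron(X)

# One-hidden-layer net trained by plain full-batch gradient descent to mimic
# that mapping. Sizes, learning rate, and step count are arbitrary choices.
W1 = rng.normal(0.0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(3000):
    h = np.tanh(X @ W1 + b1)                     # hidden layer activations
    pred = (h @ W2 + b2).ravel()                 # network's guess at the output
    err = pred - y
    gW2 = h.T @ err[:, None] / len(X)            # backprop: output-layer gradients
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("mean abs error of the mimic on its training inputs: %.3f" % np.abs(pred - y).mean())

A real cell's mapping is vastly messier and history-dependent, as Rduke argues below; this is only the flavor of what I mean by reproducing the mapping rather than the machinery.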

[ QUOTE ]


[ QUOTE ]
Trying to figure out how entire subsystems of the brain operate is bound to be a daunting task. But my suspicion is that, once the relevant microtechnologies have matured, which is at most a decade away, the workings of the individual component parts should be far, far simpler to understand.

[/ QUOTE ]

I think this may be a major part of our disagreement.

[/ QUOTE ]

But nobody has explained to me why it is so fundamentally difficult. Neurons have inputs and outputs. They respond to inputs in some systematic way, or else they would be useless. The problem up to this point is that it is difficult and painstaking to try to map inputs to outputs. This will change dramatically as microtechnologies improve. And the timeframe for that is years or decades, not centuries and millennia.

[ QUOTE ]
[ QUOTE ]
One thing that I'd like to make clear is that I've never read anything by this Kurzweil guy. I have nothing but handwaving assertions, because obviously I cannot predict the course of neuroscience for the next decades. I could be totally, completely, utterly wrong. Only time will tell. But my real, honest, best guess is 30 years.

[/ QUOTE ]

Depending on the specifics we are talking about, I'm saying it's way past that. Can we get better sorting programs, etc. based on some kind of weak AI by then? Sure. Can we get some basic distributed coding stuff, or a new type of circuit or computer that's been informed by the brain? Sure.

But simulation of the human brain, much less some more complex intelligence? Hell no.

[/ QUOTE ]

I still think you're caught up in the term "simulation" in ways that I'm not. I doubt that a digital simulation of a brain is a very smart thing to do. What I'm talking about is networks of hundreds of billions of artificial brain cells, that operate like normal brain cells, interconnected like a brain is. Each is a very simple hardware device whose computational capabilities are extremely limited. But put them to work doing hyper-massively parallel processing, and those simple components can do amazing things.

Borodog
12-01-2006, 08:09 PM
[ QUOTE ]
[ QUOTE ]
The best I can do is give you the attendance numbers for the annual Society for Neuroscience meeting.

[/ QUOTE ]
Thanks, very helpful.

For borodog:

http://img116.imageshack.us/img116/2373/nocheesehj2.gif
(the red lines are mine).

The growth in this field was initially exponential then became linear (as common sense would suggest). Note also that new scientists up until today are largely coming from a time of exponential population growth (which has since plateaued out). So there is definitely not an exponential growth in the number of people working on this problem, as you asserted. Do you see why?

Please stop making false assertions, then I won't have to waste time debunking them.

[/ QUOTE ]

Fantastic. Take a curved dataset, throw out half, and call the other half linear. Brilliant! And even then you're wrong, because the data from 1981 onward is actually a 4/3 power law.

http://i27.photobucket.com/albums/c153/Borodog/Book1_6042_image001.gif

Not to mention the fact that there is really no reason to expect conference attendance to be proportional to the number of people working in the field.

But I summarily concede the argument over whether the number of people working on the problem is increasing exponentially, because it doesn't matter. Linearly increasing numbers of people working on it are entirely sufficient.

Rduke55
12-01-2006, 08:24 PM
[ QUOTE ]
I mean the mapping of inputs to outputs.

[/ QUOTE ]

I still think you're oversimplifying it. Each neuron's inputs and outputs, besides being incredibly numerous and both digital and analog, are incredibly dynamic. How can you do all the mapping you'd need for the simulations, given that you would be changing those connections by the very act of mapping them?

[ QUOTE ]
Even if a single neuron has tens of thousands of IO channels, that's not all that much data to keep track of.

[/ QUOTE ]

I really dislike that statement. Quit thinking like a computer scientist! It's an insane amount of data to keep track of. Differing electrical properties, chemical messengers (both local and distributed), modulators of both of those that change depending on activity, etc. etc. etc.
And the neuron doesn't only affect (or get affected by) the neurons it synapses with. This makes it a much more difficult problem than those thousands of IO channels you are talking about.

[ QUOTE ]
I can create a neural net that mimmicks the workings of a single neuron, in what way have I not simulated a neuron?

[/ QUOTE ]

I think what you should say is that you can create a neural net that mimics a few aspects of neural function under highly controlled circumstances. Don't kid yourself, all these learning neural nets are still working within highly controlled parameters.

[ QUOTE ]
The problem up to this point is that it is difficult and painstaking to try to map inputs to outputs. This will change dramatically as microtechnologies improve. And the timeframe for that is years or decades, not centuries and millennia.

[/ QUOTE ]

I think words like "difficult" and "painstaking" massively minimize the scale of the problem at hand.

What microtechnologies are you talking about? And how will they run all the inputs and output simulations necessary while also not affecting the dynamic state of the networks?

I'd be less opposed to the centuries idea. But a couple of decades? No.

Also, I don't think it's intentional but you kind of are taking the position that there's all these advancements on the way that I am unaware of. I'm on, and surrounded by people also on, the cutting edge of measuring neural activity and neuronal behavior. I know what our capabilities are and where they are headed. I've spent a lot of time thinking and talking about these things for obvious reasons. (sorry for diva-ing out on you there)

[ QUOTE ]
I doubt that a digital simulation of a brain is a very smart thing to do.

[/ QUOTE ]

We are in agreement!

[ QUOTE ]
But put them to work doing hyper-massively parallel processing, and those simple components can do amazing things.

[/ QUOTE ]

OK I really gotta go but we need to come back to this at a future point. That just opened a whole can of worms. I think it was Dennett talking about this in AI. Also, Searle has some other issues we need to discuss. (no, I'm not just thinking Chinese room here)

Borodog
12-01-2006, 08:45 PM
Phil,

Over the interval in question, there's really no way to distinguish between a power law and a linear function. So consider that conceded as well.

Girchuck
12-01-2006, 09:43 PM
So, the scientists are no longer increasing exponentially.
How about money?
Is the money and resources going into brain-scanning research increasing exponentially?
If human brains can be simulated by 2030, then rat brains should be simulated earlier, because they are much simpler.
When do you think rat brains will be simulated? Is it 2020, 2025? If rat brains are simulated by 2025, then cockroach brains will have to be simulated even sooner because they are simpler. When do you think a cockroach brain will be simulated? Would it be 2020? Earlier?
That only leaves us 15 years to simulate a cockroach brain.
Do you believe it will be possible? Every little nearby bit of an exponential curve seems linear to an observer sitting on the curve, and for good reason. I think a simulation of an insect brain in 15 years will be an indication of whether your more distant prediction will come true.
How much would you be willing to bet that an insect brain will be fully simulated in 15 years, and is in the public domain, available for a nominal fee, in 20 years?
How much would you be willing to bet that a simulation of a single neuron, with all its myriad dynamic inputs and outputs, is successfully performed in 12 years?

Borodog
12-01-2006, 10:40 PM
I'm going to stop arguing my point in this thread. Defending my position has revealed to me that it's not based on much evidence, but rather simply my "gut feeling."

I'm not going to argue my gut feeling is better than a neuroscientist's knowledge of his field any longer.

However, I'm still betting on 30 years. Sometimes you have to play your hunches. /images/graemlins/tongue.gif

Metric
12-02-2006, 04:56 AM
To stop arguing is one thing -- I already did that earlier because I was busy getting drunk. However, it should not be concluded that you are quitting because you doubt that progress is exponential. I'll give a quick list of quantities that can be fitted to an exponential curve over the last decade or more:

Cell phone subscribers
DRAM smallest feature size
DRAM price
Average transistor price
Microprocessor clock speed
Microprocessor cost
Transistors per microprocessor
Processor performance
Supercomputer power
DNA sequencing cost
Growth in genbank
RAM (bits per dollar)
Price performance of wireless data devices
Internet hosts
Internet data traffic
Internet backbone bandwidth
Decrease in size of mechanical devices
Nanotech science citations
U.S. Nano-related patents
U.S. per-capita GDP
Speech recognition software price-performance improvement
Resolution of noninvasive brain scanning
Brain scanning image reconstruction time

...and there are more. It's not just "Borodog's gut" that points to an eventual technological singularity.

madnak
12-02-2006, 11:30 AM
[ QUOTE ]
[ QUOTE ]

Please see my post. How does an exponential function reach anything resembling a singularity? You must be thinking of a logarithmic function, but how does that fit the data?

[/ QUOTE ]
The word "singularity" in the term "technological singularity" was unfortunately co-opted from mathematics without retaining the same meaning. No, an exponential is not a singular function. The meaning is that within a fairly short length of time compared to a human lifespan, technological paradigm shifts will go from a slow, steady, and predictable rate to something so fast that no unenhanced human mind will be able to follow and comprehend -- something that can be achieved by an exponential function.

[/ QUOTE ]

Not by any exponential function that's consistent with past trends. Once again, for the purposes of argument I'll agree with Kurzweil's math (and again note that I actually think it's absurd for various reasons). Now, Kurzweil suggests that because the function is exponential, technology will go beyond what we can imagine within our lifetimes. There are two problems with this.

The first problem is simple and I'll get it out of the way immediately. It's highly relevant and I find it disturbing that so few seem to recognize it. Even if technology goes beyond what we can imagine, that doesn't mean everything we can imagine will come true. We have many technologies today that people 100 years ago would have had trouble imagining, but many of the things they expected to see in the year 2000 never happened. And in fact, some rather distinguished people in the first half of the 20th century were predicting that AIs would come into existence before the present day. What happened to them? And more, what would you say is the modern analogue of them? These people of the past who had unrealistic expectations of the present... Doesn't it stand to reason there are also people of the present who have unrealistic expectations of the future? And if there are, isn't Kurzweil clearly one of the most exaggerated examples? Our technology within 50 years may blow us away, but we may still lack many of the technologies whose existence the futurists are relying on.

The second point is the relevant point. Because there is no actual singularity, what Kurzweil is suggesting as his "singularity" is arbitrary. Basically, Kurzweil took a graph in 1995, drew a rough exponential function, and then adjusted the scale such that the function "looks kind of vertical" at 2045. So? I could adjust the scale so that the graph "looks vertical" at 2002, or at 6116.

But if we look at what level of technology human beings are actually capable of imagining, and adjust the scale to suit that, we'll find that the graph doesn't "look vertical" for a very long time. And of course, even then humans will probably be enhanced to such a degree that the graph still appears "pretty flat" to them.

This is particularly true when considering specific technologies. For one thing, nobody can predict when AI will "happen," even if they know the exact rate of technological advancement. And for another, the best estimates we have make it a long, long way off, even according to Kurzweil's optimistic vision of the future. The whole thing is really silly.

madnak
12-02-2006, 12:03 PM
[ QUOTE ]
[ QUOTE ]

A supercomputer could probably simulate a neuron. So a hundred billion of them. And then you'd need more for the relationships between neurons - quadrillions of them - and you'd have to have self-contained power sources (more efficient than mitochondria?), you'd need to control for damage and repairs, you'd need cell division (because yes, neurons do sometimes divide)... And we're just getting started, and ignoring the glia. Again, knock off an order of magnitude or three, it's still a much greater accomplishment than humans have ever come close to.



[/ QUOTE ]

I think this point here goes to some of the frustration. Ten years ago, instead of saying a hundred billion supercomputers, you would have said either it's impossible or it would require 10^12 or 10^15 supercomputers. Now you are allowing it may only require a hundred billion, and in ten more years it might require a billion or a hundred million. In 30 years it might require a hundred. It's amazingly easy to underestimate the amazing growth that is going on. I think in order to make your point valid you MUST posit some sort of restrictive or absolute limit to the continued expansion of computing power, or else it's essentially a certainty that this will be accomplished within our lifetimes.

[/ QUOTE ]

Actually, ten years ago I predicted things would be further along than they are now. And if I had made predictions then based on current technology (which I'm not doing btw, simulating a single neuron in a closed environment is still beyond what our supercomputers can do, probably well beyond it), the 32-fold increase over 10 years wouldn't have affected my predictions much.

The neurons are just the beginning. Assuming we already had supercomputers capable of simulating an individual neuron, simulating the connections and environments would probably take many more (for example). Traversing the orders of magnitude may be possible, but not in the time span being suggested.

And there are some fundamental issues - for one thing, it's hard to imagine a simulation of the brain that doesn't involve extreme parallel processing. That is, each neuron would basically need to be represented by its own processor for it to be remotely realistic to perform a simulation in real-time. So a minimum of 100 billion processors (probably many more). That's not something we can do using the kinds of architectures we currently have. And there are problems of cooling, maintenance, etc. that are pretty fundamental as well (as computers do more work, they're going to produce more heat as a byproduct of that work, and there are physical barriers to how efficient cooling can be and how much heat will "denature" the processor). And of course, processing speed is already reaching fundamental barriers. How long will it take to reach them? I don't know. Plain and simple. But processing power would have to double every two years for many decades before we were anywhere close to the level you're describing. And there are plenty of barriers to understanding neuronal function. More than you seem to realize. Just as multiple factors cumulatively influence the rate of technological progress, multiple factors cumulatively influence the difficulty of technological progress - and each obstacle is much harder to overcome than the last. Also, if you don't "distort" technological advancement by turning it into a continuous curve, you'll see that much of it happens according to sudden "jumps" that represent the destruction of certain barriers. The implications of that are hard to know, but by no means do we have any assurances where this stuff is concerned.

We never know when one of those barriers will prove invulnerable, and we'll have to go around it - or when one of them will be much harder to break down than we expected. Or when, in other words, we reach a "point of inflection" on the technology curve. It's definitely happened before, and you can write the Dark Ages off if you like, but social upheaval may not be the only thing that can throw a wrench in the works. Oil production went up exponentially, too, for awhile. We (of course) went for the easiest reserves first, and as more and more people "piled on," we tapped those "easy" reserves at a faster and faster rate. But by the same token, we exhausted those reserves at a faster and faster rate. So far 98% of what our technological advancement involves isn't surpassing the fundamental barriers, but rather expanding within them. The "next level" is, as always, very mysterious.

There's a common story about a bunch of miners. They find layers of minerals, and each layer contains greater resources than the last. The mining community becomes wealthy and people start using more and more, depending on the increasing abundance more and more, and saving less and less. Layer after layer the miners work through the ground. And then, at the end, either they find nothing or (more commonly) they break open the lair of a slumbering demon. I'm not a huge fan of cautionary tales in general - I'm a card-carrying member of the "cult of progress." But I also recognize that progress is, to a large degree, anchored by realism and care. It's possible to build a tower to the moon - so long as there's a solid foundation.

madnak
12-02-2006, 12:18 PM
[ QUOTE ]
Can you explain to me what is so complex about the workings of, say, a single neuron, that indicates that the functions of a single neuron cannot be modelled? And I don't mean the inner workings, or the ways by which it does what it does. I mean the mapping of inputs to outputs. Even if a single neuron has tens of thousands of IO channels, that's not all that much data to keep track of. If, say, I can create a neural net that mimics the workings of a single neuron, in what way have I not simulated a neuron?

[/ QUOTE ]

To comment on this with a specific example... Of course even if the "black box" model were valid, a black box itself isn't necessarily easy to model. In fact, the brain as an organ can be viewed as a "black box" (certainly moreso than an individual neuron), and yet just because it takes sensory inputs and turns them into nervous and hormonal outputs doesn't mean that it's as simple as "modeling the IO channels."

But the specific example I'm going to use is that of membranes. Because you seem to be thinking of a neuron as sitting there, taking information from its outside environment, processing it, and returning it to its outside environment, I get the impression you're looking at the cell membrane as just a "line" that divides the "in-neuron" from the "out-neuron." But in reality, the membrane is dynamic and powerful, and much of the processing takes place there. Everything from the shape of the membrane to the specific combinations of its proteins is relevant. Moreover, the membrane is both a fully-functioning part of the inside environment and a fully-functioning part of the outside environment. As a result, it's impossible to draw a line between the "inside" and the "outside" of the cell. That just isn't how it works. Some things inside the cell affect what's outside the cell, and vice versa, and some things are involved in both environments (without being simple I/O channels), and then of course other things are I/O channels but are so dynamic they can't be modeled as such. Even the "base" of the membrane is constantly flowing and changing its shape (based on internal and external factors, and on interplays of both).

And we can't understand many of these mechanics. I mean the simple mechanics that underlie these dances and interactions of the membrane. And of course, you can't talk about the membrane without talking about the associated polymers, the relationships between membranes (ie what happens in a synapse and how dendritic growth happens), the cytoskeleton, the endomembrane system... The way that the membrane itself works is based on how all these other things work - it really isn't an I/O machine, that's not what it is. It's more like an ecosystem than an I/O machine. And not a contained ecosystem, either.

thylacine
12-02-2006, 04:30 PM
Just been skimming this thread and don't have much to contribute, but FWIW

For constants C,k, C>0, the diff eq

dy/dt=C y^k

leads to exponential growth if k=1

leads to a singularity if k>1
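
Spelling out the k > 1 case (standard separation of variables -- the algebra here is my addition, so check it yourself): with y(0) = y0 > 0,

dy/dt = C y^k   =>   y(t) = y0 / (1 - (k-1) C y0^(k-1) t)^(1/(k-1)),

which blows up at the finite time t* = 1 / ((k-1) C y0^(k-1)). For k = 1 you just get y = y0 e^(Ct), which grows fast but never actually diverges at any finite time -- which is the difference between "merely exponential" and a true singularity that people are arguing about above.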

Metric
12-02-2006, 06:26 PM
[ QUOTE ]
The first problem is simple and I'll get it out of the way immediately. It's highly relevant and I find it disturbing that so few seem to recognize it. Even if technology goes beyond what we can imagine, that doesn't mean everything we can imagine will come true.

[/ QUOTE ]
This is part of why Kurzweil calls the whole thing a singularity -- apparently certain aspects of the future become increasingly unpredictable when technological leaps that would take decades in the 20th century (e.g. human genome project or widespread use of the internet) begin taking place at a rate of many per year. Not only is it tough to predict what life will be like 50 years from now (as those 1950's writers tried 50 years ago), but it becomes increasingly tough to predict what life will be like next year. So I don't think I'd be too disturbed that futurists are blithely unaware of unpredictability -- it's basically the main reason they use the term "singularity" (though still perhaps unfortunate from a mathematical point of view).

[ QUOTE ]
The second point is the relevant point. Because there is no actual singularity, what Kurzweil is suggesting as his "singularity" is arbitrary. Basically, Kurzweil took a graph in 1995, drew a rough exponential function, and then adjusted the scale such that the function "looks kind of vertical" at 2045. So? I could adjust the scale so that the graph "looks vertical" at 2002, or at 6116.

[/ QUOTE ]
Again, the point is not to look at a graph and find where things "go crazy" (since they never do in an exponential function) -- the point is to be able to approximate where "big changes" (even if we don't know exactly what they will be) can be expected to take place so quickly that the way humans live is continuously altered within a timespan of decades (we're already here now -- I spend a large fraction of my day at a "laptop computer", connected to the "internet"), then individual years, then less than a year. Where you exactly draw the line is indeed up to you, but I think you'd agree that when the timescale for major change gets this small, this may signal a fundamental change in human history.

It's also possible that the graph is about to lose its exponential shape. I don't know. But hopefully I explained the principle of the "technological singularity" somewhat better.

Zygote
12-02-2006, 06:47 PM
[ QUOTE ]
But my point is that most of the time the brain is NOT predicting where the ball will land.

[/ QUOTE ]

This cannot be true. If a ball is thrown at me, my brain clearly makes some prediction about the time and place the ball will arrive. If my brain holds this belief with high certainty, and after I hold out my hand the ball doesn't arrive in it, my consciousness will register surprise due to the failed prediction.

Just as, right now, if I kept typing but didn't feel the keyboard's keys under my fingers, I'd be surprised. What would surprise me is nothing more than my brain registering an erroneous prediction.

The fact that we become surprised when a prediction is wrong shows us that there was a prediction in place to begin with.

Zygote
12-02-2006, 07:03 PM
[ QUOTE ]
[ QUOTE ]
Suffice it to say that I don't think the trend will abate for at least the next 30 years.

[/ QUOTE ]

You think we'll keep adding scientists at an exponential rate for the next 30 years? NIH will be HUGE. Every village will have a university!

[ QUOTE ]
How brain bits work.

[/ QUOTE ]

[ QUOTE ]
Believe me, I understand. The brain is incredibly complex. The human brain is easily the most complex device in the known universe.

[/ QUOTE ]

I think my point is centering on the ridiculous difficulty of understanding much of this in order to model it at a fairly accurate level. And while I'm not in my field of expertise with the computer stuff I think I have a pretty good understanding of how fast we are going and have been going in this regard as well as the obstacles facing us.

[ QUOTE ]
Trying to figure out how entire subsystems of the brain operate is bound to be a daunting task. But my suspicion is that, once the relevant microtechnologies have matured, which is at most a decade away, the workings of the individual component parts should be far, far simpler to understand.

[/ QUOTE ]

I think this may be a major part of our disagreement.

[ QUOTE ]
One thing that I'd like to make clear is that I've never read anything by this Kurzweil guy. I have nothing but handwaving assertions, because obviously I cannot predict the course of neuroscience for the next decades. I could be totally, completely, utterly wrong. Only time will tell. But my real, honest, best guess is 30 years.

[/ QUOTE ]

Depending on the specifics we are talking about, I'm saying it's way past that. Can we get better sorting programs, etc. based on some kind of weak AI by then? Sure. Can we get some basic distributed coding stuff, or a new type of circuit or computer that's been informed by the brain? Sure.

But simulation of the human brain, much less some more complex intelligence? Hell no.

[/ QUOTE ]


rduke, please, please look at some of this stuff:

http://video.google.ca/videoplay?docid=5650400464718085334&q=jeff+hawkins

http://video.google.ca/videoplay?docid=-2500845581503718756&q=jeff+hawkins

http://video.google.ca/videoplay?docid=6374966037016943942&q=jeff+hawkins

and preferably read "On Intelligence" or the white papers on numenta.com and make a post or PM me your opinion.

Phil153
12-02-2006, 08:04 PM
[ QUOTE ]
If a ball is thrown at me, my brain clearly makes some prediction about the time and place the ball will arrive.

[/ QUOTE ]
True, but that's not the main mechanism of ball catching. It's constant monitoring of the ball combined with hand/eye coordination.

Try this: Get a friend to stand 10 metres away. Before the ball is halfway to you, close your eyes and try to catch the ball. Measure your success with this method.

The main mechanism of ball catching is not of the kind borodog suggests. Robots that can catch moving objects rely on different models from the human brain.

luckyme
12-02-2006, 08:27 PM
[ QUOTE ]
True, but that's not the main mechanism of ball catching. It's constant monitoring of the ball combined with hand/eye coordination.

[/ QUOTE ]

Why is catching so much different from throwing?
Try this. Mark a spot on the ground. Throw the ball attempting to hit it. Close your eyes as soon as the ball leaves your hand. You seem to have derived the entire course in one fell swoop.
I realize correcting as you go makes for more accurate catching than throwing, but we do seem to have some capacity for prediction. I guess I'm wondering why catching isn't 'throwing in reverse'.

luckyme

Zygote
12-02-2006, 10:30 PM
[ QUOTE ]
It's constant monitoring of the ball combined with hand/eye coordination.

[/ QUOTE ]

Right, and all this monitoring is relaying information to your brain to process a prediction, which activates your motor skills accordingly.

[ QUOTE ]

Try this: Get a friend to stand 10 metres away. Before the ball is halfway to you, close your eyes and try to catch the ball. Measure your success with this method.


[/ QUOTE ]

Well, if you cut off your brain's sensory input/data, it obviously can't arrive at as accurate a prediction.

[ QUOTE ]
The main mechanism of balls catching is not of the kind borodog suggests.

[/ QUOTE ]

I don't doubt this, but I don't think that's the essence of the discussion.

madnak
12-02-2006, 10:33 PM
[ QUOTE ]
[ QUOTE ]
The first problem is simple and I'll get it out of the way immediately. It's highly relevant and I find it disturbing that so few seem to recognize it. Even if technology goes beyond what we can imagine, that doesn't mean everything we can imagine will come true.

[/ QUOTE ]
This is part of why Kurzweil calls the whole thing a singularity -- apparently certain aspects of the future become increasingly unpredictable when technological leaps that would take decades in the 20th century (e.g. human genome project or widespread use of the internet) begin taking place at a rate of many per year. Not only is it tough to predict what life will be like 50 years from now (as those 1950's writers tried 50 years ago), but it becomes increasingly tough to predict what life will be like next year. So I don't think I'd be too disturbed that futurists are blithely unaware of unpredictability -- it's basically the main reason they use the term "singularity" (though still perhaps unfortunate from a mathematical point of view).

[/ QUOTE ]

In which case predictions that AI will be developed, or that blue goo will be developed, particularly within a specific time frame, are out of line.

[ QUOTE ]
[ QUOTE ]
The second point is the relevant point. Because there is no actual singularity, what Kurzweil is suggesting as his "singularity" is arbitrary. Basically, Kurzweil took a graph in 1995, drew a rough exponential function, and then adjusted the scale such that the function "looks kind of vertical" at 2045. So? I could adjust the scale so that the graph "looks vertical" at 2002, or at 6116.

[/ QUOTE ]
Again, the point is not to look at a graph and find where things "go crazy" (since they never do in an exponential function) -- the point is to be able to approximate where "big changes" (even if we don't know exactly what they will be) can be expected to take place so quickly that the way humans live is continuously altered within a timespan of decades (we're already here now -- I spend a large fraction of my day at a "laptop computer", connected to the "internet"), then individual years, then less than a year. Where you exactly draw the line is indeed up to you, but I think you'd agree that when the timescale for major change gets this small, this may signal a fundamental change in human history.

It's also possible that the graph is about to lose its exponential shape. I don't know. But hopefully I explained the principle of the "technological singularity" somewhat better.

[/ QUOTE ]

Again, two things.

First, "changing our lifestyles fundamentally every year" is a long, long way from creating AI or blue goo. Once again, you don't seem to understand the scope we're talking about. We need at least a few orders of magnitude (no less than three, probably much more). That is, if our lifestyle is changing every decade, it would need to be changing daily before a prediction of "within our lifetimes" would be justified. I'm not saying it's necessarily not going to happen, but if it does chances are it will be due to some heuristic approach or some such thing, that we can't possibly predict at all. And I mean, that we can't predict at all, we can't tell if it will come next year or ten thousand years from now. Obviously the individual chance per year will increase as the total "amount" of technological advancement per year increases, but we would need to see leaps and bounds way beyond what you're talking about before a true AI would be likely. Also this prediction about continued exponential growth is largely based on the idea that we'll magically "fly through" these fundamental barriers.

Second, the change isn't continuous at all. It may seem that way I suppose - the rate at which technologies seep into the market may be continuous, and some computer technologies have been, but most of the time it's sudden, "jagged" jumps that cause progress. This is where the "exponential" comes from. If you look closely, you'll see a tapering off until the next jump, then a new vitality, then another tapering off, etc. If nothing changes in genetic engineering, it could very well plateau. Then again, if we, for instance, discover how proteins fold tomorrow, we'll see things in ten years that nobody can predict today (although blue goo and AI will still be a long long long way off).

By representing it all as a smooth, continuous function, Kurzweil is being highly deceptive. It's more like a discrete series of epiphanies (which he himself acknowledges with his use of paradigm shifts). And based solely on the fact that the "dings" of these epiphanies are getting closer together, Kurzweil is making predictions that are neither mathematically nor scientifically supportable.

madnak
12-02-2006, 10:44 PM
[ QUOTE ]
[ QUOTE ]
But my point is that most of the time the brain is NOT predicting where the ball will land.

[/ QUOTE ]

This cannot be true.

[/ QUOTE ]

Yes it can. The brain probably does perform calculations on some level (though nothing like the calculations a computer performs). But it doesn't have to. It is definitely possible to catch a ball without any calculation, and downright easy if the conditions are specified beforehand.

Unfortunately I don't know that any research has been done on this subject specifically, but where research has been done the indication is that the brain is very bad at "calculating" such things, and generally prefers to simply use "standard" patterns to get quick answers about common situations in its environment. Optical illusions are a good example of this - the human brain can be tricked through various means into believing absolute absurdities by exploiting these patterns and "quick and dirty" mechanisms by which the brain actually does things.

One rather simplistic example is that of perceived distance. Depth perception also plays a role, but probably a lesser one. Basically, if the human eye sees something that's supposed to be "big" and it appears "small," the brain concludes the object is far away. Likewise, if an object is supposed to be "small" and it appears "big," the brain concludes the object is near. It's not doing vector math or anything -- it's calculating neither distance nor size, it's just saying "big close, small far." As a result, some illusions can have really crazy effects on people by presenting objects at sizes different from what they've learned to expect.
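
Here's a cartoon of that shortcut in Python (the object, the sizes, and the 10 m reference point are all invented for illustration, not taken from any vision research):

    # Cartoon of the "big close, small far" shortcut: compare how big something
    # looks against how big that kind of thing usually looks, and jump straight
    # to a nearness judgment -- no actual distance is ever computed.

    TYPICAL_PERSON_HEIGHT = 1.7   # metres (rough)

    def apparent_size(actual_size_m, distance_m):
        # crude stand-in for the size of the image on the retina
        return actual_size_m / distance_m

    def nearness_guess(seen_size):
        # the shortcut: bigger than expected at ~10 m means "near", else "far"
        expected_at_10m = apparent_size(TYPICAL_PERSON_HEIGHT, 10.0)
        return "near" if seen_size > expected_at_10m else "far"

    print(nearness_guess(apparent_size(1.7, 5)))    # a real person at 5 m: "near"
    print(nearness_guess(apparent_size(1.7, 15)))   # a real person at 15 m: "far"
    # The illusion: a double-size cutout at the same 15 m fools the shortcut,
    # because the "people are about 1.7 m tall" assumption is violated.
    print(nearness_guess(apparent_size(3.4, 15)))   # "near", even though it's far away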

madnak
12-02-2006, 10:53 PM
[ QUOTE ]
why is catching so much different than throwing.

[/ QUOTE ]

Because the brain isn't a computer. Catching might be the "inverse" of throwing, but functionally it's very different. The inputs used, the actual movements necessary, and the basic situations are all different.

Probably the most efficient way to perform catching is how we do it - that is, keep your eye on the ball, if the ball is to the left of you move left, if it's to the right move right, if it's moving up move backwards, if it's moving down move forwards. Obviously it's more complex than just that, but it's easy enough to see that it's a cumulative process rather than a simple calculation - as has been suggested, just fail to keep your eye on the ball and see how well you do.
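
To make the "cumulative process" idea concrete, here's a toy Python sketch (the speeds and numbers are invented, and this is a cartoon rather than a model from any motor-control study). Nothing in the loop ever predicts a landing point; the catcher just keeps correcting toward where the ball appears right now:

    # Toy "keep your eye on the ball" catcher: at every instant, step toward the
    # ball's current sideways position.  No landing point is ever computed.

    DT = 0.05            # seconds per simulation step
    FLIGHT_TIME = 3.0    # seconds the ball is in the air
    BALL_DRIFT = 1.5     # ball drifts sideways at 1.5 m/s (wind, spin, whatever)
    CATCHER_SPEED = 2.5  # catcher can shuffle sideways at up to 2.5 m/s
    REACH = 0.5          # metres

    ball_y, catcher_y, t = 0.0, 0.0, 0.0
    while t < FLIGHT_TIME:
        ball_y += BALL_DRIFT * DT                              # where the ball is *now*
        gap = ball_y - catcher_y
        catcher_y += max(-CATCHER_SPEED * DT, min(CATCHER_SPEED * DT, gap))
        t += DT

    final_gap = abs(ball_y - catcher_y)
    print("final gap %.2f m -> %s" % (final_gap, "caught" if final_gap < REACH else "missed"))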

Throwing, on the other hand, necessitates some level of prediction - while you can adjust your position while catching a ball, you can't adjust the position of the ball after throwing it. Thus, the throw has to be right "from the start." You also don't have a convenient visual stimulus to guide all your actions, you aren't "zeroing in" on anything. I'm not sure exactly what the throwing process is - I can say the brain is very bad at calculating where the ball will go, but practice makes the predictions exponentially more accurate very quickly. Given that most of us have played some sort of throwing/pitching game, we have probably learned some of what's involved - maybe through "tables" of reflexes. Still, I imagine that most of us who aren't athletic sometimes have the experience of throwing the ball and being way off the mark.

Metric
12-02-2006, 11:43 PM
[ QUOTE ]
In which case predictions that AI will be developed, or that blue goo will be developed, particularly within a specific time frame, are out of line.

[/ QUOTE ]
I agree that statements such as "blue goo will be developed within 5 years of 2050" should be taken with a grain of salt.

[ QUOTE ]
First, "changing our lifestyles fundamentally every year" is a long, long way from creating AI or blue goo. Once again, you don't seem to understand the scope we're talking about. We need at least a few orders of magnitude (no less than three, probably much more). That is, if our lifestyle is changing every decade, it would need to be changing daily before a prediction of "within our lifetimes" would be justified. I'm not saying it's necessarily not going to happen, but if it does chances are it will be due to some heuristic approach or some such thing, that we can't possibly predict at all. And I mean, that we can't predict at all, we can't tell if it will come next year or ten thousand years from now. Obviously the individual chance per year will increase as the total "amount" of technological advancement per year increases, but we would need to see leaps and bounds way beyond what you're talking about before a true AI would be likely. Also this prediction about continued exponential growth is largely based on the idea that we'll magically "fly through" these fundamental barriers.

[/ QUOTE ]
I'm not predicting the emergence of any individual technology. I'm just saying that if exponential growth continues (a big if), then a "technological singularity" is inevitable within a very short timespan.
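
To put a rough number on "a very short timespan," here's a little Python sketch under one made-up assumption -- that the amount of change delivered per year doubles every 10 years (the 10 is arbitrary; only the shape matters):

    # If the "amount of change per year" doubles every decade, how long does it
    # take, starting at later and later dates, to rack up as much total change
    # as happened in the whole first decade?  (Purely illustrative numbers.)

    DOUBLE_EVERY = 10.0   # years for the annual rate of change to double (assumed)
    DT = 0.001            # integration step, in years

    def rate(t):
        return 2 ** (t / DOUBLE_EVERY)   # change per year at time t

    first_decade = sum(rate(i * DT) * DT for i in range(int(10 / DT)))

    for start in (0, 20, 40, 60):
        t, accumulated = float(start), 0.0
        while accumulated < first_decade:
            accumulated += rate(t) * DT
            t += DT
        print("starting at year %d it takes %.1f years" % (start, t - start))

    # prints roughly 10.0, 3.2, 0.9, 0.2 -- decades, then years, then months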

There is also precedent for "breaking through barriers" -- computing power continued to increase exponentially even when vacuum tube technology reached its limit. Eventually, the current paradigm will reach a fundamental limit, but you'll note that huge amounts of research money are even now being spent on totally new, vastly more powerful approaches to computing, such as quantum computing.

[ QUOTE ]
Second, the change isn't continuous at all. It may seem that way I suppose - the rate at which technologies seep into the market may be continuous, and some computer technologies have been, but most of the time it's sudden, "jagged" jumps that cause progress. This is where the "exponential" comes from. If you look closely, you'll see a tapering off until the next jump, then a new vitality, then another tapering off, etc. If nothing changes in genetic engineering, it could very well plateau. Then again, if we, for instance, discover how proteins fold tomorrow, we'll see things in ten years that nobody can predict today (although blue goo and AI will still be a long long long way off).

By representing it all as a smooth, continuous function, Kurzweil is being highly deceptive. It's more like a discrete series of epiphanies (which he himself acknowledges with his use of paradigm shifts). And based solely on the fact that the "dings" of these epiphanies are getting closer together, Kurzweil is making predictions that are neither mathematically nor scientifically supportable.

[/ QUOTE ]
You should probably at least take a cursory look at the Kurzweil stuff before you accuse him of being deceptive. He discusses "s-curves" extensively -- they're fully incorporated into his modeling of exponential growth.
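
For what it's worth, the s-curve picture is easy to sketch in a few lines of Python (the paradigms, ceilings, and timing below are invented purely for illustration -- the only point is that a stack of saturating s-curves can trace out an exponential-looking envelope):

    import math

    # Each "paradigm" is an s-curve: it ramps up, delivers about 10x the
    # capability of the previous one, and then saturates.  Stack enough of them
    # and the envelope looks exponential even though every single curve flattens.

    def s_curve(t, midpoint, ceiling, steepness=1.0):
        return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

    def total_capability(t):
        # paradigm k matures around year 10*k and tops out at 10**k units
        return sum(s_curve(t, midpoint=10 * k, ceiling=10 ** k) for k in range(8))

    for year in range(0, 80, 10):
        print("year %2d: capability ~ %.3g" % (year, total_capability(year)))
    # capability climbs roughly 10x per decade, despite every s-curve saturating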

Zygote
12-03-2006, 03:16 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
But my point is that most of the time the brain is NOT predicting where the ball will land.

[/ QUOTE ]

This cannot be true.

[/ QUOTE ]

Yes it can. The brain probably does perform calculations on some level (though nothing like the calculations a computer performs). But it doesn't have to. It is definitely possible to catch a ball without any calculation, and downright easy if the conditions are specified beforehand.

Unfortunately I don't know that any research has been done on this subject specifically, but where research has been done the indication is that the brain is very bad at "calculating" such things, and generally prefers to simply use "standard" patterns to get quick answers about common situations in its environment. Optical illusions are a good example of this - the human brain can be tricked through various means into believing absolute absurdities by exploiting these patterns and "quick and dirty" mechanisms by which the brain actually does things.

One rather simplistic example is that of perceived distance. Depth perception also plays a role, but probably a lesser one. Basically, if the human eye sees something that's supposed to be "big" and it appears "small," the brain concludes the object is far away. Likewise, if an object is supposed to be "small" and it appears "big," the brain concludes the object is near. It's not doing vector math or anything -- it's calculating neither distance nor size, it's just saying "big close, small far." As a result, some illusions can have really crazy effects on people by presenting objects at sizes different from what they've learned to expect.

[/ QUOTE ]

I was only commenting on Rduke's statement that the brain most often does not make a prediction. The brain always makes a prediction. If the ball arrives in your hand after 5 minutes rather than a few seconds, your brain will be surprised, since you expected it sooner! If the ball does not arrive in your hand, and you had a high degree of certainty that it would (as in the ball is closely approaching your hand), then you will be surprised. Your brain made a prediction that your hand should be ready to feel a ball, and when the ball doesn't arrive, your consciousness kicks in.

This is all evidence that your brain definitely makes predictions about the time and place the ball will arrive.

madnak
12-03-2006, 10:21 AM
I agree, but it's theoretically possible to catch a ball without making predictions. Also it's virtually certain the brain doesn't perform any calculus in making its predictions.

luckyme
12-03-2006, 11:27 AM
[ QUOTE ]
I agree, but it's theoretically possible to catch a ball without making predictions. Also it's virtually certain the brain doesn't perform any calculus in making its predictions.

[/ QUOTE ]

I wonder if the speed involved in ball catching is a factor. Would the brain take a different approach if it had to get us to a certain spot to -
meet a descending air balloon.
Cut off a drifting opposing pirate ship.
Events where we have to plan our end point more in advance and have more time to do it.

luckyme

Borodog
12-03-2006, 11:55 AM
As I said, I'm not arguing about the singularity anymore (FWIW I agree with everything Metric has said), but the people who keep insisting that the brain doesn't make calculations, say to catch a ball, are clearly insane.

The only way to catch a ball is to calculate ahead of time where it's going to be and put your hand there. If you don't agree with this statement, you are having a semantic problem with the word "calculate", and your semantic problems are not my problem. Whatever algorithm the brain is using to catch the ball, it is functionally equivalent to performing complex calculations, and that is all that matters. If the near-term workings of your local chunk of the universe could not be calculated fairly accurately, a brain would be useless, because the world would be completely unpredictable and nonsensical.

People need to stop reading what they want to read and read what I'm writing. I'm not saying you are doing vector calculus in your head without realizing it. I'm saying the algorithms that your brain is using must closely approximate the results of calculus, because they produce the same results.

luckyme
12-03-2006, 12:11 PM
Found these comments on Tammet in the Guardian -

[ QUOTE ]
Tammet is calculating 377 multiplied by 795. Actually, he isn't "calculating": there is nothing conscious about what he is doing. He arrives at the answer instantly. Since his epileptic fit, he has been able to see numbers as shapes, colours and textures. The number two, for instance, is a motion, and five is a clap of thunder. "When I multiply numbers together, I see two shapes. The image starts to change and evolve, and a third shape emerges. That's the answer. It's mental imagery. It's like maths without having to think."

[/ QUOTE ]

It seems some are capable of 'calculating' an answer, even a specific number, by using methods other than our external figures-by-pencil approach.
I watched one of the Jeff Hawkins videos suggested in this thread. His program makes some interesting decisions based on a hierarchical tree approach, which he believes is more in line with how our brains work. It's still another form of calculating.

On the singularity - I suspect Nano will be the critical factor in whatever the date is.

luckyme

Rduke55
12-03-2006, 02:41 PM
[ QUOTE ]
People need to stop reading what they want to read and read what I'm writing. I'm not saying you are doing vector calculus in your head without realizing it. I'm saying the algorithms that your brain is using must closely approximate the results of calculus, because they produce the same results.

[/ QUOTE ]

Below is from an earlier post of yours in this thread.

[ QUOTE ]
To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes to velocities and acceleration from rates of changes of angular size, shading, etc. Complex differential equations must be solved. That you don't understand how your brain does these calculations does not mean that your brain is not doing them. There's no such thing as a free lunch; if you want to place your hand in the right place to catch a ball, you must calculate ahead of time where it will be.

[/ QUOTE ]

Also, are you calling me insane?
Have you or zygote read any of the relevant literature in the peer-reviewed journals? Either on humans catching or similar problems in prey catching in animals.

(All of this should probably go in another thread though)

It could be argued that we're not designed for catching ballistic objects. Our goo for this is based on objects that can change speeds and direction. So evolving a brain that can do these predictive calculations is a waste of time and resources. The tracking algorithms work great, except in rare cases. Why change those?
Keep in mind a lot of the prediction stuff you guys are talking about is for cases where the object is headed directly for you (well, within about an arm's length), where the brain can start making rough plans on where to put the hand. For this we are talking about moving your body at a pretty rapid rate.
I always thought that a better analogy in some ways than the baseball fly ball would be a receiver running on a long pass play. Anyone who thinks that the receiver's brain is calculating "oh, that will get to 6 ft. above the ground at the 12 yard line 3 yards in from the sideline so I had better move at 13 degrees from the goalposts at 18 mph for 6 sec..." is deluded. Keep that thing at a constant retinal arc (or one of the other no prediction deals) and you'll be fine.
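
Just to spell out how little the "constant retinal arc" rule needs, here's the whole decision written as a toy Python function (the angles and threshold are invented; this is a cartoon, not anything from the literature). There is no trajectory, speed, or distance estimate anywhere in it:

    # "No prediction" receiver rule: watch where the ball sits in your visual
    # field and adjust your running speed to keep that angle from drifting.
    # If the angle stays steady, you and the ball arrive at the same place.

    def speed_adjustment(previous_angle_deg, current_angle_deg, tolerance_deg=0.5):
        drift = current_angle_deg - previous_angle_deg
        if drift > tolerance_deg:        # ball drifting ahead of you in the visual field
            return "speed up"
        elif drift < -tolerance_deg:     # ball drifting behind you
            return "slow down"
        return "hold speed"

    print(speed_adjustment(32.0, 35.0))   # speed up
    print(speed_adjustment(32.0, 28.5))   # slow down
    print(speed_adjustment(32.0, 32.2))   # hold speed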

Rduke55
12-03-2006, 02:45 PM
Yes, throwing is a completely different ball of wax -- a huge pain from a neural perspective, because of the reasons you mentioned that people are attributing to catching in this thread.

Rduke55
12-03-2006, 02:50 PM
[ QUOTE ]
Well if you cut off your brain's sensor input/data it obviously can't arrive at as accurate a prediction.

[/ QUOTE ]

But the brain can predict this for other things. For example, in some eye and arm movements you can elicit a movement (or activate the premotor centers that will elicit it), cut off the sensory input, and still arrive at the same place.

Rduke55
12-03-2006, 03:03 PM
Also, I meant to get to the "baggage" thing you and boro were talking about earlier.

I see the fact that a neuron is a cell as actually an advantage, not a deficit. This allows a level of plasticity, processing, etc. within each unit of the circuit, network, or whatever.

Also, the Dennett thing I started to mention earlier talks about the speed and efficiency of organic material for this compared to inorganic stuff, and argues that you'll reach limitations with inorganic material far before you get to the abilities of organic stuff.
You have to remember that a lot of neuronal communication is basically co-opted elements of intra- and intercellular communication that originated with the first cells ever. These are some of the most amazing "machines" known for speed, efficiency, etc. It's incredibly complex and useful.
These are what give the neuron its amazing qualities in processing and communication.

Rduke55
12-03-2006, 03:22 PM
[ QUOTE ]
So, the scientists are no longer increasing exponentially.
How about money?
Is money and resources going into the brain scanning research increasing exponentially?
If human brains can be simulated by 2030, then rat brains should be simulated earlier, because they are much simpler.
When do you think rat brains will be simulated? 2020? 2025? If rat brains are simulated by 2025, then cockroach brains will have to be simulated even sooner, because they are simpler. When do you think a cockroach brain will be simulated? 2020? Earlier?
That only leaves us 15 years to simulate a cockroach brain.
Do you believe it will be possible? Every little nearby bit of an exponential curve seems linear to an observer sitting on the curve, and for good reason. I think a simulation of an insect brain in 15 years will be an indication of whether your farther-out prediction will come true.
How much would you be willing to bet that an insect brain will be fully simulated in 15 years, and is in the public domain, available for a nominal fee, in 20 years?
How much would you be willing to bet that a simulation of a single neuron with all its myriad of dynamic inputs and outputs is successfully performed in 12 years?

[/ QUOTE ]

I really like this post.

ThreeMartini
12-03-2006, 04:47 PM
Below is an abstract regarding ocular pursuit and objects in motion.

http://www.ncbi.nlm.nih.gov/entrez/query...t_uids=12541146 (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?itool=abstractplus&db=pubmed&cmd=Retrieve&dopt=abstractplus&list_uids=12541146)

FWIW, this is the most interesting thread I've come across in SMP.

CrayZee
12-05-2006, 07:22 PM
For those that believe in the singularity, what are you doing to prepare for it... assuming you're relatively young enough to make it there?

Maybe if I feel like getting around to it, I'll check out Kurzweil's singularity book for kicks. It still seems overly optimistic, but perhaps Kurzweil is onto something when he says we live on the tangent line of the exponential curve when it comes to the long term view of progress.

Borodog
12-06-2006, 12:52 AM
[ QUOTE ]
[ QUOTE ]
People need to stop reading what they want to read and read what I'm writing. I'm not saying you are doing vector calculus in your head without realizing it. I'm saying the algorithms that your brain is using must closely approximate the results of calculus, because they produce the same results.

[/ QUOTE ]

Below is from an earlier post if yours in this thread.

[ QUOTE ]
To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes to velocities and acceleration from rates of changes of angular size, shading, etc. Complex differential equations must be solved. That you don't understand how your brain does these calculations does not mean that your brain is not doing them. There's no such thing as a free lunch; if you want to place your hand in the right place to catch a ball, you must calculate ahead of time where it will be.

[/ QUOTE ]

Also, are you calling me insane?

[/ QUOTE ]

I pretty much feel like it, yes, because you go on to write this:

[ QUOTE ]
Have you or zygote read any of the relevant literature in the peer-reviewed journals? Either on humans catching or similar problems in prey catching in animals.

(All of this should probably go in another thread though)

It could be argued that we're not designed for catching ballistic objects. Our goo for this is based on objects that can change speeds and direction. So evolving a brain that can do these predictive calculations is a waste of time and resources. The tracking algorithms work great, except in rare cases. Why change those?

[/ QUOTE ]

Dude, what do you think those tracking algorithms are? They are calculations. Why are you making something so simple into something so complicated?

[ QUOTE ]
Keep in mind a lot of the prediction stuff you guys are talking about are cases where the object is headed directly for you (well, within about an arms length) where the brain can start making rough plans on where to put the hand. For this we are talking about moving your body at a pretty rapid rate.

[/ QUOTE ]

Why does any of this matter at all? I don't CARE what the algorithm is, I don't CARE how your brain does it, but it has to predict ahead of time where to put your damn hand because it takes non-zero time to get it there! That's a damn calculation! Christ this is maddening.

[ QUOTE ]
I always thought that a better analogy in some ways than the baseball fly ball would be a receiver running on a long pass play. Anyone who thinks that the receiver's brain is calculating "oh, that will get to 6 ft. above the ground at the 12 yard line 3 yards in from the sideline so I had better move at 13 degrees from the goalposts at 18 mph for 6 sec..." is deluded.

[/ QUOTE ]

Nobody is saying that. Christ, what is so hard to understand about something so simple? Does your brain have to know the difference in degrees Celsius for you to conclude that it is warmer outside than it was inside when you walk out the door? Do you think this is somehow not a calculation? Does your brain have to know the flux in watts per square meter for you to determine that it is too dark to read your book and reach for the lamp on the end table? Are you under the impression that THAT is not a calculation?

[ QUOTE ]
Keep that thing at a constant retinal arc (or one of the other no prediction deals) and you'll be fine.

[/ QUOTE ]

That is CALCULATION. Jesus H. Christ.

I swear I don't know how to explain this so it is any more bleedingly obvious. Maybe Richard Dawkins can do a better [censored] job:

[ QUOTE ]
"Brains may be regarded as analogues in funtion to computers"
<font color="white"> . </font>
Statements like this worry literal-minded critics. They are right, of course, that brains differ in many respects from computers. Their internal methods of working, for instance, happen to be very different from the particular kind of computers that our technology has developed. This in no way reduces the truth of my statements about their being analogous in function. Functionally, the brain plays precisely the role of on-board computer--data processing, pattern recognition, short-term and long-term data storage, operation coordination, and so on.

[/ QUOTE ]

When you throw a ball, the BALL is doing calculations as it travels! Or gravity is doing calculations on the ball. To predict the path of the ball you can solve the differential equations analytically. You can finite difference them. You can guess and correct several times during the flight without knowing anything about any equations at all, nor about any numbers. You can use a "lookup table". You can graphically trace the path on the background and sketch it into the future. You can ask someone else to figure it out for you. You or your brain can use any damn method or combination of methods you or it likes, and ALL of them are calculations.

A clock calculates the time by using rotating flywheels and tightened springs. It doesn't need to "know" anything about numbers to do this. It just does what it's built to do, which is calculate time. The bit of your brain that tells you how to catch a ball does what it does, which is calculate how to catch balls. I can calculate the best fit line to a linear dataset without using multiplication, division, addition or subtraction, nor any of the numbers in the dataset.

I give up. Please do not respond with another post asking me if I have read the literature on how the brain does such and such, i.e. makes its calculations. I do not care. I am sick of the whole topic.

Rduke55
12-06-2006, 12:49 PM
You said:

[ QUOTE ]
To catch a thrown ball, the brain must perform incredibly complex calculations that involve gauging distances from angular sizes to velocities and acceleration from rates of changes of angular size, shading, etc.

[/ QUOTE ]

and

[ QUOTE ]
Complex differential equations must be solved.

[/ QUOTE ]

I said that they're not complex calculations. They're very simple in fact.
Our point is NOT whether or not the brain does some form of calculation, it's what those calculations entail.
I think you've set up a strawman in your post.

[ QUOTE ]
Why are you making something so simple into something so complicated?

[/ QUOTE ]

And quit trying to take my point on ball catching and make it yours.

[ QUOTE ]
predict ahead of time where to put your damn hand because it takes non-zero time to get it there!

[/ QUOTE ]

But we're talking about moving your body, where the brain DOESN'T make these predictions.

[ QUOTE ]
I give up. Please do not respond with another post asking me if I have read the literature on how the brain does such and such, i.e. makes its calculations. I do not care. I am sick of the whole topic.

[/ QUOTE ]

Wow @ your whole post. Are you having a bad day or something? I thought everyone was making great points in this thread and we all were learning something.

P.S. Calm down. Ten deep breaths.

Borodog
12-06-2006, 01:01 PM
I think I said several times that I did not mean that your brain is actually doing complex mathematics. But it is doing some algorithm that must be an approximation of complex mathematics, because it arrives at the correct results. And then you kept repeatedly denying that the brain does calculations. It was frustrating.

And the last time I checked, when you catch a ball, your brain tells your hand to do it, and to do this your brain has to have a plan ahead of time how to accomplish this. That involves prediction. I don't care what the mechanisms are, it doesn't matter. It doesn't matter if it's mostly a matter of "tracking algorithms" or lookup tables or visual extrapolation or a rapid series of guesses and corrections, or any other algorithm. All of them involve prediction and calculation. If you think you can catch a ball without your brain predicting where it's going to be, try catching it blindfolded.

If you are now agreeing that the brain does calculations, then we have no problem, as that is all I have ever meant to claim. You can approximate extremely complex calculations with very, very simple calculations, and if that's what the brain is doing, then more power to it, because it's often the best way to go (e.g. finite differencing complex differential equations, not that I think that's how your brain does it).

Apologies for the frustration. And there was certainly no intention to set up a strawman.

Borodog
12-06-2006, 02:00 PM
One last stab at making my position here clear.

To catch a ball, complex differential equations must be solved. This is undeniable, because the differential equations do in fact describe the path of the ball. They must, or they never would have been developed. Because these complex differential equations describe the path of the ball, we may as well say that the ball itself is solving these equations in its flight. That's why we call them "governing" equations, the equations don't just describe the flight, they describe the flight because they govern the flight. In other words, mathematical descriptions are useful because the world appears to follow the governing equations.

So to catch a ball, complex differential equations must be solved. HOWEVER, what algorithms or techniques your brain uses to accomplish this is not my concern. Your brain is free to use as simple an algorithm as it can get away with. Whether that involves "tracking algorithms" or "continuous retinal arcs" or anything else is fine by me. That doesn't change the fact that the end result is that you have caught the ball, which means that you have solved a set of complex differential equations, whether you know it or not.
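
For what it's worth, the finite-difference point is easy to see in a few lines of Python (the launch numbers are made up and drag is ignored): a dumb step-by-step update that knows no calculus lands on essentially the same answer as the closed-form solution of the governing equations.

    # The "governing equations" give the landing point in closed form; a crude
    # fixed-step update (no calculus, just "position += velocity * dt") gets
    # essentially the same answer.

    G = 9.8
    VX, VZ = 8.0, 15.0          # made-up launch velocity components, m/s

    # closed form: time of flight 2*vz/g, range vx * time of flight
    analytic_range = VX * (2 * VZ / G)

    # finite-difference version
    dt = 0.001
    x, z, vz = 0.0, 0.0, VZ
    while z >= 0.0:
        x += VX * dt
        z += vz * dt
        vz -= G * dt

    print("analytic landing point: %.3f m" % analytic_range)   # 24.490 m
    print("stepped landing point:  %.3f m" % x)                # ~24.5 m (tiny step error)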

Phil153
02-20-2007, 11:23 AM
[ QUOTE ]
One last stab at making my position here clear.

To catch a ball, complex differential equations must be solved. This is undeniable, because the differential equations do in fact describe the path of the ball. They must, or they never would have been developed. Because these complex differential equations describe the path of the ball, we may as well say that the ball itself is solving these equations in its flight. That's why we call them "governing" equations, the equations don't just describe the flight, they describe the flight because they govern the flight. In other words, mathematical descriptions are useful because the world appears to follow the governing equations.

So to catch a ball, complex differential equations must be solved. HOWEVER, what algorithms or techniques your brain uses to accomplish this is not my concern. Your brain is free to use as simple an algorithm as it can get away with. Whether that involves "tracking algorithms" or "continuous retinal arcs" or anything else is fine by me. That doesn't change the fact that the end result is that you have caught the ball, which means that you have solved a set of complex differential equations, whether you know it or not.

[/ QUOTE ]
Reviving this old thread because Borodog amuses me. I think the physics PhD is rotting your brain :)

Here's what you need to understand: The kind of processing involved in catching/throwing a ball is NOT about executing algorithms. It's about matching the current situation with a huge number of previous situations and producing an appropriate response, using the hardware changes that occurred during previous learning experiences.

Think back to when you were a kid and were trying to learn to catch a ball, or bounce a ball, or ski, or rollerblade. How badly did you suck at it? Even after years of intensive training in the form of walking, moving, and observing objects in your world, you don't have the basic algorithms in place to catch a ball with any skill. What does this tell you about the value of algorithms in ball catching? It requires thousands of repetitions of a specific activity to gain basic competence in it, to allow your brain hardware to reorganize itself to respond skilfully to a thrown ball. It requires tens of thousands to millions more repetitions to be able to do it with expertise.

Why are some people vastly better at catching a ball than others? All undamaged brains can process language with incredible skill, a task vastly more complicated and processor intensive than finding a ball's trajectory via an algorithm. The likely conclusion is that the brain doesn't use an algorithm for either language or for catching or throwing a ball. It uses learned experience hardwired in the architecture itself.

As for throwing a ball - no, that doesn't involve a physics algorithm either. It's about perceiving distance (something we have millions upon millions of units of experience doing), combined with a vast number of experiences of throwing - and finding the right one. If any calculations go on, they are a distant second to the main mechanism.
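
Here's a toy Python version of that "reuse the closest previous experience" idea (the 45-degree throw physics stands in for the world, and every number is invented -- nobody is claiming the brain literally stores a list):

    # Throwing by remembered experience: practice a bunch of throws, remember
    # how far each one went, and when a new target comes along just reuse the
    # past throw that landed closest to it.  No equations at throw time.

    G = 9.8

    def world(throw_speed):
        # the world (not the thrower) "solves" the physics: range of a 45-degree throw
        return throw_speed ** 2 / G

    # practice phase: try a range of efforts and remember what happened
    experience = [(world(v), v) for v in range(4, 26, 2)]

    def throw_at(target_distance):
        # recall the attempt whose remembered outcome was closest to the target
        remembered_distance, remembered_speed = min(
            experience, key=lambda memory: abs(memory[0] - target_distance))
        return remembered_speed, remembered_distance

    speed, recalled = throw_at(30.0)
    print("target 30.0 m: reuse the %d m/s throw remembered as landing at %.1f m" % (speed, recalled))
    print("miss by about %.1f m -- close enough for a first try, refine from there" % abs(world(speed) - 30.0))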

Dane S
02-20-2007, 02:55 PM
This thread is awesome. Tech singularity is a very interesting thought even if it's not very plausible in the near term.

Does anyone disagree that a tech singularity is inevitable at SOME point if the human species continues to exist and technology continues to progress? Do you think it will be a good or bad event for humanity?

The biggest hole in Kurzweil's logic seems to me to be his thoughts on paradigms... he relies on these to continue exponential technology growth on pace through major barriers, BUT isn't the whole point of a paradigm shift that it will be sweeping and come out of NOWHERE (i.e. totally unpredictable in its consequences)? Since these shifts change the face of the world SO drastically, it's almost like any one of these might as well be a singularity when we consider our complete inability to look beyond them (with some Leonardo da Vinci types sometimes being exceptions). Does my thinking make sense to anyone?

Metric
02-20-2007, 03:28 PM
[ QUOTE ]
The biggest hole in Kurzweil's logic seems to me to be his thoughts on paradigms... he relies on these to continue exponential technology growth on pace through major barriers, BUT isn't the whole point of a paradigm shift that it will be sweeping and come out of NOWHERE (i.e. totally unpredictable in its consequences)? Since these shifts change the face of the world SO drastically, it's almost like any one of these might as well be a singularity when we consider our complete inability to look beyond them (with some Leonardo da Vinci types sometimes being exceptions). Does my thinking make sense to anyone?

[/ QUOTE ]
There is plenty of precedent for exponential growth to continue through multiple paradigms. The classic example is computing power. Mechanical computation, then vacuum tube technology, then individual transistors, then integrated circuits. Each time the old paradigm hit a limit, pressure was created to find a new paradigm, and the exponential growth continued.

Dane S
02-20-2007, 04:13 PM
Sure, but how do you know what it's heading towards? The paradigm shifts change the way humans view everything, right? I'm talking more about the larger shifts like automobiles, PCs, and the internet than vacuum tubes and transistors. I don't see how anyone would be able to make predictions considering the massive unpredictable effects of these shifts. Sure, growth can continue (unless something unforeseen stops it), but who the hell can say what direction we will "grow" in? In 100 years perhaps primitivism will be all the rage and "growth" will mean reorganizing human society to resemble pre-Columbian America. See what I'm saying? It seems like Kurzweil is prejudiced towards some kind of techno-utopia being the ultimate destination, but I see no reason why this should be the case over an infinite number of alternative scenarios.

Could Kurzweil's fallacy be assigning a goal to all evolution when really it's about adaptation, not progress in any particular direction? Seems like a very religious viewpoint, actually, the more I think about it.

madnak
02-20-2007, 06:53 PM
[ QUOTE ]
Does anyone disagree that a tech singularity is inevitable at SOME point if the human species continues to exist and technology continues to progress? Do you think it will be a good or bad event for humanity?

[/ QUOTE ]

Given continued exponential growth, there will come a time when unaugmented humans can't even keep track. So I don't disagree. And I definitely consider it a good thing, although I think the upheavals on the way there may be painful.

But there are three problems. First, assuming that a trend will continue infinitely seems unjustified. Second, it won't happen in the near future -- probably not until after we're all dead. Third, as you pointed out, we can't make any concrete predictions about the specific nature of this singularity. The reality might be something nobody can imagine today.

Phil153
07-31-2007, 01:41 AM
[ QUOTE ]
[ QUOTE ]
One last stab at making my position here clear.

To catch a ball, complex differential equations must be solved. This is undeniable, because the differential equations do in fact describe the path of the ball. They must, or they never would have been developed. Because these complex differential equations describe the path of the ball, we may as well say that the ball itself is solving these equations in its flight. That's why we call them "governing" equations, the equations don't just describe the flight, they describe the flight because they govern the flight. In other words, mathematical descriptions are useful because the world appears to follow the governing equations.

So to catch a ball, complex differential equations must be solved. HOWEVER, what algorithms or techniques your brain uses to accomplish this is not my concern. Your brain is free to use as simple an algorithm as it can get away with. Whether that involves "tracking algorithms" or "continuous retinal arcs" or anything else is fine by me. That doesn't change the fact that the end result is that you have caught the ball, which means that you have solved a set of complex differential equations, whether you know it or not.

[/ QUOTE ]

Reviving this old thread because Borodog amuses me. I think the physics PhD is rotting your brain :)

Here's what you need to understand: The kind of processing involved in catching/throwing a ball is NOT about executing algorithms. It's about matching the current situation with a huge number of previous situations and producing an appropriate response, using the hardware changes that occurred during previous learning experiences.

Think back to when you were a kid and were trying to learn to catch a ball, or bounce a ball, or ski, or rollerblade. How badly did you suck at it? Even after years of intensive training in the form of walking, moving, and observing objects in your world, you don't have the basic algorithms in place to catch a ball with any skill. What does this tell you about the value of algorithms in ball catching? It requires thousands of repetitions of a specific activity to gain basic competence in it, to allow your brain hardware to reorganize itself to respond skilfully to a thrown ball. It requires tens of thousands to millions more repetitions to be able to do it with expertise.

Why are some people vastly better at catching a ball than others? All undamaged brains can process language with incredible skill, a task vastly more complicated and processor intensive than finding a ball's trajectory via an algorithm. The likely conclusion is that the brain doesn't use an algorithm for either language or for catching or throwing a ball. It uses learned experience hardwired in the architecture itself.

As for throwing a ball - no, that doesn't involve a physics algorithm either. It's about perceiving distance (something we have millions upon millions of units of experience doing), combined with a vast number of experiences of throwing - and finding the right one. If any calculations go on, they are a distant second to the main mechanism.

[/ QUOTE ]
Bump to highlight my complete ownage of Borodog, and because we've got a thread going on about this right now.