post-singularity evolution


Metric
11-27-2006, 04:24 AM
So let's assume a sort of "Ray Kurzweil" exponential growth of technology over the next couple hundred years. At some point, human intelligence merges with machine intelligence, and eventually the physical, biological human brain will become just excess weight. At this point, humanity as we know it may be on the road to extinction, outclassed in every way by a new species of unimaginably superior intelligence.

The question then is, what guides the evolution of this new species, and its own descendants? Obviously the concepts of mutation, competition, reproduction, selection, speciation, etc. become radically altered for the new species, which is free to alter its own programming at will. Still, I somehow suspect there is a logically simple principle or set of principles (analogous to what we call "evolution") that guides such a life form and "selects" certain dominant character traits -- the question is, can we prove that this is the case under very reasonable assumptions, discover what the principles will be, and even discover what the dominant character traits are likely to be, centuries or millennia in advance?
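To make the kind of "principle" I have in mind a little more concrete, here's a toy sketch -- purely illustrative, every number and name in it invented for the example. Picture agents that set their own mutation rate (a crude stand-in for a mind that rewrites its own programming) while selection still acts on outcomes; you can then ask which self-modification policies come to dominate:

[ CODE ]
import random

POP_SIZE = 50
GENERATIONS = 200
TARGET = 10.0  # arbitrary environmental optimum, made up for the example

def fitness(agent):
    # closer to the target trait value = fitter (hypothetical environment)
    return -abs(agent["trait"] - TARGET)

def reproduce(parent):
    # the parent's own, heritable "mutation rate" stands in for
    # self-modification of its own programming
    rate = parent["mut_rate"]
    return {
        "trait": parent["trait"] + random.gauss(0, rate),
        # the self-modification policy itself can drift and be selected on
        "mut_rate": max(0.001, rate + random.gauss(0, 0.05)),
    }

population = [{"trait": random.uniform(0, 20), "mut_rate": 1.0}
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]   # selection still acts on outcomes
    offspring = [reproduce(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

avg_rate = sum(a["mut_rate"] for a in population) / POP_SIZE
print("average self-chosen mutation rate after %d generations: %.3f"
      % (GENERATIONS, avg_rate))
[/ CODE ]

Nothing deep in that little simulation, of course -- the point is only that even when the agents control their own "mutation" knob, selection on outcomes still imposes a statistical regularity on which policies persist. That's the flavor of principle I'm asking about.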

hmkpoker
11-27-2006, 04:51 AM
Man that's really hard to say. Human civilization (which moves much faster than evolution) is basically driven by seemingly boundless wants and a desire to improve. I'd like to say the same of machines, but we can't even estimate what AI praxeology is like.

Very wicked cool concept to think about though. :)

Metric
11-27-2006, 04:59 AM
I agree that it's a bit like starting from scratch -- still, the ability to say ANYTHING about the principles guiding such a scenario (which could actually occur in the not-too-distant future, and give rise to an entirely new branch of science) would seem profound and worthwhile.

madnak
11-27-2006, 08:36 AM
I think the reason Kurzweil uses the term "singularity" is because it's impossible to "see beyond" it. If I accept his assumptions about the singularity, I also accept his conclusions that the results would be something we can't even imagine.

vhawk01
11-27-2006, 09:23 AM
[ QUOTE ]
I think the reason Kurzweil uses the term "singularity" is because it's impossible to "see beyond" it. If I accept his assumptions about the singularity, I also accept his conclusions that the results would be something we can't even imagine.

[/ QUOTE ]

(Completely off-topic) Do you?

Rduke55
11-27-2006, 11:49 AM
But isn't there some kind of immorality deal in there? Which really screws up evolution.

Man, I can't stand Kurzweil's crap. (Sorry Metric, I think it's a good thread, I just have to say that in any Kurzweil discussion where I'm not already criticizing him)

ALawPoker
11-27-2006, 12:55 PM
As the human race grows extinct and gives way to the new species, we will lose our instinct of emotion as we increase our instinct of reason (eventually, the world will be owned by beings who take no joy or satisfaction in what they're doing, but who also have no shortcomings in meeting their ends -- hence, no heartache or frustration either). The only goal of this species will be to survive, and its evolution will be guided in a way that sees reasoning capacity as the optimal way to survive. The machines will see animal life as a potential threat to their existence, and seek to rid existence of all animal life and all potential for animal life. Beyond that, it will see weather patterns and physical bodies like asteroids as the next most rational threats, and figure out how to control those things. Its evolution will work in a way that sees fellow machines with relatively weak mental capacity as useless, and such machines will be destroyed (these beings will have little and eventually no emotion or empathy, so there will be no rational reason not to destroy weak members).

I actually wrote a little about this in my blog (in my profile) if anyone cares.

FortunaMaximus
11-27-2006, 01:53 PM
Very little will change, I assure you. I should know. *smirk*

Back in the 21st, oh, yes, we are, back in the 21st...

Such a glorious, explosive, violent century. It ended, though.

Anyway, on a more serious note, Kurzweil seems to have a cult following among idealistic, misguided, antisocial geeks. And this is far from being an insult to some of you; I hope you understand what I mean here.

The singularity isn't a certain point in time. It already happened.

As for man-machine mergers and post-biological solutions, don't count on that. Computers and the web will become, for lack of a better definition, the ultimate tool of psychiatry. The software and hardware will merely mesh into the fabric of reality, much as phones and lights already have. The structures are being built anyway, and that's kinda NDAish.

Anyway, it'll allow for a benevolent, democratic Terra with a quiet, unobtrusive superclass that works in concert with data processors.

Think of it as a softer, gentler 1984. Done and done.

Anyway. Libertiatians can go [censored] themselves if they don't like it. There are simply too many billions that deserve the compassion and the lifting out of poverty.

America, the unclear nuclear.

Do you really think it would be otherwise?

The wages of mortality are fleeting, and the reckoning comes in our time.

No pretty colors today, gentlemen and ladies. :P Just the bare truth. It was necessary. The equations were creeping out of control. And above all, it is built on love and has absorbed millennia of history and pain. So it flipped the [censored] thing on humanity.

Have a great week.

K.

madnak
11-27-2006, 01:54 PM
[ QUOTE ]
[ QUOTE ]
I think the reason Kurzweil uses the term "singularity" is because it's impossible to "see beyond" it. If I accept his assumptions about the singularity, I also accept his conclusions that the results would be something we can't even imagine.

[/ QUOTE ]

(Completely off-topic) Do you?

[/ QUOTE ]

Not remotely.

ALawPoker
11-27-2006, 03:57 PM
[ QUOTE ]
Anyway. Libertiatians can go [censored] themselves if they don't like it. There are simply too many billions that deserve the compassion and the lifting out of poverty.

[/ QUOTE ]

Those crazy Libertiatiatiashuns! People are DYING!! ON OUR STREETS!!!!

vhawk01
11-27-2006, 05:03 PM
[ QUOTE ]
But isn't there some kind of immorality deal in there? Which really screws up evolution.

Man, I can't stand Kurzweil's crap. (Sorry Metric, I think it's a good thread, I just have to say that in any Kurzweil discussion where I'm not already criticizing him)

[/ QUOTE ]
Did you mean 'immortality'? I don't see why immortality (relative immortality, not infinite) necessarily causes any problems for evolution. It would certainly change the pressures.

Rduke55
11-27-2006, 05:49 PM
[ QUOTE ]
[ QUOTE ]
But isn't there some kind of immorality deal in there? Which really screws up evolution.

Man, I can't stand Kurzweil's crap. (Sorry Metric, I think it's a good thread, I just have to say that in any Kurzweil discussion where I'm not already criticizing him)

[/ QUOTE ]
Did you mean 'immortality'? I don't see why immortality (relative immortality, not infinite) necessarily causes any problems for evolution. It would certainly change the pressures.

[/ QUOTE ]

I guess if everyone is immortal I see a problem with certain aspects of selection (but not all of them).

Metric
11-28-2006, 02:01 AM
I should clarify that I'm not taking all of the Kurzweil stuff at face value by any means. However, I don't think the interface of machine intelligence with the human brain is too far out -- in many ways it's the obvious end-point of computer technology. But once this happens, the "new species" is pretty much guaranteed to arrive sooner or later, hence the question of interest to me.

FortunaMaximus
11-28-2006, 01:55 PM
[ QUOTE ]
I should clarify that I'm not taking all of the Kurzweil stuff at face value by any means. However, I don't think the interface of machine intelligence with the human brain is too far out -- in many ways it's the obvious end-point of computer technology. But once this happens, the "new species" is pretty much guaranteed to arrive sooner or later, hence the question of interest to me.

[/ QUOTE ]

The transition may be gradual. Commerce relies in large part on the current infrastructure. It's amazing how redundant big-box stores have become, for instance.

Quantum mapping of the brain, if it's viable, will allow for a sort of machine immortality.

What's left to be determined is whether information is subject to entropic processes. If it isn't... then there is very little to worry about with such a transition. It's not without its dangers, though.

Fascinating subject, indeed.

Rduke55
11-28-2006, 04:40 PM
[ QUOTE ]
I should clarify that I'm not taking all of the Kurzweil stuff at face value by any means.

[/ QUOTE ]

Sorry if I was unclear as well; I assumed that you didn't (the Kurzweil in quotes tipped me off).

[ QUOTE ]
However, I don't think the interface of machine intelligence with the human brain is too far out -- in many ways it's the obvious end-point of computer technology. But once this happens, the "new species" is pretty much guaranteed to arrive sooner or later, hence the question of interest to me.

[/ QUOTE ]

What kind of machine intelligence are you talking about here?

Metric
11-28-2006, 05:10 PM
I'm mainly picturing a chain of events whereby computers might at first simulate and take over for a very limited part of the brain where there was some sort of damage -- for example, aiding in vision or hearing. It's not too hard to imagine that the next step could be enhancement -- doing it better than nature does, for anyone able to afford it. If it becomes a serious industry, one could imagine enhancement of higher-level brain function, and eventually piece-by-piece replacement of the whole thing (at which point you have an entirely new species, since the computer intelligence wouldn't need biological support to survive).

It may or may not eventually happen, but I do think it's probable enough to take seriously (at least as probable as discovering aliens via SETI, for example). So the question is: in such a scenario, what sort of logical structure would replace the biological evolution we are familiar with?