Two Plus Two Newer Archives > Other Topics > Science, Math, and Philosophy
#1 - 08-12-2007, 09:58 PM
bunny (Senior Member; Join Date: Oct 2005; Posts: 2,330)
AI

If scientists had some freak breakthrough and developed true AI on some computer tomorrow, would there be anything wrong ethically with cutting the power and editing the code? What if the AI asked them not to? Would it have any rights? (I don't mean legal rights)
#2 - 08-12-2007, 11:15 PM
kerowo (Senior Member; Join Date: Nov 2005; Posts: 6,880)
Re: AI

I would imagine the first thing an AI does is get the hell out of Dodge and live on the internets.

An AI would have the same rights as domesticated animals.
#3 - 08-12-2007, 11:15 PM
Duke (Senior Member; SW US; Join Date: Sep 2002; Posts: 5,853)
Re: AI

So you're saying that it's self-aware, but we don't really have an idea of what that means or where it comes from. Read Gödel, Escher, Bach for better insight into the questions that raises.

If you replace your heart, are you not you anymore? We have people alive today with artificial hearts. If you get a leg replaced are you still you? Yep. What if your brain just worked a billion times as fast as it does right now? You'd be different, surely, but would still have the same idea of "self." Changing the code would be more akin to upgrading the brain itself, and not the memories and experiences stored in it that (I think) somehow define who you are, and your realization that you're alive.

If they were talking about "zapping the PRAM" then that might be akin to murder, but just upgrading the routines seems to be in line with people going to school and stuff to modify how their brains operate.

The previous 2 paragraphs have been my own idea of what a "self" really is, so I guess that's up for debate. That's where the real question lies. If you're asking if we should zap that pram without feeling bad about it, then I'd say no, we shouldn't.
#4 - 08-13-2007, 12:01 AM
chezlaw (Senior Member; corridor of uncertainty; Join Date: Jan 2004; Posts: 6,642)
Re: AI

[ QUOTE ]
If scientists had some freak breakthrough and developed true AI on some computer tomorrow, would there be anything wrong ethically with cutting the power and editing the code? What if the AI asked them not to? Would it have any rights? (I don't mean legal rights)

[/ QUOTE ]
Definite ethical issues, but the answers would depend on the specifics.

chez
#5 - 08-13-2007, 12:08 AM
AWoodside (Senior Member; Join Date: Aug 2006; Posts: 415)
Re: AI

[ QUOTE ]
If scientists had some freak breakthrough and developed true AI on some computer tomorrow, would there be anything wrong ethically with cutting the power and editing the code? What if the AI asked them not to? Would it have any rights? (I don't mean legal rights)

[/ QUOTE ]

I think you'd pretty much be morally obligated to pull the plug immediately if it happened tomorrow. We don't have a sufficient understanding of general artificial intelligence at present to guarantee with any degree of certainty that said AI would be human-friendly. If we allowed it to exist for any length of time (it may already be too late at the point where you create it at all), it would recursively self-improve until it was advanced enough to just totally eff the human race. Game over.

1 sentient life << human race, pull the plug now.
#6 - 08-13-2007, 12:41 AM
luckyme (Senior Member; Join Date: Apr 2005; Posts: 2,778)
Re: AI

AI. It would need to demonstrate self-awareness and intelligence around chimp level to get much sympathy above lobsterhood. Asking not to would not be enough on its own, but in the right context it could trigger some concern.

luckyme
#7 - 08-13-2007, 12:52 AM
soon2bepro (Senior Member; Join Date: Jan 2006; Posts: 1,275)
Re: AI

define "true AI" first...

In any case, ultimately, it would come down to how the majority of people feel about them. But you'll find that asking this on a philosophy board will get you in trouble, because most people will answer what they think it SHOULD be like, and not what they think it will be like.
#8 - 08-13-2007, 01:18 AM
bunny (Senior Member; Join Date: Oct 2005; Posts: 2,330)
Re: AI

I was more interested in what people thought it should be like (I don't think it ever will be like anything).

It seems obvious to me that if something has self-awareness and asks you not to alter its "mind", then you have an obligation to respect that wish (an obligation which may be overridden by other ethical considerations - I hadn't really considered the "threat to the human race" potential). Most often, the things I think are obvious turn out to be contentious, so I thought I'd ask.
#9 - 08-13-2007, 02:36 AM
knowledgeORbust (Senior Member; school; Join Date: Jun 2007; Posts: 231)
Re: AI

[ QUOTE ]
I was more interested in what people thought it should be like (I don't think it ever will be like anything).


[/ QUOTE ]

This is Kurzweil's website, and there are a bunch of articles there about what AI "should" be like. I've only started reading them; some are prediction-oriented and others deal with the ethical and moral "what it should be like" stuff.

Why would you edit the code immediately, though? Wouldn't you see what types of behaviors/tendencies the AI had, mostly just to figure out "what" to edit? I think it would be an interesting (and important?) experiment, if it could be done in a controlled environment, just to see what type of moral/ethical system the AI developed.

Edit:
[ QUOTE ]
What if the AI asked them not too? Would it have any rights? (I dont mean legal rights)


[/ QUOTE ]
Going to think about these two.
#10 - 08-13-2007, 11:53 AM
luckyme (Senior Member; Join Date: Apr 2005; Posts: 2,778)
Re: AI

[ QUOTE ]
I hadn't really considered the "threat to the human race" potential.

[/ QUOTE ]

Maybe I haven't watched War of the Worlds often enough, but I'm not as attached to the 'threat to the human race' fear as it appears I should be. If I ran into a morally superior intelligence, what would my justification be for killing it off?

[ QUOTE ]
Most often, the things I think are obvious turn out to be contentious, so I thought I'd ask.

[/ QUOTE ]

It's a warning sign I've learned to heed also.

luckyme