#1
AI
If scientists had some freak breakthrough and developed true AI on some computer tomorrow, would there be anything wrong ethically with cutting the power and editing the code? What if the AI asked them not to? Would it have any rights? (I don't mean legal rights)
#2
Re: AI
I would imagine the first thing an AI does is get the hell out of Dodge and live on the internets.
An AI would have the same rights as domesticated animals.
#3
Re: AI
So you're saying that it's self-aware, but we don't really have an idea of what that really means or where it comes from. Read Gödel, Escher, Bach to get better insight into the questions that raises.
If you replace your heart, are you not you anymore? We have people alive today with artificial hearts. If you get a leg replaced, are you still you? Yep. What if your brain just worked a billion times as fast as it does right now? You'd be different, surely, but would still have the same idea of "self." Changing the code would be more akin to upgrading the brain itself, and not the memories and experiences stored in it that (I think) somehow define who you are, and your realization that you're alive.
If they were talking about "zapping the PRAM", then that might be akin to murder, but just upgrading the routines seems to be in line with people going to school and stuff to modify how their brains operate.
The previous 2 paragraphs have been my own idea of what a "self" really is, so I guess that's up for debate. That's where the real question lies. If you're asking if we should zap that PRAM without feeling bad about it, then I'd say no, we shouldn't.
#4
Re: AI
[ QUOTE ]
If scientists had some freak breakthrough and developed true AI on some computer tomorrow, would there be anything wrong ethically with cutting the power and editing the code? What if the AI asked them not to? Would it have any rights? (I don't mean legal rights) [/ QUOTE ]
Definite ethical issues, but the answers would depend on the specifics.
chez
#5
Re: AI
[ QUOTE ]
If scientists had some freak breakthrough and developed true AI on some computer tomorrow, would there be anything wrong ethically with cutting the power and editing the code? What if the AI asked them not to? Would it have any rights? (I don't mean legal rights) [/ QUOTE ]
I think you'd pretty much be morally obligated to pull the plug immediately if it happened tomorrow. We don't have a sufficient understanding of general artificial intelligence at present to guarantee with any degree of certainty that said AI would be human-friendly. If we allowed it to exist for any length of time (it may already be too late at the point where you create it at all), it would recursively self-improve until it was advanced enough to just totally eff the human race. Game over.
1 sentient life << human race; pull the plug now.
#6
Re: AI
An AI would need to demonstrate self-awareness and intelligence around chimp level to get much sympathy above lobsterhood. Asking not to would not be enough on its own, but in the right context it could trigger some concern.
luckyme
#7
Re: AI
Define "true AI" first...
In any case, it would ultimately come down to how the majority of people feel about them. But you'll find that asking this on a philosophy board will get you in trouble, because most people will answer what they think it SHOULD be like, and not what they think it will be like.
#8
Re: AI
I was more interested in what people thought it should be like (I don't think it ever will be like anything).
It seems obvious to me that if something has self-awareness and asks you not to alter its "mind", then you have an obligation to respect that wish (an obligation which may be overridden by other ethical considerations - I hadn't really considered the "threat to the human race" potential). Most often, the things I think are obvious turn out to be contentious, so I thought I'd ask.
#9
Re: AI
[ QUOTE ]
I was more interested in what people thought it should be like (I don't think it ever will be like anything). [/ QUOTE ]
This is Kurzweil's website, and there are a bunch of articles about what AI "should" be like. I have only started reading them; some of them are prediction-oriented and others deal with the ethical and moral "what it should be like" stuff.
Why would you edit the code immediately, though? Wouldn't you see what type of behaviors/tendencies the AI had, mostly just to figure out "what" to edit? I think it would be an interesting (and important?) experiment, if it could be done in a controlled environment, just to see what type of moral/ethical system the AI developed.
Edit:
[ QUOTE ]
What if the AI asked them not to? Would it have any rights? (I don't mean legal rights) [/ QUOTE ]
Going to think about these two.
#10
Re: AI
[ QUOTE ]
I hadn't really considered the "threat to the human race" potential. [/ QUOTE ]
Maybe I haven't watched War of the Worlds often enough, but I'm not as attached to the "threat to the human race" fear as it appears I should be. If I ran into a morally superior intelligence, what would my justification be for killing it off?
[ QUOTE ]
Most often, the things I think are obvious turn out to be contentious, so I thought I'd ask. [/ QUOTE ]
It's a warning sign I've learned to heed also.
luckyme