
View Full Version : A dim view on the progress of AI


Leaky Eye
02-17-2006, 04:46 PM
What do AI folks think of this article?

http://www.thenewatlantis.com/archive/11/halpern.htm

diebitter
02-17-2006, 05:19 PM
Pretty interesting. Definitely worth reading.

I personally never bought into the Turing test (that indistinguishability from a human answering marks a machine as intelligent) after the first year in the field. It had its place, but I wanna see machines that think rationally, and not like humans at all. No different decisions if you had to skip lunch and you're grouchy, or feeling better than usual today, or whatever.

Why do we want them to think like humans anyway, you might ask yourself - as if the way we think were the highest thing to aspire to.

And I kept smiling at the mention of the T2 answering questions - made me think of Ah-nuld.

Sharkey
02-17-2006, 05:59 PM
The gist of passing the Turing Test is transparency of code.

When conversing with a person, there’s no sense of the responses being generated by an input-output formalism, the behaviorists notwithstanding.

Copernicus
02-18-2006, 01:17 AM
"One AI worker who believes that he has evaded the problems posed by the Test is Douglas Lenat, a former professor of computer science at Stanford, and founder and president of Cycorp. “The Turing test is a red herring,” he declared in 2001. “Anthropomorphizing a computer program isn’t a useful goal.” Lenat is dedicated to building a computing system with enough facts about the world, and enough power of drawing inferences from those facts, to be able to arrive at reasonable conclusions about matters it has not been explicitly informed about. Yet this goal suggests that his project, even more than Turing’s, is rightly described as “anthropomorphizing” a computer. Lenat differs from Turing only in that his goal is not to have the computer fool an interrogator into thinking that it is human; he wants it to actually possess the common sense that Turing’s computer only pretends to have."

That "only" would seem to indicate a bias in this article. Pretending to have common sense and actually having it (as evidenced by arriving at conclusions not explicitly programmed) are quite different things, so the "only" is out of place.
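The "facts plus inference" idea Lenat is pursuing can be sketched as a toy forward-chaining loop: assert a few facts, apply rules until nothing new is derived, and check for a conclusion the system was never explicitly given. This is only an illustration of the general technique; the fact names and rules below are made up, and Cyc's actual representation and inference machinery are far richer.

```python
# Toy forward-chaining inference over a small fact base.
# Facts are (subject, relation, object) triples; rules say
# "whoever has (rel, obj) also has (new_rel, new_obj)".
facts = {("Socrates", "is_a", "human")}
rules = [
    (("is_a", "human"), ("is_a", "mortal")),            # humans are mortal
    (("is_a", "mortal"), ("has", "finite_lifespan")),   # mortals die eventually
]

def forward_chain(facts, rules):
    """Apply all rules repeatedly until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (rel, obj), (new_rel, new_obj) in rules:
            for (subj, r, o) in list(derived):
                new_fact = (subj, new_rel, new_obj)
                if r == rel and o == obj and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

all_facts = forward_chain(facts, rules)
# The system was never told Socrates is mortal, yet it derives it:
print(("Socrates", "is_a", "mortal") in all_facts)  # True
```

The point of the sketch is the distinction the article blurs: the conclusion falls out of the rules rather than being typed in, which is what "arriving at reasonable conclusions about matters it has not been explicitly informed about" means in miniature.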

madnak
02-18-2006, 05:39 AM
[ QUOTE ]
The gist of passing the Turing Test is transparency of code.

When conversing with a person, there’s no sense of the responses being generated by an input-output formalism, the behaviorists notwithstanding.

[/ QUOTE ]

I disagree. I can easily notice specific patterns in people's behaviors. I can also often get a line on a person's motivation or thought process.

But computers aren't allowed that same leeway. I would think an intelligent computer would be even more predictable than a dumb human - and dumb humans are quite predictable.

DougShrapnel
02-18-2006, 08:04 AM
In flight capacities, does an airplane pass for a bird? By that I mean, if we were to put an airplane side by side with a bird, and we know the bird can fly, would we think that the airplane could fly? The Turing test is an insufficient way to test intelligence.

Leaky Eye
02-18-2006, 06:58 PM
Not as insufficient as that analogy...