Philosophy of AI
Philosophy of AI is a history of "big names".
The debates are great fun to watch.
Here are some big names and my take on them.
You don't have to agree with me, of course.
(The great thing about philosophy is
that it's not falsifiable!):
Hubert Dreyfus
- His books, What Computers Can't Do
and What Computers Still Can't Do
- AI machines only look intelligent
because they are programmed to output their meaningless tokens
as English words.
They have no idea what they are saying.
He seems at times to say AI is impossible.
I say: Fair criticism of much of AI.
Doesn't apply to the new stuff.
Dreyfus leads us to the Symbol-grounding problem.
John Searle
- His thought experiment,
The Chinese Room
- AI is impossible.
Instantiate your algorithm as a roomful
of 1 billion people passing meaningless tokens around.
You're telling me you can have a conversation in English
with China, yet not one of the Chinese understands English?
I say: Yes.
- AI is impossible because of Gödel's theorem
on the limits of logical systems.
There are propositions that we can see are true
that the AI logical system can't.
I say: Any working AI would not be a logical system.
It would be stochastic, statistical.
See My comments on Penrose.
- Penrose also makes this argument.
Gerald Edelman
- His books,
The Remembered Present
and Bright Air, Brilliant Fire
- AI is the wrong way to do it.
It should be done this way.
I say: Edelman is doing AI.
(And not very well.)
See My comments on Edelman.
- The machine metaphor is incorrect.
Here are types of self-referential system
that cannot be implemented as a machine.
I say: They can be implemented as a machine.
Rodney Brooks
- Many papers, including
Intelligence without Reason
- Traditional AI is the wrong way to do it.
We should do this new type of AI.
I say: I pretty much agree.
Brooks' work is not the final answer of course,
but his analysis is excellent.
Other people disagree:
see Today the earwig, tomorrow man?
- Lots of people (*)
Hans Moravec
- His book
- AI is coming, and humans will go extinct.
And that won't necessarily be bad.
AIs will be our inheritors.
I say: I like a lot of Moravec, but I have doubts about this.
First, who's to say we won't become AIs ourselves?
Second, who's going to "mop up" the humans who don't
co-operate with this "evolutionary inevitability"?
Evolution is not in charge now. We are.
And the only way humans will go extinct is by genocide.
- AI is coming, and it's dangerous.
And it's going to happen soon (10-50 years).
I say: AI is a lot harder than that.
Nothing is happening soon.
- My favourite debaters with the AI critics:
The Mind's I,
Hofstadter and Dennett, 1981.
- Library, 155.2.
- A mind-bending collection of essays exploring the possibilities
of Strong AI. If Strong AI were true, could you be immortal?
Could you copy brains?
- Far more fun than science fiction.
The Artificial Intelligence Debate, ed. Stephen Graubard, 1988.
- Library, 006.3.GRA.
- A fairer, but duller, round-up of all sides to the debate.
Symposium on Roger Penrose's Shadows of the Mind
- A debate between Penrose and AI people.
Also essential reading, if you're interested in Penrose,
is the debate in Behavioral and Brain Sciences 13:643-705 (1990).
This latter debate is the one that convinced me that Penrose was wrong.
Darwin's Dangerous Idea,
Dennett, 1995.
- Library, 146.7.
- The best case for Strong AI that I know of,
embedding it in a biological world view.
Dennett shows how Strong AI is simply the consequence of
ordinary scientific materialism,
and any alternative had better fit into evolutionary materialism
as well as AI does.