The Hardest Problem in the History of Science
A talk given at the Institute of Contemporary Arts, London, Feb 2000.
See full reference.
Discussion of Artificial Intelligence (AI) tends to swing between two extremes.
One is that AI is impossible - the position argued recently by Professor Roger Penrose.
The other is that AI is easy and will happen soon, and humans could be faced
with extinction or enslavement - the position argued recently by
Professor Kevin Warwick.
I will argue that both of these positions are mistaken,
and that AI is in fact a branch of cognitive science,
part of the understanding of ourselves and other animals.
As such, AI is part of perhaps the hardest, most ambitious
project that science has ever undertaken, or ever will.
In response to the difficulty of this problem, I will describe the movement
in AI that has essentially given up on human-level intelligence,
and is working on animal-level intelligence - in the hope of producing
properly grounded definitions of all the terms that are casually
used at the human level - emotion, fear, love, consciousness,
language, representations, memories - but whose precise definitions we lack.
"Animal-AI" researchers would argue that we will only really know what
these terms mean when we understand their history and primitive origins.
AI is impossible?
- Roger Penrose, The Emperor's New Mind; Shadows of the Mind
- 2 strands:
- Strand 1 - AI is impossible. A machine cannot be intelligent.
The argument is based on Gödel's result on the limits of formal systems:
there are propositions that we can see are true but that the AI's logical system cannot prove.
- Strand 2 - Here's what you need to be intelligent / conscious
- A type of "machine" which exploits quantum effects.
- AI rejects the 1st. Finds the 2nd interesting but unproven:
- Strand 1 - Any working AI would not be a logical (truth-preserving) system.
It would be probabilistic, statistical,
learning by trying random actions out against the world
and modifying itself.
Full of contradictions. Just like us.
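To make the contrast with a logical, truth-preserving system concrete, here is a minimal sketch (my illustration, not part of the talk) of the kind of learner meant here: a hypothetical two-armed "world", and an agent that tries random actions out against it and modifies its own estimates as it goes.

```python
import random

# Sketch of a trial-and-error learner, not a truth-preserving logic.
# The "world" is a hypothetical 2-armed bandit: action 1 pays off
# more often than action 0. All names here are illustrative.

def world(action, rng):
    """Return a reward: action 1 succeeds 80% of the time, action 0 20%."""
    p = 0.8 if action == 1 else 0.2
    return 1.0 if rng.random() < p else 0.0

def learn(steps=5000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    value = [0.0, 0.0]          # the agent's current beliefs about each action
    for _ in range(steps):
        # Mostly exploit current beliefs, sometimes try a random action.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if value[0] > value[1] else 1
        reward = world(action, rng)
        # Modify itself: nudge the estimate toward what the world returned.
        value[action] += alpha * (reward - value[action])
    return value

values = learn()
print(values)   # the estimate for action 1 should end up near 0.8
```

Nothing in this agent proves anything; its beliefs are statistical, provisional, and often wrong mid-learning - which is the point.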
- Strand 2 - Even if the quantum machine is built, it is not shown
what (if any) relation this has to solving the problems of AI.
- I believe Penrose is looking for quantum effects in a vain attempt
to explain the "unity" / apparent wholeness of the mind and of consciousness.
He cannot believe that something that feels whole
might not really be whole -
that a decentralised Network of Mind or Society of Mind
could give rise to a global level that merely feels unified.
- AI agrees that this is one of the big questions:
How can something that is decentralised feel centralised?
But it is hardly far-fetched that it could.
I think we could feel any way that was evolutionarily useful.
AI is not a good enough model?
- Computers and robots are a rich model,
but do we need a little more to really model the mind?
- Analog? Chemical? Quantum?
- AI is open to this.
AI people want to use the best models and tools.
- If you show us what a quantum machine is good for,
we'll do AI on quantum machines.
- The paradigm may be flawed/incomplete,
but the paradigm only changes when something better comes along.
This is not narrow-minded.
This is a sensible way to behave.
- Finally, the computer/robot paradigm is remarkably resilient.
A "program" is a lot more than something like Microsoft Word.
Genetic Programming. Neural Networks. Cellular Automata.
The Church/Turing thesis postulates that
the computational paradigm can model everything.
"If you can write it down, it's an algorithm".
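As a toy illustration of how much more a "program" can be than something like Microsoft Word (my example, using a standard construction not mentioned in the talk): an elementary cellular automaton. Rule 110, despite being nothing but an 8-entry lookup table, is known to be Turing-complete, and so falls squarely under the Church/Turing thesis.

```python
# Elementary cellular automaton, rule 110.
# Each cell's next state is looked up from the bits of RULE,
# indexed by the (left, centre, right) neighbourhood.

RULE = 110

def step(cells):
    """Apply the rule to every cell, with wrap-around neighbours."""
    n = len(cells)
    out = []
    for i in range(n):
        neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> neighbourhood) & 1)
    return out

# Start from a single live cell and watch structure emerge.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

A few lines of table lookup, yet in principle as computationally rich as any program ever written.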
AI is easy? (and AI is dangerous?)
- Kevin Warwick, March of the Machines
- Full human-level intelligence will happen in 10-50 years.
Humans will be enslaved / go extinct.
- Makes 4 claims:
1. Building full AI machines is possible in theory.
2. This will happen soon.
3. All the major problems of AI are near solution.
The main problem is just to "scale up".
4. Humans will remain stuck at the H. sapiens level
while this is happening.
- Most AI people agree with 1,
disagree with 2,
disagree vehemently with 3.
We don't even know what questions to ask yet.
Warwick presents examples of only relatively simple machines.
Such enthusiasm about scaling up could have been written at any time
during the last 30 years of AI (and was).
But let's not quibble about the timescale.
Someday we'll make AI's.
Is the AI project really about extinction of humanity in the long run?
- I think not.
Because of the unspoken assumption 4,
that humans stay the same.
- First we consider 3 - Why AI is hard.
- Then we consider 4 - What will happen to humans?
Why AI is hard
- In Warwick's book there is no sense of unsolved problems.
The challenge is expressed merely as "scaling up" existing work
to larger numbers of neurons - "Brain Building",
as if something will then magically happen.
- AI was like this before - "Faster machines and we will have AI",
"More memory and we will have AI".
- It's not true - we need theoretical breakthroughs
in understanding the systems we are talking about,
their origin, evolution, development and dynamics.
- Some open issues follow.
Architectures of Mind - What does the whole mind look like?
Network, Hierarchy or Society?
Does I/O link to many brains or one?
Who is in charge?
Where am I? What is consciousness?
Action Selection - As a more specific example of the above.
We know how to solve 1 problem.
How does the creature deal with multiple competing problems at once?
- "Learning to Learn"
- How does the creature generate goals for itself
in the first place?
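One illustrative (and entirely hypothetical) scheme for the action-selection problem above: competing drives each bid for the creature's single output channel, and a winner-take-all arbitrator picks the action whose strongest advocate cares most. This is a sketch of one possible design, not a claim about how the problem is solved.

```python
# Hypothetical winner-take-all action selection.
# Each "module" is a drive that scores every available action;
# the single highest bid anywhere wins control of the body.

def select_action(modules, actions):
    """Each module bids on each action; the highest bid wins."""
    best_bid, best_action = float("-inf"), None
    for score in modules:
        for action in actions:
            bid = score(action)
            if bid > best_bid:
                best_bid, best_action = bid, action
    return best_action

# Hypothetical drives for a creature near food but also near a predator.
def hunger(action):
    return {"eat": 0.6, "flee": 0.0, "explore": 0.2}[action]

def fear(action):
    return {"eat": 0.0, "flee": 0.9, "explore": 0.1}[action]

print(select_action([hunger, fear], ["eat", "flee", "explore"]))  # "flee"
```

Even this toy raises the hard questions: where do the bid strengths come from, who calibrates one drive's 0.9 against another's 0.6, and what does the "winner" owe the losers?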
Symbol-grounding, Evolution of Language - What is language?
How do creatures processing numerical sensory data
end up processing symbolic "words" with meanings?
What does "chair" mean, internally?
Is it a meaningless token #5099 being passed around,
or is it a whole specialised sub-system?
Do parts of the brain talk to each other?
Do we have an internal language?
Is it English, or is it something more messy?
Will sub-symbolic AI plug in neatly to symbolic AI?
- Robots or simulation?
- Robots are more real, may solve symbol-grounding.
But experiments in simulation
are often more practical.
e.g. Karl Sims evolved a world of beautiful 3-D creatures from scratch.
Large-scale experiments involving the Web?
Open Issues / Why AI is hard
A further reason that I propose:
- Societies. Culture.
- Maybe you need a whole interacting society and culture to be intelligent.
Idea that we aren't that intelligent individually
- most of our intelligence resides
in our culture, in other people,
in our support systems,
the way our world is structured for us, our books, our machines.
We didn't arise alone in a lab like Cog.
There were always millions of us.
We didn't arise in a planet that was full.
There was no other intelligent life.
We had "peace and quiet" for 2 million years,
during which we could slowly evolve our own languages and cultures.
And it was a process of trial and error.
Most societies fail - Jared Diamond,
Guns, Germs and Steel: A Short History of Everybody for the Last 13,000 Years
- an evolutionary history.
How will AIs do this?
Their cultures will be under massive pressure from the outside world,
right from the start.
Their societies will fail much more easily.
AI is hard in theory,
and will be even harder to implement in practice.
One of the reasons why is that the planet is full.
The planet is full of humans
- There is such a thing as an answer to the question
"How do they work?"
- They have a culture, a language, etc. already.
- What will we be able to do with these in the future?
Cognitive Science - the understanding of
our minds, and the minds of other animals,
with fully detailed causal models.
Mid-3rd millennium - We will become AI's
(if we want, in a free society)
- AI is part of Cognitive Science - the understanding of minds
- Long-term, AI is about us.
- Once you understand, you can change. You can copy.
You can transplant to a new medium.
You can do all the rest that we are familiar with from
non-brain human biology.
Do what you want to;
in a free society no one forces you to.
- Over the next few hundred years, we will understand ourselves,
and so become, in effect, AI's.
- No evidence that:
solving the unsolved problems in AI,
and socialising the AI machines in a culture that doesn't fail,
is easier than:
reading brains into a new medium
without understanding them.
- The former needs theoretical understanding.
The latter doesn't.
- The former needs the new culture to work.
The latter doesn't. The culture already exists.
- One big advance in a technology for reading the mind
without destroying it, and the latter would happen immediately.
Without the need for a proper Theory of the Mind at all!
Transplant to new components, and trust the brain's ability
to adapt and re-organise to smooth out the details.
Early 21st cent. - Animal-level AI
- Researchers have gone back to basics since the 1980s.
- Trying to define: emotion, fear, love, consciousness,
language, representations, memories
- by looking at the animal, the infant.
- Build a pre-linguistic creature that learns from its
environment and "understands" it in a physical sense.
One that tries to learn language rather than having language built in.
- Rather than a creature that uses high-level human language
that it doesn't understand to fool us into thinking there is
someone at home. - Eliza.
- Warwick comes from the "Animal-AI" world, but thinks
it has actually solved these issues.
Early 21st cent. - 2 types of AI systems
- Pre-linguistic robots and non-verbal learning systems.
Autonomous robots in unpredictable environments
(e.g. home, street, building site, farm)
as opposed to predictable (factory).
- Learning and adaptability key to survival in unpredictable domain.
- Constant interruptions. Modify plans on the fly.
- Make judgements on incomplete data.
- Have to learn some tasks from scratch, because humans
literally can't tell you
how to do them.
- Specialised learning and search systems for processing
data that it doesn't "really" understand.
Early 21st cent. - Animal-like robots
- Jobs in physical world requiring adaptation
to unpredictability but not requiring symbolic intelligence.
- Dangerous. Tedious. Impossible (underwater etc.).
- Alternatives, so that AI is not needed:
- Engineer the environment - factories.
- e.g. Engineer the traffic so cars communicate. Don't need sensors.
- Tele-robotics - Remote control.
No need to give it a brain.
But can you hire enough remote controllers? 24 hours a day.
NASA - a delay of 8 light minutes makes remote control impractical.
- Driving is a major cause of early death in the world today.
- Once autonomous cars are safer,
it would be criminal to allow humans to drive.
- Liberate old people, disabled, children,
busy business people.
- Liberate the drunk! And sleepy people.
Robots in the home?
- Robots are already in the home: the washing machine, the vacuum cleaner, the car.
We often have strange definitions of what we call a robot.
- Autonomous lawn mowers, vacuum cleaners exist.
- Network them. A house full of animals,
guarding you and your property.
- Can it get up the stairs?
Much depends on the price
of the hardware.
Does this begin to answer the unsolved problems?
Do we get a robotic chimpanzee by 2100?
Does this then begin to converge with the rest of Cognitive Science?
It will be an interesting century.
Conclusion - Intelligence flows on
Warwick's central theme is
"What happens when intelligence
is cracked / liberated?
What will we do with it?"
He imagines a hostile machine takeover,
but never explains where this hostility
is supposed to come from.
Are we simply to give up on human freedom and democracy
just because a new technology exists?
There is another view:
- Humans are smart, we'll understand the mind,
but we'll be civilized about it.
We'll value everybody, whether they want to be part of the new
experiments or not.
- Intelligence will be liberated into
humans (enhancements, full transplants),
machines, networks, software agents, robots.
- Intelligence / civilization will carry on,
flowing into new bodies (as it has always done).
- (Like all technology) It will find
that it is far, far easier to adapt to
and work with what already exists
- a planet full of
humans and cultures and societies and languages -
rather than reinvent the wheel (and, literally, everything else).