Dr. Mark Humphrys, School of Computing, Dublin City University.

# Heuristic Search

Brute force: Uninformed search (depth-first, breadth-first) doesn't scale well to large problems.

Heuristic search: Informed search. Use problem-specific information.
Instead of systematically searching states in the order in which they present themselves:
Restrict the search by first choosing the branch that is "more likely" to lead to the goal.
How do we know which branch is "more likely" if we don't know where the goal is?

Heuristics give up the idea of ever doing an exhaustive search.
Heuristics give up the idea of guaranteeing to find the best solution.
Instead they try to do a smart search of a huge space in the limited time available.
Any heuristic can fail.

# Xs and Os again


### Reduce statespace

We already established there are 9! = 362,880 paths.

Symmetry reduction - Recognise that many states are mirror images of each other, and so don't have to be searched separately.
Symmetry reduction on the 1st level reduces the no. of paths to 3 × 8! = 120,960.
Symmetry reduction on the 2nd level reduces the no. of paths to 12 × 7! = 60,480.

Symmetry reduction may require a lot of thought and work by the human. But it is worth it if it reduces the state space.
For Xs and Os, or any 2D board, we could write an algorithm to do the symmetry reduction (rotate and mirror).
The domain might not be a board game: e.g. a 2D traffic-lights junction simulation could also be rotated.
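The rotate-and-mirror reduction can be automated. A minimal Python sketch, assuming the board is stored as a 9-character string (rows concatenated); the function names are my own:

```python
# Automatic symmetry reduction for a 3x3 board.
# Two states that rotate/mirror onto each other get the same canonical form,
# so a search need only expand one representative of each symmetry class.

def rotate(board):
    """Rotate the 3x3 board 90 degrees clockwise."""
    return "".join(board[r * 3 + c] for c in range(3) for r in (2, 1, 0))

def mirror(board):
    """Mirror the 3x3 board left-right."""
    return "".join(board[r * 3 + c] for r in range(3) for c in (2, 1, 0))

def canonical(board):
    """Smallest string among the 8 rotations/reflections of the board."""
    variants = []
    b = board
    for _ in range(4):
        variants.append(b)
        variants.append(mirror(b))
        b = rotate(b)
    return min(variants)

# All four corner openings collapse into one symmetry class:
print(canonical("X........") == canonical("........X"))  # True
```

With `canonical()` applied to every generated state, the 9 opening moves collapse to the 3 classes (corner, edge, centre) mentioned above.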


### Heuristic search

Consider the heuristic: "Number of winning opportunities (that contain at least one of my pieces)" in each situation:
Luger Fig 4.2
Always take the highest (or, if tied, any joint winner).
This always goes for the centre move first, which dumps 2/3 of the entire statespace (or 8/9 if we consider our original naive search).

On the next move, there are actually only 2 different replies the opponent can make:
Luger Fig 4.3
Depending on which reply the opponent makes, our heuristic tells us what to do next in each case. Oddly, in both cases, it mightn't be the move you think.
The circled numbers show the "Number of winning opportunities (that contain at least one of my pieces)".

Two ways of using the heuristic:

1. Use it to make the actual move. Don't search down multiple levels. Respond to the current state by comparing the "Number of winning opportunities" among the possible immediate moves, and taking the best move. Not AI search at all. Just a solution.
2. Use it to decide which child to search first. Use it to order the children and then do depth-first etc. searches as before.
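The first way (use the heuristic directly to pick the move) can be sketched in Python. Assumptions: the board is a 9-character string, and a "winning opportunity" is read as a line containing at least one of my pieces and none of the opponent's:

```python
# The "number of winning opportunities" heuristic for Xs and Os.
# Board: 9-character string ('X', 'O' or '.'), rows concatenated.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_opportunities(board, me, opp):
    """Lines that contain at least one of my pieces and are not
    blocked by the opponent (assumed reading of the heuristic)."""
    return sum(1 for line in LINES
               if any(board[i] == me for i in line)
               and all(board[i] != opp for i in line))

def best_move(board, me="X", opp="O"):
    """Take the move whose resulting state scores highest
    (ties broken arbitrarily). No multi-level search."""
    moves = [i for i in range(9) if board[i] == "."]
    return max(moves,
               key=lambda i: winning_opportunities(
                   board[:i] + me + board[i + 1:], me, opp))

print(best_move("........."))  # 4: the centre lies on 4 winning lines
```

On the empty board the centre (index 4) scores 4, a corner scores 3 and an edge scores 2, which is why the heuristic always opens with the centre.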


# Hill-climbing

"Take state with highest number of winning opportunities" heuristic is a Hill-climbing strategy.
Take the most promising immediate next step, without looking n steps ahead.
Imagine searching for highest point on a curve:

Always go uphill, never downhill, until stop, probably at a local optimum rather than the global optimum.
Unable to get temporarily into a worse state in order to get into better one after that.
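The uphill-only behaviour can be sketched on a 1-D "landscape" (a hypothetical toy example, not from the lecture):

```python
# Minimal hill-climbing on a 1-D landscape (a list of heights).
# From a start index, always move to the better neighbour; stop when no
# neighbour is better. Can get stuck on a local optimum.

def hill_climb(heights, pos):
    while True:
        neighbours = [p for p in (pos - 1, pos + 1)
                      if 0 <= p < len(heights)]
        best = max(neighbours, key=lambda p: heights[p])
        if heights[best] <= heights[pos]:
            return pos          # local optimum: no uphill neighbour
        pos = best

landscape = [1, 3, 4, 3, 2, 6, 9, 5]   # local peak at 4, global peak at 9
print(hill_climb(landscape, 1))  # 2: climbs to height 4, misses the 9
```

Started at index 1, the climber stops on the local peak (height 4) because it would have to go downhill through 3 and 2 to reach the global peak (height 9).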


N-puzzle game.
Example of where we might need to go "downhill".
To get to the goal, we might need to move some pieces temporarily out of position.

The local optimum problem:
Luger Fig 4.4
The largest number is best.
We have a parent (4) that is better than any of its children (3).
So we will stay at the parent (4) and miss the global optimum (5).

# 4.2 - Best-first search

OPEN - the current fringe of the search - states ordered by a heuristic estimate of goodness / closeness to the goal (the evaluation function).

Best-first search.
Image courtesy of Ralph Morelli.

The numbers show the heuristic estimate of the fitness of each state (lowest number is best).
When B is evaluated, its children E and F seem worse / further from the goal (our heuristic got it wrong, on either parent or child, or both).
E and F go into the sorted queue. C now has the lowest number, so we jump to there.
We jump around the graph. The next state to be evaluated may be at any level.
States actually expanded are in bold.
When state O is not the goal, we turn to the next in the queue - P.
We get backtracking when we go down an apparently promising but fruitless path.
This is less likely to get stuck in a local optimum than always taking the best-looking next step (hill-climbing): eventually some other state probably evaluates as best and we jump to that.
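The OPEN-list mechanism can be sketched with a priority queue. This is a generic sketch: the graph, heuristic and goal test below are toy placeholders, not Morelli's example:

```python
# Best-first search using a priority queue as the OPEN list.
# States are expanded in order of heuristic estimate h (lowest = believed
# closest to goal), so the search can jump between levels of the tree.

import heapq

def best_first_search(start, goal, successors, h):
    open_queue = [(h(start), start)]     # OPEN: the fringe, sorted by h
    closed = set()                       # states already expanded
    while open_queue:
        _, state = heapq.heappop(open_queue)
        if state == goal:
            return state
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_queue, (h(child), child))
    return None                          # OPEN empty: no goal found

# Toy example: walk the number line from 0 to 7, h = distance to goal.
found = best_first_search(0, 7,
                          successors=lambda n: [n - 1, n + 1],
                          h=lambda n: abs(7 - n))
print(found)  # 7
```

Because the whole fringe is kept sorted, an apparently bad state stays in OPEN and can be returned to later, which is exactly the backtracking behaviour described above.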

Note we could still get stuck down a path if the heuristic is very inaccurate. e.g. Go down a line where the heuristic says we are 2 steps from the goal forever, no matter how deep we go.

Q. How might you solve this? (How might you compensate for an inaccurate heuristic?)


# Evaluation function

The evaluation function f is a heuristic estimate of how good the state is (high f good) / the distance to the goal (low f good).
The design of heuristic evaluation functions is an empirical problem. One can spend a lot of time designing and re-designing them.
Often there is no obvious answer. e.g. Write a function that estimates, for any state x in chess, how far that state is from checkmate.
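Sometimes a good evaluation function does exist. For the N-puzzle mentioned earlier, one classic choice is total Manhattan distance (a standard heuristic, offered here as an illustration, not necessarily the one the lecture has in mind):

```python
# Evaluation function for the 8-puzzle: total Manhattan distance of every
# tile from its goal position (low f = believed close to goal).
# Board: tuple of 9 ints, rows concatenated, 0 = the blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def manhattan(state, goal=GOAL):
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue                     # don't count the blank
        goal_index = goal.index(tile)
        total += abs(index // 3 - goal_index // 3)   # row distance
        total += abs(index % 3 - goal_index % 3)     # column distance
    return total

print(manhattan(GOAL))                          # 0: goal state scores best
print(manhattan((2, 1, 3, 4, 5, 6, 7, 8, 0)))  # 2: tiles 1 and 2 swapped
```

Plugged into best-first search as h, this estimate is cheap to compute and never overestimates, but like any heuristic it can still mislead the search.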

Wikipedia: Sometimes I link to Wikipedia. I have written something In defence of Wikipedia. It is often a useful starting point but you cannot trust it. Linking to it is like linking to a Google search. A starting point, not a destination. I automatically highlight in red all links to Wikipedia and Google search and other possibly-unreliable user-generated content.