## Previously

Triangulating a graph. Is it me or does that just mean getting rid of chordless squares (yes: no chordless cycles of four or more vertices)? You do it by eliminating vertices one at a time, adding fill-in edges between the neighbours of each vertex you eliminate.
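A minimal sketch of elimination-based triangulation, under assumptions not stated in the notes (graph as a dict of neighbour sets, an elimination order given up front). Eliminating a vertex connects all of its remaining neighbours; the edges added along the way are the fill-ins.

```python
def triangulate(adj, order):
    """adj: dict vertex -> set of neighbours; order: elimination order.
    Returns the set of fill-in edges added."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    fill_ins = set()
    for v in order:
        nbrs = adj[v]
        # connect every pair of v's remaining neighbours
        for a in nbrs:
            for b in nbrs:
                if a != b and b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill_ins.add(frozenset((a, b)))
        # then remove v from the graph
        for a in nbrs:
            adj[a].discard(v)
        del adj[v]
    return fill_ins

# A 4-cycle A-B-C-D has no chord; eliminating A first forces the fill-in B-D.
square = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"A", "C"}}
print(triangulate(square, ["A", "B", "C", "D"]))  # the single fill-in edge B-D
```

The original graph plus the returned fill-in edges is triangulated (chordal); a different elimination order can give a different, possibly larger, set of fill-ins.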

Where is this heading? **Join forests**, which let us calculate marginal probabilities more efficiently than running variable elimination from scratch for every query.

## Join Forests

The cliques of a triangulated graph can be made into a join forest. In graph terms, a forest is a collection of trees, and we know what a tree is. Most of the forests we’re going to see will actually just have one tree.

### How are we going to construct these things?

(Start by) doing it by eye in simple cases. Nodes of the join forest correspond to cliques of the graph. Mostly we’re going to be working with hypergraphs, since that is computationally more efficient; specifically, *decomposable hypergraphs*.

A hypergraph is decomposable if its most reduced form is a clique hypergraph of a decomposable (i.e. triangulated) graph.

### Graham’s algorithm for variable elimination

- Take a hypergraph.
- Delete a vertex that appears in only one hyperedge.
- Delete a hyperedge that is contained in another hyperedge.
- Repeat until neither deletion applies.

A hypergraph is decomposable iff this deletes everything.
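The reduction above can be sketched in a few lines; this is an assumed implementation (hypergraph as a list of vertex sets), not code from the lecture. It applies the two deletion rules until neither fires, and the hypergraph is decomposable exactly when nothing is left.

```python
def graham_reduce(hyperedges):
    """Repeatedly (a) delete vertices that appear in only one hyperedge and
    (b) delete a hyperedge contained in another hyperedge.
    Returns what is left; empty iff the hypergraph is decomposable."""
    edges = [set(e) for e in hyperedges if e]
    changed = True
    while changed:
        changed = False
        # rule (a): delete any vertex appearing in exactly one hyperedge
        counts = {}
        for e in edges:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        for e in edges:
            lonely = {v for v in e if counts[v] == 1}
            if lonely:
                e.difference_update(lonely)
                changed = True
        # rule (b): delete one hyperedge contained in another (or now empty)
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                del edges[i]
                changed = True
                break
    return edges

print(graham_reduce([{"A", "B"}, {"B", "C"}]))              # → [] (decomposable)
print(graham_reduce([{"A", "B"}, {"B", "C"}, {"A", "C"}]))  # stuck: not decomposable
```

The second example is the classic cyclic hypergraph: no vertex is in only one hyperedge and no hyperedge contains another, so the reduction gets stuck immediately.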

This will construct a *join tree* for us. What’s one of them? It’s a tree whose nodes correspond to the cliques (hyperedges) of a hypergraph, with the property that if a variable belongs to two nodes (e.g. ‘TbOrCa’ in the Asia example), it also belongs to every node on the path between them. This is easier to see graphically. It’s known as the ‘join property’ (also called the running intersection property).
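The join property is mechanical to check, which may make it easier to see than a formal definition. A small assumed helper (not from the slides): for every variable, the tree nodes whose cliques contain it must form a connected subtree.

```python
from collections import deque

def has_join_property(cliques, tree_edges):
    """cliques: dict node -> set of variables; tree_edges: list of (node, node).
    True iff, for each variable, the nodes containing it are connected."""
    nbrs = {n: set() for n in cliques}
    for a, b in tree_edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    for v in set().union(*cliques.values()):
        holders = {n for n, c in cliques.items() if v in c}
        # BFS among the holders starting from one of them
        start = next(iter(holders))
        seen, queue = {start}, deque([start])
        while queue:
            n = queue.popleft()
            for m in nbrs[n]:
                if m in holders and m not in seen:
                    seen.add(m)
                    queue.append(m)
        if seen != holders:  # some holder unreachable via other holders
            return False
    return True

# Two cliques sharing B, joined by an edge: the property holds.
print(has_join_property({0: {"A", "B"}, 1: {"B", "C"}}, [(0, 1)]))  # → True
```

Adding a third clique {A, C} on a path 0–1–2 breaks it: A sits in nodes 0 and 2 but not in node 1 between them.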

There’s a formal definition on the slides too.

Graham’s algorithm can be used to construct join forests too:

- Run it
- Whenever a hyperedge gets deleted by the containment rule, connect it in your join tree to the hyperedge that absorbed it.

## Probability Propagation in Join Forests

### What’s wrong with variable elimination for computing marginal probabilities?

Well, you can end up repeating a lot of computational work: running variable elimination from scratch for each marginal you want recomputes many of the same intermediate factors over and over.

Variable elimination creates a cluster tree; a join forest is essentially a cluster tree with the redundancy removed. So you can run variable-elimination-style message passing on a join forest. The point, I think, is that a single pass of messages calibrates every clique at once, so you get all the marginals without re-running elimination per query. (I lost the plot a bit here…)
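A toy sketch of the idea (my own example, not from the lecture): on a two-clique join tree {A,B}—{B,C} with separator {B}, one message in each direction calibrates both cliques, and the resulting beliefs agree with the brute-force joint marginal over B.

```python
from itertools import product

# Hypothetical potentials over binary variables: phi1(A,B) and phi2(B,C).
phi1 = {(a, b): [[0.9, 0.1], [0.4, 0.6]][a][b] for a, b in product((0, 1), repeat=2)}
phi2 = {(b, c): [[0.7, 0.3], [0.2, 0.8]][b][c] for b, c in product((0, 1), repeat=2)}

# Message clique 1 -> clique 2: sum phi1 over A (everything not in the separator).
msg12 = {b: sum(phi1[(a, b)] for a in (0, 1)) for b in (0, 1)}
# Message clique 2 -> clique 1: sum phi2 over C.
msg21 = {b: sum(phi2[(b, c)] for c in (0, 1)) for b in (0, 1)}

# Calibrated clique beliefs: local potential times the incoming message.
belief1 = {ab: phi1[ab] * msg21[ab[1]] for ab in phi1}
belief2 = {bc: phi2[bc] * msg12[bc[0]] for bc in phi2}

# Both beliefs give the same marginal over B as the brute-force joint.
marg_b_1 = {b: sum(v for (a, bb), v in belief1.items() if bb == b) for b in (0, 1)}
marg_b_2 = {b: sum(v for (bb, c), v in belief2.items() if bb == b) for b in (0, 1)}
brute = {b: sum(phi1[(a, b)] * phi2[(b, c)] for a in (0, 1) for c in (0, 1))
         for b in (0, 1)}
agree = all(abs(marg_b_1[b] - brute[b]) < 1e-9 and abs(marg_b_2[b] - brute[b]) < 1e-9
            for b in (0, 1))
print(agree)  # → True
```

With more cliques the saving compounds: two messages per tree edge give calibrated beliefs everywhere, whereas per-query variable elimination would redo most of those sums for each marginal.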