
Representing decision trees as 'genes'

Started by July 08, 2007 06:50 PM
24 comments, last by Timkin 17 years, 4 months ago
I have been working on a design that uses decision trees built from nodes of logical operators (and, or, not), specific actions, and descriptors. At the end of the day, everything evaluates to a 'true' or 'false' value to determine whether an action should be taken. The plan was to create this tree randomly, choosing ands/ors/nots at random to link together the actions and descriptors. The tree length would be arbitrary (as in, random, but with the probability of the tree growing decreasing exponentially as the tree gets bigger).

Now my issue is this: each one of these decision trees represents a possible thought process, and I am trying to optimize that thought process. My first thought was 'genetic algorithms'. I figured I could use constant values to represent the specific actions and logical operators to create a gene sequence (simply storing the binary tree in an array for mating and mutations). The only issue here is that with the trees being arbitrarily long, mating would be impossible -- unless I capped the size at N levels deep and just made an array long enough, using the value 0 for children of a leaf.

The other possibility was this: for each tree created, parse all subtrees into parts and store them in a 'parts' drawer. For each part in a tree, that part's weight value would be equal to the aggregate fitness values of all trees it is in. The higher the weighting on a specific part, the better the chance of that part being selected for the next generation. This implementation seems like it would be far more costly memory-wise than the first, and a bit harder to introduce 'mutations' into (though I suppose it could be done).

Does anybody have any ideas on how to tackle this beast? I know it is a bit obscure, but for the sake of the project I am working on, I can't really say much more. Thanks!
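For illustration, the random growth scheme described above (probability of adding another operator level decaying exponentially with depth) might be sketched like this; the node vocabulary, decay constant, and function names are invented for the example, not taken from the original design:

```python
import random

# Hypothetical node vocabulary: 'and' and 'or' take two children,
# 'not' takes one, and leaves are opaque action/descriptor labels.
OPERATORS = {"and": 2, "or": 2, "not": 1}
LEAVES = ["action_a", "action_b", "descriptor_x", "descriptor_y"]

def random_tree(depth=0, grow_prob=0.9, decay=0.5):
    """Grow a random expression tree. The chance of adding another
    operator level decays exponentially with depth, so deep trees
    become increasingly unlikely (but depth is not hard-capped)."""
    if random.random() < grow_prob * (decay ** depth):
        op = random.choice(list(OPERATORS))
        children = [random_tree(depth + 1, grow_prob, decay)
                    for _ in range(OPERATORS[op])]
        return (op, children)
    return random.choice(LEAVES)
```

With these constants the expected growth probability at depth d is 0.9 * 0.5^d, so trees stay shallow on average while still occasionally growing deep.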
Well, the first thing I'd do is search for terms like "genetic algorithm" and "decision tree" on Google. Doing that brings up some papers available online about approaches that have been tried in using genetic algorithms to optimize decision trees.

Usually with genetic algorithms on graphs or trees, there are mutation operators designed to act on entire subtrees. Altering a single node allows for optimization of that particular node, but performing crossover on nodes that share a common position in a binary tree is almost pointless, since different trees might be using that part of the subtree for entirely different purposes. I think I have seen papers that discuss tracking how effectively certain subtrees or subgraphs are performing, in the context of evolving neural network topologies.
Yeah. Now I feel dumb. Google seems to have plenty of the answers I am looking for. Thanks!
I seem to recall that one of our members (kirkd) has done some work on evolving decision trees. You might consider sending him a pm.

Cheers,

Timkin
Boolean Algebra (pdf) might be useful for simplifying random expressions.

Summary: ( OR = ∨, AND = ∧, NOT = ¬ )

  • OR is closed: ∀ x, y ∈ B, x ∨ y ∈ B
  • 0 is the identity for OR: x ∨ 0 = 0 ∨ x = x
  • OR is commutative: x ∨ y = y ∨ x
  • OR distributes over AND: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
  • AND is closed: ∀ x, y ∈ B, x ∧ y ∈ B
  • 1 is the identity for AND: x ∧ 1 = 1 ∧ x = x
  • AND is commutative: x ∧ y = y ∧ x
  • AND distributes over OR: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
  • Complement (a): x ∨ ¬x = 1
  • Complement (b): x ∧ ¬x = 0

Theorems:
  1. x ∨ 1 = 1
  2. x ∧ 0 = 0
  3. ¬0 = 1
  4. ¬1 = 0
  5. x ∨ x = x
  6. x ∧ x = x
  7. ¬(¬x) = x
  8. x ∨ (x ∧ y) = x
  9. x ∧ (x ∨ y) = x
  10. x ∨ (¬x ∧ y) = x ∨ y
  11. x ∧ (¬x ∨ y) = x ∧ y
  12. x ∨ (y ∨ z) = (x ∨ y) ∨ z
  13. x ∧ (y ∧ z) = (x ∧ y) ∧ z
  14. ¬(x ∨ y) = ¬x ∧ ¬y
  15. ¬(x ∧ y) = ¬x ∨ ¬y
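As a quick illustration, a few of these theorems (double negation, De Morgan's laws, idempotence) can be applied mechanically to prune a randomly generated expression tree. This is only a one-pass sketch, not a complete simplifier, and the tuple-based expression format is an assumption for the example:

```python
def simplify(expr):
    """One-pass simplifier applying a few of the theorems above:
    double negation (7), De Morgan (14, 15), and idempotence (5, 6).
    Expressions are nested tuples: ('and', a, b), ('or', a, b),
    ('not', a), or variable-name strings."""
    if isinstance(expr, str):
        return expr
    op = expr[0]
    if op == "not":
        inner = simplify(expr[1])
        if isinstance(inner, tuple):
            if inner[0] == "not":              # theorem 7: ¬(¬x) = x
                return inner[1]
            if inner[0] == "or":               # theorem 14: ¬(x ∨ y) = ¬x ∧ ¬y
                return ("and", simplify(("not", inner[1])),
                               simplify(("not", inner[2])))
            if inner[0] == "and":              # theorem 15: ¬(x ∧ y) = ¬x ∨ ¬y
                return ("or", simplify(("not", inner[1])),
                              simplify(("not", inner[2])))
        return ("not", inner)
    left, right = simplify(expr[1]), simplify(expr[2])
    if left == right:                          # theorems 5, 6: x ∨ x = x, x ∧ x = x
        return left
    return (op, left, right)
```

Running simplified trees through the fitness function instead of raw ones would keep the population from bloating with redundant nodes.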
--"I'm not at home right now, but" = lights on, but no ones home
How is your data represented? Do you have a set of pairs, say (X, True), (Y, True), (Z, False) where X, Y, Z are inputs and the second component of your pair is the output?

There's probably a better way to do this than using GA's.
Well, because I am utilizing this method for pattern matching (in a broad sense of the word). For example, I want to be able to evolve a tree that can decide whether or not a certain number is within a range. So let's say I give it the range -10 to 10, and then an array like [1, 5, -11, 3, 7, 9]. I want it to go through those numbers and give back true/false (1/0) based on whether or not it believes each number to be within the range. The fitness function of the tree is basically how many it got right -- and the size of the tree (with much higher weighting on the number correct).
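A fitness function of that shape (correctness dominating, with a small size term) could be sketched as follows; the penalty constant and parameter names are assumptions for illustration:

```python
def fitness(tree_predict, low, high, samples, size, size_penalty=0.01):
    """Hypothetical fitness: number of correct in/out-of-range calls,
    minus a small penalty per node, so correctness dominates size.
    `tree_predict` stands in for the evolved tree's boolean classifier."""
    correct = sum(1 for x in samples
                  if tree_predict(x) == (low <= x <= high))
    return correct - size_penalty * size
```

For the example above, a perfect 5-node tree on [1, 5, -11, 3, 7, 9] would score 6 correct minus a 0.05 size penalty.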

So when you say 'how is your data represented' I am a bit confused. I know you probably think a neural network is more appropriate for this type of pattern classification and learning, but I have more than oversimplified what I am doing (because I cannot say much more).

Thanks.
No, I was thinking of building the tree directly, not having to evolve it, using the decision tree learning algorithm. There's probably an outline of it in AI:AMA.
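For reference, the greedy attribute-selection step at the core of that decision-tree learning algorithm picks the attribute with the highest information gain at each split. A rough sketch (the data layout and names are illustrative, not from AI:AMA):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def best_attribute(examples, attributes):
    """Pick the attribute with the highest information gain.
    `examples` are (features_dict, label) pairs."""
    base = entropy([lbl for _, lbl in examples])

    def gain(attr):
        remainder = 0.0
        for v in {f[attr] for f, _ in examples}:
            subset = [lbl for f, lbl in examples if f[attr] == v]
            remainder += len(subset) / len(examples) * entropy(subset)
        return base - remainder

    return max(attributes, key=gain)
```

The tree is then built recursively: split on the best attribute, recurse on each subset, and stop when a subset's labels are uniform.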

However, if you're sure you want to use GAs, I can see possible ways of getting around the problem of different-sized trees. One obvious way would be to cap the depth of the tree and keep it constant. Of course, the complexity of your boolean function will determine the size of your trees to some extent. I think this is the method Holland (?) used when evolving Lisp programmes, although I may be misremembering.
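A sketch of that depth-capping idea: flatten each tree into a fixed-length, heap-style array (child of index i at 2i+1 and 2i+2), padding with 0 as the original poster suggested, so all genomes align for crossover. The encoding details here are my own assumptions:

```python
def to_array(tree, max_depth=4):
    """Flatten a binary expression tree into a fixed-length heap-style
    array so trees of different shapes align position-for-position.
    Nodes are (label, left, right) tuples or bare leaf labels;
    0 marks 'no node here'. Trees deeper than max_depth are rejected."""
    size = 2 ** max_depth - 1
    arr = [0] * size

    def fill(node, i):
        if i >= size:
            raise ValueError("tree exceeds max_depth")
        if isinstance(node, tuple):
            label, left, right = node
            arr[i] = label
            fill(left, 2 * i + 1)
            fill(right, 2 * i + 2)
        else:
            arr[i] = node

    fill(tree, 0)
    return arr
```

Because every genome has the same length, standard one-point or uniform crossover applies directly, at the cost of wasting slots on sparse trees.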
Thanks for remembering, Timkin!

I have indeed worked on evolving decision trees. I won't go into the extreme details of it all here (unless you ask), but the general idea I used was to represent the tree as linked nodes. Each node in the tree has three links - parent, left_child, and right_child. The root node of the tree has parent set to NULL, and each terminal (leaf) has left_child and right_child set to NULL. This type of structure makes parsing the tree very easy. Regarding crossover or mating of two trees, all you have to do is choose a node from each tree at random and then swap their parent linkages. This will effectively swap the two subtrees.
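A minimal sketch of that node structure and subtree-swap crossover (field and function names here are my own, not taken from the paper):

```python
import random

class Node:
    """Tree node with parent/child links, as described above."""
    def __init__(self, label, left=None, right=None):
        self.label = label
        self.parent = None
        self.left = left
        self.right = right
        for child in (left, right):
            if child is not None:
                child.parent = self

def nodes(root):
    """All nodes in the subtree rooted at `root`."""
    out = [root]
    for child in (root.left, root.right):
        if child is not None:
            out.extend(nodes(child))
    return out

def crossover(a, b):
    """Swap two randomly chosen subtrees between trees a and b by
    relinking their parent pointers (and the parents' child slots)."""
    na, nb = random.choice(nodes(a)), random.choice(nodes(b))
    pa, pb = na.parent, nb.parent
    for parent, old, new in ((pa, na, nb), (pb, nb, na)):
        if parent is not None:
            if parent.left is old:
                parent.left = new
            else:
                parent.right = new
    na.parent, nb.parent = pb, pa
```

Note that if a chosen node happens to be a root, the swapped-in subtree simply becomes the new root of that tree (its parent is None afterward).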

I'm more than happy to answer any questions you may have, if I can. As I mentioned, I've been doing decision tree evolution for a long time, so I've probably bumped up against some of the obstacles you'll be bumping up against.

-Kirk

Just in case you're interested, here's a link to the publication I got describing this work: http://pubs.acs.org/cgi-bin/abstract.cgi/jcisd8/2004/44/i03/abs/ci034188s.html
I would have a good look at first-order logic. You could also build decision trees from an example set. Both methods, and others for algorithmically constructing good decision trees, can be found in "Artificial Intelligence: A Modern Approach" by Russell and Norvig.

