2D and 3D puzzle solvers
Hi, I have two similar AI projects I'm working on.
I'm trying to build a solver for a 2D sliding puzzle and a 3D Rubik's Cube. I want these to solve the way a human would, so they don't need to be super efficient.
With both puzzles, most humans would solve the first row, then work their way down to the bottom.
I assume that with the 2D puzzle I can use a 2D array to store everything. But how do I move the pieces toward the goal? And how do I deal with the last two rows, which can require a bit of shuffling?
I have no idea of the best way to store a Rubik's Cube (I haven't done much 3D math), and also no idea how to handle the rotations of the cubelets (which I assume will depend on the data structure I use). I know how to solve a cube with the basic algorithms; I just need a starting point.
I assume there is some overlap in the theory between the two.
As for the Rubik's Cube, I would define 6 arrays (one for each face), each 3x3 or 1x9 (one element for each square on the face), and then code up (and debug) the transformations on those arrays. Only then would I have a framework ready for some Rubik's AI.
Seems to me that the Rubik's Cube can be reduced to only 12 meaningful transformations (4 for each axis).
Remember, don't confuse representation with visualization: any arbitrary representation can be visualized (rendered) in any arbitrary way.
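A minimal sketch of that face-array idea in Python, with one 90-degree turn worked out. The face labels, the sticker indexing, and the particular top-row cycle are my assumptions about orientation, not the only valid convention:

# Sticker indexing per face (row-major):
#   0 1 2
#   3 4 5
#   6 7 8
FACES = ["U", "D", "F", "B", "L", "R"]

def solved_cube():
    # each face starts uniformly colored; one character per sticker
    return {f: [f] * 9 for f in FACES}

def rotate_face_cw(stickers):
    # 90-degree clockwise rotation of a single face's own 9 stickers
    return [stickers[6], stickers[3], stickers[0],
            stickers[7], stickers[4], stickers[1],
            stickers[8], stickers[5], stickers[2]]

def turn_U(cube):
    # clockwise U turn: spin the U face, then cycle the top rows of the
    # four side faces (F -> L -> B -> R -> F, viewed from above)
    c = {f: s[:] for f, s in cube.items()}
    c["U"] = rotate_face_cw(cube["U"])
    c["L"][0:3] = cube["F"][0:3]
    c["B"][0:3] = cube["L"][0:3]
    c["R"][0:3] = cube["B"][0:3]
    c["F"][0:3] = cube["R"][0:3]
    return c

The other five quarter-turns follow the same pattern with different adjacent strips, and a 180 turn is just two quarter-turns.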
You can use a well-known search algorithm (like A*) to solve both puzzles.
Define the nodes as positions and the transitions (edges) as moves leading to the next positions. You don't actually need a heuristic to solve the puzzles, but a reasonable one will help you solve them faster. A good heuristic may be the sum of the distances of the tiles from their desired (solved) positions.
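To make that concrete, here is a minimal A* sketch for the 8-puzzle using the sum-of-Manhattan-distances heuristic just described; the tuple state encoding and the function names are my own illustrative choices:

import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank; row-major 3x3 layout

def manhattan(state):
    # sum of Manhattan distances of each tile from its solved square
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue            # the blank doesn't count
        goal = tile - 1         # tile t belongs at index t - 1
        dist += abs(i // 3 - goal // 3) + abs(i % 3 - goal % 3)
    return dist

def neighbors(state):
    # each move slides one adjacent tile into the blank square
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start):
    # open list keyed by f = g + h; closed set of expanded states
    open_heap = [(manhattan(start), 0, start, [start])]
    closed = set()
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == GOAL:
            return path         # list of states from start to goal
        if state in closed:
            continue
        closed.add(state)
        for nxt in neighbors(state):
            if nxt not in closed:
                heapq.heappush(open_heap,
                               (g + 1 + manhattan(nxt), g + 1, nxt, path + [nxt]))
    return None                 # unsolvable configuration

For example, astar((1, 2, 3, 4, 5, 6, 0, 7, 8)) returns the three-state path that slides 7 and then 8 into place.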
Here is an example of a puzzle solved by A*.
r a f t
A* can work for both puzzles, but for the Rubik's cube it is not a very good option. The primary issue is that at each node or state of the cube, you have 18 possible rotations to consider: clockwise, counter-clockwise, and 180 for each of the six faces. After the first move this number will drop to 15 if you prevent turning the same face twice. You can drop it further if you force some ordering on the rotations. For example, a left face rotation followed by a right face rotation is equivalent to a right face rotation followed by a left face rotation, so you don't need to consider both. This will effectively reduce your options to around 13-14 from each state on average.
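To illustrate that pruning, a small Python sketch; the move labels and the alphabetical tie-break between commuting opposite faces are illustrative choices, not the only valid ones:

FACES = ["U", "D", "F", "B", "L", "R"]
OPPOSITE = {"U": "D", "D": "U", "F": "B", "B": "F", "L": "R", "R": "L"}
TURNS = ["", "'", "2"]  # clockwise, counter-clockwise, 180

def successor_moves(prev_face=None):
    # 18 moves from the root; after that, skip the face just turned,
    # and allow commuting opposite-face pairs in only one order
    for face in FACES:
        if face == prev_face:
            continue  # turning the same face again is redundant
        if prev_face is not None and OPPOSITE[face] == prev_face and face < prev_face:
            continue  # e.g. allow L-then-R but prune R-then-L
        for turn in TURNS:
            yield face + turn

From the root this yields 18 moves; after any move it yields 15, or 12 when the previous face was the "larger" of an opposite pair, which is where the 13-14 average comes from.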
This is all good and fine, but with a branching factor of 13, A* will probably consume your computer's memory before you ever come close to a solution. The maximum number of moves necessary to solve an arbitrary cube should be around 21, with about 16 or 17 on average. This turns into an enormous number of states to monitor and keep track of within A*.
Instead, give IDA* (Iterative Deepening A* - see below) or Fringe Search a try. They are both very easy to code up, and in my experience I get about a 1000X speed-up compared to A*.
As for heuristics, you will definitely want a good one for the Rubik's Cube, and I suspect you will benefit tremendously from one in the sliding puzzle too. For the Rubik's Cube, good heuristics are the number of moves required to get all the corner pieces properly positioned and oriented, or the number of moves required to get all the edges positioned/oriented, or the max of these two.
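A tiny sketch of that last "max of the two" combination; corner_h and edge_h are hypothetical stand-ins for the corner and edge move-count lookups, which would typically be precomputed tables:

def max_heuristic(h1, h2):
    # the max of two admissible heuristics is still admissible, and it
    # dominates (is at least as informed as) either component alone
    return lambda state: max(h1(state), h2(state))

# usage sketch: h = max_heuristic(corner_h, edge_h)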
-Kirk
IDA* pseudo code
root = start node
threshold = root's f()
perform a depth-first search starting at root
if goal not found, set threshold = minimum f() found that is higher than the current threshold
repeat the depth-first search starting at root

depth-first search(node):
    if node = goal, return goal found
    if node's f() > threshold, return goal not found
    else for each child of node, while goal not found, depth-first search(child)
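And a runnable rendering of that pseudocode, sketched in Python; the parameter names are mine, and unit move costs are assumed:

import math

def ida_star(start, h, neighbors, is_goal):
    # threshold starts at f(root); g(root) = 0, so that is just h(start)
    bound = h(start)
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f            # over the threshold: report this f value
        if is_goal(node):
            return True
        minimum = math.inf      # smallest f seen beyond the threshold
        for child in neighbors(node):
            if child in path:   # skip trivial cycles on the current path
                continue
            path.append(child)
            result = dfs(g + 1, bound)  # unit cost per move (assumption)
            if result is True:
                return True     # path now holds start..goal
            path.pop()
            minimum = min(minimum, result)
        return minimum

    while True:
        result = dfs(0, bound)
        if result is True:
            return path
        if result == math.inf:
            return None         # search space exhausted, no solution
        bound = result          # next threshold: min f that exceeded it

Note that the only state it keeps between iterations is the current path; plugging in the 8-puzzle's manhattan, neighbors, and a goal test from the earlier sketch is enough to try it.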
kirkd, I'm afraid you're quite right about your concerns :/
I've made a quick A* implementation of the slider puzzle. It performs fairly well for an 8-puzzle (3x3) but fails (takes too long or runs out of memory :/ ) for larger ones. I used the sum-of-Manhattan-distances heuristic as I described above.
The search space is too big for a large sliding puzzle, but A* will find its way quickly if the heuristic is good. I can't think of a better heuristic at the moment - can you?
The best heuristic I've heard of for the sliding puzzles is the sum of Manhattan distances for the tiles. I'm not sure I've seen anything that beats it.
If you've gotten A* running, it takes comparatively little effort to get IDA* running, too. I would love to hear a comparison between them for your test cases. I used both (and Fringe Search) to solve a 2x2x2 Rubik's cube. In the time A* took to solve 40-50 randomly scrambled cubes, I was able to solve ~5000 using IDA*, and even more with Fringe Search. A single cube would take anywhere from 5 seconds to a few minutes with A*, depending on the solution depth, while IDA* took less than a second in all cases. It doesn't seem intuitive that the repeated searches in IDA* would be beneficial, but it comes down to the cost of maintaining and searching the open and closed lists.
-Kirk
Quote: Original post by kirkd
The primary issue is that at each node or state of the cube, you have 18 possible rotations to consider: clockwise, counter-clockwise, and 180 for each of the six faces.
I would call this 12; the 180 is simply TWO rotations clockwise or counter-clockwise. It is probably best to specifically alter the cost of a repeated move based on some tunable constant, rather than hard-coding the second leg as free.
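A minimal sketch of that tunable-cost idea; the constant's value and the function name are illustrative only:

REPEAT_COST = 0.8  # tunable: a second quarter-turn of the same face is
                   # discounted (it completes a 180) but not free

def move_cost(move, prev_move):
    # move and prev_move are face labels such as "U" or "R"
    if prev_move is not None and move == prev_move:
        return REPEAT_COST
    return 1.0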
Yep, you could certainly do that. The issue then becomes that you trade a branching factor of 18 for one of 12, but you also get an increased solution depth. So in the end it nearly balances out: a shorter solution in a more branched search space vs. a longer solution in a less branched search space. In my experience with 2x2x2 cubes, about 1/3 of the moves are 180 turns.
-Kirk
Quote: Original post by kirkd
If you've gotten A* running, it takes comparatively little effort to get IDA* running, too. I would love to hear a comparison between them for your test cases. I used both (and Fringe Search) to solve a 2x2x2 Rubik's cube. In the time A* took to solve 40-50 randomly scrambled cubes, I was able to solve ~5000 using IDA*, and even more with Fringe Search. A single cube would take anywhere from 5 seconds to a few minutes with A*, depending on the solution depth, while IDA* took less than a second in all cases. It doesn't seem intuitive that the repeated searches in IDA* would be beneficial, but it comes down to the cost of maintaining and searching the open and closed lists.
I've made some tests with IDA* on the 8-puzzle, and A* definitely performs better. But I'm not quite sure I implemented IDA* properly; indeed, I can't understand how IDA* can perform better than A* on this kind of puzzle. As far as I understood:
1. First, IDA* makes a depth-first search with an estimated cutoff depth.
2. If a solution is not found, the cutoff depth is increased according to some criterion and the depth-first search is started from scratch.
3. Repeat 2 until a solution is found.
When a depth-first search returns without a solution, it means we have visited all the nodes up to our cutoff depth.
As the depth-first search starts from scratch in each iteration, IDA* visits some nodes several times. From what I've read, people say this is no big deal since, because of the branching factor, there are many more nodes at the leaves of the search. (I really doubt it.) I also read that the main advantage of IDA* is that it doesn't need much memory, as it doesn't maintain open and closed lists like A*.
Is it OK up to this point?
In a single depth-first search we need a way to mark the visited nodes; otherwise the cycling is enormous for this kind of combinatorial puzzle. That also means we need to cache the nodes: when expanding a node, some of its neighbours may have been created before, and we should find and use those, otherwise we cannot mark the visited nodes.
So? We still hold all the nodes in memory, just as A* does, and furthermore we visit some nodes over and over.
Am I missing something?
r a f t
Edit: I should correct what I said: it's the "no big deal" part I doubt; no doubt there are many more nodes at the leaves ;)
[Edited by - raft on November 14, 2007 6:31:11 PM]
IDA* doesn't need to explicitly store 'visited' nodes and so on; that's implicit in the algorithm, in much the same way that a C++ function doesn't need to know where to 'return' to when it finishes - it just happens automatically. A depth-first traversal tries each option in order, so each possible state is evaluated just once. And at any point, you only need to hold the current prospective path in memory. You don't need to create neighbour nodes with a view to examining them later - you create them one at a time when you intend to examine them, and discard them when you finish.
Also, there's never any 'need' to look for neighbours that have already been visited, since technically speaking two different ways of reaching the same state are still different paths. It just happens that some applications, e.g. many types of pathfinding, like to collapse these into one state because they know they can then ignore the less efficient of the two. This isn't the case in all domains, and it's really just a specific implementation detail rather than part of the algorithm as such.
The thing with the branching factor is this: even if your branching factor is as low as 2, leaf nodes still outnumber branch nodes (by 1!), and if your branching factor is more like 4 to 10, the ratio is much higher - with branching factor b, the deepest level holds roughly (b - 1) times as many nodes as all shallower levels combined, so re-searching the interior is comparatively cheap. There is a bit of waste where you're re-comparing stuff, but typical implementations of A* have other forms of waste in having to maintain sorted lists or hashtables, etc.
IDA* really does do some things that appear inefficient on an algorithmic level, but it may well make up for this in practice by not having to maintain lists, allocate memory, etc. I hadn't even thought about it myself until kirkd pointed it out in a recent thread, but there are definitely some theoretical gains to be had there in many situations.