What to do with supervised learning?
I've been working on supervised learning for a non-ANN AI, but I'm not really sure what to do with it. Everything I've tried so far has worked out too easily, and I'm not sure how good my AI is since I don't have any difficult tasks to test it with. I have access to a grid I can use for training the AI, but all the test problems I've come up with can either be solved in real time on one computer or would need a training set too big to be practical on the grid. Any ideas?
Some test applets of what I've already tried.
February 14, 2006 07:42 AM
Make it find the shortest path out of a labyrinth.
Now put 20 of these in the labyrinth, blocking each other's passage, and still manage to get all 20 bots out of the labyrinth.
Then you will have many friends here :-)
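For a sense of what "solved" looks like, the single-bot version has a classic non-learning baseline: breadth-first search finds a shortest path on a grid. A minimal sketch (the maze format and coordinates here are made up for illustration) that a learned policy could be compared against:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a grid maze; '#' cells are walls.
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = ["....",
        ".##.",
        "...."]
path = shortest_path(maze, (0, 0), (2, 3))
```

The 20-bot version is much harder because the bots' shortest paths conflict, which is exactly where a learned coordination policy could earn its keep.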
Could you accept 24-bit RGB images as input? If so, there are lots of non-trivial tasks you can try, even unsolved problems:
Find a face in an image.
Recognize a face.
Classify each pixel as part of moving or part of a still object.
Recognize a room or an object.
Just playing with a couple of your applets, it would seem to me that you need to extend your AI to handle local optima better. Consider optimisation via simulated annealing rather than a GA for some of those tasks. For instance, in the pole balancing, the pole was balanced, but with a non-zero velocity of the cart. There are ways to respecify the objective function of the learning task to handle this, but it makes learning more difficult. Try keeping the learning task easy, but be prepared to learn, relearn, and relearn again, using the previous knowledge and experience to redefine the learning task slightly (so experience changes the objective function, rather than the programmer).
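The annealing idea can be sketched generically: accept worse states with a probability that shrinks as a "temperature" cools, so the search can climb out of local optima that trap a plain hill climber or GA. This is a minimal sketch, not tied to any of the demos; the toy objective, cooling rate, and step size are made-up values you'd have to tune:

```python
import math
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def anneal(objective, state, neighbor, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing (minimization).
    Worse candidates are accepted with probability exp(-delta / t),
    which lets the search escape local minima early on."""
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = objective(candidate) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if objective(current) < objective(best):
            best = current
        t *= cooling  # cool down: uphill moves become rarer over time
    return best

# Toy usage: minimize a bumpy 1-D function with several local minima
f = lambda x: x * x + 10 * math.sin(x)
result = anneal(f, 5.0, lambda x: x + random.uniform(-0.5, 0.5))
```

The annealer only needs an objective and a neighbor function, so it could be dropped behind the same interface as the existing GA for a direct comparison.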
Cheers,
Timkin
February 15, 2006 10:15 PM
The pole balancing problem has lots of extensions, like making the pole follow a pattern or balancing multiple poles (of different lengths, or stacked on top of each other) simultaneously.
Try creating an image parser that can recognize multiple fonts, alignments, etc.
To AP1 :
Pathfinding never seemed like something learning AI would be used for, since there are so many good non-learning algorithms... As for more image parsing, I'll probably look into that in about a month, when my grid membership expires. I wonder if I could create an AI to convert two images into a 3D representation and recognize 3D objects (is there already an algorithmic method for that?).
To Timkin:
I read up a little on simulated annealing, and I think I'll add some functions to my base class and try that out. I don't intend to mess with the pole balancer too much, since I just use it to compare my different AIs (and I'm a little tired of it), but the GA lunar lander has local-minimum issues too. How would I have the AI change its own task? I don't imagine it would be very difficult to define a new fitness function to make the pole balancer want to stay more still, but I wouldn't even know where to begin having the AI do it for me...
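The hand-written version of that fitness function seems straightforward enough; something like this, maybe, where balanced time is rewarded and leftover cart speed is penalized (the weight is a guess that would need tuning):

```python
def fitness(time_upright, final_cart_velocity, w=1.0):
    """Hypothetical pole-balancer fitness: reward time spent balanced,
    penalize residual cart speed, so 'balanced but drifting' scores
    worse than 'balanced and still'. w trades off the two terms."""
    return time_upright - w * abs(final_cart_velocity)

# Two runs that balance equally long; the nearly-still one scores higher:
drifting = fitness(100.0, 4.0)   # 100 - 4 = 96.0
still    = fitness(100.0, 0.2)
```

Having the AI *discover* a term like that penalty on its own is the hard part.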
I've got a lot of good ideas in general, but I'm still wondering what I could do on a grid (grid = really low memory, really high processing time). I'm going to present my work to some people, and I want to use supervised learning and GAs, and possibly simulated annealing now too.
I doubt anyone noticed, since it's at the bottom of the list and it requires you to do something before the AI runs, but the supervised learning lunar lander had a problem with its fitness function (it thought landing sideways was good). Well, it's fixed now.
I played with your demos and, not to be rude, but I'm surprised by your comment that the problems were too simple. In the Lunar Lander demo, after 5000 generations the best lander still plummets to the ground. In the function approximation demo, if I give it a simple cubic, it cannot accommodate it.
I think these are all very interesting problems and your demos are very nice, but I think that you're not quite done yet.
-Kirk
[Edited by - kirkd on February 18, 2006 11:02:49 AM]
First off, the lunar lander that has generations is not supervised learning. I was only talking about my supervised learning when I said it was too easy, and at the time I said that I hadn't finished the lunar lander, so I had very little. The function approximator is not parametric and rounds a little, so if you put two points on top of each other it can't find a function for them, because that isn't a function. Also, the code that draws the function connects the points every 10 pixels, so if the function goes up and comes back down really steeply it looks wrong. I'll get to fixing that right now.
edit: fixed. I also turned up the learning rate a little on the function class.
[Edited by - Alrecenk on February 18, 2006 2:43:23 PM]
Good point on the lunar lander; I didn't discriminate between it and supervised learning. That also makes sense about the function approximator. I did go back and set only a few points (4) for a cubic, and it got it. Once I start putting in more than that, it tends not to do so well.
Another option would be the image recognition world. Optical character recognition is one task for which you can find many data sets. For that matter, you could go to the UCI Machine Learning Repository (http://www.ics.uci.edu/~mlearn/MLRepository.html), where there are many, many data sets you could work with. Potentially not as interesting as things like the lunar lander, but still pattern recognition.
Another thing you might be interested in (as might anyone in the forum) is the Comparative Evaluation of Prediction Algorithms contest (http://www.coepra.org/). Here you can enter your best pattern recognition attempt on a standard dataset and compare it to how well others do. Some friends from my company and I have signed up.
-kirk
There are many hard problems for supervised learners.
Just check the UCI machine learning repository for a collection of standard training sets that are used in the machine learning community to compare different learning algorithms.
http://www.ics.uci.edu/~mlearn/MLSummary.html
If you think they are easy data sets, just try comparing your results with those obtained by a state-of-the-art machine learning method such as Support Vector Machines (SVMs).
Hope this gives you some ideas...
Moreover, if you want a hard problem, try the Toxicology one (http://www.predictive-toxicology.org/ptc/). It is a database of chemical molecules, and the supervised learner has to learn which of them can cause cancer.
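When comparing against stronger methods on data sets like these, it helps to run a trivial baseline first; if a learner can't beat 1-nearest-neighbour, the data set isn't "too easy". This sketch is not an SVM, just the simplest possible comparison point, and the tiny inline data set is made up for illustration:

```python
def nearest_neighbor(train, query):
    """1-NN classifier: return the label of the closest training
    example under squared Euclidean distance. A cheap baseline to
    benchmark any learner against on a labelled data set."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: dist2(example[0], query))[1]

# Toy UCI-style data: (feature vector, class label)
train = [((1.0, 1.0), 'A'), ((1.2, 0.9), 'A'),
         ((5.0, 5.0), 'B'), ((4.8, 5.2), 'B')]
label_1 = nearest_neighbor(train, (1.1, 1.0))  # 'A'
label_2 = nearest_neighbor(train, (5.1, 4.9))  # 'B'
```

The real data sets drop in the same way: each row's features become the tuple, the class column becomes the label.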