Combining NNs
A standard way in machine learning to improve the accuracy of neural network predictions is to combine several networks and use all of their outputs to arrive at a final answer.
The original recurrent networks (the level 0 generalizers) have been trained on a 1000-point time series. I want the outputs of these nets to be fed into a second network (the level 1 generalizer), which will give me a weighted average of the level 0 outputs; ideally this should be a better estimate than the one produced by any individual level 0 generalizer.
Now, to estimate the weights of the level 1 generalizer, should I use only the part of the series that was not used for level 0 training, or can I use something like points 500 to 1400, given that points 1 to 1000 were used for level 0 training and points 1001 to 1400 were never seen by the level 0 networks?
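For concreteness, here is a minimal sketch of the two splits being compared (the series itself is placeholder data, and all names are hypothetical). The concern with the overlapping split is that the level 0 nets have already fit points 500-999, so their outputs there look artificially good and would bias the level 1 weights.

```python
import numpy as np

# Hypothetical 1400-point series; indices follow the post:
# points 0-999 train the level 0 nets, 1000-1399 are held out.
series = np.sin(np.linspace(0.0, 50.0, 1400))  # placeholder data

level0_train = series[:1000]   # seen by the level 0 networks
level1_train = series[1000:]   # never seen by level 0 nets

# The alternative the post asks about: reuse part of the level 0
# training data (points 500-999) plus the held-out tail.
overlap_split = series[500:1400]
```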
Thanks.
PS: A lot of my questions in this forum go unanswered. Is it because I don't frame them properly? I am not a native speaker of English, so please do tell me if a question isn't clear, and I will try to rephrase it.
Quote: Original post by sidhantdash
PS: A lot of my questions in this forum go unanswered. Is it because I don't frame them properly? I am not a native speaker of English, so please do tell me if a question isn't clear, and I will try to rephrase it.
This forum is dedicated to game AI rather than the broader spectrum of academic/research AI (and some of its business applications), so most people probably aren't interested in discussing topics that aren't obviously related to game AI, or they simply don't have the knowledge. If you're not getting the answers you're looking for in a timely fashion, and you know the content isn't game related, I'd suggest trying one of the newsgroups (such as comp.ai.neural-nets) or a more technical forum.

Personally, while I do have expertise in these areas, the time I can devote online is very limited (and usually crammed in between things going on at work), so I prefer to spend it on game AI questions, since that is the purpose of this forum. I suspect the same is true of the other members who I know have knowledge in these areas. When I do find some spare time I will generally try to post something to non-game threads, but that usually means it takes a few days.
As for your current question: I personally would not use another ANN as your level 1 generaliser, since what you're really doing is estimating an a posteriori distribution over the performance of the level 0 generalisers, given training data. I'd consider a Bayesian Model Averaging approach, or any of the usual mixture-of-experts approaches. That's not to say you can't do it with an ANN; it just wouldn't be my first choice.
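A minimal sketch of the model-averaging idea, under assumed Gaussian prediction noise and equal model priors (function name and toy data are hypothetical): each level 0 net is weighted in proportion to the likelihood of its predictions on data used for the comparison, so nets that track the series well dominate the average.

```python
import numpy as np

def bma_weights(preds, targets, noise_var=1.0):
    """Posterior-style weight for each model, proportional to the
    Gaussian likelihood of its predictions (equal priors assumed).
    preds: (n_models, n_points) level 0 outputs; targets: (n_points,)."""
    sq_err = ((preds - targets) ** 2).sum(axis=1)
    log_lik = -sq_err / (2.0 * noise_var)
    log_lik -= log_lik.max()        # subtract max for numerical stability
    w = np.exp(log_lik)
    return w / w.sum()

# Toy check: a model matching the targets should get most of the weight.
targets = np.array([1.0, 2.0, 3.0])
preds = np.array([[1.0, 2.0, 3.0],   # accurate model
                  [0.0, 0.0, 0.0]])  # poor model
w = bma_weights(preds, targets)
combined = w @ preds                 # the weighted-average prediction
```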
If you do choose to do it with an ANN, train the level 1 generaliser on the outputs of the level 0 generalisers from the original training set.
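Whichever data you fit on, the level 1 combiner itself can be very simple. A sketch of a linear stacked fit (all data here is synthetic and hypothetical): solve a least-squares problem for the blending weights, so the combined prediction is never worse on the fitting data than any single level 0 net.

```python
import numpy as np

rng = np.random.default_rng(0)
targets = rng.normal(size=400)  # stand-in for the true series values

# Synthetic level 0 outputs: net A is less noisy than net B.
outputs = np.stack([targets + 0.1 * rng.normal(size=400),   # net A
                    targets + 0.5 * rng.normal(size=400)])  # net B

# Level 1 fit: minimise ||outputs.T @ w - targets||^2 over the weights w.
w, *_ = np.linalg.lstsq(outputs.T, targets, rcond=None)
blended = outputs.T @ w   # the stacked (level 1) prediction
```

On the fitting data the blend's residual cannot exceed that of either individual net, since each net is itself one point in the space the least-squares fit searches over.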
Cheers,
Timkin