The dummy data points take the form:
{x,y}
I have a simple linear hypothesis function, h, which takes a guess at y given x:
h(x) = w0 + w1*x
where w0 and w1 are the weights I'm trying to tune.
The cost function is:
C(w) = 0.5 * (h(x) - y)^2 (summed over the dataset)
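Concretely, this is roughly what I mean (a minimal Python sketch; I'm assuming the dataset is just a list of (x, y) pairs and the weights live in a list w = [w0, w1]):

    def h(w, x):
        # hypothesis: h(x) = w0 + w1 * x
        return w[0] + w[1] * x

    def cost(w, dataset):
        # C(w) = 0.5 * (h(x) - y)^2, summed over the dataset
        return 0.5 * sum((h(w, x) - y) ** 2 for (x, y) in dataset)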
The update scheme for w in gradient descent, as I understand it, is:
wi += rate * (d/dwi)C(w)
Where (d/dwi)C(w) is the partial derivative of C with respect to the weight wi.
(d/dwi)C(w) = (h(x) - y) * xi (summed over the dataset), where xi is the part of the input that multiplies wi in h, i.e. x0 = 1 and x1 = x.
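In the same sketchy Python terms (the helper name and loop structure here are just my reading of that formula, not anything definitive):

    def h(w, x):
        return w[0] + w[1] * x  # same hypothesis as above

    def dC_dwi(w, i, dataset):
        # partial derivative of C with respect to weight wi:
        # (h(x) - y) * xi, summed over the dataset,
        # with x0 = 1 (the constant input for w0) and x1 = x
        total = 0.0
        for (x, y) in dataset:
            xi = 1.0 if i == 0 else x
            total += (h(w, x) - y) * xi
        return total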
So what I'm trying, unsuccessfully, is updating each weight component like this (some pseudocode):
    for i = 0 to numWeights:
        sum = 0
        for j = 0 to numDataPoints:
            sum += ( h( dataset[j].x ) - dataset[j].y ) * dataset[j].x
        weights[i] += rate * sum

Obviously I'm missing something huge, as the weights explode to infinity very quickly, even with relatively small values of rate.

Any help is appreciated!