Nonlinear regression using the Gauss-Newton algorithm in C

Link

Ninja say what!?!
#1
Hey guys,

I'm being asked to program this up in C. If any of you know of any good references for the Gauss-Newton algorithm, please post them for me to look into. It'd REALLY help me out a lot.

Thanks guys.
 

Link

Ninja say what!?!
#2
OK... so I looked into the Newton-Raphson method for nonlinear regression, and it looks like I'll need to take a derivative. The expression I need the derivative of is:

\(
\left(\frac{\sum_{i=1}^{n}y_i \frac{x_i}{k+x_i}}{\sum_{i=1}^{n}(\frac{x_i}{k+x_i})^2}\right)\sum_{i=1}^{n}\left[ \left(\frac{x_i}{k+x_i} \right)\left( \frac{x_i}{(k+x_i)^2}\right)\right ]-\sum_{i=1}^{n}\left(y_i\frac{x_i}{(k+x_i)^2} \right)
\)

with respect to k (i.e. \(\frac{\partial }{\partial k}\)). I know Mathematica and Maple should be able to do this, but I don't have the software. Can someone please run this for me to get the derivative???
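In the meantime, one fallback I could use is to skip the symbolic derivative and approximate it numerically. A minimal C sketch, treating the expression above as g(k) and using a central difference for dg/dk (the data in main() are made up just to show the calls):

```c
#include <stdio.h>

/* Evaluate the expression above as g(k). */
static double g(double k, const double *x, const double *y, int n)
{
    double syu = 0.0, suu = 0.0, sud = 0.0, syd = 0.0;
    for (int i = 0; i < n; i++) {
        double u = x[i] / (k + x[i]);                 /* x_i / (k + x_i)   */
        double d = x[i] / ((k + x[i]) * (k + x[i]));  /* x_i / (k + x_i)^2 */
        syu += y[i] * u;   /* sum of y_i * u_i */
        suu += u * u;      /* sum of u_i^2     */
        sud += u * d;      /* sum of u_i * d_i */
        syd += y[i] * d;   /* sum of y_i * d_i */
    }
    return (syu / suu) * sud - syd;
}

/* Central-difference approximation of dg/dk. */
static double dg_dk(double k, const double *x, const double *y, int n, double h)
{
    return (g(k + h, x, y, n) - g(k - h, x, y, n)) / (2.0 * h);
}

int main(void)
{
    /* Made-up data purely to illustrate the calls. */
    double x[] = {1.0, 2.0, 4.0, 8.0};
    double y[] = {0.5, 0.8, 1.1, 1.25};
    double k = 1.0;
    printf("g(k) = %g, dg/dk ~= %g\n", g(k, x, y, 4), dg_dk(k, x, y, 4, 1e-6));
    return 0;
}
```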
 
#4
I just finished this problem, and I followed these two handouts pretty closely.

http://www.stat.berkeley.edu/classes/s243/nlin.pdf
http://web.itu.edu.tr/~msahin/mat202e/Answer#4.pdf

I took the problem as saying that we were allowed to implement the model equation and its partial derivatives (with respect to the parameters being estimated) as functions in C. (That is, we could just calculate the partials by hand and code them in directly. You could use Mathematica if you want to, but it seems like overkill.) We would then be able to call those functions to compute things like the residuals or the (i, j)th element of the Jacobian.

The hardest part for me was figuring out how to accommodate an arbitrary number of parameters, observations, and independent variables in my program. I ended up using a function pointer for the model equation and an array of function pointers to store the partial derivatives. This might not be the best way to approach the problem, but it worked for me.
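In case a concrete shape helps, here's a rough sketch of that layout (all names are mine, not from the handouts): a function-pointer type for the model, hand-coded partials stored in an array of function pointers, and a routine that fills the residual vector and the Jacobian.

```c
/* Sketch only; every name here is illustrative. */

/* Model f(x, beta), with x a row of independent variables and beta the
 * parameter vector; the hand-coded partials share the same signature. */
typedef double (*model_fn)(const double *x, const double *beta);
typedef double (*partial_fn)(const double *x, const double *beta);

/* Example model: y = b0 * x / (b1 + x), with its two partials. */
static double model(const double *x, const double *b)
{
    return b[0] * x[0] / (b[1] + x[0]);
}
static double d_b0(const double *x, const double *b)
{
    return x[0] / (b[1] + x[0]);
}
static double d_b1(const double *x, const double *b)
{
    return -b[0] * x[0] / ((b[1] + x[0]) * (b[1] + x[0]));
}

/* Fill r[i] = y_i - f(x_i, beta) and J[i][j] = df/d(beta_j) at x_i.
 * X is n-by-m in row-major order; J is n-by-p. */
static void residuals_and_jacobian(model_fn f, const partial_fn *df, int p,
                                   const double *X, const double *y,
                                   int n, int m, const double *beta,
                                   double *r, double *J)
{
    for (int i = 0; i < n; i++) {
        const double *xi = X + i * m;
        r[i] = y[i] - f(xi, beta);
        for (int j = 0; j < p; j++)
            J[i * p + j] = df[j](xi, beta);
    }
}
```

You'd register the partials as something like partial_fn partials[] = { d_b0, d_b1 }; and then build the Gauss-Newton step \(\Delta\beta = (J^{T}J)^{-1}J^{T}r\) from the filled arrays.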

I'm going to go collapse now, but I'll try to check in this afternoon to see if you have more questions. Good luck!!
 

Link

Ninja say what!?!
#5
Wow... I'm surprised that (i) you're awake so early (or late) and (ii) that you figured it out so well. Thank you so much for the help!

BTW, did you figure out what he meant by "capability for step halving"?
 
#6
(late, really late)

All step halving means is that you try half of the iteration step whenever the full step yields parameter values that give a larger sum of squared residuals.
 
#7
One last bit of advice: I found this method to be pretty sensitive to the initial values. So, if you think you coded everything correctly but are still getting a huge sum of squared residuals, try different starting values. You could also implement a C version of the expand.grid search used in the first handout.
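Something along these lines, say, for a two-parameter model (ssr(), the ranges, and the step sizes are just placeholders I made up):

```c
#include <float.h>

/* ssr(beta, p): sum of squared residuals for a candidate parameter vector.
 * Assumed to exist elsewhere in the program; not defined here. */
double ssr(const double *beta, int p);

/* Brute-force grid over candidate starting values for a two-parameter model,
 * keeping the pair with the smallest SSR. */
void grid_start(double *start0, double *start1)
{
    double best = DBL_MAX;
    for (double b0 = 0.1; b0 <= 5.0; b0 += 0.1) {
        for (double b1 = 0.1; b1 <= 5.0; b1 += 0.1) {
            double beta[2] = {b0, b1};
            double s = ssr(beta, 2);
            if (s < best) {
                best = s;
                *start0 = b0;   /* remember the best pair seen so far */
                *start1 = b1;
            }
        }
    }
}
```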
 

Dason

Ambassador to the humans
#8
(late, really late)

All step halving means is that you try half of the iteration step whenever the full step yields parameter values that give a larger sum of squared residuals.
Half-stepping is quite useful. You can try it more than once if the first halving doesn't work, but you'll only want to do it a few times; otherwise you'll shrink the step toward 0 and the algorithm will think it has converged!
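Roughly something like this sketch, for instance (ssr() and all the names are placeholders, not anyone's actual code):

```c
/* Step halving inside one Gauss-Newton iteration: accept the full step only
 * if it lowers the SSR, otherwise keep halving, but cap the number of
 * halvings so the step isn't shrunk to zero. */
#define MAX_HALVINGS 8
#define MAX_PARAMS   16

double ssr(const double *beta, int p);   /* assumed helper, not defined here */

void take_step(double *beta, const double *delta, int p)
{
    double cand[MAX_PARAMS];             /* assumes p <= MAX_PARAMS */
    double lambda = 1.0;
    double ssr_old = ssr(beta, p);

    for (int h = 0; h < MAX_HALVINGS; h++) {
        for (int j = 0; j < p; j++)
            cand[j] = beta[j] + lambda * delta[j];
        if (ssr(cand, p) < ssr_old)
            break;                       /* this (possibly halved) step helps */
        lambda *= 0.5;                   /* otherwise try half the step */
    }
    for (int j = 0; j < p; j++)          /* commit the last candidate tried */
        beta[j] = cand[j];
}
```

If every halving fails you'd probably want to flag non-convergence rather than commit the last candidate, but that's beyond the sketch.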

One last bit of advice: I found this method to be pretty sensitive to the initial values.
It is quite an annoying algorithm to get started on some data sets...