Levenberg-Marquardt curve fit not converging

ggilmour

I'm working on a program to fit a curve, and I'm having some problems with it. I've got a couple different curves I need to fit data to and most aren't giving me trouble, but the algorithm keeps getting snagged on this one.

R(x) = a * x ^ (b-1) * e ^ (-x/c), where a, b, and c are unknown constants. It seems like no matter what my guesses are with this thing, it diverges all over the place. I'm using a public-domain piece of VB code to handle the Levenberg-Marquardt algorithm, so all I have to feed it are the partial derivatives with respect to the parameters I want to determine (a, b, c).
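For reference (assuming I've differentiated correctly), the partials I'm feeding the routine are \(\displaystyle \frac{\partial R}{\partial a} = x^{b-1}e^{-x/c}\), \(\displaystyle \frac{\partial R}{\partial b} = a\,x^{b-1}\ln(x)\,e^{-x/c}\), and \(\displaystyle \frac{\partial R}{\partial c} = a\,x^{b-1}e^{-x/c}\cdot\frac{x}{c^{2}}\).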

Is there anything I can do to coax this thing into converging? Right now I'm getting closer by guessing than my software can get.
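In case it helps to see the setup, here's a rough sketch of the equivalent fit in Python/SciPy rather than the VB routine (SciPy's 'lm' method is a Levenberg-Marquardt implementation; the data and starting guesses below are placeholders, not my actual data):

[code]
import numpy as np
from scipy.optimize import curve_fit

# Model: R(x) = a * x^(b-1) * exp(-x/c)
def model(x, a, b, c):
    return a * x ** (b - 1) * np.exp(-x / c)

# Placeholder data: stand-in for the measured response (a=2, b=3, c=400, plus small noise).
rng = np.random.default_rng(0)
x_data = np.linspace(1.0, 20.0, 10)          # start at 1 so x^(b-1) and log(x) behave
y_data = model(x_data, 2.0, 3.0, 400.0) + 0.05 * rng.normal(size=x_data.size)

# method='lm' is SciPy's Levenberg-Marquardt; it needs at least as many points as parameters.
p0 = [1.0, 3.0, 300.0]                       # starting guesses for a, b, c
popt, pcov = curve_fit(model, x_data, y_data, p0=p0, method='lm')
print("a, b, c =", popt)
[/code]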

Thanks for the help!

-Greg
 
Have you tried initializing with least-squares estimates computed from logs of the data? A plot of the data should give you an idea about a starting value for the shape parameter b.

If the sample size is reasonably large, the non-linear least-squares fit and the exponentiated log-linear fit won't differ by much.
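Concretely, taking logs of your model gives \(\displaystyle \log R(x) = \log a + (b-1)\log x - \frac{x}{c}\), so a least-squares fit on the logged data hands you rough values of \(\displaystyle \log a\), \(\displaystyle b-1\), and \(\displaystyle -1/c\) to start from.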
 
Getting closer.

Since I'm modelling a biochemical process and have some control data, I've got an OK idea of where my parameters should fall. Since my last post I've constrained b to its theoretical value (3). I'd prefer not to leave it that way, since using an approximate value for one parameter will throw off the accuracy of the others, but I'm at least getting reasonable results now.

I'll try getting estimates from linear least squares on the log also. I was avoiding it before because the log of the equation isn't going to be linear (the log has a log(x) and a -x/c term), but it turns out c is pretty large (200-800) and I'm only concerned over t = [0,20], so it should be close enough to estimate. If I use that to get a more accurate value for b, I think I'll be able to stop constraining it and things should work out. If not I'll be back.
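For what it's worth, the constrained fit I'm doing now looks roughly like this, sketched in Python/SciPy rather than the VB routine, with placeholder data (only a and c are free once b is pinned to 3):

[code]
import numpy as np
from scipy.optimize import curve_fit

B_FIXED = 3.0   # theoretical value of the shape parameter

def model_fixed_b(x, a, c):
    # R(x) = a * x^(b-1) * exp(-x/c) with b held at its theoretical value
    return a * x ** (B_FIXED - 1) * np.exp(-x / c)

x = np.linspace(1.0, 20.0, 10)               # placeholder time points
y = 2.0 * x ** 2 * np.exp(-x / 400.0)        # placeholder response (a=2, b=3, c=400)

(a_hat, c_hat), _ = curve_fit(model_fixed_b, x, y, p0=[1.0, 300.0], method='lm')
print("a, c (b fixed) =", a_hat, c_hat)
[/code]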

Thanks for your help!

-Greg
 
1) Are you sure the public-domain VB software works?
2) Are you sure you are coding the partial derivatives correctly?
3) What is the nature of your data? Number of points?
 
Re: Getting closer.

ggilmour said:
I'll try getting estimates from linear least squares on the log also. I was avoiding it before because the log of the equation isn't going to be linear (the log has a log(x) and a -x/c term), but it turns out c is pretty large (200-800) and I'm only concerned over t = [0,20], so it should be close enough to estimate.

Linear regression means linear in the parameters - not the independent variables. So when you take logs, your model is \(\displaystyle log(R(x))= Y = A +BX_1+CX_2\), where I have made an obvious reparametrization. That looks like the equation of a plane, and you are estimating slopes and an intercept.
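Spelling out the reparametrization: \(\displaystyle Y = \log R(x)\), \(\displaystyle A = \log a\), \(\displaystyle B = b-1\) with \(\displaystyle X_1 = \log x\), and \(\displaystyle C = -1/c\) with \(\displaystyle X_2 = x\). A rough sketch of that regression in NumPy (the data below are placeholders, not yours):

[code]
import numpy as np

# Placeholder data from R(x) = a * x^(b-1) * exp(-x/c) with a=2, b=3, c=400.
x = np.linspace(1.0, 20.0, 10)
r = 2.0 * x ** 2 * np.exp(-x / 400.0)

# Y = A + B*X1 + C*X2, with design-matrix columns: intercept, log(x), x
Y = np.log(r)
X = np.column_stack([np.ones_like(x), np.log(x), x])
(A, B, C), *_ = np.linalg.lstsq(X, Y, rcond=None)

a0 = np.exp(A)     # starting value for a
b0 = B + 1.0       # starting value for b
c0 = -1.0 / C      # starting value for c
print("starting values:", a0, b0, c0)
[/code]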
 
1) Yes. I've used it before and it worked; I also read through the code and there doesn't seem to be anything wrong with it.

2) Yes. I've checked them repeatedly, and the fit works nicely with "perfect" data (i.e. values generated from the function I'm fitting to), just not so well with the real data on occasion. It seems temperamental though: a single added or removed point can make or break the fit (see the sketch below).

3) The data is the A-wave segment of an electroretinogram, so voltage as a function of time, sampled at 1000 Hz. 10 data points per trial and between 4 and 10 trials depending on the step.
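One way to quantify that temperamental behaviour: refit with each point dropped in turn and see how far the parameters move. A rough sketch in Python/SciPy with placeholder data (not the actual ERG traces):

[code]
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * x ** (b - 1) * np.exp(-x / c)

# Placeholder data standing in for one trial (a=2, b=3, c=400 plus a little noise).
rng = np.random.default_rng(1)
x = np.linspace(1.0, 20.0, 10)
y = model(x, 2.0, 3.0, 400.0) + 0.05 * rng.normal(size=x.size)

p0 = [1.0, 3.0, 300.0]
for i in range(x.size):
    keep = np.arange(x.size) != i          # leave point i out and refit
    try:
        popt, _ = curve_fit(model, x[keep], y[keep], p0=p0, method='lm', maxfev=5000)
        print(f"dropped point {i}: a={popt[0]:.3g}  b={popt[1]:.3g}  c={popt[2]:.3g}")
    except RuntimeError:
        print(f"dropped point {i}: no convergence")
[/code]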
 
Re: Getting closer.

royhaas said:
Linear regression means linear in the parameters - not the independent variables. So when you take logs, your model is \(\displaystyle log(R(x))= Y = A +BX_1+CX_2\), where I have made an obvious reparametrization. That looks like the equation of a plane, and you are estimating slopes and an intercept.

So since it's linear in the parameters, does that mean there are fewer "snags" in the SOS function for the minimizing algorithm to get caught on? Makes sense (I think), but I'm only going off of what I can remember from 1st year calc and stats plus whatever I can glean out of wikipedia.

Thanks again for the replies.
 
Re: Getting closer.

ggilmour said:
So since it's linear in the parameters, does that mean there are fewer "snags" in the SOS function for the minimizing algorithm to get caught on? Makes sense (I think), but I'm only going off of what I can remember from 1st year calc and stats plus whatever I can glean out of wikipedia.

The only "snags" in a multiple linear regression occur when there are near singularities, i.e., close to a linear relationship among the columns of the X matrix. That implies that the sum-of-squares matrix will have eigenvalues close to zero. So in your case it would only occur if "X" and "log(X)" were close in absolute value.
 