Numerical Recipes Forum  

#1 · 04-03-2012, 08:36 AM
LRaiz (Registered User)
9.7.2 Bug in Newton Method?

I am puzzled by the code at the end of page 481 and the beginning of page 482 that examines potentially spurious convergence by checking for a zero gradient. When the gradient is not sufficiently small, newt still sets check=false. This value is supposed to indicate that a root was found, yet the function-value test failed the convergence criteria a few lines earlier. I would expect the code to return an indication of an internal error in such a case. Is this a bug, or am I missing something?
#2 · 04-04-2012, 07:58 AM
Saul Teukolsky (Numerical Recipes Author)

Here's what is happening:
1) lnsrch sets check to true if it is unable to make any further progress in finding a better value of x.
2) newt then tests convergence on function values f(x). If there is convergence, check is ignored by setting it to false.
3) Otherwise, if check was true on exit from lnsrch, the gradient is tested and possibly spurious convergence is reported (check = true) if it is close to zero.
4) Otherwise (check false on exit from lnsrch), convergence on x is tested.
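In code, that sequence of tests looks roughly like this (a paraphrase for discussion, not the actual NR3 source; the tolerance names follow the book, but the values here are only illustrative):

```cpp
#include <cassert>
#include <cmath>

// Paraphrased exit logic of newt after lnsrch returns (NOT the NR3 source).
//   fmax         = max |f_i(x)| on exit from lnsrch
//   gradmax      = max scaled gradient component of 0.5*F.F
//   xstep        = max relative change in x over the last step
//   lnsrch_check = the `check` flag as set by lnsrch
// Returns true if newt would stop here; sets `check` as newt reports it.
bool newt_exit_logic(double fmax, double gradmax, double xstep,
                     bool lnsrch_check, bool &check) {
    const double TOLF = 1.0e-8;     // tolerance on function values (illustrative)
    const double TOLMIN = 1.0e-12;  // spurious-convergence gradient tolerance
    const double TOLX = 1.0e-15;    // tolerance on the step in x
    if (fmax < TOLF) {              // step 2: converged on f; lnsrch's flag ignored
        check = false;
        return true;
    }
    if (lnsrch_check) {             // step 3: lnsrch stalled; test the gradient
        check = (gradmax < TOLMIN); // true => possibly a local minimum, not a root
        return true;                // note: also returns (with check=false)
    }                               //       when the gradient is NOT small
    check = false;                  // step 4: test convergence on x
    return xstep < TOLX;
}
```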

Hope this helps.
Saul Teukolsky
#3 · 04-06-2012, 10:17 AM
LRaiz (Registered User)

Saul, thanks for your reply.
I'll try to clarify by describing the behavior I observed while debugging my example:
1. lnsrch is not able to make progress and sets check to true.
2. newt tests and discovers lack of convergence w.r.t. function values.
3. Since check was set to true, the gradient is tested and found NOT sufficiently close to zero.
3.1 When the gradient test fails, newt sets check to false (as if a root had been found) and then immediately returns, leaving the caller with the false impression that everything was fine.

I hope that gives you a better sense of the situation.

I also ran into another issue, related to lnsrch's implementation of backtracking. The check for sufficient function decrease is done by the line:
} else if (f <= fold + ALF * alam * slope) return;
In my example, ALF * alam * slope is nonzero, but it is small enough that adding it to fold loses all significant digits: the right-hand side becomes equal to fold, and fold is equal to f. lnsrch therefore does not detect that it is failing to make progress, which leads to extra iterations in newt. I can imagine that in some cases such premature returns from lnsrch may even lead to insufficient backtracking, with harmful consequences in newt.
I wonder if it would make sense to use 'less than' instead of 'less than or equal' in the comparison on that line?
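A minimal illustration of the cancellation (the magnitudes are made up for the demonstration, they are not from my actual problem):

```cpp
#include <cassert>

// ALF is NR3's sufficient-decrease parameter; alam is the trial step length;
// slope is the (negative) directional derivative along the search direction.
inline double armijo_rhs(double fold, double alam, double slope) {
    const double ALF = 1.0e-4;
    // The decrease term ALF*alam*slope can be ~1e-18 while fold is ~1;
    // anything below fold * machine-epsilon (~2.2e-16 for double)
    // vanishes entirely in the addition.
    return fold + ALF * alam * slope;
}
```

With fold = 1, alam = 1e-8, slope = -1e-6, the decrease term is -1e-18; the sum rounds back to fold, so the test `f <= rhs` passes with f == fold even though no actual decrease was achieved, while a strict `f < rhs` would fail here.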

#4 · 04-06-2012, 01:57 PM
Saul Teukolsky (Numerical Recipes Author)

It sounds like newt is doing as well as can be expected on your problem. lnsrch can't backtrack, so you have essentially converged on x and you are not at a minimum. This is the best newt can do for your given inputs.

In the text after newt, there is a warning about scaling of x and f. If these quantities are not of order unity, you can have the problems you report. The fix is to define new f's and x's by scaling your old ones with constants before you invoke newt. Also, if your inputs are not smooth in the mathematical sense and you ask for too much precision, newt may fail.
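A minimal sketch of that scaling, assuming you pick characteristic magnitudes Xs[i] and Fs[i] for your problem (the wrapper and its names are illustrative, not part of the NR3 interface; you solve for the scaled xhat and recover x = Xs*xhat afterwards):

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Wrap a badly scaled system so a newt-style solver only ever sees O(1)
// quantities. Xs and Fs are characteristic scales YOU choose (assumptions).
struct ScaledSystem {
    std::vector<double> Xs, Fs;   // characteristic scales of x and f
    std::function<std::vector<double>(const std::vector<double>&)> userFunc;
    std::vector<double> operator()(const std::vector<double>& xhat) const {
        std::vector<double> x(xhat.size());
        for (std::size_t i = 0; i < x.size(); i++)
            x[i] = Xs[i] * xhat[i];            // unscale x for the user function
        std::vector<double> f = userFunc(x);
        for (std::size_t i = 0; i < f.size(); i++)
            f[i] /= Fs[i];                     // bring f back to order unity
        return f;
    }
};
```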

Saul Teukolsky
#5 · 04-07-2012, 10:41 AM
LRaiz (Registered User)

1. You are correct, Saul, that newt is doing as well as can be expected on my problem. I appreciate your responses and value the suggestion of scaling both the unknowns and the function values. The only thing I am suggesting is that, since newt has verified that neither the function values nor the gradient are zero, maybe it should pass along some sort of indication of a loss-of-precision/roundoff problem to the caller. By the way, my equations are C2 with respect to the unknowns.

2. I would also very much appreciate your expert opinion on whether replacing '<=' with '<' in lnsrch might have undesired consequences.
#6 · 04-09-2012, 07:56 AM
Saul Teukolsky (Numerical Recipes Author)

1) It is very difficult to put in a "warning" of the kind you suggest. Testing for convergence on f (function values) is error-prone, unless you force the user to specify ahead of time what "sufficiently small" f means. This comes back to scaling f to be of order unity. So most root finding routines focus on convergence on x, on the assumption that scaling x isn't as difficult or as crucial. Bottom line: you are supposed to inspect f on exit from newt.
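For example, a caller-side acceptance test of the kind I mean might look like this (`residual_ok` and `my_tol` are illustrative names, not NR3 API; you would call it on the residuals re-evaluated at the x that newt returns):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Accept the solver's answer only if the residuals are small by YOUR
// problem-specific tolerance, rather than trusting the `check` flag alone.
bool residual_ok(const std::vector<double>& f, bool check, double my_tol) {
    if (check) return false;   // solver flagged possibly spurious convergence
    double fmax = 0.0;
    for (double fi : f) fmax = std::max(fmax, std::abs(fi));
    return fmax <= my_tol;     // accept only genuinely small residuals
}
```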

2) I haven't looked into what might happen if you change <= to <. The theorem on which this algorithm is based uses <= (see the reference to the book by Dennis and Schnabel).

Saul Teukolsky