Optimize
class SimPEG.optimization.Minimize(**kwargs)
Bases: object

Minimize is a general class for derivative-based optimization.
- name = 'General Optimization Algorithm' – The name of the optimization algorithm
- maxIter = 20 – Maximum number of iterations
- maxIterLS = 10 – Maximum number of iterations for the line-search
- maxStep = inf – Maximum possible step, used to scale the search direction before the line-search
- LSreduction = 0.0001 – Expected decrease in the line-search
- LSshorten = 0.5 – The line-search step is shortened by this factor each time
- tolF = 0.1 – Tolerance on function value decrease
- tolX = 0.1 – Tolerance on norm(x) movement
- tolG = 0.1 – Tolerance on gradient norm
- eps = 1e-05 – Small value
- stopNextIteration = False – Stops the optimization program nicely
- debug = False – Print debugging information
- debugLS = False – Print debugging information for the line-search
- comment = ''
- counter = None – Set this to a SimPEG.utils.Counter() if you want to count things
- parent = None – The parent of the optimization routine
- print_type = None
- factor = 1.0
- property callback
- minimize(evalFunction, x0)
  Minimizes the function (evalFunction) starting at the location x0.
- Parameters
  evalFunction (callable) – function handle that evaluates: f, g, H = F(x)
  x0 (numpy.ndarray) – starting location
- Return type
  numpy.ndarray
- Returns
  x, the last iterate of the optimization algorithm
evalFunction is a function handle with the signature:

(f[, g][, H]) = evalFunction(x, return_g=False, return_H=False)

def evalFunction(x, return_g=False, return_H=False):
    out = (f,)
    if return_g:
        out += (g,)
    if return_H:
        out += (H,)
    return out if len(out) > 1 else out[0]
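To make this concrete, here is a minimal sketch of a valid evalFunction for a convex quadratic, together with a call to minimize via SteepestDescent (documented later on this page). The matrix A and vector b are illustrative assumptions made up for this sketch, not part of the API:

import numpy as np
from SimPEG.optimization import SteepestDescent

A = np.diag([1.0, 10.0])   # illustrative SPD matrix (assumption)
b = np.array([1.0, 1.0])   # illustrative vector (assumption)

def evalFunction(x, return_g=False, return_H=False):
    # f(x) = 0.5 x^T A x - b^T x; gradient A x - b; Hessian A
    f = 0.5 * x.dot(A.dot(x)) - b.dot(x)
    out = (f,)
    if return_g:
        out += (A.dot(x) - b,)
    if return_H:
        out += (A,)
    return out if len(out) > 1 else out[0]

opt = SteepestDescent()
opt.maxIter = 30
x = opt.minimize(evalFunction, np.array([4.0, 4.0]))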
The algorithm for general minimization is as follows:

startup(x0)
printInit()
while True:
    doStartIteration()
    f, g, H = evalFunction(xc)
    printIter()
    if stoppingCriteria():
        break
    p = findSearchDirection()
    p = scaleSearchDirection(p)
    xt, passLS = modifySearchDirection(p)
    if not passLS:
        xt, caught = modifySearchDirectionBreak(p)
        if not caught:
            return xc
    doEndIteration(xt)
printDone()
finish()
return xc
- startup(x0)
  startup is called at the start of any new minimize call.
  This will set:

x0 = x0
xc = x0
iter = iterLS = 0

- Parameters
  x0 (numpy.ndarray) – initial x
- Return type
  None
- Returns
  None
If you have things that also need to run in the method startup, you can create a method:

def _startup*(self, ...):
    pass

where * can be any string. If present, _startup* will be called at the start of the default startup call. You may also completely override this function.
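As an illustration, a minimal sketch of the hook pattern; the subclass, the hook name, and the assumption that the hook receives x0 are all hypothetical:

from SimPEG.optimization import Minimize

class MyMinimize(Minimize):
    def _startup_logging(self, x0):
        # hypothetical hook: the default startup discovers and calls
        # any method whose name begins with _startup
        print('starting minimization at', x0)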
- doStartIteration()
  doStartIteration is called at the start of each minimize iteration.
- Return type
  None
- Returns
  None

If you have things that also need to run in the method doStartIteration, you can create a method:

def _doStartIteration*(self, ...):
    pass

where * can be any string. If present, _doStartIteration* will be called at the start of the default doStartIteration call. You may also completely override this function.
- printInit(inLS=False)
  printInit is called at the beginning of the optimization routine.
  If there is a parent object, printInit will check for a parent.printInit function and call that.
- printIter(inLS=False)
  printIter is called directly after function evaluations.
  If there is a parent object, printIter will check for a parent.printIter function and call that.

If you have things that also need to run in the method printIter, you can create a method:

def _printIter*(self, ...):
    pass

where * can be any string. If present, _printIter* will be called at the start of the default printIter call. You may also completely override this function.
- printDone(inLS=False)
  printDone is called at the end of the optimization routine.
  If there is a parent object, printDone will check for a parent.printDone function and call that.
- finish()
  finish is called at the end of the optimization.
- Return type
  None
- Returns
  None

If you have things that also need to run in the method finish, you can create a method:

def _finish*(self, ...):
    pass

where * can be any string. If present, _finish* will be called at the start of the default finish call. You may also completely override this function.
- projection(p)
  Projects the search direction. By default, no projection is applied.
- Parameters
  p (numpy.ndarray) – searchDirection
- Return type
  numpy.ndarray
- Returns
  p, the projected search direction

If you have things that also need to run in the method projection, you can create a method:

def _projection*(self, ...):
    pass

where * can be any string. If present, _projection* will be called at the start of the default projection call. You may also completely override this function.
- findSearchDirection()
  findSearchDirection should return an approximate solution of

\[H p = -g\]

  where you are solving for the search direction p. The default is

\[H = I, \qquad p = -g\]

  which corresponds to SteepestDescent.
  The latest function evaluations are available in self.f, self.g, self.H.
- Return type
  numpy.ndarray
- Returns
  p, the search direction
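As an illustration of overriding this method, a sketch of a full Newton step; it assumes self.H is a dense, invertible numpy array, which is an assumption about what your evalFunction returns:

import numpy as np
from SimPEG.optimization import Minimize

class NewtonDirection(Minimize):
    # hypothetical subclass taking a full Newton step
    def findSearchDirection(self):
        # solve H p = -g directly (assumes dense, invertible self.H)
        return np.linalg.solve(self.H, -self.g)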
- scaleSearchDirection(p)
  scaleSearchDirection should scale the search direction if appropriate.
  Set the parameter maxStep on the minimize object to scale the search direction back to a maximum size.
- Parameters
  p (numpy.ndarray) – searchDirection
- Return type
  numpy.ndarray
- Returns
  p, the scaled search direction
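A plausible sketch of the scaling this describes; the use of the infinity norm to measure the step is an assumption of this sketch:

import numpy as np

def scaleSearchDirection(self, p):
    # shrink the whole step so its largest component is at most maxStep
    if np.abs(p).max() > self.maxStep:
        p = self.maxStep * p / np.abs(p).max()
    return p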
- nameLS = 'Armijo linesearch' – The line-search name
- modifySearchDirection(p)
  modifySearchDirection changes the search direction based on some sort of linesearch or trust-region criterion.
  By default, an Armijo backtracking linesearch is performed with the following parameters:
  maxIterLS, the maximum number of linesearch iterations
  LSreduction, the expected reduction, default: 1e-4
  LSshorten, how much the step is reduced, default: 0.5
  If the linesearch completes and a descent direction is found, passLS is returned as True. Otherwise, modifySearchDirectionBreak is called. A sketch of this default linesearch follows this entry.
- Parameters
  p (numpy.ndarray) – searchDirection
- Return type
  tuple of (numpy.ndarray, bool)
- Returns
  (xt, passLS)
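To make the default linesearch concrete, a standalone sketch of Armijo backtracking driven by these parameters (an illustration under the signature described above, not the library's exact code):

import numpy as np

def armijo_backtracking(evalFunction, xc, f, g, p,
                        maxIterLS=10, LSreduction=1e-4, LSshorten=0.5):
    # accept xt = xc + t*p once f(xt) <= f + LSreduction * t * (g . p)
    t = 1.0
    descent = g.dot(p)
    for _ in range(maxIterLS):
        xt = xc + t * p
        if evalFunction(xt) <= f + LSreduction * t * descent:
            return xt, True   # sufficient decrease: line-search passed
        t *= LSshorten        # shorten the step and try again
    return xt, False          # failed to find sufficient decrease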
- modifySearchDirectionBreak(p)
  Called if modifySearchDirection fails to find a descent direction.
  The search direction is passed as input, and this function must pass back both a new search direction and whether the break has been caught.
  By default, no additional work is done, and False is returned, indicating the break was not caught.
- Parameters
  p (numpy.ndarray) – searchDirection
- Return type
  tuple of (numpy.ndarray, bool)
- Returns
  (xt, breakCaught)
- doEndIteration(xt)
  doEndIteration is called at the end of each minimize iteration.
  By default, function values and x locations are shuffled to store one past iteration in memory.
  self.xc must be updated in this code.
- Parameters
  xt (numpy.ndarray) – the tested new iterate that ensures a descent direction
- Return type
  None
- Returns
  None

If you have things that also need to run in the method doEndIteration, you can create a method:

def _doEndIteration*(self, ...):
    pass

where * can be any string. If present, _doEndIteration* will be called at the start of the default doEndIteration call. You may also completely override this function.
class SimPEG.optimization.Remember
Bases: object
This mixin remembers all the things you tend to forget.
You can remember parameters directly by naming the attribute (a str) on Minimize, or by passing a tuple of a name and a function that takes the Minimize instance.
For example:

opt.remember('f', ('norm_g', lambda M: np.linalg.norm(M.g)))
opt.minimize(evalFunction, x0)
opt.recall('f')

The parameter name (str) can also be located in the parent (if there are no conflicts), and it will be looked up there by default.
- _rememberThese = []
class SimPEG.optimization.SteepestDescent(**kwargs)
Bases: SimPEG.optimization.Minimize, SimPEG.optimization.Remember

- name = 'Steepest Descent'
- findSearchDirection()
  Returns the steepest-descent direction:

\[H = I, \qquad p = -g\]

  The latest function evaluations are available in self.f, self.g, self.H.
- Return type
  numpy.ndarray
- Returns
  p, the search direction
class SimPEG.optimization.BFGS(**kwargs)
Bases: SimPEG.optimization.Minimize, SimPEG.optimization.Remember

- name = 'BFGS'
- nbfgs = 10
- property bfgsH0
  Approximate Hessian used in preconditioning the problem. Must be a SimPEG.Solver.
- findSearchDirection()
  Returns an approximate solution of

\[H p = -g\]

  by applying the limited-memory BFGS approximation of the inverse Hessian (see nbfgs and bfgsH0) to -g. The latest function evaluations are available in self.f, self.g, self.H.
- Return type
  numpy.ndarray
- Returns
  p, the search direction
class SimPEG.optimization.GaussNewton(**kwargs)
Bases: SimPEG.optimization.Minimize, SimPEG.optimization.Remember

- name = 'Gauss Newton'
- findSearchDirection()
  Solves

\[H p = -g\]

  for the search direction p, using the Hessian H returned by evalFunction. The latest function evaluations are available in self.f, self.g, self.H.
- Return type
  numpy.ndarray
- Returns
  p, the search direction
class SimPEG.optimization.InexactGaussNewton(**kwargs)
Bases: SimPEG.optimization.BFGS, SimPEG.optimization.Minimize, SimPEG.optimization.Remember

Minimizes using CG as the inexact solver of

\[\mathbf{H p = -g}\]

By default, BFGS is used as the preconditioner.
Use nbfgs to set the memory limit of BFGS.
To set the initial H0 used in BFGS, set bfgsH0 to a SimPEG.Solver.
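A hedged sketch of the inexact solve described above, using SciPy's conjugate gradient with a LinearOperator preconditioner standing in for approxHinv; the Hessian and gradient are made up for illustration, and the stopping values mirror the maxIterCG and tolCG attributes below:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 100
H = sp.diags(np.linspace(1.0, 10.0, n))         # illustrative Hessian
g = np.ones(n)                                   # illustrative gradient
M = LinearOperator((n, n), matvec=lambda v: v)   # identity stand-in for approxHinv

# inexact solve of H p = -g, stopped early;
# SciPy >= 1.12 uses rtol, older versions use tol
p, info = cg(H, -g, rtol=0.1, maxiter=5, M=M)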
- name = 'Inexact Gauss Newton'
- maxIterCG = 5
- tolCG = 0.1
- property approxHinv
  The approximate Hessian inverse used to precondition CG. Defaults to BFGS, with an initial H0 of bfgsH0. Must be a scipy.sparse.linalg.LinearOperator.
- findSearchDirection()
  Solves

\[H p = -g\]

  inexactly with CG (see maxIterCG and tolCG), preconditioned by approxHinv. The latest function evaluations are available in self.f, self.g, self.H.
- Return type
  numpy.ndarray
- Returns
  p, the search direction
class SimPEG.optimization.ProjectedGradient(**kwargs)
Bases: SimPEG.optimization.Minimize, SimPEG.optimization.Remember

- name = 'Projected Gradient'
- maxIterCG = 5
- tolCG = 0.1
- lower = -inf
- upper = inf
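A short usage sketch with bound constraints, reusing the evalFunction sketch from Minimize.minimize above; whether array-valued bounds are accepted in addition to scalars is an assumption here:

import numpy as np
from SimPEG.optimization import ProjectedGradient

opt = ProjectedGradient()
opt.maxIter = 25
opt.lower = 0.0   # elementwise lower bound
opt.upper = 5.0   # elementwise upper bound
x = opt.minimize(evalFunction, np.array([4.0, 4.0]))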
class SimPEG.optimization.NewtonRoot(**kwargs)
Bases: object

Newton Method – Root Finding

root = newtonRoot(fun, x)

where fun is the function that returns both the function value and the gradient.
For iterative solving of dh = -J r, set O.solveTol = TOL; for direct solves, set solveTol = 0 (the default).
Rowan Cockett 16-May-2013 16:29:51 University of British Columbia rcockett@eos.ubc.ca
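To ground the idea, a standalone sketch of damped Newton root finding using the parameters listed below (an illustration, not the class's exact implementation):

import numpy as np

def newton_root(fun, x, tol=1e-6, maxIter=20, stepDcr=0.5, maxLS=30):
    # fun returns (r, J): the residual and its Jacobian at x
    for _ in range(maxIter):
        r, J = fun(x)
        if np.linalg.norm(r) < tol:
            return x
        dh = np.linalg.solve(J, -r)   # direct solve of J dh = -r
        # damped update: shorten the step until the residual decreases
        t, xt = 1.0, x + dh
        for _ in range(maxLS):
            if np.linalg.norm(fun(xt)[0]) < np.linalg.norm(r):
                break
            t *= stepDcr
            xt = x + t * dh
        x = xt
    return x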
- tol = 1e-06
- maxIter = 20
- stepDcr = 0.5
- maxLS = 30
- comments = False
- doLS = True
- solverOpts = {}
class SimPEG.optimization.StoppingCriteria
Bases: object

Stopping criteria used by Minimize. Each criterion is a dict with 'left' and 'right' functions of the optimization object, a 'stopType' ('optimal' or 'critical'), and a format 'str' used when printing why the iteration stopped.
- iteration = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : maxIter = %3d <= iter = %3d'}
- iterationLS = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : maxIterLS = %3d <= iterLS = %3d'}
- armijoGoldstein = {'left': <lambda>, 'right': <lambda>, 'stopType': 'optimal', 'str': '%d : ft = %1.4e <= alp*descent = %1.4e'}
- tolerance_f = {'left': <lambda>, 'right': <lambda>, 'stopType': 'optimal', 'str': '%d : |fc-fOld| = %1.4e <= tolF*(1+|f0|) = %1.4e'}
- moving_x = {'left': <lambda>, 'right': <lambda>, 'stopType': 'optimal', 'str': '%d : |xc-x_last| = %1.4e <= tolX*(1+|x0|) = %1.4e'}
- tolerance_g = {'left': <lambda>, 'right': <lambda>, 'stopType': 'optimal', 'str': '%d : |proj(x-g)-x| = %1.4e <= tolG = %1.4e'}
- norm_g = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : |proj(x-g)-x| = %1.4e <= 1e3*eps = %1.4e'}
- bindingSet = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : probSize = %3d <= bindingSet = %3d'}
- bindingSet_LS = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : probSize = %3d <= bindingSet = %3d'}
- phi_d_target_Minimize = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : phi_d = %1.4e <= phi_d_target = %1.4e'}
- phi_d_target_Inversion = {'left': <lambda>, 'right': <lambda>, 'stopType': 'critical', 'str': '%d : phi_d = %1.4e <= phi_d_target = %1.4e'}
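Judging by the 'left'/'right'/'stopType'/'str' structure, a criterion appears to fire when left(opt) <= right(opt); the helper below is a hypothetical sketch of that check, not library code:

def check_criterion(opt, criterion):
    # hypothetical helper: a criterion fires when left(opt) <= right(opt),
    # e.g. the iteration criterion fires once maxIter <= iter
    return criterion['left'](opt) <= criterion['right'](opt)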
class SimPEG.optimization.IterationPrinters
Bases: object

Printers that define the columns of the iteration log. Each printer is a dict with a column 'title', a 'value' function of the optimization object, a column 'width', and a 'format' string.
- iteration = {'format': '%3d', 'title': '#', 'value': <lambda>, 'width': 5}
- f = {'format': '%1.2e', 'title': 'f', 'value': <lambda>, 'width': 10}
- norm_g = {'format': '%1.2e', 'title': '|proj(x-g)-x|', 'value': <lambda>, 'width': 15}
- totalLS = {'format': '%d', 'title': 'LS', 'value': <lambda>, 'width': 5}
- iterationLS = {'format': '%3d.%d', 'title': '#', 'value': <lambda>, 'width': 5}
- LS_ft = {'format': '%1.2e', 'title': 'ft', 'value': <lambda>, 'width': 10}
- LS_t = {'format': '%0.5f', 'title': 't', 'value': <lambda>, 'width': 10}
- LS_armijoGoldstein = {'format': '%1.2e', 'title': 'f + alp*g.T*p', 'value': <lambda>, 'width': 16}
- itType = {'format': '%s', 'title': 'itType', 'value': <lambda>, 'width': 8}
- aSet = {'format': '%d', 'title': 'aSet', 'value': <lambda>, 'width': 8}
- bSet = {'format': '%d', 'title': 'bSet', 'value': <lambda>, 'width': 8}
- comment = {'format': '%s', 'title': 'Comment', 'value': <lambda>, 'width': 12}
- beta = {'format': '%1.2e', 'title': 'beta', 'value': <lambda>, 'width': 10}
- phi_d = {'format': '%1.2e', 'title': 'phi_d', 'value': <lambda>, 'width': 10}
- phi_m = {'format': '%1.2e', 'title': 'phi_m', 'value': <lambda>, 'width': 10}
- phi_s = {'format': '%1.2e', 'title': 'phi_s', 'value': <lambda>, 'width': 10}
- phi_x = {'format': '%1.2e', 'title': 'phi_x', 'value': <lambda>, 'width': 10}
- phi_y = {'format': '%1.2e', 'title': 'phi_y', 'value': <lambda>, 'width': 10}
- phi_z = {'format': '%1.2e', 'title': 'phi_z', 'value': <lambda>, 'width': 10}
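Based on the 'title'/'value'/'width'/'format' fields, one row of the iteration log can presumably be rendered as below; this helper is a hypothetical sketch, not library code:

def format_row(opt, printers):
    # hypothetical helper: format one log row, padding each
    # formatted value to its printer's column width
    return ' '.join(
        (p['format'] % p['value'](opt)).rjust(p['width']) for p in printers
    )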