pyproximal.optimization.bregman.Bregman#
- pyproximal.optimization.bregman.Bregman(proxf, proxg, x0, solver, A=None, alpha=1.0, niterouter=10, warm=False, tolx=1e-10, tolf=1e-10, bregcallback=None, show=False, **kwargs_solver)[source]#
Bregman iterations with Proximal Solver
Solves one of the following minimization problems using Bregman iterations and a Proximal solver of choice for the inner iterations:
\[1. \; \mathbf{x} = \argmin_\mathbf{x} f(\mathbf{x}) + \alpha g(\mathbf{x})\]
or
\[2. \; \mathbf{x} = \argmin_\mathbf{x} f(\mathbf{x}) + \alpha g(\mathbf{A}\mathbf{x})\]
where \(f(\mathbf{x})\) and \(g(\mathbf{x})\) are any convex functions with a known proximal operator and \(\mathbf{A}\) is a linear operator. The function \(g(\mathbf{y})\) is converted into its equivalent Bregman distance \(D_g^{\mathbf{q}^{k}}(\mathbf{y}, \mathbf{y}^k) = g(\mathbf{y}) - g(\mathbf{y}^k) - (\mathbf{q}^{k})^T (\mathbf{y} - \mathbf{y}^k)\).
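As an illustration, taking \(g\) to be the \(\ell_1\) norm, the Bregman distance can be evaluated directly from this definition. The following is a minimal NumPy sketch (the function name is illustrative, not part of pyproximal):

```python
import numpy as np

def bregman_distance_l1(y, yk, qk):
    """Bregman distance D_g^{q^k}(y, y^k) for g = ||.||_1,
    where qk must be a subgradient of g at yk."""
    g = lambda v: np.abs(v).sum()
    # D = g(y) - g(y^k) - (q^k)^T (y - y^k)
    return g(y) - g(yk) - qk @ (y - yk)

# qk = sign(yk) is a valid subgradient of ||.||_1 at yk
yk = np.array([1.0, -2.0])
qk = np.sign(yk)
print(bregman_distance_l1(np.array([-1.0, -2.0]), yk, qk))  # 2.0
```

By construction the distance is zero at \(\mathbf{y} = \mathbf{y}^k\) and non-negative everywhere for a valid subgradient \(\mathbf{q}^k\).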
If \(f(\mathbf{x})\) has a uniquely defined gradient, the pyproximal.optimization.primal.ProximalGradient and pyproximal.optimization.primal.AcceleratedProximalGradient solvers can be used to solve the first problem; otherwise, pyproximal.optimization.primal.ADMM is required. On the other hand, only the pyproximal.optimization.primal.LinearizedADMM solver can be used for the second problem.
- Parameters
- proxf : pyproximal.ProxOperator
  Proximal operator of f function
- proxg : pyproximal.ProxOperator
  Proximal operator of g function
- x0 : numpy.ndarray
  Initial vector
- solver : pyproximal.optimization.primal
  Solver used to solve the inner loop optimization problems
- A : pylops.LinearOperator, optional
  Linear operator of g
- alpha : float, optional
  Scalar of g function
- niterouter : int, optional
  Number of iterations of outer loop
- warm : bool, optional
  Warm start, i.e., the previous estimate is used as starting guess of the current optimization (True), or not, i.e., the provided starting guess is used as starting guess of every optimization (False)
- tolx : float, optional
  Tolerance on solution update, stop when \(||\mathbf{x}^{k+1} - \mathbf{x}^k||_2 < tol_x\) is satisfied
- tolf : float, optional
  Tolerance on f function, stop when \(f(\mathbf{x}^{k+1}) < tol_f\) is satisfied
- bregcallback : callable, optional
  Function with signature callback(x) to call after each Bregman iteration, where x is the current model vector
- show : bool, optional
  Display iterations log
- **kwargs_solver : dict, optional
  Arbitrary keyword arguments for chosen solver
- Returns
- x : numpy.ndarray
  Inverted model
Notes
The Bregman iterations can be expressed with the following recursion:
\[\begin{split}\mathbf{x}^{k+1} = \argmin_{\mathbf{x}} \quad f(\mathbf{x}) + \alpha g(\mathbf{x}) - \alpha (\mathbf{q}^{k})^T \mathbf{x}\\ \mathbf{q}^{k+1} = \mathbf{q}^{k} - \frac{1}{\alpha} \nabla f(\mathbf{x}^{k+1})\end{split}\]
where the minimization problem can be solved using one of the proximal solvers in the pyproximal.optimization.primal module.
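To make the recursion concrete, the following self-contained NumPy sketch runs Bregman iterations for the first problem with \(f(\mathbf{x}) = \frac{1}{2}||\mathbf{x} - \mathbf{y}||_2^2\) and \(g = ||\cdot||_1\), using a plain proximal-gradient loop as a stand-in for the inner pyproximal solver (all names and step sizes here are illustrative, not part of the library):

```python
import numpy as np

def soft_threshold(x, thresh):
    # Proximal operator of thresh * ||x||_1 (elementwise soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def bregman_l1_denoise(y, alpha=1.0, niterouter=10, niterinner=50, tau=0.5):
    """Bregman iterations for min_x 0.5||x - y||_2^2 + alpha*||x||_1,
    with a proximal-gradient inner solver and warm starts."""
    x = np.zeros_like(y)
    q = np.zeros_like(y)  # subgradient vector q^k
    for _ in range(niterouter):
        # inner problem: min_x f(x) + alpha*g(x) - alpha*(q^k)^T x
        for _ in range(niterinner):
            grad = (x - y) - alpha * q  # gradient of the smooth part
            x = soft_threshold(x - tau * grad, tau * alpha)
        # subgradient update: q^{k+1} = q^k - (1/alpha) * grad f(x^{k+1})
        q = q - (x - y) / alpha
    return x
```

Note the contrast-restoring behavior typical of Bregman iterations: large entries of \(\mathbf{y}\) are recovered without the shrinkage bias a single \(\ell_1\)-regularized solve would introduce, while small entries remain at zero for the first several outer iterations.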