pyproximal.optimization.bregman.Bregman

pyproximal.optimization.bregman.Bregman(proxf, proxg, x0, solver, A=None, alpha=1.0, niterouter=10, warm=False, tolx=1e-10, tolf=1e-10, bregcallback=None, show=False, **kwargs_solver)

Bregman iterations with Proximal Solver

Solves one of the following minimization problems using Bregman iterations and a proximal solver of choice for the inner iterations:

\[1. \; \mathbf{x} = \argmin_\mathbf{x} f(\mathbf{x}) + \alpha g(\mathbf{x})\]

or

\[2. \; \mathbf{x} = \argmin_\mathbf{x} f(\mathbf{x}) + \alpha g(\mathbf{A}\mathbf{x})\]

where \(f(\mathbf{x})\) and \(g(\mathbf{x})\) are any convex functions with known proximal operators and \(\mathbf{A}\) is a linear operator. The function \(g(\mathbf{y})\) is converted into its equivalent Bregman distance \(D_g^{\mathbf{q}^{k}}(\mathbf{y}, \mathbf{y}^k) = g(\mathbf{y}) - g(\mathbf{y}^k) - (\mathbf{q}^{k})^T (\mathbf{y} - \mathbf{y}^k)\), where \(\mathbf{q}^{k}\) is a subgradient of \(g\) at \(\mathbf{y}^k\).
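For example, for the common sparsity-promoting choice \(g(\mathbf{x}) = \|\mathbf{x}\|_1\), the Bregman distance becomes

\[D_g^{\mathbf{q}^{k}}(\mathbf{x}, \mathbf{x}^k) = \|\mathbf{x}\|_1 - \|\mathbf{x}^k\|_1 - (\mathbf{q}^{k})^T (\mathbf{x} - \mathbf{x}^k)\]

with \(\mathbf{q}^{k} \in \partial \|\mathbf{x}^k\|_1\) (e.g., \(q_i^k = \mathrm{sign}(x_i^k)\) for the nonzero entries of \(\mathbf{x}^k\)).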

If \(f(\mathbf{x})\) has a uniquely defined gradient, the pyproximal.optimization.primal.ProximalGradient and pyproximal.optimization.primal.AcceleratedProximalGradient solvers can be used to solve the first problem; otherwise, the pyproximal.optimization.primal.ADMM solver is required. On the other hand, only the pyproximal.optimization.primal.LinearizedADMM solver can be used for the second problem.
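As an illustration, the following minimal sketch solves the first problem for a small sparse denoising setup (the data, regularization weight, and inner-solver parameters are illustrative assumptions, not prescribed by the API):

import numpy as np
import pyproximal
from pyproximal.optimization.bregman import Bregman
from pyproximal.optimization.primal import ProximalGradient

# hypothetical sparse signal corrupted by noise
np.random.seed(0)
n = 64
xtrue = np.zeros(n)
xtrue[[10, 25, 40]] = [1.0, -0.5, 0.8]
y = xtrue + 0.05 * np.random.randn(n)

proxf = pyproximal.L2(b=y)  # f(x) = (1/2) ||x - y||_2^2, differentiable
proxg = pyproximal.L1()     # g(x) = ||x||_1

# f has a uniquely defined gradient, so ProximalGradient can run the
# inner iterations; niter and tau are forwarded to it via kwargs_solver
xinv = Bregman(proxf, proxg, np.zeros(n), ProximalGradient,
               alpha=1.0, niterouter=5, warm=True,
               niter=50, tau=1.0)

For the second problem, one would additionally pass the linear operator via A and choose solver=pyproximal.optimization.primal.LinearizedADMM.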

Parameters
proxf : pyproximal.ProxOperator

Proximal operator of f function

proxg : pyproximal.ProxOperator

Proximal operator of g function

x0 : numpy.ndarray

Initial vector

solver : pyproximal.optimization.primal

Solver used to solve the inner loop optimization problems

A : pylops.LinearOperator, optional

Linear operator of g

alpha : float, optional

Scalar weighting the g function

niterouter : int, optional

Number of iterations of the outer loop

warm : bool, optional

Warm start: use the previous estimate as the starting guess of the current inner optimization (True), or use the provided x0 as the starting guess of every inner optimization (False)

tolx : float, optional

Tolerance on solution update; stop when \(||\mathbf{x}^{k+1} - \mathbf{x}^k||_2 < tol_x\) is satisfied

tolf : float, optional

Tolerance on f function; stop when \(f(\mathbf{x}^{k+1}) < tol_f\) is satisfied

bregcallback : callable, optional

Function with signature callback(x), called after each Bregman iteration, where x is the current model vector

show : bool, optional

Display iterations log

**kwargs_solver : dict, optional

Arbitrary keyword arguments for chosen solver

Returns
x : numpy.ndarray

Inverted model

Notes

The Bregman iterations can be expressed with the following recursion

\[\begin{split}\mathbf{x}^{k+1} = \argmin_{\mathbf{x}} \quad f(\mathbf{x}) + \alpha g(\mathbf{x}) - \alpha (\mathbf{q}^{k})^T \mathbf{x}\\ \mathbf{q}^{k+1} = \mathbf{q}^{k} - \frac{1}{\alpha} \nabla f(\mathbf{x}^{k+1})\end{split}\]

where the minimization problem can be solved using one of the proximal solvers in the pyproximal.optimization.primal module.
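To make the recursion concrete, here is a schematic sketch of the outer Bregman loop (an illustration, not pyproximal's actual implementation). It assumes proxf exposes a grad method, as pyproximal's differentiable operators such as pyproximal.L2 do, and that a hypothetical inner_solve helper returns the minimizer of \(f(\mathbf{x}) + \alpha g(\mathbf{x}) - \alpha (\mathbf{q}^{k})^T \mathbf{x}\):

import numpy as np

def bregman_outer_loop(proxf, proxg, x0, inner_solve,
                       alpha=1.0, niterouter=10, tolx=1e-10):
    x = x0.copy()
    q = np.zeros_like(x0)  # initial subgradient q^0 = 0
    for _ in range(niterouter):
        xold = x
        # inner problem: argmin_x f(x) + alpha * g(x) - alpha * q^T x
        # (warm start: previous estimate used as initial guess)
        x = inner_solve(proxf, proxg, x, q)
        # subgradient update: q^{k+1} = q^k - (1/alpha) * grad f(x^{k+1})
        q = q - proxf.grad(x) / alpha
        # stop when the solution update falls below tolerance
        if np.linalg.norm(x - xold) < tolx:
            break
    return x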