pyproximal.optimization.primal.GeneralizedProximalGradient#

pyproximal.optimization.primal.GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None, epsg=1.0, niter=10, acceleration=None, callback=None, show=False)[source]#

Generalized Proximal gradient

Solves the following minimization problem using the Generalized Proximal Gradient algorithm:

\[\mathbf{x} = \argmin_\mathbf{x} \sum_{i=1}^n f_i(\mathbf{x}) + \sum_{j=1}^m \epsilon_j g_j(\mathbf{x}),~~n,m \in \mathbb{N}^+\]

where the \(f_i(\mathbf{x})\) are smooth convex functions with a uniquely defined gradient and the \(g_j(\mathbf{x})\) are convex functions with a known proximal operator.
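As a usage sketch (not part of the library's documented examples), the snippet below applies the solver to a small sparse least-squares problem with a single smooth L2 misfit term and a single L1 regularizer. It assumes pylops is installed for the forward operator; the problem sizes, test data, and the choices of tau and epsg are illustrative assumptions only.

    import numpy as np
    import pylops
    import pyproximal
    from pyproximal.optimization.primal import GeneralizedProximalGradient

    np.random.seed(0)

    # Small synthetic problem: y = A x_true with a sparse x_true
    ny, nx = 30, 20
    A = np.random.randn(ny, nx)
    x_true = np.zeros(nx)
    x_true[[2, 7, 15]] = [1.0, -0.5, 2.0]
    y = A @ x_true

    # Smooth term f(x) = 1/2 ||Ax - y||_2^2 (has grad implemented),
    # non-smooth term g(x) = ||x||_1, scaled by epsg
    f = pyproximal.L2(Op=pylops.MatrixMult(A), b=y)
    g = pyproximal.L1()

    # Step size from the Lipschitz constant of grad f: L = ||A||_2^2
    L = np.linalg.norm(A, 2) ** 2
    xinv = GeneralizedProximalGradient([f], [g], x0=np.zeros(nx),
                                       tau=1. / L, epsg=0.1,
                                       niter=200, show=False)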

Parameters
proxfs : list of pyproximal.ProxOperator

Proximal operators of the \(f_i\) functions (must have grad implemented)

proxgs : list of pyproximal.ProxOperator

Proximal operators of the \(g_j\) functions

x0 : numpy.ndarray

Initial vector

tau : float or numpy.ndarray, optional

Positive scalar weight, which should satisfy the following condition to guarantee convergence: \(\tau \in (0, 1/L]\), where \(L\) is the Lipschitz constant of \(\sum_{i=1}^n \nabla f_i\). When tau=None, backtracking is used to adaptively estimate the best tau at each iteration.

epsg : float or numpy.ndarray, optional

Scaling factor(s) of the \(g_j\) function(s)

niter : int, optional

Number of iterations of iterative scheme

acceleration : str, optional

Acceleration (vandenberghe or fista)

callback : callable, optional

Function with signature callback(x) to call after each iteration, where x is the current model vector (a usage sketch is given after this parameter list)

show : bool, optional

Display iterations log
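To make the callback signature concrete, the sketch below simply records every iterate so convergence can be inspected after the run; the history list and the commented-out solver call are illustrative assumptions, not part of the library.

    # Store every model vector produced by the solver
    history = []

    def callback(x):
        # x is the current model vector at the end of an iteration
        history.append(x.copy())

    # Hypothetical call wiring the callback into the solver:
    # xinv = GeneralizedProximalGradient([f], [g], x0=np.zeros(nx),
    #                                    tau=1. / L, niter=200, callback=callback)
    # After the run, history[k] holds the model at iteration k.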

Returns
x : numpy.ndarray

Inverted model

Notes

The Generalized Proximal Gradient algorithm can be expressed by the following recursion:

\[\begin{split}\text{for } j=1,\cdots,m, \\ ~~~~\mathbf{z}_j^{k+1} = \mathbf{z}_j^{k} + \epsilon_j \left[\operatorname{prox}_{\frac{\tau^k}{\omega_j} g_j}\left(2 \mathbf{x}^{k} - \mathbf{z}_j^{k} - \tau^k \sum_{i=1}^n \nabla f_i(\mathbf{x}^{k})\right) - \mathbf{x}^{k} \right] \\ \mathbf{x}^{k+1} = \sum_{j=1}^m \omega_j \mathbf{z}_j^{k+1} \\\end{split}\]

where \(\sum_{j=1}^m \omega_j=1\). In the current implementation, \(\omega_j=1/m\).
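The recursion can be written almost line-for-line in NumPy. The sketch below is a simplified fixed-step rendition under the assumptions stated above (\(\omega_j = 1/m\), constant \(\tau\), scalar epsg, no acceleration or backtracking); the function name is hypothetical and the loop is an illustration of the recursion, not the library's implementation.

    import numpy as np

    def generalized_proximal_gradient_sketch(proxfs, proxgs, x0, tau, epsg=1.0, niter=10):
        """Illustrative fixed-step version of the recursion in the Notes."""
        m = len(proxgs)
        omega = 1.0 / m                      # omega_j = 1/m so that sum_j omega_j = 1
        x = x0.copy()
        zs = [x0.copy() for _ in range(m)]   # one auxiliary vector z_j per g_j
        for _ in range(niter):
            # Gradient of the smooth part: sum_i grad f_i(x)
            grad = np.sum([f.grad(x) for f in proxfs], axis=0)
            for j, g in enumerate(proxgs):
                # z_j <- z_j + eps_j [ prox_{(tau/omega_j) g_j}(2x - z_j - tau*grad) - x ]
                zs[j] = zs[j] + epsg * (g.prox(2 * x - zs[j] - tau * grad,
                                               tau / omega) - x)
            # x <- sum_j omega_j z_j
            x = omega * np.sum(zs, axis=0)
        return x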