pyproximal.optimization.primal.GeneralizedProximalGradient(proxfs, proxgs, x0, tau, epsg=1.0, weights=None, eta=1.0, niter=10, acceleration=None, callback=None, show=False)

Generalized Proximal Gradient

Solves the following minimization problem using the Generalized Proximal Gradient algorithm:

\[\mathbf{x} = \argmin_\mathbf{x} \sum_{i=1}^n f_i(\mathbf{x}) + \sum_{j=1}^m \epsilon_j g_j(\mathbf{x}),~~n,m \in \mathbb{N}^+\]

where the \(f_i(\mathbf{x})\) are smooth convex functions with a uniquely defined gradient and the \(g_j(\mathbf{x})\) are convex functions with a known proximal operator.
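For instance, the \(\ell_1\) norm is nonsmooth but admits a closed-form proximal operator (elementwise soft thresholding). A minimal NumPy sketch of that operator (the function name `prox_l1` is illustrative, not part of the pyproximal API):

```python
import numpy as np

def prox_l1(x, tau):
    # Proximal operator of tau * ||.||_1: elementwise soft thresholding,
    # which shrinks each entry toward zero by tau and clips at zero
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([1.5, -0.2, 0.7])
xp = prox_l1(x, 0.5)  # entries with |x_i| <= 0.5 are set to zero
```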

proxfs : list of pyproximal.ProxOperator

Proximal operators of the \(f_i\) functions (must have grad implemented)

proxgs : list of pyproximal.ProxOperator

Proximal operators of the \(g_j\) functions


x0 : np.ndarray

Initial vector


tau : float

Positive scalar weight, which should satisfy the following condition to guarantee convergence: \(\tau \in (0, 1/L]\), where \(L\) is the Lipschitz constant of \(\sum_{i=1}^n \nabla f_i\).

epsg : float or np.ndarray, optional

Scaling factor(s) of g function(s)

weights : float, optional

Weighting factors of g functions. Must sum to 1.

eta : float, optional

Relaxation parameter (must be between 0 and 1, with 0 excluded). Note that this is only used when acceleration=None.

niter : int, optional

Number of iterations of iterative scheme

acceleration : str, optional

Acceleration (None, vandenberghe or fista)

callback : callable, optional

Function with signature callback(x) to call after each iteration, where x is the current model vector

show : bool, optional

Display iterations log
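As an illustration of the condition on tau stated above, for a single smooth term \(f(\mathbf{x})=\frac{1}{2}\|\mathbf{Ax}-\mathbf{b}\|_2^2\) the gradient \(\mathbf{A}^T(\mathbf{Ax}-\mathbf{b})\) has Lipschitz constant \(L=\lambda_{max}(\mathbf{A}^T\mathbf{A})\), so a valid step size can be computed as follows (a sketch, not library code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))

# Lipschitz constant of grad f for f(x) = 0.5 * ||Ax - b||_2^2 is the
# largest eigenvalue of A^T A
L = np.linalg.eigvalsh(A.T @ A).max()
tau = 1.0 / L  # satisfies tau in (0, 1/L]
```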


x : np.ndarray

Inverted model


The Generalized Proximal Gradient algorithm can be expressed by the following recursion:

\[\begin{split}\text{for } j=1,\cdots,m, \\ ~~~~\mathbf{z}_j^{k+1} = \mathbf{z}_j^{k} + \eta \left[\text{prox}_{\frac{\tau^k \epsilon_j}{w_j} g_j}\left(2 \mathbf{x}^{k} - \mathbf{z}_j^{k} - \tau^k \sum_{i=1}^n \nabla f_i(\mathbf{x}^{k})\right) - \mathbf{x}^{k} \right] \\ \mathbf{x}^{k+1} = \sum_{j=1}^m w_j \mathbf{z}_j^{k+1} \\\end{split}\]

where \(\sum_{j=1}^m w_j=1\). In the current implementation, \(w_j=1/m\) when not provided.
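The recursion above can be sketched in plain NumPy for a toy problem with one smooth term \(f(\mathbf{x})=\frac{1}{2}\|\mathbf{Ax}-\mathbf{b}\|_2^2\) and two copies of the \(\ell_1\) norm as the \(g_j\) (all names, the soft-thresholding prox, and the parameter values are illustrative and not the library implementation):

```python
import numpy as np

def prox_l1(x, tau):
    # prox of tau * ||.||_1 (elementwise soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
xtrue = np.zeros(10)
xtrue[[2, 7]] = 1.0
b = A @ xtrue

gradf = lambda x: A.T @ (A @ x - b)          # gradient of the smooth term
tau = 1.0 / np.linalg.eigvalsh(A.T @ A).max()  # tau in (0, 1/L]
eps = [0.1, 0.1]                              # epsilon_j scalings
w = [0.5, 0.5]                                # weights w_j, sum to 1
eta = 1.0                                     # relaxation parameter

x = np.zeros(10)
z = [np.zeros(10) for _ in w]
for k in range(200):
    g = gradf(x)
    # auxiliary updates z_j, then recombination x = sum_j w_j z_j
    for j in range(len(w)):
        z[j] = z[j] + eta * (
            prox_l1(2 * x - z[j] - tau * g, tau * eps[j] / w[j]) - x
        )
    x = sum(wj * zj for wj, zj in zip(w, z))
```

After a few hundred iterations x approaches the sparse vector xtrue, up to the shrinkage introduced by the \(\ell_1\) penalties.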