pyproximal.optimization.primaldual.AdaptivePrimalDual(proxf, proxg, A, x0, tau, mu, alpha=0.5, eta=0.95, s=1.0, delta=1.5, z=None, niter=10, tol=1e-10, callback=None, show=False)

Solves the minimization problem in pyproximal.optimization.primaldual.PrimalDual using an adaptive version of the first-order primal-dual algorithm of [1]. The main advantage of this method is that the step sizes $$\tau$$ and $$\mu$$ change through iterations, improving the overall speed of convergence of the algorithm.

Parameters
proxfpyproximal.ProxOperator

Proximal operator of f function

proxgpyproximal.ProxOperator

Proximal operator of g function

Apylops.LinearOperator

Linear operator of g

x0numpy.ndarray

Initial vector

taufloat

Stepsize of subgradient of $$f$$

mufloat

Stepsize of subgradient of $$g^*$$

alphafloat, optional

Initial adaptivity level (must be between 0 and 1)

etafloat, optional

Scaling of the adaptivity level, multiplied onto the current alpha every time the norms of the two residuals start to diverge (must be between 0 and 1)

sfloat, optional

Scaling of residual balancing principle

deltafloat, optional

Balancing factor. Step sizes are updated only when their ratio exceeds this value.

znumpy.ndarray, optional

Additional vector

niterint, optional

Number of iterations of iterative scheme

tolfloat, optional

Tolerance on residual norms

callbackcallable, optional

Function with signature callback(x) to call after each iteration, where x is the current model vector

showbool, optional

Display iterations log

Returns
xnumpy.ndarray

Inverted model

stepstuple

Tau, mu and alpha evolution through iterations

Notes

The Adaptive Primal-Dual algorithm shares the same iterations of the original pyproximal.optimization.primaldual.PrimalDual solver. The main difference lies in the fact that the step sizes tau and mu are adaptively changed at each iteration, leading to faster convergence.

Changes are applied by tracking the norms of the primal and dual residuals. When their mutual ratio increases beyond a certain threshold delta, the step lengths are updated to balance the minimization and maximization parts of the overall optimization process.
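
The residual-balancing rule can be sketched in plain NumPy for a toy L1-regularized least-squares problem. This is an illustrative re-implementation under stated assumptions, not the pyproximal code, and it omits the backtracking safeguard of the full method of [1]:

```python
import numpy as np

def adaptive_pdhg_l1(A, b, lam, niter=500, tau=0.7, mu=0.7,
                     alpha=0.5, eta=0.95, s=1.0, delta=1.5):
    """Toy adaptive PDHG for min_x lam * ||x||_1 + 0.5 * ||A x - b||^2."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(niter):
        xold, yold = x, y
        # primal step: prox of tau * lam * ||.||_1 is soft-thresholding
        v = xold - tau * (A.T @ yold)
        x = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        # dual step: prox of mu * g*, with g(u) = 0.5 * ||u - b||^2
        w = yold + mu * (A @ (2.0 * x - xold))
        y = (w - mu * b) / (1.0 + mu)
        # primal and dual residuals of the optimality conditions
        p = (xold - x) / tau - A.T @ (yold - y)
        d = (yold - y) / mu - A @ (xold - x)
        pn, dn = np.linalg.norm(p), np.linalg.norm(d)
        # residual balancing: adapt steps only when their ratio exceeds delta;
        # note both updates preserve the product tau * mu, so stability is kept
        if pn > s * dn * delta:    # primal residual too large: grow tau
            tau, mu, alpha = tau / (1 - alpha), mu * (1 - alpha), alpha * eta
        elif pn < s * dn / delta:  # dual residual too large: grow mu
            tau, mu, alpha = tau * (1 - alpha), mu / (1 - alpha), alpha * eta
    return x
```

Each time an adaptation fires, alpha is shrunk by eta, so the step sizes change aggressively early on and settle down as the iterations proceed.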

1

T. Goldstein, M. Li, X. Yuan, E. Esser, and R. Baraniuk, “Adaptive Primal-Dual Hybrid Gradient Methods for Saddle-Point Problems”, ArXiv, 2013.