- pyproximal.optimization.primaldual.AdaptivePrimalDual(proxf, proxg, A, x0, tau, mu, alpha=0.5, eta=0.95, s=1.0, delta=1.5, z=None, niter=10, tol=1e-10, callback=None, show=False)
Adaptive Primal-dual algorithm
Solves the minimization problem in pyproximal.optimization.primaldual.PrimalDual using an adaptive version of the first-order primal-dual algorithm of Goldstein et al. [1]. The main advantage of this method is that the step sizes \(\tau\) and \(\mu\) change through the iterations, improving the overall speed of convergence of the algorithm.
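For reference, a sketch of the objective addressed by both solvers, restated from the PrimalDual documentation (the linear term \(\mathbf{z}^T \mathbf{x}\) applies only when z is provided):

\[
\min_{\mathbf{x}} \; f(\mathbf{x}) + \mathbf{z}^T \mathbf{x} + g(\mathbf{A}\mathbf{x})
\]

The step sizes \(\tau\) and \(\mu\) act on the primal and dual updates of the corresponding saddle-point formulation.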
Parameters
proxf : Proximal operator of the f function
proxg : Proximal operator of the g function
A : Linear operator of g
tau : Stepsize of subgradient of \(f\)
mu : Stepsize of subgradient of \(g^*\)
alpha : Initial adaptivity level (must be between 0 and 1)
eta : Scaling of the adaptivity level, multiplied into the current alpha every time the norms of the two residuals start to diverge (must be between 0 and 1)
s : Scaling of the residual balancing principle
delta : Balancing factor; step sizes are updated only when the ratio of the primal and dual residual norms exceeds this value
niter : Number of iterations of the iterative scheme
tol : Tolerance on residual norms
callback : Function with signature callback(x) to call after each iteration, where x is the current model vector
show : Display iterations log
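A minimal usage sketch, assuming a TV-like denoising setup with pylops providing the linear operator; the signal, regularization weight, and initial step sizes below are illustrative choices, not values prescribed by the solver:

```python
import numpy as np
import pylops
import pyproximal
from pyproximal.optimization.primaldual import AdaptivePrimalDual

np.random.seed(0)
n = 100
x_true = np.zeros(n)
x_true[30:60] = 1.0                      # piecewise-constant model
y = x_true + 0.1 * np.random.randn(n)    # noisy data

# f(x) = 1/2 ||x - y||_2^2,  g(u) = 0.5 ||u||_1,  A = first derivative
proxf = pyproximal.L2(b=y)
proxg = pyproximal.L1(sigma=0.5)
Aop = pylops.FirstDerivative(n, edge=True, kind="forward")

# initial step sizes chosen so that tau * mu * ||A||^2 <= 1 (||A|| <= 2 here)
tau = mu = 0.5

out = AdaptivePrimalDual(proxf, proxg, Aop, x0=np.zeros(n),
                         tau=tau, mu=mu, niter=200, show=True)
# depending on the library version, `out` may be the estimated model alone
# or a tuple with the model and the history of adapted step sizes
```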
The Adaptive Primal-Dual algorithm shares the same iterations as the original pyproximal.optimization.primaldual.PrimalDual solver. The main difference lies in the fact that the step sizes \(\tau\) and \(\mu\) are adaptively changed at each iteration, leading to faster convergence. Changes are applied by tracking the norms of the primal and dual residuals: when their mutual ratio increases beyond a certain threshold delta, the step lengths are updated to re-balance the minimization and maximization parts of the overall optimization process, as sketched below.
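The following is a minimal sketch of this residual-balancing rule, following Goldstein et al. [1]; the function name and the way the residual norms are passed in are illustrative, not the solver's internal API:

```python
def balance_steps(pnorm, dnorm, tau, mu, alpha, eta=0.95, s=1.0, delta=1.5):
    """Residual-balancing update of the primal (tau) and dual (mu) step sizes.

    pnorm, dnorm are the norms of the primal and dual residuals at the
    current iteration; alpha is the current adaptivity level.
    """
    if pnorm > s * dnorm * delta:
        # primal residual dominates: take larger primal steps, smaller dual steps
        tau, mu, alpha = tau / (1 - alpha), mu * (1 - alpha), alpha * eta
    elif pnorm < s * dnorm / delta:
        # dual residual dominates: take smaller primal steps, larger dual steps
        tau, mu, alpha = tau * (1 - alpha), mu / (1 - alpha), alpha * eta
    # otherwise the residuals are balanced and the step sizes are kept unchanged
    return tau, mu, alpha
```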
[1] Goldstein, T., Li, M., Yuan, X., Esser, E., and Baraniuk, R., "Adaptive Primal-Dual Hybrid Gradient Methods for Saddle-Point Problems", arXiv, 2013.