pyproximal.optimization.primal.LinearizedADMM
- pyproximal.optimization.primal.LinearizedADMM(proxf, proxg, A, x0, tau, mu, niter=10, callback=None, show=False)
Linearized Alternating Direction Method of Multipliers
Solves the following minimization problem using the Linearized Alternating Direction Method of Multipliers:
\[\mathbf{x} = \argmin_\mathbf{x} f(\mathbf{x}) + g(\mathbf{A}\mathbf{x})\]where \(f(\mathbf{x})\) and \(g(\mathbf{x})\) are any convex functions that have a known proximal operator and \(\mathbf{A}\) is a linear operator.
- Parameters
- proxf : pyproximal.ProxOperator
Proximal operator of f function
- proxg : pyproximal.ProxOperator
Proximal operator of g function
- A : pylops.LinearOperator
Linear operator
- x0 : numpy.ndarray
Initial vector
- tau : float, optional
Positive scalar weight, which together with mu should satisfy the following condition to guarantee convergence: \(\mu \in (0, \tau/\lambda_{max}(\mathbf{A}^H\mathbf{A})]\).
- mu : float, optional
Second positive scalar weight, constrained jointly with tau by the same convergence condition \(\mu \in (0, \tau/\lambda_{max}(\mathbf{A}^H\mathbf{A})]\) (see the usage sketch at the end of this page).
- niter : int, optional
Number of iterations of iterative scheme
- callback : callable, optional
Function with signature callback(x) to call after each iteration, where x is the current model vector
- show : bool, optional
Display iterations log
- Returns
- x : numpy.ndarray
Inverted model
- z : numpy.ndarray
Inverted second model
Notes
The Linearized-ADMM algorithm can be expressed by the following recursion:
\[\begin{split}\mathbf{x}^{k+1} = \prox_{\mu f}(\mathbf{x}^{k} - \frac{\mu}{\tau} \mathbf{A}^H(\mathbf{A} \mathbf{x}^k - \mathbf{z}^k + \mathbf{u}^k))\\ \mathbf{z}^{k+1} = \prox_{\tau g}(\mathbf{A} \mathbf{x}^{k+1} + \mathbf{u}^k)\\ \mathbf{u}^{k+1} = \mathbf{u}^{k} + \mathbf{A}\mathbf{x}^{k+1} - \mathbf{z}^{k+1}\end{split}\]
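For concreteness, a minimal NumPy sketch of this recursion is given below. It is an illustration, not the library implementation: proxf and proxg are assumed to be plain callables evaluating \(\prox_{\mu f}\) and \(\prox_{\tau g}\) respectively, and A is a dense ndarray standing in for the linear operator.

```python
import numpy as np

def linearized_admm_sketch(proxf, proxg, A, x0, tau, mu, niter=10):
    """Illustrative sketch of the Linearized-ADMM recursion (assumed helpers)."""
    x = x0.copy()
    z = np.zeros(A.shape[0], dtype=x.dtype)
    u = np.zeros(A.shape[0], dtype=x.dtype)
    for _ in range(niter):
        # x-update: gradient step on the linearized coupling term, then prox of f
        x = proxf(x - (mu / tau) * (A.conj().T @ (A @ x - z + u)))
        # z-update: prox of g at the shifted forward projection
        z = proxg(A @ x + u)
        # u-update: accumulate the constraint residual Ax - z
        u = u + A @ x - z
    return x, z
```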
Examples using pyproximal.optimization.primal.LinearizedADMM
Relaxed Mumford-Shah regularization
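The hedged sketch below shows a complete call on a small denoising problem, with \(f(\mathbf{x}) = \frac{1}{2}\|\mathbf{x} - \mathbf{y}\|_2^2\), \(g = \sigma\|\cdot\|_1\) and \(\mathbf{A}\) a first-derivative operator; the test signal, noise level, and regularization weight are illustrative assumptions. Since \(\lambda_{max}(\mathbf{A}^H\mathbf{A}) \le 4\) for a first-difference operator with unit sampling, \(\mu = \tau/4\) satisfies the convergence condition above.

```python
import numpy as np
import pylops
import pyproximal
from pyproximal.optimization.primal import LinearizedADMM

np.random.seed(0)

# Illustrative piecewise-constant signal with additive noise
n = 101
x_true = np.zeros(n)
x_true[:40], x_true[40:70] = 1.0, -0.5
y = x_true + 0.1 * np.random.randn(n)

# f(x) = 1/2 ||x - y||_2^2, g(z) = 0.5 ||z||_1, A = first derivative
proxf = pyproximal.L2(b=y)
proxg = pyproximal.L1(sigma=0.5)
Dop = pylops.FirstDerivative(n, edge=True, kind='backward')

# mu must lie in (0, tau / lambda_max(A^H A)]; lambda_max(D^H D) <= 4
# for a unit-sampling first difference, so mu = tau / 4 is a safe choice
tau = 1.0
mu = tau / 4.0

xinv, zinv = LinearizedADMM(proxf, proxg, Dop, np.zeros(n), tau, mu,
                            niter=200, show=False)
```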