paper/paper.md (8 additions & 8 deletions)
@@ -50,11 +50,11 @@ Currently, the following solvers are implemented:
All solvers rely on first derivatives of $f$ and $c$, and optionally on their second derivatives in the form of Hessian-vector products.
If second derivatives are not available, quasi-Newton approximations can be used.
In addition, the proximal mapping of the nonsmooth part $h$, or adequate models thereof, must be evaluated.
-At each iteration, a step is computed by solving a subproblem of the form \eqref{eq:nlp} inexactly, in which $f$, $h$, and $c$ are replaced with appropriate models about the current iterate.
+At each iteration, a step is computed by solving a subproblem of the form \eqref{eq:nlp} inexactly, in which $f$, $h$, and $c$ are replaced with appropriate models around the current iterate.
The solvers R2, R2DH and TRDH are particularly well suited to solve the subproblems, though they are general enough to solve \eqref{eq:nlp}.
-All solvers are implemented in place, so re-solves incur no allocations.
+All solvers are allocation-free, so re-solves incur no additional allocations.
To illustrate our claim of extensibility, a first version of the AL solver was implemented by an external contributor.
-Furthermore, a nonsmooth penalty approach, described in [@diouane-gollier-orban-2024] is currently being developed, that relies on the library to efficiently solve the subproblems.
+Furthermore, a nonsmooth penalty approach described in [@diouane-gollier-orban-2024], which relies on the library to solve the subproblems efficiently, is currently being developed.
<!-- ## Requirements of the ShiftedProximalOperators.jl -->
<!---->
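For readers unfamiliar with proximal mappings, the following minimal sketch (an illustration, not part of the diff) shows what "evaluating the proximal mapping of the nonsmooth part $h$" means for $h = \|\cdot\|_1$. It assumes ProximalOperators.jl; the library itself works with shifted variants of such mappings (cf. ShiftedProximalOperators.jl).

```julia
# Illustration only: the proximal mapping of h(x) = ‖x‖₁ with step γ.
using ProximalOperators

h = NormL1(1.0)          # h(x) = ‖x‖₁
x = [1.5, -0.2, 0.8]
γ = 0.5                  # proximal step length
y, hy = prox(h, x, γ)    # y minimizes u ↦ h(u) + ‖u - x‖² / (2γ); hy = h(y)
# each entry of x is soft-thresholded toward zero by λγ = 0.5
```

For the $\ell_1$ norm, `prox` reduces to component-wise soft-thresholding, which is why these evaluations are inexpensive.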
@@ -85,7 +85,7 @@ Given $f$ and $h$, the companion package [RegularizedProblems.jl](https://github
```julia
reg_nlp = RegularizedNLPModel(f, h)
```
-They can also be paired into a *Regularized Nonlinear Least-Squares Model* if $f(x) = \tfrac{1}{2} \|F(x)\|^2$ for some residual $F: \mathbb{R}^n \to \mathbb{R}^m$, in the case of the **LM** and **LMTR** solvers.
+They can also be paired into a *Regularized Nonlinear Least-Squares Model*, used by the **LM** and **LMTR** solvers, if $f(x) = \tfrac{1}{2} \|F(x)\|^2$ for some residual $F: \mathbb{R}^n \to \mathbb{R}^m$.
```julia
reg_nls = RegularizedNLSModel(F, h)
```
@@ -96,7 +96,7 @@ This design makes for a convenient source of problem instances for benchmarking
## Support for both exact and approximate Hessian
-In contrast with [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl), methods such as **R2N** and **TR** methods support exact Hessians as well as several Hessian approximations of $f$.
+In contrast to [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) methods such as **R2N** and **TR** support exact Hessians as well as several Hessian approximations of $f$.
Hessian–vector products $v \mapsto Hv$ can be obtained via automatic differentiation through [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl) or implemented manually.
Limited-memory and diagonal quasi-Newton approximations can be selected from [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl).
This design allows solvers to exploit second-order information without explicitly forming dense or sparse Hessians, which is often expensive in time and memory, particularly at large scale.
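The quasi-Newton operators mentioned above can be sketched as follows. This is an illustration assuming only LinearOperators.jl, not code from the paper: a limited-memory SR1 approximation is built, updated from a (step, gradient-difference) pair, and used purely through Hessian-vector products.

```julia
# Illustration: a limited-memory SR1 Hessian approximation B,
# accessed only through products B * v — no matrix is ever formed.
using LinearOperators

n = 4
B = LSR1Operator(n)               # starts as the identity operator
s = ones(n)                       # step s = x₊ - x
y = 2.0 .* ones(n)                # gradient difference ∇f(x₊) - ∇f(x)
push!(B, s, y)                    # SR1 secant update of B from the pair (s, y)
v = ones(n)
Bv = B * v                        # Hessian-vector product
```

Solvers receive `B` as a linear operator, so swapping the exact Hessian for an approximation requires no changes to the solver code.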
@@ -105,7 +105,7 @@ This design allows solvers to exploit second-order information without explicitl
We illustrate the capabilities of [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) on a Support Vector Machine (SVM) model with an $\ell_{1/2}^{1/2}$ penalty for image classification [@aravkin-baraldi-orban-2024].
-Below is a condensed example showing how to define and solve the problem, and perform a solve followed by a re-solve:
+Below is a condensed example showing how to define the problem and perform a solve followed by a re-solve:
@@ -129,7 +129,7 @@
We compare **TR**, **R2N**, **LM** and **LMTR** from our library on the SVM problem.
Experiments were performed on macOS (arm64) on an Apple M2 (8-core) machine, using Julia 1.11.7.
-The table reports the convergence status of each solver, the number of evaluations of $f$, the number of evaluations of $\nabla f$, the number of proximal operator evaluations, the elapsed time and the final objective value.
+The table reports the convergence status of each solver, the number of evaluations of $f$, the number of evaluations of $\nabla f$, the number of proximal operator evaluations, the elapsed time, and the final objective value.
For TR and R2N, we use limited-memory SR1 Hessian approximations.
The subproblem solver is **R2**.
@@ -144,7 +144,7 @@ Note that the final objective values differ due to the nonconvexity of the probl
However, it requires more proximal evaluations, though these are inexpensive.
**LMTR** and **LM** require the fewest function evaluations, but incur many Jacobian–vector products, and are the slowest in terms of time.
-Ongoing research aims to reduce the number of proximal evaluations.
+Ongoing research aims to reduce the number of proximal evaluations, for instance by allowing inexact proximal computations [@allaire-le-digabel-orban-2025].