On the other hand, Hessian approximations of these functions, including quasi-Newton …

Finally, nonsmooth terms $h$ can be modeled using [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which provides a broad collection of nonsmooth functions, together with [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl), which provides shifted proximal mappings for nonsmooth functions.
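
For instance, a minimal sketch of modeling $h(x) = \lambda \|x\|_1$ with the `NormL1` term and `prox` mapping of ProximalOperators.jl; the weight $\lambda = 1$ and the test point are illustrative choices, not values from the paper:

```julia
using ProximalOperators

# Nonsmooth term h(x) = λ‖x‖₁ with λ = 1.0 (illustrative value)
h = NormL1(1.0)

# Proximal mapping of h at x with step γ = 0.5:
# prox returns the minimizer y of h(y) + ‖y − x‖²/(2γ) together with h(y)
x = [1.0, -0.2, 0.7]
y, hy = prox(h, x, 0.5)
```
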
## Support for Hessians of the smooth part $f$

In contrast to the first-order methods of [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) methods such as **R2N** and **TR** support Hessians of $f$, which can significantly improve convergence rates, especially for ill-conditioned problems.

Hessians can be obtained via automatic differentiation through [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl) or supplied directly as Hessian–vector products $v \mapsto Hv$.

Because forming explicit dense (or sparse) Hessians is often prohibitively expensive in both computation and memory, particularly in high-dimensional settings, these Hessians can be represented as linear operators via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl), which require only efficient Hessian–vector products, obtained through automatic differentiation tools or limited-memory quasi-Newton updates.
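
As a brief sketch of both routes, assuming the NLPModels.jl API (`hprod` and `hess_op` are standard there); the objective below is only an illustration:

```julia
using ADNLPModels, NLPModels

# Smooth part f modeled with automatic differentiation
f(x) = sum((x .- 1).^2)
nlp = ADNLPModel(f, zeros(10))

x = rand(10); v = rand(10)

# Hessian–vector product v ↦ ∇²f(x)v, computed without forming ∇²f(x)
Hv = hprod(nlp, x, v)

# The same product wrapped as a LinearOperator, usable by solvers
H = hess_op(nlp, x)
Hv2 = H * v
```
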
## Requirements of the RegularizedProblems.jl package

The package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl) …

```julia
reg_nlp = RegularizedNLPModel(f, h)
```

This design makes it a convenient source of reproducible problem instances for testing and benchmarking algorithms in the repository [@diouane-habiboullah-orban-2024;@aravkin-baraldi-orban-2022;@aravkin-baraldi-orban-2024;@leconte-orban-2023-2].
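
For instance, a hypothetical sketch of building one such instance; `bpdn_model` (a basis-pursuit denoising constructor from RegularizedProblems.jl) and its return values are assumptions here, and the ℓ₁ weight is illustrative:

```julia
using RegularizedProblems, ProximalOperators

# Assumed: bpdn_model returns a smooth model, a least-squares variant,
# and the exact solution used to generate the data
model, nls_model, sol = bpdn_model()

# Pair the smooth part with an ℓ₁ term to obtain a regularized problem
h = NormL1(1.0)
reg_nlp = RegularizedNLPModel(model, h)
```
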
## Requirements of the ShiftedProximalOperators.jl package

Solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) …

This is crucial for large-scale problems where exact subproblem solutions are prohibitive.

## In-place methods

All solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) are implemented in an in-place fashion, minimizing memory allocations during the resolution process.
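
As a hypothetical sketch of what this enables, assuming the SolverCore.jl convention of a preallocated solver workspace and statistics object (the solver type name `R2NSolver` is an assumption, not a documented identifier):

```julia
using RegularizedOptimization, SolverCore

# Assumed solver type following SolverCore.jl conventions: the
# workspace and the stats container are allocated once ...
solver = R2NSolver(reg_nlp)             # hypothetical constructor
stats = GenericExecutionStats(reg_nlp)  # reusable stats (SolverCore.jl)

# ... and reused on every solve, so the iteration loop allocates nothing
solve!(solver, reg_nlp, stats)
```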