
Commit 0f5a89c

Enhance documentation by refining the section on Hessians, clarifying support for second-order information
1 parent f523011 commit 0f5a89c

1 file changed

Lines changed: 5 additions & 12 deletions


paper/paper.md

@@ -72,12 +72,11 @@ On the other hand, Hessian approximations of these functions, including quasi-Ne
 
 Finally, nonsmooth terms $h$ can be modeled using [ProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ProximalOperators.jl), which provides a broad collection of nonsmooth functions, together with [ShiftedProximalOperators.jl](https://github.com/JuliaSmoothOptimizers/ShiftedProximalOperators.jl), which provides shifted proximal mappings for nonsmooth functions.
 
-This modularity makes it easy to benchmark existing solvers available in the repository [@diouane-habiboullah-orban-2024;@aravkin-baraldi-orban-2022;@aravkin-baraldi-orban-2024;@leconte-orban-2023-2].
+## Support for Hessians of the smooth part $f$
 
-## Support for Hessians
-
-In contrast to first-order methods package like [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) enables the use of second-order information, which can significantly improve convergence rates, especially for ill-conditioned problems.
-A way to use Hessians is via automatic differentiation tools such as [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl).
+In contrast to the first-order package [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), methods in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) such as **R2N** and **TR** support Hessians of $f$, which can significantly improve convergence rates, especially for ill-conditioned problems.
+Hessians can be obtained via automatic differentiation through [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl) or supplied directly as Hessian–vector products $v \mapsto Hv$.
+Forming explicit dense (or sparse) Hessians is often prohibitively expensive in both computation and memory, particularly in high-dimensional settings, which makes the operator form $v \mapsto Hv$ the more scalable choice.
 
 ## Requirements of the RegularizedProblems.jl package
 
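As a brief illustration of the paragraph added above: Hessian–vector products of a smooth model built with ADNLPModels.jl are available through the standard NLPModels API. This is a minimal sketch; the quadratic objective below is a toy example, not taken from the paper.

```julia
using ADNLPModels, NLPModels

# Toy smooth term f(x) = ½‖x‖², modeled with automatic differentiation
nlp = ADNLPModel(x -> 0.5 * sum(x .^ 2), ones(3))

v  = [1.0, 2.0, 3.0]
Hv = hprod(nlp, nlp.meta.x0, v)  # ∇²f(x₀)·v; equals v here since ∇²f = I
```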
@@ -88,7 +87,7 @@ The package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/Re
 reg_nlp = RegularizedNLPModel(f, h)
 ```
 
-This design makes it a convenient source of reproducible problem instances for testing and benchmarking algorithms in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl).
+This design makes it a convenient source of reproducible problem instances for testing and benchmarking algorithms in the repository [@diouane-habiboullah-orban-2024;@aravkin-baraldi-orban-2022;@aravkin-baraldi-orban-2024;@leconte-orban-2023-2].
 
 ## Requirements of the ShiftedProximalOperators.jl package
 
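For concreteness, a sketch of the `RegularizedNLPModel(f, h)` pattern shown in the snippet above, assuming a toy smooth model and the ℓ₁ norm from ProximalOperators.jl (the objective and data here are illustrative only):

```julia
using ADNLPModels, ProximalOperators, RegularizedProblems

# Toy smooth part: a small least-squares objective
f = ADNLPModel(x -> 0.5 * sum((x .- 1) .^ 2), zeros(4))
h = NormL1(1.0)  # nonsmooth ℓ₁ regularizer

reg_nlp = RegularizedNLPModel(f, h)

# The proximal mapping of h comes from ProximalOperators.jl
y, hy = prox(h, ones(4), 0.5)  # soft-thresholding of ones(4) with threshold 0.5
```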
@@ -132,12 +131,6 @@ Solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers
 
 This is crucial for large-scale problems where exact subproblem solutions are prohibitive.
 
-## Support for Hessians as Linear Operators
-
-The second-order methods in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) can use Hessian approximations represented as linear operators via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl).
-Explicitly forming Hessians as dense or sparse matrices is often prohibitively expensive, both computationally and in terms of memory, especially in high-dimensional settings.
-In contrast, many problems admit efficient implementations of Hessian–vector or Jacobian–vector products, either through automatic differentiation tools or limited-memory quasi-Newton updates, making the linear-operator approach more scalable and practical.
-
 ## In-place methods
 
 All solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) are implemented in an in-place fashion, minimizing memory allocations during the resolution process.
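To illustrate the linear-operator representation discussed in the removed section: a minimal sketch, assuming a limited-memory BFGS approximation from LinearOperators.jl and made-up correction pairs.

```julia
using LinearOperators

n = 5
H = LBFGSOperator(n)    # limited-memory quasi-Newton Hessian approximation as an operator
s = ones(n); y = ones(n)
push!(H, s, y)          # update the approximation with a correction pair (s, y)

v  = rand(n)
Hv = H * v              # Hessian–vector product; no n×n matrix is ever formed
```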

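A hedged sketch of what the in-place pattern typically looks like in the JuliaSmoothOptimizers ecosystem; the `R2Solver` constructor name is an assumption following SolverCore.jl conventions and is not confirmed by this commit.

```julia
using RegularizedOptimization, SolverCore

# Hypothetical in-place usage: solver workspace and stats are preallocated
# once, then reused across solves. `R2Solver` is an assumed name here.
solver = R2Solver(reg_nlp)
stats  = GenericExecutionStats(reg_nlp)
solve!(solver, reg_nlp, stats)  # fills `stats` in place, avoiding fresh allocations
```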