diff --git a/.github/ISSUE_TEMPLATE/10-bug-report.yml b/.github/ISSUE_TEMPLATE/10-bug-report.yml
index 9ac2b59..b93c5b6 100644
--- a/.github/ISSUE_TEMPLATE/10-bug-report.yml
+++ b/.github/ISSUE_TEMPLATE/10-bug-report.yml
@@ -11,7 +11,7 @@ body:
         Please, before submitting, make sure that:
         - There is not an [existing issue](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/issues) with the same question
-        - You have read the [contributing guide](https://JuliaSmoothOptimizers.github.io/DCISolver.jl/dev/90-contributing/)
+        - You have read the [contributing guide](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/docs/src/90-contributing.md)
         - You are following the [code of conduct](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/CODE_OF_CONDUCT.md)
 
         The form below should help you in filling out this issue.
   - type: textarea
diff --git a/.github/ISSUE_TEMPLATE/20-feature-request.yml b/.github/ISSUE_TEMPLATE/20-feature-request.yml
index 2958ca1..9c3159a 100644
--- a/.github/ISSUE_TEMPLATE/20-feature-request.yml
+++ b/.github/ISSUE_TEMPLATE/20-feature-request.yml
@@ -9,7 +9,7 @@ body:
         Please, before submitting, make sure that:
         - There is not an [existing issue](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/issues) with the same question
-        - You have read the [contributing guide](https://JuliaSmoothOptimizers.github.io/DCISolver.jl/dev/90-contributing/)
+        - You have read the [contributing guide](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/docs/src/90-contributing.md)
         - You are following the [code of conduct](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/CODE_OF_CONDUCT.md)
 
         The form below should help you in filling out this issue.
   - type: textarea
diff --git a/.github/ISSUE_TEMPLATE/30-usage.yml b/.github/ISSUE_TEMPLATE/30-usage.yml
index 3de16c3..669cdd1 100644
--- a/.github/ISSUE_TEMPLATE/30-usage.yml
+++ b/.github/ISSUE_TEMPLATE/30-usage.yml
@@ -11,7 +11,7 @@ body:
         - You have checked the [documentation](https://JuliaSmoothOptimizers.github.io/DCISolver.jl) and haven't found enough information
         - There is not an [existing issue](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/issues) with the same question
-        - You have read the [contributing guide](https://JuliaSmoothOptimizers.github.io/DCISolver.jl/dev/90-contributing/)
+        - You have read the [contributing guide](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/docs/src/90-contributing.md)
         - You are following the [code of conduct](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/CODE_OF_CONDUCT.md)
 
         The form below should help you in filling out this issue.
   - type: textarea
diff --git a/.github/ISSUE_TEMPLATE/99-general.yml b/.github/ISSUE_TEMPLATE/99-general.yml
index 2fc024f..b39a2a4 100644
--- a/.github/ISSUE_TEMPLATE/99-general.yml
+++ b/.github/ISSUE_TEMPLATE/99-general.yml
@@ -9,7 +9,7 @@ body:
         Please, before submitting, make sure that:
         - There is not an [existing issue](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/issues) with the same question
-        - You have read the [contributing guide](https://JuliaSmoothOptimizers.github.io/DCISolver.jl/dev/90-contributing/)
+        - You have read the [contributing guide](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/docs/src/90-contributing.md)
         - You are following the [code of conduct](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/CODE_OF_CONDUCT.md)
 
         The form below should help you in filling out this issue.
   - type: textarea
diff --git a/.lychee.toml b/.lychee.toml
index 6a232a9..c72b3d3 100644
--- a/.lychee.toml
+++ b/.lychee.toml
@@ -3,6 +3,7 @@ exclude = [
   "@cite",
   "^https://github.com/.*/releases/tag/v.*$",
   "^https://doi.org/FIXME$",
+  "^https://doi.org/10.1137/070679557$",
   "^https://JuliaSmoothOptimizers.github.io/DCISolver.jl/stable$",
   "zenodo.org/badge/DOI/FIXME$"
 ]
diff --git a/README.md b/README.md
index 8532874..b9b4062 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@ optimization problems of the form
 
 It uses other JuliaSmoothOptimizers packages for development.
 In particular, [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) is used for defining the problem, and [SolverCore](https://github.com/JuliaSmoothOptimizers/SolverCore.jl) for the output.
-It uses [LDLFactorizations.jl](https://github.com/JuliaSmoothOptimizers/LDLFactorizations.jl) by default to compute the factorization in the tangent step. [HSL.jl](https://github.com/JuliaSmoothOptimizers/HSL.jl) provides alternative linear solvers if [libHSL](https://licences.stfc.ac.uk/product/libhsl) can be downloaded.
+It uses [LDLFactorizations.jl](https://github.com/JuliaSmoothOptimizers/LDLFactorizations.jl) by default to compute the factorization in the tangent step. [HSL.jl](https://github.com/JuliaSmoothOptimizers/HSL.jl) provides alternative linear solvers if libHSL can be downloaded.
 The feasibility steps are factorization-free and use iterative methods from [Krylov.jl](https://github.com/JuliaSmoothOptimizers/Krylov.jl)
 
 ## References
@@ -39,7 +39,7 @@ If you use DCISolver.jl in your work, please cite using the reference given in [
 
 ## Contributing
 
-If you want to make contributions of any kind, please first that a look into our [contributing guide directly on GitHub](docs/src/90-contributing.md) or the [contributing page on the website](https://JuliaSmoothOptimizers.github.io/DCISolver.jl/dev/90-contributing/)
+If you want to make contributions of any kind, please first take a look at our [contributing guide directly on GitHub](docs/src/90-contributing.md).
 
 ---
diff --git a/docs/src/2-benchmark.md b/docs/src/2-benchmark.md
index d288a2e..1ce192f 100644
--- a/docs/src/2-benchmark.md
+++ b/docs/src/2-benchmark.md
@@ -35,7 +35,7 @@ cutest_problems = (CUTEstModel(p) for p in pnames)
 length(cutest_problems) # number of problems
 ```
 
-We compare here DCISolver with [Ipopt](https://link.springer.com/article/10.1007/s10107-004-0559-y) (Wächter, A., & Biegler, L. T. (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical programming, 106(1), 25-57.), via the [NLPModelsIpopt.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl) thin wrapper, with DCISolver on a subset of CUTEst problems.
+We compare DCISolver with [Ipopt](https://doi.org/10.1007/s10107-004-0559-y) (Wächter, A., & Biegler, L. T. (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1), 25-57.), via the [NLPModelsIpopt.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl) thin wrapper, on a subset of CUTEst problems.
 
 ```@example ex1
 using DCISolver, NLPModelsIpopt
@@ -135,7 +135,7 @@ print_pp_column(:neval_jac, stats) # with respect to number of jacobian evaluations
 
 ## CUTEst benchmark with Knitro
 
-In this second part, we present the result of a similar benchmark with a maximum of 10000 variables and constraints (82 problems), and including the solver [`KNITRO`](https://link.springer.com/chapter/10.1007/0-387-30065-1_4) (Byrd, R. H., Nocedal, J., & Waltz, R. A. (2006). K nitro: An integrated package for nonlinear optimization. In Large-scale nonlinear optimization (pp. 35-59). Springer, Boston, MA.) via [`NLPModelsKnitro.jl`](https://github.com/JuliaSmoothOptimizers/NLPModelsKnitro.jl). The script is included in [/benchmark/script10000_knitro.jl)](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/benchmark/script10000_knitro.jl). We report here a performance profile with respect
-to the elapsed time to solve the problems and to the sum of evaluations of objective and constrain functions, see [/benchmark/figures.jl)](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/benchmark/figures.jl) for the code generating the profile wall.
+In this second part, we present the results of a similar benchmark with a maximum of 10000 variables and constraints (82 problems), including the solver [`KNITRO`](https://doi.org/10.1007/0-387-30065-1_4) (Byrd, R. H., Nocedal, J., & Waltz, R. A. (2006). KNITRO: An integrated package for nonlinear optimization. In Large-Scale Nonlinear Optimization (pp. 35-59). Springer, Boston, MA.) via [`NLPModelsKnitro.jl`](https://github.com/JuliaSmoothOptimizers/NLPModelsKnitro.jl). The script is included in [/benchmark/script10000_knitro.jl](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/benchmark/script10000_knitro.jl). We report here a performance profile with respect
+to the elapsed time to solve the problems and to the sum of evaluations of the objective and constraint functions; see [/benchmark/figures.jl](https://github.com/JuliaSmoothOptimizers/DCISolver.jl/blob/main/benchmark/figures.jl) for the code generating the profile wall.
 
 ![Benchmark](./assets/ipopt_knitro_dcildl_82.png)
diff --git a/docs/src/91-developer.md b/docs/src/91-developer.md
index 8c78363..fface7a 100644
--- a/docs/src/91-developer.md
+++ b/docs/src/91-developer.md
@@ -111,7 +111,6 @@ We try to keep a linear history in this repo, so it is important to keep your br
 ### Before creating a pull request
 
 !!! tip "Atomic git commits"
-    Try to create "atomic git commits" (recommended reading: [The Utopic Git History](https://blog.esciencecenter.nl/the-utopic-git-history-d44b81c09593)).
 
 - Make sure the tests pass.
 - Make sure the pre-commit tests pass.
diff --git a/docs/src/index.md b/docs/src/index.md
index 40fafcf..39a3668 100644
--- a/docs/src/index.md
+++ b/docs/src/index.md
@@ -37,7 +37,7 @@ We refer to [jso.dev](https://jso.dev) for tutorials on the NLPModel API. This f
 
 The DCI algorithm is an iterative method that has the flavor of a projected gradient algorithm and could be characterized as a relaxed feasible point method with dynamic control of infeasibility.
 It is a combination of two steps: a tangent step and a feasibility step.
-It uses [LDLFactorizations.jl](https://github.com/JuliaSmoothOptimizers/LDLFactorizations.jl) by default to compute the factorization in the tangent step. [HSL.jl](https://github.com/JuliaSmoothOptimizers/HSL.jl) provides alternative linear solvers if [libHSL](https://licences.stfc.ac.uk/product/libhsl) can be downloaded.
+It uses [LDLFactorizations.jl](https://github.com/JuliaSmoothOptimizers/LDLFactorizations.jl) by default to compute the factorization in the tangent step. [HSL.jl](https://github.com/JuliaSmoothOptimizers/HSL.jl) provides alternative linear solvers if libHSL can be downloaded.
 The feasibility steps are factorization-free and use iterative methods from [Krylov.jl](https://github.com/JuliaSmoothOptimizers/Krylov.jl).
 
 ## Example
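The README and documentation excerpts patched above describe DCISolver's workflow: define an equality-constrained problem through the NLPModels API, then solve it. A minimal usage sketch, outside the patch itself, might look as follows; the problem data here is purely illustrative, and it assumes the standard `ADNLPModel` constructor from ADNLPModels.jl together with the `dci` entry point exported by DCISolver.jl:

```julia
using ADNLPModels, DCISolver

# Illustrative equality-constrained problem:
#   min (x₁ - 1)² + 4 (x₂ - x₁²)²   s.t.   x₁² + x₂² = 2
nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + 4 * (x[2] - x[1]^2)^2,  # objective f(x)
  [-1.2; 1.0],                                 # starting point x₀
  x -> [x[1]^2 + x[2]^2 - 2],                  # constraint c(x)
  [0.0], [0.0]                                 # lcon = ucon ⇒ equality constraint
)

stats = dci(nlp)        # returns a SolverCore.GenericExecutionStats
println(stats.status)   # termination status
println(stats.solution) # final iterate
```

Setting `lcon == ucon` encodes `c(x) = 0`, which is the problem class DCI targets; the tangent step then factorizes with LDLFactorizations.jl (or an HSL solver when available), while the feasibility step stays factorization-free via Krylov.jl.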