ExaModels now supports and by default calls NLPModels.obj, etc., asynchronously for GPU callbacks. I wonder if it would make sense to have a separate API, or a way to query whether the callbacks for a given model::AbstractNLPModel run asynchronously. Not particularly suggesting anything, just raising the question.
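For concreteness, one shape such a query could take is a trait-style function with a conservative default. This is only a hedged sketch: `is_async`, `GPUModel`, and the fallback value are all hypothetical and do not exist in NLPModels today; the abstract type is a local stand-in for `NLPModels.AbstractNLPModel`.

```julia
# Local stand-in for NLPModels.AbstractNLPModel, to keep the sketch
# self-contained.
abstract type AbstractNLPModel end

# Hypothetical trait: by default, assume callbacks run synchronously.
is_async(::AbstractNLPModel) = false

# A GPU-backed model type (stand-in for something like an ExaModel on a
# CUDA backend) could opt in by overloading the trait.
struct GPUModel <: AbstractNLPModel end
is_async(::GPUModel) = true

# A solver could then branch before calling obj / grad!:
#   is_async(model) ? launch_then_synchronize(model) : call_directly(model)
```

The appeal of a trait over a separate API is that existing solvers keep working unchanged (the default is `false`), while solvers that can overlap work with asynchronous callbacks can detect and exploit it.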