Handle out-of-memory scenarios gracefully instead of crashing the app when the ExecuTorch runtime fails to allocate memory for model operations.
Current State
- App crashes when loading large models on memory-constrained devices
- The ExecuTorch runtime surfaces unrecoverable errors during memory allocation
- No way to check available memory before loading a model
- Users have no warning before crash occurs
Goals
- Catch memory allocation failures before they crash the app
- Provide memory estimation API so users can check before loading