Non-adherence to resource parameters #3873

@fruce-ki

Description

It seems that the parameters for multithreading and RAM are not obeyed throughout.

The screenshot of my resource monitor is from a sorting run with `global_job_kwargs = dict(n_jobs=1, mp_context='spawn', chunk_memory='10G', total_memory=None)` as the overall settings, and `'n_jobs': 1` and `'memory_limit': 0.5` in the SC2 (SpyKING CIRCUS 2) sorter settings.
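To be concrete, this is roughly what my configuration looks like (a minimal sketch with plain dict literals; in my actual script these are passed to SpikeInterface, e.g. via `set_global_job_kwargs` and `run_sorter`):

```python
# Global job kwargs applied to all parallelized SpikeInterface steps.
global_job_kwargs = dict(
    n_jobs=1,            # a single worker process
    mp_context="spawn",  # spawn-based multiprocessing context
    chunk_memory="10G",  # cap on memory per processed chunk
    total_memory=None,   # no overall cap (chunk_memory is used instead)
)

# SC2 (SpyKING CIRCUS 2) sorter-level settings.
sorter_params = {
    "n_jobs": 1,
    "memory_limit": 0.5,  # fraction of total RAM the sorter may use
}

print(global_job_kwargs["chunk_memory"], sorter_params["memory_limit"])
```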

[Image: resource monitor screenshot showing near-full CPU and RAM usage]

On a workstation with 20 physical Intel cores and >300 GB of RAM, this run should barely have registered on the resource monitor if the resource restrictions were respected. But apparently some stages of the execution ask for, and receive, 100% of the resources. The shot was taken during SC2, but I see similar behaviour with the preprocessing and postprocessing modules.

I've tried `'memory_limit': 0.1` without much difference: right now I'm sitting at 80% RAM during the find-spikes step (circus-omp-svd, no parallelization), when the allowed RAM should be just 10% if I understand correctly.
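A back-of-the-envelope check of the gap, assuming (as I understand it) that `memory_limit` is a fraction of total system RAM:

```python
# Compare the RAM that 'memory_limit' should allow against what I observe.
total_ram_gb = 300          # workstation RAM (actually slightly more)
memory_limit = 0.1          # the fraction I set
observed_fraction = 0.8     # what the resource monitor currently shows

allowed_gb = total_ram_gb * memory_limit
observed_gb = total_ram_gb * observed_fraction

print(f"allowed: ~{allowed_gb:.0f} GB, observed: ~{observed_gb:.0f} GB")
# → allowed: ~30 GB, observed: ~240 GB
```

So the run is using roughly eight times what I believe the setting permits.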

This is quite a problem for me. On the one hand, it often oversubscribes resources and I come back to a killed terminal session, making it impossible to reliably queue up many recordings for processing overnight or over the weekend. On the other hand, it prevents me from running any other resource-intensive tasks while processing a recording, because I know those resources will at some point become unavailable and cause things to crash.

Am I doing something wrong with my parameters? Is there a way to keep SI from taking up all the resources and instead take up only a predictable amount that would allow me to run other stuff in parallel with this?
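If it helps diagnose this: one workaround I'm considering is capping library-level thread pools via environment variables before any imports happen (purely an assumption on my part that BLAS/OpenMP threads spawned inside the workers, which don't know about `n_jobs`, are part of the oversubscription). Would something like this be expected to help?

```python
# Cap the thread pools of common numerical backends (OpenMP, OpenBLAS,
# MKL, numexpr). These must be set before numpy/scipy are first imported,
# because the pools are sized at import time.
import os

for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS",
            "MKL_NUM_THREADS", "NUMEXPR_NUM_THREADS"):
    os.environ[var] = "1"

print(os.environ["OMP_NUM_THREADS"])  # → 1
```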

I'm on version 0.102.2.

Labels: concurrency (Related to parallel processing)
