Closed
fix: improve error handling when `make_fetch` referential integrity fails
dimitri-yatsenko
requested changes
Jun 20, 2025
        to be passed down to each ``make()`` call. Computation arguments should be
        specified within the pipeline e.g. using a `dj.Lookup` table.
    :type make_kwargs: dict, optional
    :param schedule_jobs: if True, run schedule_jobs before doing populate (default: True),
Member
Rather than baking this operation into `populate`, which makes the logic more convoluted, consider making `schedule_jobs` a separate, explicit process.
        )
    finally:
        if purge_invalid_jobs:
            self.purge_invalid_jobs()
Member
Rather than purging jobs from within `schedule_jobs`, consider implementing it as a separate explicit step, reducing interdependencies.
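As an illustration of this suggestion, the decoupled orchestration might look like the following sketch. The method names come from this PR's proposal; the driver function itself is hypothetical, not part of DataJoint.

```python
# Hypothetical driver illustrating the reviewer's suggestion: each maintenance
# step is invoked explicitly and separately, rather than nested inside another.
def run_pipeline_step(table):
    table.schedule_jobs()         # discover new work and queue it as 'scheduled'
    table.populate()              # run make() on the scheduled keys
    table.purge_invalid_jobs()    # explicit cleanup, not hidden inside schedule_jobs
```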
This PR is stale because it has been open for 45 days with no activity.
Member
This PR introduces significant changes to the logic of DataJoint's jobs reservation/orchestration scheme - namely the `autopopulate` mechanism. The PR aims to address the issue described in #1243, following proposed solution 1.

I have tested this new autopopulate 2.0 mechanism in some production pipeline settings, and it works great!

In short, the new logic is outlined below.
Enhancing the Jobs Table in DataJoint-Python

To address current limitations, we'll enhance the jobs table by introducing new job statuses and modifying the `populate()` logic. This approach aims to improve efficiency and maintain data freshness.

Modifying the Jobs Table

Expand the job statuses within the jobs table to include:

- `scheduled`: for jobs that are identified and queued for execution.
- `success`: to record jobs that have completed without errors.

Dedicated `schedule_jobs` Step

Introduce a new, dedicated step called `schedule_jobs`. This method will be responsible for populating the jobs table with new entries marked as `scheduled`.

- It uses `(table.key_source - table).fetch("KEY")` to identify new jobs. While this operation can be computationally expensive, it mirrors the current approach for job discovery.
- `schedule_jobs` will include configurable rate-limiting logic. For instance, it can skip scheduling if the most recent scheduling event occurred within a defined time period (e.g., 10 seconds).
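A minimal, self-contained sketch of this scheduling logic, using a plain Python dict in place of the database-backed jobs table (all names here are illustrative, not DataJoint's actual implementation):

```python
import time

_last_schedule_time = 0.0  # module-level timestamp used for rate limiting

def schedule_jobs(jobs, key_source, computed_keys, min_interval=10.0):
    """Queue not-yet-computed keys as 'scheduled'; skip if called again too soon.

    `jobs` maps key -> status; `key_source` and `computed_keys` stand in for
    table.key_source and the table's already-populated entries.
    """
    global _last_schedule_time
    now = time.time()
    if now - _last_schedule_time < min_interval:
        return 0  # rate-limited: a scheduling event happened too recently
    _last_schedule_time = now
    # Mirrors (table.key_source - table).fetch("KEY"): keys with no result yet.
    new_keys = [k for k in key_source if k not in computed_keys and k not in jobs]
    for k in new_keys:
        jobs[k] = "scheduled"
    return len(new_keys)
```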
New `populate()` Logic

The `populate()` function will be updated to:

- `schedule_jobs` can be called at the beginning of the `populate()` process to ensure the jobs table is up-to-date before work commences.
- Instead of querying `key_source`, `populate()` will fetch keys directly from the jobs table that have a `scheduled` status.
- For each fetched key, `make()` will be called. Upon completion, the job's status in the jobs table will be updated to either `error` or `success`.

Addressing Stale or Out-of-Sync Jobs Data
The jobs table can become stale or out-of-sync if not updated frequently or if upstream data changes; `success` jobs can also become "invalid."

`purge_invalid_jobs` Method: To handle this, a new `purge_invalid_jobs` method will be added. This method will identify and remove these invalid entries from the jobs table, ensuring data integrity.

Keeping the Jobs Table "Fresh"
Maintaining a "fresh" jobs table is crucial for efficient operations:

- `schedule_jobs` will ensure that new tasks are promptly added to the queue.
- `purge_invalid_jobs` will keep the table clean and free of irrelevant or invalid entries.

Trade-off: Both `schedule_jobs` and `purge_invalid_jobs` will involve hitting `key_source`, which can be resource-intensive. Users (or system administrators) will need to balance the desired level of "freshness" against the associated resource consumption to optimize performance.

For a more detailed description of the new logic, see here
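The populate/purge flow described above can be sketched in the same toy style, with a plain dict standing in for the jobs table (none of this is DataJoint's actual code; names are illustrative):

```python
def populate(jobs, make):
    """Run make() on every 'scheduled' key and record 'success' or 'error'."""
    for key, status in list(jobs.items()):
        if status != "scheduled":
            continue
        try:
            make(key)            # the table's make() computes and inserts results
            jobs[key] = "success"
        except Exception:
            jobs[key] = "error"  # real code would also record the error message

def purge_invalid_jobs(jobs, key_source):
    """Drop entries whose keys no longer appear upstream (stale/invalid jobs)."""
    for key in [k for k in jobs if k not in key_source]:
        del jobs[key]
```

Running `schedule_jobs`, then `populate`, then `purge_invalid_jobs` as separate explicit steps lets operators trade freshness against the cost of hitting `key_source`, as discussed above.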