
Fixes #23646: Fix memory leak in scan_dags_job_background by adding singleton guard #27057

Open

RajdeepKushwaha5 wants to merge 10 commits into open-metadata:main from RajdeepKushwaha5:fix/scan-dags-memory-leak-singleton

Conversation

RajdeepKushwaha5 (Contributor) commented Apr 5, 2026

Describe your changes:

Fixes #23646

Each time an ingestion pipeline is deployed via the OpenMetadata UI, scan_dags_job_background() spawns a new multiprocessing.Process (ScanDagsTask). Each process:

  1. Imports the entire Airflow scheduler stack (~120Mi of memory)
  2. Creates a SchedulerJob with heartrate=0, which the main scheduler marks as "failed"
  3. Is never join()ed by the parent — so it becomes a zombie process whose memory is never released

After N deploys, the webserver pod accumulates N × ~120Mi of leaked memory and N orphaned "failed" SchedulerJob entries in the Airflow database.

Fix: Add a per-worker singleton guard with a reaper thread to scan_dags_job_background():

  • Per-worker guard — a threading.Lock + module-level _current_scan reference prevents spawning multiple concurrent scan processes from the same Python worker
  • Reaper thread — each scan spawns a lightweight daemon thread (_reap_scan) that join()s the process when it finishes, releasing resources and preventing zombies
  • Deferred rescan — if a deploy arrives while a scan is already running, a _rescan_requested flag is set; the reaper automatically starts one follow-up scan after the current one completes, ensuring newly deployed DAGs are always discovered
  • Race-safe — the rescan check happens inside the "_current_scan is process" identity guard, so a stale reaper (whose process was already replaced) cannot spawn duplicates
  • No daemon=True on ScanDagsTask — Airflow's scheduler internals fork child processes to parse DAGs, which Python forbids from daemon processes (AssertionError: daemonic processes are not allowed to have children). The reaper thread is daemonized instead.

Before (broken):

def scan_dags_job_background():
    process = ScanDagsTask()
    process.start()
    # process is never joined — zombie, memory leaked

After (fixed):

_scan_lock = threading.Lock()
_current_scan: Optional[ScanDagsTask] = None
_rescan_requested: bool = False

def _start_scan():
    """Start a new ScanDagsTask and spawn a reaper thread to join it."""
    global _current_scan, _rescan_requested
    _rescan_requested = False
    process = ScanDagsTask()
    process.start()
    _current_scan = process
    reaper = threading.Thread(target=_reap_scan, args=(process,), daemon=True)
    reaper.start()

def _reap_scan(process: ScanDagsTask):
    """Wait for the scan process to finish; start a follow-up if requested."""
    process.join()
    with _scan_lock:
        global _current_scan
        if _current_scan is process:
            _current_scan = None
            if _rescan_requested:
                logger.info("Running queued rescan after previous scan finished")
                _start_scan()

def scan_dags_job_background():
    with _scan_lock:
        if _current_scan is not None and _current_scan.is_alive():
            global _rescan_requested
            _rescan_requested = True
            logger.info("DAG scan already in progress, queued rescan")
            return
        _start_scan()

2 files changed: utils.py (+45, -4), test_scan_dags_singleton.py (new, 7 test cases).

Type of change:

  • Bug fix

Checklist:

  • I have read the CONTRIBUTING document.
  • My PR title is Fixes #23646: Fix memory leak in scan_dags_job_background by adding singleton guard
  • I have commented on my code, particularly in hard-to-understand areas.
  • For JSON Schema changes: I updated the migration scripts or explained why it is not needed.
  • I have added a test that covers the exact scenario we are fixing. For complex issues, comment the issue number in the test for future reference.

Note on testing: The memory leak requires a running Airflow webserver + Kubernetes pod to reproduce (deploy pipeline N times, monitor memory via kubectl top pod). Unit tests in tests/unit/test_scan_dags_singleton.py verify the singleton guard logic by mocking ScanDagsTask:

  1. First call starts a process
  2. Concurrent call while scan is alive sets rescan flag (no duplicate spawn)
  3. Finished scan gets replaced on next call
  4. Reaper thread triggers follow-up scan when rescan was requested
  5. Reaper clears state when no follow-up is needed
  6. Process is never created with daemon=True (Airflow children would crash)
  7. Stale reaper cannot spawn duplicates (race condition guard)
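
The scenarios above can be exercised against a self-contained replica of the guard logic (this sketch does not import the PR's module; FakeScan, ScanGuard, and the deploy() return values are test scaffolding invented here, standing in for ScanDagsTask and scan_dags_job_background()):

```python
import threading
import time

class FakeScan:
    """Process double: stays 'alive' until finish() is called."""
    def __init__(self):
        self._done = threading.Event()
        self.daemon = False          # scenario 6: must never become True
    def start(self): pass
    def is_alive(self): return not self._done.is_set()
    def join(self, timeout=None): self._done.wait(timeout)
    def finish(self): self._done.set()

class ScanGuard:
    """Miniature replica of the guard + reaper + deferred-rescan logic."""
    def __init__(self):
        self._lock = threading.Lock()
        self.current = None
        self.rescan_requested = False

    def deploy(self):
        with self._lock:
            if self.current is not None and self.current.is_alive():
                self.rescan_requested = True   # scenario 2: queue, don't spawn
                return "queued"
            self._start()
            return "started"

    def _start(self):
        # must be called with self._lock held
        self.rescan_requested = False
        proc = FakeScan()
        proc.start()
        self.current = proc
        threading.Thread(target=self._reap, args=(proc,), daemon=True).start()

    def _reap(self, proc):
        proc.join()
        with self._lock:
            if self.current is proc:           # scenario 7: stale reaper no-ops
                self.current = None
                if self.rescan_requested:      # scenario 4: queued follow-up
                    self._start()

guard = ScanGuard()
assert guard.deploy() == "started"             # scenario 1: first call spawns
first = guard.current
assert guard.deploy() == "queued"              # scenario 2: no duplicate spawn
first.finish()
for _ in range(200):                           # reaper runs asynchronously
    if guard.current is not None and guard.current is not first:
        break
    time.sleep(0.01)
assert guard.current is not first              # scenario 4: follow-up started
```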

…d by adding singleton guard

scan_dags_job_background() spawns a new multiprocessing.Process per deploy
call. Each process imports the full Airflow scheduler stack (~120Mi) and is
never join()ed, so zombie processes accumulate and memory is never released.

Fix: track the running process with a threading.Lock, join() the previous
process before starting a new one, skip if a scan is already in progress,
and set daemon=True so zombies are cleaned up on parent exit.
Copilot AI review requested due to automatic review settings April 5, 2026 20:15
github-actions bot (Contributor) commented Apr 5, 2026

Hi there 👋 Thanks for your contribution!

The OpenMetadata team will review the PR shortly! Once it has been labeled as safe to test, the CI workflows
will start executing and we'll be able to make sure everything is working as expected.

Let us know if you need any help!

Copilot AI (Contributor) left a comment

Pull request overview

Fixes unbounded process/memory growth in the Airflow managed APIs by preventing scan_dags_job_background() from spawning a new scheduler-scan multiprocessing.Process on every deploy call.

Changes:

  • Adds a module-level lock and “current scan” process reference to guard concurrent invocations.
  • Joins the previous scan process (when finished) before starting a new one, and skips starting a new scan if one is already running.
  • Runs the scan process as a daemon and updates the function docstring to reflect the approach.

"""
process = ScanDagsTask()
process.start()
global _current_scan # noqa: PLW0603
Copilot AI commented Apr 5, 2026

# noqa: PLW0603 won’t silence pylint (this package uses # pylint: disable=... in multiple places, e.g. api/routes/health.py:35-37). If pylint is part of CI for this module, it may still flag global _current_scan; consider using the equivalent # pylint: disable=global-statement (or project-standard suppression) instead of noqa.

Suggested change:
- global _current_scan  # noqa: PLW0603
+ global _current_scan  # pylint: disable=global-statement

…dd tests

- Remove daemon=True: ScanDagsTask spawns child processes (Airflow
  scheduler internals), which is forbidden for daemon processes
- Add _rescan_requested flag: ensures deploys during an active scan
  queue a follow-up scan instead of silently dropping
- Clarify docstring: guard is per-worker, not cross-Gunicorn
- Replace noqa with pylint disable to match project conventions
- Add unit tests covering singleton guard behavior

After joining a finished scan, check _rescan_requested before starting
a new process. If no rescan was queued (flag is False), return early
instead of unconditionally spawning a new scan. This ensures deploys
that arrive during an active scan actually trigger a follow-up scan.

Updated tests to cover both paths: rescan-requested starts new scan,
no-rescan-requested returns without spawning.
Copilot AI review requested due to automatic review settings April 5, 2026 20:28

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.


…test

- Extract _start_scan() and _reap_scan() helpers from main function
- Reaper thread join()s the scan process and automatically starts a
  follow-up scan if _rescan_requested was set, ensuring deploys during
  an active scan are never lost — even without another deploy call
- Simplify scan_dags_job_background() to just guard + delegate
- Strengthen test_no_daemon_flag_on_process: assert process.daemon
  stays False after construction (catches post-init daemon=True)
- Add test_reaper_starts_follow_up_when_rescan_requested
- Add test_reaper_clears_current_scan_without_follow_up
Copilot AI review requested due to automatic review settings April 5, 2026 20:48

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

A stale reaper thread (for process A) must not trigger a rescan if
another scan (process B) has already replaced it. Move the
_rescan_requested check inside the 'if _current_scan is process'
block so only the reaper for the current scan can start a follow-up.

Add test_stale_reaper_does_not_spawn_duplicate to cover the scenario.

Copilot AI review requested due to automatic review settings April 5, 2026 20:52

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Comment on lines +217 to +223
def _start_scan():
    """Start a new ScanDagsTask and spawn a reaper thread to join it."""
    global _current_scan, _rescan_requested  # pylint: disable=global-statement
    _rescan_requested = False
    process = ScanDagsTask()
    process.start()
    _current_scan = process
Copilot AI commented Apr 5, 2026

_start_scan() mutates _current_scan/_rescan_requested but relies on callers already holding _scan_lock. To avoid accidental future calls without the lock (which would introduce races), consider either acquiring _scan_lock inside _start_scan() or adding an explicit comment/assertion that it must only be called under the lock.


github-actions bot (Contributor) commented Apr 6, 2026

🟡 Playwright Results — all passed (25 flaky)

✅ 3599 passed · ❌ 0 failed · 🟡 25 flaky · ⏭️ 207 skipped

Shard        Passed  Failed  Flaky  Skipped
🟡 Shard 1      455       0      2        2
🟡 Shard 2      640       0      3       32
🟡 Shard 3      645       0      4       26
🟡 Shard 4      621       0      6       47
🟡 Shard 5      608       0      1       67
🟡 Shard 6      630       0      9       33
🟡 25 flaky test(s) (passed on retry)
  • Pages/AuditLogs.spec.ts › should apply both User and EntityType filters simultaneously (shard 1, 1 retry)
  • Pages/UserCreationWithPersona.spec.ts › Create user with persona and verify on profile (shard 1, 1 retry)
  • Features/BulkEditEntity.spec.ts › Glossary (shard 2, 1 retry)
  • Features/BulkImport.spec.ts › Keyboard Delete selection (shard 2, 1 retry)
  • Features/ChangeSummaryBadge.spec.ts › AI badge should NOT appear for manually-edited descriptions (shard 2, 1 retry)
  • Features/LandingPageWidgets/FollowingWidget.spec.ts › Check followed entity present in following widget (shard 3, 1 retry)
  • Features/Permissions/GlossaryPermissions.spec.ts › Team-based permissions work correctly (shard 3, 1 retry)
  • Flow/AddRoleAndAssignToUser.spec.ts › Verify assigned role to new user (shard 3, 1 retry)
  • Flow/ExploreDiscovery.spec.ts › Should display deleted assets when showDeleted is checked and deleted is not present in queryFilter (shard 3, 1 retry)
  • Pages/Customproperties-part2.spec.ts › entityReferenceList shows item count, scrollable list, no expand toggle (shard 4, 1 retry)
  • Pages/DescriptionVisibility.spec.ts › Customized Table detail page Description widget shows long description (shard 4, 1 retry)
  • Pages/Domains.spec.ts › Rename domain with subdomains attached verifies subdomain accessibility (shard 4, 1 retry)
  • Pages/Domains.spec.ts › Rename domain with assets (tables, topics, dashboards) preserves associations (shard 4, 1 retry)
  • Pages/Domains.spec.ts › Subdomain rename does not affect parent domain and updates nested children (shard 4, 1 retry)
  • Pages/DomainUIInteractions.spec.ts › Add expert to domain via UI (shard 4, 1 retry)
  • Pages/ExploreTree.spec.ts › Verify Database and Database Schema available in explore tree (shard 5, 1 retry)
  • Features/AutoPilot.spec.ts › Create Service and check the AutoPilot status (shard 6, 1 retry)
  • Features/AutoPilot.spec.ts › Create Service and check the AutoPilot status (shard 6, 2 retries)
  • Pages/Lineage/LineageFilters.spec.ts › Verify lineage schema filter selection (shard 6, 1 retry)
  • Pages/Lineage/LineageRightPanel.spec.ts › Verify custom properties tab IS visible for supported type: searchIndex (shard 6, 1 retry)
  • Pages/ProfilerConfigurationPage.spec.ts › Non admin user (shard 6, 1 retry)
  • Pages/Teams.spec.ts › Add New Team in BusinessUnit Team (shard 6, 1 retry)
  • Pages/Users.spec.ts › Permissions for table details page for Data Consumer (shard 6, 1 retry)
  • Pages/Users.spec.ts › Check permissions for Data Steward (shard 6, 1 retry)
  • VersionPages/EntityVersionPages.spec.ts › Directory (shard 6, 1 retry)

📦 Download artifacts

How to debug locally
# Download playwright-test-results-<shard> artifact and unzip
npx playwright show-trace path/to/trace.zip    # view trace

Copilot AI review requested due to automatic review settings April 12, 2026 04:37
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Comment on lines +233 to +240
process.join()
with _scan_lock:
    global _current_scan  # pylint: disable=global-statement
    if _current_scan is process:
        _current_scan = None
        if _rescan_requested:
            logger.info("Running queued rescan after previous scan finished")
            _start_scan()
Copilot AI commented Apr 12, 2026

_reap_scan() runs in the daemon reaper thread and, when _rescan_requested is set, it calls _start_scan() which creates/starts a new multiprocessing.Process. On Linux the default multiprocessing start method is fork, and forking from a non-main thread can deadlock or leave the child in an inconsistent state (because only the calling thread is replicated while locks from other threads remain held). This makes the queued-rescan path potentially unsafe in the Airflow webserver/Gunicorn environment.

Consider restructuring so process creation only happens from the main thread (e.g., have the reaper only join() + clear _current_scan, and trigger the follow-up scan via a main-thread code path), or switch ScanDagsTask creation to a thread-safe start method/context (e.g., spawn) specifically for the follow-up scan.
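
Copilot's second option could look roughly like this. A hedged sketch only: _run_scan and start_follow_up_scan are placeholder names invented here, not the PR's API:

```python
import multiprocessing as mp

def _run_scan():
    # placeholder for the real scan work (Airflow scheduler imports etc.)
    pass

def start_follow_up_scan():
    # 'spawn' starts the child from a fresh interpreter, so nothing is
    # forked out of this (possibly multi-threaded) worker process and no
    # half-held locks are inherited by the child.
    ctx = mp.get_context("spawn")
    process = ctx.Process(target=_run_scan)
    process.start()
    return process
```

Trade-off: spawn requires the target to be importable at module level and is slower than fork, since the child re-imports the module; for an occasional follow-up scan that cost is likely acceptable.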

Copilot flagged that _reap_scan() calling _start_scan() forks a new
multiprocessing.Process from a non-main thread.  On Linux with the
default 'fork' start-method, this can deadlock because only the
calling thread is replicated while locks held by other threads remain
permanently locked in the child.

Fix: the reaper thread now only join()s the process and clears module
state.  The _rescan_requested machinery is removed — newly deployed
DAGs are discovered by the next deploy-triggered scan or by Airflow's
periodic scheduler.
gitar-bot bot commented Apr 12, 2026

Code Review ✅ Approved 6 resolved / 6 findings

Fixes memory leak in scan_dags_job_background by adding singleton guard, addressing six concurrency and process management issues including daemon thread crashes, silent scan skips, and stale reaper threads. No remaining issues found.

✅ 6 resolved
Bug: daemon=True will crash ScanDagsTask when it spawns child processes

📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:236 📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:165-176 📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:207-209
Python's multiprocessing forbids daemon processes from creating child processes — doing so raises AssertionError: daemonic processes are not allowed to have children.

ScanDagsTask.run() delegates to DagFileProcessorManager (Airflow 3.0+), SchedulerJobRunner (Airflow 2.6+), or SchedulerJob (older), all of which internally fork child processes to parse DAG files. Setting daemon=True on the ScanDagsTask process will cause it to crash immediately when it tries to spawn those children.

The singleton guard and join() logic are correct and sufficient to prevent zombies. The daemon=True flag should be removed.
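
The restriction is easy to reproduce in isolation; this minimal sketch (unrelated to the PR's code, using the fork start method assumed to be available on Linux) shows a daemonic process failing as soon as it tries to start its own child:

```python
import multiprocessing as mp

ctx = mp.get_context("fork")  # fork assumed (Linux default); with spawn this
                              # file would also need to be importable

def _noop():
    pass

def start_grandchild():
    child = ctx.Process(target=_noop)
    child.start()   # raises AssertionError inside a daemonic parent:
                    # "daemonic processes are not allowed to have children"
    child.join()

parent = ctx.Process(target=start_grandchild, daemon=True)
parent.start()
parent.join()
# the uncaught AssertionError makes the daemonic parent exit nonzero
assert parent.exitcode != 0
```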

Edge Case: Silently skipping scan may lose deploy-triggered DAG refreshes

📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:229-232
When a scan is already in progress and a new deploy triggers scan_dags_job_background(), the call is silently skipped (line 231-232). If the running scan started before the new DAG file was written to disk, the new DAG won't be picked up until the next manual deploy or scheduled scan.

Consider queuing a single follow-up scan (e.g., a boolean _rescan_requested flag checked after join) so that at most one additional scan runs after the current one completes. This ensures newly deployed DAGs are always picked up without spawning unbounded processes.

Bug: _rescan_requested flag is set but never read to trigger a rescan

📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:214 📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:229-231 📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:238 📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:244
The _rescan_requested flag is set to True when a scan is already in progress (line 238) and cleared when a new scan starts (line 244), but nothing ever reads the flag to actually trigger a follow-up scan. The docstring (lines 229-231) promises that "the request is deferred ... so newly deployed DAGs are picked up once the current scan finishes," but this never happens — the flag is write-only.

This means the original "silently skipping" problem from the previous review is not actually fixed; it's just given a flag name. A deploy that arrives while a scan is running will still be lost.

To actually honour the flag, after the current scan finishes (detected at lines 236-242), check _rescan_requested and start a new scan if it's True. For example:

with _scan_lock:
    if _current_scan is not None:
        if _current_scan.is_alive():
            _rescan_requested = True
            logger.info("DAG scan already in progress, queued rescan")
            return
        _current_scan.join(timeout=5)
        _current_scan = None
        if not _rescan_requested:
            return  # no rescan needed, original caller already ran

    _rescan_requested = False
    process = ScanDagsTask()
    process.start()
    _current_scan = process

Alternatively, the docstring should be updated to stop claiming deferred rescans occur.

Bug: Deploy after finished scan silently skips DAG scanning

📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:236-244 📄 openmetadata-airflow-apis/tests/unit/test_scan_dags_singleton.py:73-87
When scan_dags_job_background() is called for a new deploy and finds a previously finished (not alive) scan process with _rescan_requested == False, it joins the old process but then returns at line 244 without starting a new scan for the current deploy.

Concrete scenario:

  1. Deploy A → starts scan process, scan completes.
  2. Deploy B → calls scan_dags_job_background(). _current_scan is not None (stale finished process), is_alive() returns False. Code joins it, sets _current_scan = None, checks _rescan_requested which is False → returns without scanning.
  3. Deploy B's DAG is never picked up by the scheduler until some future deploy happens to trigger a scan.

Since scan_dags_job_background is only called on deploy (no periodic trigger), this means any deploy that arrives after the previous scan has finished will be silently dropped. This is the common case — most deploys don't overlap with a running scan.

The if not _rescan_requested: return guard at lines 243-244 conflates two situations: (a) a stale process being cleaned up with no new work needed (not a real scenario — the function is only called when there IS work to do), and (b) a new deploy request that should always trigger a scan.

The test test_no_new_scan_when_finished_without_rescan_flag asserts this broken behavior as correct.

Edge Case: Deferred rescan has no automatic trigger mechanism

📄 openmetadata-airflow-apis/openmetadata_managed_apis/api/utils.py:237-240
When a deploy arrives while a scan is running, _rescan_requested is set to True and the function returns. However, there is no callback, timer, or polling mechanism to trigger the deferred rescan once the current scan finishes. The rescan only happens if another deploy request arrives later and calls scan_dags_job_background() again.

Scenario:

  1. Deploy A → starts scan.
  2. Deploy B (while A's scan is alive) → sets _rescan_requested = True, returns.
  3. A's scan finishes. No further deploys arrive.
  4. Deploy B's DAG is never scanned.

This is a real concern because the whole point of the _rescan_requested flag is to handle rapid successive deploys, which by definition may not have a third deploy to trigger the queued rescan.

Consider adding a lightweight polling thread or a completion callback that checks _rescan_requested after the process finishes and re-invokes the scan if needed.

...and 1 more resolved from earlier reviews



Labels

safe to test Add this label to run secure Github workflows on PRs


Development

Successfully merging this pull request may close these issues.

Memory leaks for openmetadata-dependencies-web

3 participants