
Introduce Kernel#Barrier as a top level scheduling operation.#426

Merged
ioquatix merged 6 commits into main from kernel-barrier
Oct 8, 2025

Conversation


@ioquatix ioquatix commented Oct 8, 2025

Starting multiple concurrent tasks and waiting for them to finish is a common pattern. This change introduces a small ergonomic helper, Barrier, defined in Kernel, that encapsulates this behavior: it creates an Async::Barrier, yields it to a block, waits for completion (using Sync to run a reactor if needed), and ensures remaining tasks are stopped on exit.
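Based on that description, the helper plausibly reduces to something like the following sketch (the shipped implementation may differ in detail; `Async::Barrier` and `Sync` are the existing async primitives it composes):

```ruby
# Sketch of the Kernel#Barrier helper described in this PR.
module Kernel
  def Barrier(&block)
    barrier = Async::Barrier.new

    Sync do
      block.call(barrier)

      # Wait for every task spawned through the barrier:
      barrier.wait
    ensure
      # Stop any tasks still running when the block exits:
      barrier.stop
    end
  end
end
```

Because `Sync` runs the block in the current reactor if one exists and starts one otherwise, the helper works both inside and outside an existing `Async` block.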

Types of Changes

  • New feature.

Contribution

@samuel-williams-shopify samuel-williams-shopify enabled auto-merge (squash) October 8, 2025 10:43
@ioquatix ioquatix merged commit ec27366 into main Oct 8, 2025
73 of 79 checks passed
@ioquatix ioquatix deleted the kernel-barrier branch October 8, 2025 10:44

ioquatix commented Oct 8, 2025

Async::Idler - Load-Based Admission Control

Overview

Async::Idler implements a simple, effective load-based admission control mechanism that schedules work when the system is idle. It prevents system overload by rate-limiting task creation based on the scheduler's current load.

Algorithm

The idler uses a proportional controller with memory to manage task admission:

Key Components

  1. @current: Adaptive backoff time that grows/shrinks based on system load
  2. @maximum_load: Target load threshold (default: 0.8 = 80%)
  3. @backoff: Minimum backoff time (default: 0.001s = 1ms)

Control Logic

if load <= @maximum_load
  # Underload: Allow task but prevent bursts
  if @current > @backoff
    # Preventive sleep using accumulated backoff
    sleep(@current - @backoff)
    
    # Gentle decay toward baseline
    alpha = 0.99
    @current *= alpha + (1.0 - alpha) * (load / @maximum_load)
  end
  # Proceed with task creation
else
  # Overload: Increase backoff and sleep
  @current *= (load / @maximum_load)
  sleep(@current)
end
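The branch above can be exercised outside a real scheduler with a small stand-alone model (`IdlerModel` and the synthetic load values below are illustrative, not part of the async gem):

```ruby
# Stand-alone model of the control logic above; sleeps are recorded
# rather than performed so the behavior is easy to inspect.
class IdlerModel
  attr_reader :current, :sleeps

  def initialize(maximum_load = 0.8, backoff = 0.001)
    @maximum_load = maximum_load
    @backoff = backoff
    @current = backoff
    @sleeps = []
  end

  # Returns true when a task may be admitted under the given load.
  def admit(load)
    if load <= @maximum_load
      if @current > @backoff
        @sleeps << (@current - @backoff) # preventive sleep
        alpha = 0.99
        @current *= alpha + (1.0 - alpha) * (load / @maximum_load)
      end
      true
    else
      @current *= (load / @maximum_load) # proportional growth
      @sleeps << @current
      false
    end
  end
end

model = IdlerModel.new
5.times { model.admit(1.0) } # sustained overload grows the backoff
grown = model.current
5.times { model.admit(0.1) } # light load decays it only gently
```

Five iterations at full load multiply `@current` by `(1.0 / 0.8)` each time, while five iterations at 10% load shrink it by only about 1% per step, which is the "memory" the design rationale below relies on.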

Design Rationale

Problem: Burst After Load Drop

When load is high, @current grows. When tasks complete, load drops suddenly. Without preventive measures, the system creates a massive burst of new tasks before detecting the load increase.

Solution: Memory-Based Rate Limiting

  1. Preventive Sleep: Even when load is acceptable, sleep proportionally to @current (which remembers recent overload)
  2. Gentle Decay: Use alpha = 0.99 to decay @current slowly (1% per iteration) rather than aggressively following load
  3. Proportional Growth: When overloaded, grow @current proportionally to load / @maximum_load

This creates a soft ceiling that prevents bursts while maintaining good load tracking.
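As a quick sanity check on the decay rate (illustrative arithmetic, not library code): at near-zero load the per-iteration multiplier approaches alpha = 0.99, so an elevated backoff halves in about ln(2)/ln(1/0.99) ≈ 69 iterations:

```ruby
# Count iterations for the backoff to halve under the ~0.99 decay
# multiplier that applies when the system is nearly idle.
current = 1.0
iterations = 0
while current > 0.5
  current *= 0.99
  iterations += 1
end
# iterations is now 69
```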

Performance Characteristics

Benchmarked with a realistic workload (1,000 tasks, each performing 10,000 microsleeps):

Without Preventive Sleep

  • 4 bursts, max 566 tasks per burst
  • Load error: 5.2%
  • Avg 228 tasks/burst

With Preventive Sleep (Current Implementation)

  • 572 bursts, max 3 tasks per burst
  • Load error: 1.4%
  • Avg 1.0 tasks/burst
  • 99.5% reduction in max burst size

Key Properties

Simplicity

  • Single adaptive parameter (@current)
  • No complex state tracking
  • Predictable behavior

Effectiveness

  • Near-perfect load tracking (1.4% error)
  • Excellent burst control (max 3 tasks)
  • Smooth task admission (avg 1 task/burst)

Predictability

  • Gentle decay prevents oscillation
  • Memory of recent load prevents bursts
  • Proportional response to overload

Usage

idler = Async::Idler.new(0.8)  # 80% load threshold

# Schedule work when idle
idler.async do
  # Your work here
end

The idler automatically:

  1. Checks scheduler load
  2. Sleeps if load is high or was recently high
  3. Creates task when safe
  4. Adjusts backoff based on system behavior

Thread Safety

The idler uses a Mutex to ensure thread-safe access to shared state (@current), allowing multiple threads to safely share a single idler instance.
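Illustratively, that pattern is plain `Mutex#synchronize` around every read-modify-write of the shared backoff; here is a minimal stand-in (not the gem's code) with a bare float in place of `@current`:

```ruby
mutex = Mutex.new
current = 0.001

# Four threads each apply 250 proportional adjustments; the mutex
# makes each read-modify-write of the shared value atomic, so the
# result is the same as 1000 sequential multiplications.
threads = 4.times.map do
  Thread.new do
    250.times do
      mutex.synchronize { current *= 1.01 }
    end
  end
end
threads.each(&:join)
```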

Related

  • Async::Scheduler#load - Provides the load metric (busy_time / total_time)
  • Async::Barrier - Can use idler for safe, load-aware task creation

Benchmark code:

require_relative "lib/async"

Barrier do |barrier|
	1000.times.map do |i|
		barrier.async do
			puts "Starting #{i}: Load=#{Fiber.scheduler.load.round(3)}"
			10000.times{sleep(0.000001)}
			puts "Finished #{i}"
		end
	end
end

@travisbell

Barrier is awesome! So nice to have this as a cleaner interface, yay! 🥳
