Commit 102d665

fix(apf): no longer hold lock around dequeue
Holding the spinlock while calling the dequeue method caused massive regressions in the hotplugging tests. The root cause was the spinlock disabling preemption, which had a knock-on effect when calling the dequeue method.

Signed-off-by: Jack Thomson <jackabt@amazon.com>
1 parent e1dde69 commit 102d665

1 file changed

Lines changed: 62 additions & 0 deletions

@@ -0,0 +1,62 @@

From 75ee570d5d68306eef0fc2e4b2055ca81fc267f3 Mon Sep 17 00:00:00 2001
From: Jack Thomson <jackabt@amazon.com>
Date: Wed, 25 Mar 2026 11:56:37 +0000
Subject: [PATCH] fixup! KVM: async_pf: Convert queued count to atomic and
 tighten locking

Don't hold async_pf.lock over kvm_arch_can_dequeue_async_page_present().

The original commit wrapped the entire while loop with spin_lock, which
causes kvm_arch_can_dequeue_async_page_present() to run with preemption
disabled.

The fix uses the upstream pattern: lockless list_empty_careful() in the
while condition with the lock only held around the actual list mutations.
A double-check under the lock handles the race between the lockless
check and lock acquisition.

Signed-off-by: Jack Thomson <jackabt@amazon.com>
---
 virt/kvm/async_pf.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 2e4f89db4d47..3a2341054781 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -343,14 +343,17 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
 {
 	struct kvm_async_pf *work;
 
-	spin_lock(&vcpu->async_pf.lock);
-	while (!list_empty(&vcpu->async_pf.done) &&
+	while (!list_empty_careful(&vcpu->async_pf.done) &&
 	       kvm_arch_can_dequeue_async_page_present(vcpu)) {
+		spin_lock(&vcpu->async_pf.lock);
+		if (list_empty(&vcpu->async_pf.done)) {
+			spin_unlock(&vcpu->async_pf.lock);
+			break;
+		}
 		work = list_first_entry(&vcpu->async_pf.done, typeof(*work),
 					link);
 		list_del(&work->link);
-		if (!work->wakeup_all)
-			list_del(&work->queue);
+		list_del(&work->queue);
 		atomic_dec(&vcpu->async_pf.queued);
 		spin_unlock(&vcpu->async_pf.lock);
 
@@ -359,10 +362,7 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
 		kvm_arch_async_page_present(vcpu, work);
 
 		kvm_flush_and_free_async_pf_work(work);
-
-		spin_lock(&vcpu->async_pf.lock);
 	}
-	spin_unlock(&vcpu->async_pf.lock);
 }
 
 /*
-- 
2.43.0
