Commit 8c09412

Paolo Abeni authored and kuba-moo committed
selftests: mptcp: more stable simult_flows tests
By default, the netem qdisc can keep up to 1000 packets in its queue to deal with the configured rate and delay. The simult_flows test-case simulates very low speed links to avoid problems due to slow CPUs, and the TCP stack tends to transmit at a slightly higher rate than the (virtual) link constraints allow.

All the above causes a relatively large number of packets to be enqueued in the netem qdiscs - the longer the transfer, the longer the queue - producing increasingly high TCP RTT samples and, consequently, an increasingly large receive buffer due to DRS.

When the receive buffer becomes considerably larger than needed, the test results can flake, e.g. because minimal inaccuracy in the pacing rate can lead to a single subflow carrying a considerable amount of data towards the end of the connection.

Address the issue by explicitly setting netem limits suitable for the configured link speeds, unflaking all the affected tests.

Fixes: 1a418cb ("mptcp: simult flow self-tests")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-1-4b5462b6f016@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
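As a rough illustration of why a small fixed limit like the one in this patch is enough, a netem queue only needs to hold about one bandwidth-delay product (BDP) worth of packets. The sketch below is a hypothetical helper, not part of the patch; the variable names, the 10mbit/50ms figures, and the full-size-packet assumption are all illustrative:

```shell
#!/bin/sh
# Hypothetical helper: estimate a netem 'limit' from the link's
# bandwidth-delay product, assuming full-size 1500-byte packets.
rate_mbit=10    # link rate, as passed to 'netem rate ${rate}mbit'
delay_ms=50     # link delay, as passed to the netem delay option
mtu=1500        # bytes per packet (assumption: full-size frames)

# BDP in bytes = rate (bytes/s) * delay (s); convert to packets, rounding up.
bdp_pkts=$(( (rate_mbit * 1000000 / 8 * delay_ms / 1000 + mtu - 1) / mtu ))
echo "BDP for ${rate_mbit}mbit/${delay_ms}ms: ${bdp_pkts} packets"
```

For the slow links this test configures, such a BDP is far below netem's default limit of 1000 packets, so the default queue can grow much longer than the link needs and inflate the RTT samples the test observes.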
1 parent f43ed0c commit 8c09412

File tree

1 file changed: 7 additions, 4 deletions


tools/testing/selftests/net/mptcp/simult_flows.sh

Lines changed: 7 additions & 4 deletions
@@ -237,10 +237,13 @@ run_test()
 	for dev in ns2eth1 ns2eth2; do
 		tc -n $ns2 qdisc del dev $dev root >/dev/null 2>&1
 	done
-	tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1
-	tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2
-	tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1
-	tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2
+
+	# keep the queued pkts number low, or the RTT estimator will see
+	# increasing latency over time.
+	tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1 limit 50
+	tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2 limit 50
+	tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1 limit 50
+	tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2 limit 50
 
 	# time is measured in ms, account for transfer size, aggregated link speed
 	# and header overhead (10%)
