Testcontainers version
4.12.0
Using the latest Testcontainers version?
Yes
Host OS
Linux
Host arch
x86
.NET version
10.0.104
Docker version
Client: Podman Engine
Version: 5.8.2
API Version: 5.8.2
Go Version: go1.25.9 X:nodwarf5
Git Commit: 5b263b5f5b48004a87caac44e67349a8266d9ef4
Built: Tue Apr 14 01:00:00 2026
Build Origin: Fedora Project
OS/Arch: linux/amd64
Docker info
host:
arch: amd64
buildahVersion: 1.43.1
cgroupControllers:
- cpuset
- cpu
- io
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.2.1-2.fc43.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.2.1, commit: '
cpuUtilization:
idlePercent: 94.37
systemPercent: 1.6
userPercent: 4.03
cpus: 16
databaseBackend: sqlite
distribution:
distribution: fedora
variant: kde
version: "43"
eventLogger: journald
freeLocks: 2016
hostname: FlexLabs-Rog
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 524288
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 524288
size: 65536
kernel: 6.19.12-200.fc43.x86_64
linkmode: dynamic
logDriver: journald
memFree: 1477091328
memTotal: 50406883328
networkBackend: netavark
networkBackendInfo:
backend: netavark
defaultNetwork: podman
dns:
package: aardvark-dns-1.17.1-1.fc43.x86_64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.17.1
package: netavark-1.17.2-1.fc43.x86_64
path: /usr/libexec/podman/netavark
version: netavark 1.17.2
ociRuntime:
name: crun
package: crun-1.27-1.fc43.x86_64
path: /usr/bin/crun
version: |-
crun version 1.27
commit: a718a92cc9a94955a5a550b6fdec1378c247ec50
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20260120.g386b5f5-1.fc43.x86_64
version: |
pasta 0^20260120.g386b5f5-1.fc43.x86_64
Copyright Red Hat
GNU General Public License, version 2 or later
<https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: ""
package: ""
version: ""
swapFree: 8589889536
swapTotal: 8589930496
uptime: 84h 51m 49.00s (Approximately 3.50 days)
variant: ""
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
store:
configFile: /home/artiom/.config/containers/storage.conf
containerStore:
number: 7
paused: 0
running: 1
stopped: 6
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/artiom/.local/share/containers/storage
graphRootAllocated: 1997604388864
graphRootUsed: 607421964288
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 95
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/artiom/.local/share/containers/storage/volumes
version:
APIVersion: 5.8.2
BuildOrigin: Fedora Project
Built: 1776124800
BuiltTime: Tue Apr 14 01:00:00 2026
GitCommit: 5b263b5f5b48004a87caac44e67349a8266d9ef4
GoVersion: go1.25.9 X:nodwarf5
Os: linux
OsArch: linux/amd64
Version: 5.8.2
What happened?
Containers that use WithResourceMapping fail to start intermittently when the Docker endpoint is Podman. The failure manifests as an HTTP 500 from the PUT /containers/{id}/archive endpoint:
Docker.DotNet.DockerApiException : Docker API responded with status code='InternalServerError',
response='{"cause":"broken pipe","message":"passing bulk input to subprocess: write |1: broken pipe","response":500}'
The failure is timing-dependent: it is rare on an idle host, but becomes near-certain when other containers using the same image are already running. Deleting the competing containers sometimes causes the next run to succeed.
Steps to reproduce
- Configure a Podman endpoint (rootless Podman is the typical scenario on Linux).
- Pull the mongo image so it is cached locally.
- Run the MongoDb replica-set tests while at least one other mongo container exists on the host:
  dotnet test tests/Testcontainers.MongoDb.Tests/Testcontainers.MongoDb.Tests.csproj
- Observe the four replica-set test classes fail with the broken-pipe 500 above; the twelve non-replica-set tests pass.
The replica-set tests are the only ones that call WithResourceMapping (via MongoDbBuilder.WithReplicaSet()), which is why they are the only failures.
Expected behaviour
Container starts successfully and the replica set is initialised, identical to behaviour on Docker.
Actual behaviour
StartAsync() throws DockerApiException with HTTP 500 and "passing bulk input to subprocess: write |1: broken pipe".
Root cause
This is a confirmed race condition in Podman's containers/buildah copier package — tracked upstream as containers/buildah#6573.
Podman's PUT /containers/{id}/archive handler pipes the HTTP request body into a tar subprocess via io.Copy. The subprocess wraps the pipe with tar.Reader and calls Next() until io.EOF. tar.Reader returns io.EOF as soon as it has consumed the two 512-byte end-of-archive zero blocks — but the HTTP body typically contains additional trailing zero-padding that the producer appended to round up to a record boundary. The subprocess exits (and closes its read end of the pipe) while the sender is still writing that padding: EPIPE → HTTP 500.
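The reader-side behaviour is reproducible with Go's own archive/tar, the same package the Buildah copier builds on. The sketch below is illustrative rather than Podman's actual code (the file name and contents are made up): it builds a small archive, pads it to a 10240-byte record boundary the way a blockFactor-20 producer would, and shows that tar.Reader stops at the two zero blocks, leaving the padding unconsumed. A real pipe writer still blocked on those bytes is the one that gets EPIPE.

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// unreadPadding builds a record-padded archive and returns how many bytes
// remain unconsumed after tar.Reader reports end-of-archive.
func unreadPadding() int {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	data := []byte("keyfile-init") // stand-in for the keyfile init script
	// Error handling elided for brevity; this is a sketch.
	tw.WriteHeader(&tar.Header{Name: "mongodb.keyfile", Mode: 0o600, Size: int64(len(data))})
	tw.Write(data)
	tw.Close() // header (512) + data (512) + two EOF zero blocks (1024) = 2048 bytes

	// Pad to the next 10240-byte record boundary, as a producer using
	// blockFactor 20 (SharpZipLib's default) does.
	buf.Write(make([]byte, 10240-buf.Len()))

	// Consume the archive the way the tar subprocess does: Next() until EOF.
	tr := tar.NewReader(&buf)
	for {
		if _, err := tr.Next(); err != nil {
			break // io.EOF as soon as both zero blocks are consumed
		}
	}
	return buf.Len() // record padding left unread in the buffer
}

func main() {
	fmt.Println("unread padding:", unreadPadding())
}
```

Running this prints 8192 unread bytes: everything past the two zero blocks is invisible to the reader.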
The race is exacerbated under storage pressure. When other containers share the same image layers, Podman's overlay storage is busier; the write path slows relative to the read path, widening the window in which the subprocess can exit before the sender finishes. This is consistent with the observed flakiness pattern.
The upstream Buildah fix (containers/buildah#6678, merged 2026-02-11) drains the full pipe before the subprocess exits, but as of this writing it is not included in any tagged Buildah or Podman release. The same issue has been observed in the testcontainers-java ecosystem (testcontainers/testcontainers-java#6640).
On our side, TarOutputMemoryStream uses SharpZipLib's TarOutputStream with its default blockFactor of 20, which produces a record size of 10 240 bytes. After the two EOF zero blocks, SharpZipLib pads the archive to the next record boundary — roughly 8 KB of trailing zeros for a small file like the keyfile init script. Those trailing zeros are the bytes that trigger the race.
Proposed fix
Set blockFactor: 1 in TarOutputMemoryStream's base constructor call. This makes the record size equal to the block size (512 bytes), so no padding is written beyond the two EOF blocks. The race window closes because the sender has nothing left to write once the subprocess has consumed the archive. The change is a single constructor argument and is safe on Docker; all conformant tar readers accept an archive written with a block factor of 1.
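The effect of the change is plain arithmetic. trailingPadding below is a hypothetical helper written for this issue, not code from either codebase; it computes how many zero bytes a writer appends after the end-of-archive blocks to reach the next record boundary (recordSize = blockFactor × 512).

```go
package main

import "fmt"

// trailingPadding returns the number of zero bytes appended after the
// end-of-archive blocks to round archiveBytes up to a record boundary.
func trailingPadding(archiveBytes, blockFactor int) int {
	record := blockFactor * 512
	if rem := archiveBytes % record; rem != 0 {
		return record - rem
	}
	return 0
}

func main() {
	// A small file archives to: 512 header + 512 data + 1024 EOF blocks.
	archive := 2048
	fmt.Println("blockFactor 20:", trailingPadding(archive, 20)) // 8192 bytes of padding
	fmt.Println("blockFactor 1: ", trailingPadding(archive, 1))  // 0 bytes: nothing left to race on
}
```

With blockFactor 20 the 2048-byte archive is rounded up to 10240 bytes (8192 bytes of padding); with blockFactor 1 every archive already ends on a record boundary, so the padding term is always zero.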
Relevant log output
Docker.DotNet.DockerApiException : Docker API responded with status code='InternalServerError', response='{"cause":"broken pipe","message":"passing bulk input to subprocess: write |1: broken pipe","response":500}
'.
at Docker.DotNet.DockerClient.HandleIfErrorResponseAsync(HttpStatusCode statusCode, HttpResponseMessage response, IEnumerable`1 handlers)
at Docker.DotNet.DockerClient.MakeRequestAsync[T](IEnumerable`1 errorHandlers, HttpMethod method, String path, IQueryString queryString, IRequestContent body, IDictionary`2 headers, TimeSpan timeout, CancellationToken cancellationToken)
at Docker.DotNet.ContainerOperations.ExtractArchiveToContainerAsync(String id, CopyToContainerParameters parameters, Stream stream, CancellationToken cancellationToken)
at DotNet.Testcontainers.Clients.TestcontainersClient.CopyAsync(String id, IResourceMapping resourceMapping, CancellationToken ct) (file:///home/artiom/Dev/testcontainers-dotnet/src/Testcontainers/Clients/TestcontainersClient.cs#L229,0)
at DotNet.Testcontainers.Clients.TestcontainersClient.RunAsync(IContainerConfiguration configuration, CancellationToken ct) (file:///home/artiom/Dev/testcontainers-dotnet/src/Testcontainers/Clients/TestcontainersClient.cs#L352,0)
at DotNet.Testcontainers.Containers.DockerContainer.UnsafeCreateAsync(CancellationToken ct) (file:///home/artiom/Dev/testcontainers-dotnet/src/Testcontainers/Containers/DockerContainer.cs#L538,0)
at DotNet.Testcontainers.Containers.DockerContainer.StartAsync(CancellationToken ct) (file:///home/artiom/Dev/testcontainers-dotnet/src/Testcontainers/Containers/DockerContainer.cs#L340,0)
at Testcontainers.MongoDb.MongoDbContainerTest.InitializeAsync() (file:///home/artiom/Dev/testcontainers-dotnet/tests/Testcontainers.MongoDb.Tests/MongoDbContainerTest.cs#L18,0)
Additional information
No response