There are cases where no exposed or mapped ports are required, a common one being the network mode "host". For me this happens when I run rootless Podman in a sidecar container using the GitLab CI Kubernetes executor, but there appears to be a ticket where Jenkins and Docker are used: #7903
In those cases, starting a container with exposed ports fails: the container runtime does not actually expose or map any ports, yet testcontainers-java requires the ports to be mapped. This happens without specifying a network mode, so it is not related to this issue: #5151
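To illustrate the mismatch, here is a minimal sketch in plain Java (the class and method names are illustrative assumptions, not Testcontainers internals): the startup wait effectively requires every exposed port to appear in the runtime's port-binding map, but under host networking that map stays empty (see `ports={}` and `portBindings={}` in the inspect output in the log below), so the check can never pass and the retry loop times out.

```java
import java.util.Map;
import java.util.Set;

// Sketch of the condition that times out (hypothetical helper, not the
// actual Testcontainers code): every exposed port must show up in the
// runtime's port-binding map. With networkMode=host the runtime maps
// nothing, so the predicate stays false until the timeout is hit.
public class PortMappingCheck {
    static boolean allPortsMapped(Set<Integer> exposedPorts,
                                  Map<Integer, Integer> portBindings) {
        return portBindings.keySet().containsAll(exposedPorts);
    }

    public static void main(String[] args) {
        // Host networking: the image exposes 4222, but nothing is mapped.
        boolean ok = allPortsMapped(Set.of(4222), Map.of());
        System.out.println(ok); // false -> retried until the wait times out
    }
}
```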
A workaround for this problem is to remove any exposed ports. However, this workaround has major disadvantages:
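For reference, the workaround looks roughly like the following sketch. It assumes the public `GenericContainer` API and the `nats:latest` image from the log below; the exact log-message pattern is an assumption about the NATS startup output, not something taken from this report.

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

// Workaround sketch: under host networking, drop withExposedPorts()
// entirely and wait on a log line instead of a mapped port.
public class NatsHostNetworkExample {
    public static void main(String[] args) {
        try (GenericContainer<?> nats =
                 new GenericContainer<>("nats:latest")
                     .withNetworkMode("host")
                     // no .withExposedPorts(4222) -- it would never be mapped
                     .waitingFor(Wait.forLogMessage(".*Server is ready.*", 1))) {
            nats.start();
            // connect to localhost:4222 directly
        }
    }
}
```

Note the cost: without exposed ports, `getMappedPort()` cannot be used, so tests have to hard-code the host port the service listens on.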
org.testcontainers.containers.ContainerLaunchException: Container startup failed for image nats:latest
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:361)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:334)
at de.guidecom.testcontainers.nats.NatsTestcontainer.start(NatsTestcontainer.java:87)
at de.guidecom.cloud.nats.MainTest.doesStuff(MainTest.kt:18)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:346)
... 6 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:565)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:356)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 7 more
Caused by: org.testcontainers.shaded.org.awaitility.core.ConditionTimeoutException: org.testcontainers.containers.GenericContainer expected the predicate to return <true> but it returned <false> for input of <InspectContainerResponse(args=[-DV, -c, /nats.conf], config=ContainerConfig(attachStderr=false, attachStdin=false, attachStdout=false, cmd=[-DV, -c, /nats.conf], domainName=, entrypoint=[/nats-server], env=[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/, container=podman, HOME=, HOSTNAME=runner-ub6jpcss-project-7826-concurrent-0qqpgk], exposedPorts=null, hostName=runner-ub6jpcss-project-7826-concurrent-0qqpgk, image=docker.io/library/nats:latest, labels={org.testcontainers=true, org.testcontainers.lang=java, org.testcontainers.sessionId=5a628e82-b8af-44ed-a414-06d3a2526a54, org.testcontainers.version=1.19.0}, macAddress=null, networkDisabled=null, onBuild=null, stdinOpen=false, portSpecs=null, stdInOnce=false, tty=false, user=, volumes=null, workingDir=/, healthCheck=null), created=2023-12-13T09:29:32.231638461Z, driver=overlay, execDriver=null, hostConfig=HostConfig(binds=[], blkioWeight=0, blkioWeightDevice=null, blkioDeviceReadBps=null, blkioDeviceWriteBps=null, blkioDeviceReadIOps=null, blkioDeviceWriteIOps=null, memorySwappiness=0, nanoCPUs=0, capAdd=[], capDrop=[], containerIDFile=, cpuPeriod=0, cpuRealtimePeriod=0, cpuRealtimeRuntime=0, cpuShares=0, cpuQuota=0, cpusetCpus=, cpusetMems=, devices=[], deviceCgroupRules=null, deviceRequests=null, diskQuota=null, dns=[], dnsOptions=[], dnsSearch=[], extraHosts=[], groupAdd=[], ipcMode=host, cgroup=, links=[], logConfig=LogConfig(type=json-file, config=null), lxcConf=null, memory=0, memorySwap=0, memoryReservation=0, kernelMemory=null, networkMode=host, oomKillDisable=false, init=null, autoRemove=false, oomScoreAdj=1000, portBindings={}, privileged=false, publishAllPorts=false, readonlyRootfs=false, restartPolicy=no, ulimits=[Ulimit(name=RLIMIT_NPROC, soft=4194304, hard=4194304), 
Ulimit(name=RLIMIT_NOFILE, soft=1048576, hard=1048576)], cpuCount=0, cpuPercent=0, ioMaximumIOps=0, ioMaximumBandwidth=0, volumesFrom=null, mounts=null, pidMode=private, isolation=null, securityOpts=[], storageOpt=null, cgroupParent=, volumeDriver=, shmSize=65536000, pidsLimit=0, runtime=oci, tmpFs=null, utSMode=host, usernsMode=, sysctls=null, consoleSize=[0, 0], cgroupnsMode=), hostnamePath=/tmp/containers-user-1000/containers/overlay-containers/ff41f49d1d13de2f81af4bf2b45374f53bcf6d8dceea8e1b066318a99364a5a0/userdata/hostname, hostsPath=/tmp/containers-user-1000/containers/overlay-containers/ff41f49d1d13de2f81af4bf2b45374f53bcf6d8dceea8e1b066318a99364a5a0/userdata/hosts, logPath=/home/podman/.local/share/containers/storage/overlay-containers/ff41f49d1d13de2f81af4bf2b45374f53bcf6d8dceea8e1b066318a99364a5a0/userdata/ctr.log, id=ff41f49d1d13de2f81af4bf2b45374f53bcf6d8dceea8e1b066318a99364a5a0, sizeRootFs=0, imageId=sha256:c26f92862c0582fc993931f218754dad07ee673b6546207c14c567d225b6e63a, mountLabel=, name=/sharp_bell, restartCount=0, networkSettings=NetworkSettings(bridge=, sandboxId=, hairpinMode=false, linkLocalIPv6Address=, linkLocalIPv6PrefixLen=0, ports={}, sandboxKey=, secondaryIPAddresses=null, secondaryIPv6Addresses=null, endpointID=, gateway=, portMapping=null, globalIPv6Address=, globalIPv6PrefixLen=0, ipAddress=, ipPrefixLen=0, ipV6Gateway=, macAddress=, networks={host=ContainerNetwork(ipamConfig=null, links=[], aliases=null, networkID=host, endpointId=, gateway=, ipAddress=, ipPrefixLen=0, ipV6Gateway=, globalIPv6Address=, globalIPv6PrefixLen=0, macAddress=)}), path=/nats-server, processLabel=, resolvConfPath=/tmp/containers-user-1000/containers/overlay-containers/ff41f49d1d13de2f81af4bf2b45374f53bcf6d8dceea8e1b066318a99364a5a0/userdata/resolv.conf, execIds=[], state=InspectContainerResponse.ContainerState(status=running, running=true, paused=false, restarting=false, oomKilled=false, dead=false, pid=103, exitCode=0, error=, 
startedAt=2023-12-13T09:29:32.675435132Z, finishedAt=0001-01-01T00:00:00Z, health=HealthState(status=, failingStreak=0, log=null)), volumes=null, volumesRW=null, node=null, mounts=[], graphDriver=GraphDriver(name=overlay, data=GraphData(rootDir=null, deviceId=null, deviceName=null, deviceSize=null, dir=null)), platform=linux)> within 5 seconds.
at org.testcontainers.shaded.org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
at org.testcontainers.shaded.org.awaitility.core.AbstractHamcrestCondition.await(AbstractHamcrestCondition.java:86)
at org.testcontainers.shaded.org.awaitility.core.ConditionFactory.until(ConditionFactory.java:985)
at org.testcontainers.shaded.org.awaitility.core.ConditionFactory.until(ConditionFactory.java:691)
at org.testcontainers.shaded.org.awaitility.core.ConditionFactory.until(ConditionFactory.java:708)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:467)
... 9 more
Module: Core
Testcontainers version: 1.19.2
Using the latest Testcontainers version? Yes
Host OS: Linux
Host Arch: x86
Docker version:
Additional Information: No response