While trying to figure out why my containers weren't getting killed in my test suite, I stumbled across a feature request for killing containers, #716. Looking into the resolution, it seems that after the timeout, badly behaved containers should be killed automatically. I ran some curl commands against the docker socket and confirmed that this is in fact the case:
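The exact commands aren't reproduced here, but they were along these lines (a sketch: the container ID is a placeholder, and the API version matches the daemon logs below):

```sh
# Ask the daemon to stop a container that ignores SIGTERM, with a 5-second
# grace period. The daemon sends SIGTERM, waits t seconds, then sends SIGKILL.
curl --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/v1.45/containers/<container-id>/stop?t=5"
```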
However, I also noticed that when using the timeout argument to the testcontainers stop() call, the call would hang without a kill signal ever being sent. Replicating this directly against the API with curl, I hit the same hang, with curl never returning:
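The invocation was roughly the following (reconstructed; the t=500 matches the daemon log below):

```sh
# This call did not return: the daemon log below shows SIGTERM being sent,
# but no SIGKILL follows for several minutes.
curl --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/v1.45/containers/<container-id>/stop?t=500"
```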
Aug 11 20:37:34 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:37:34.107891186Z" level=debug msg="Calling POST /containers/9821f44d0bd99f6b81994a6e65bb8595bcac5fa8e3a986bef62cd9beffd6f352/stop?t=500" spanID=fc2f641654e7d8b0 traceID=0cdd886dbc5de570b4c482e0223ebfef
Aug 11 20:37:34 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:37:34.107986460Z" level=debug msg="form data: {\"t\":500}" spanID=fc2f641654e7d8b0 traceID=0cdd886dbc5de570b4c482e0223ebfef
Aug 11 20:37:34 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:37:34.108026160Z" level=debug msg="Sending kill signal 15 to container 9821f44d0bd99f6b81994a6e65bb8595bcac5fa8e3a986bef62cd9beffd6f352"
Aug 11 20:38:39 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:39.053118647Z" level=debug msg="Calling HEAD /_ping" spanID=4652424db9f4f634 traceID=6be9e5bca64e1c9bce4fa20e2f43b50b
Aug 11 20:38:39 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:39.056477004Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=4d8d7adde89aa41a traceID=2a8e0b3ce961167b637aa8e36498eb32
Aug 11 20:38:39 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:39.058273867Z" level=debug msg="Calling HEAD /_ping" spanID=153662e033ad9fe2 traceID=fda55fb90d27a7dff822416023363101
Aug 11 20:38:39 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:39.058792928Z" level=debug msg="Calling GET /v1.45/volumes" spanID=6d54861f00800702 traceID=ea850bdcfb0750e9b91554f0b7f608e3
Aug 11 20:38:39 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:39.058838492Z" level=debug msg=VolumeStore.Find ByType=service.CustomFilter ByValue=0x574ac89ee400 spanID=6d54861f00800702 traceID=ea850bdcfb0750e9b91554f0b7f608e3
Aug 11 20:38:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:56.724908041Z" level=debug msg="Calling HEAD /_ping" spanID=393dfaccc5a0a01a traceID=9589b536a62de6d38fde302fe2a40a2d
Aug 11 20:38:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:56.725812586Z" level=debug msg="Calling GET /v1.45/volumes" spanID=2e0ca5d8b3053f02 traceID=d3325ddefa45c7a0ea96f15b4ebe2f52
Aug 11 20:38:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:56.725878792Z" level=debug msg=VolumeStore.Find ByType=service.CustomFilter ByValue=0x574ac89ee400 spanID=2e0ca5d8b3053f02 traceID=d3325ddefa45c7a0ea96f15b4ebe2f52
Aug 11 20:38:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:56.726844146Z" level=debug msg="Calling HEAD /_ping" spanID=f12d1136c39a3a87 traceID=c16fe00616b53c3477e9253788a5f6b4
Aug 11 20:38:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:38:56.727116037Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=445c83bfee6e34f2 traceID=ba8acfbddf3cf75768c9769ae8d8b056
Aug 11 20:39:06 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:06.773389383Z" level=debug msg="Calling HEAD /_ping" spanID=c2669e5bafbe2520 traceID=519d206f7645139f84042fb037c79c02
Aug 11 20:39:06 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:06.773953656Z" level=debug msg="Calling GET /v1.45/containers/9821f44d0bd99f6b81994a6e65bb8595bcac5fa8e3a986bef62cd9beffd6f352/json" spanID=34c30778749f0cc8 traceID=6dd775f1dda1f3da5d76269ac564de01
Aug 11 20:39:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:56.752807773Z" level=debug msg="Calling HEAD /_ping" spanID=d501c1202ea7c909 traceID=3e184804e687a245b1b66f05cb9c9b22
Aug 11 20:39:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:56.755178975Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=ba9728af80043dac traceID=224a6f75b3d813d9ccaed59c3ccb084a
Aug 11 20:39:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:56.757521060Z" level=debug msg="Calling HEAD /_ping" spanID=28d904df4026eecb traceID=f5c4d6eadea10a8d28908dd903d72238
Aug 11 20:39:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:56.758221441Z" level=debug msg="Calling GET /v1.45/volumes" spanID=7181a443e02ca797 traceID=7fa8a93c8a2e7ecf406a1466d02de701
Aug 11 20:39:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:39:56.758277330Z" level=debug msg=VolumeStore.Find ByType=service.CustomFilter ByValue=0x574ac89ee400 spanID=7181a443e02ca797 traceID=7fa8a93c8a2e7ecf406a1466d02de701
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.018381152Z" level=debug msg="Calling HEAD /_ping" spanID=a69c05430602286f traceID=a8776facbd812c05fc11670e3655a7e1
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.019004068Z" level=debug msg="Calling GET /v1.45/events?filters=%7B%22event%22%3A%7B%22create%22%3Atrue%2C%22delete%22%3Atrue%2C%22destroy%22%3Atrue%2C%22die%22%3Atrue%2C%22import%22%3Atrue%2C%22kill%22%3Atrue%2C%22load%22%3Atrue%2C%22pause%22%3Atrue%2C%22prune%22%3Atrue%2C%22pull%22%3Atrue%2C%22remove%22%3Atrue%2C%22rename%22%3Atrue%2C%22restart%22%3Atrue%2C%22save%22%3Atrue%2C%22start%22%3Atrue%2C%22stop%22%3Atrue%2C%22tag%22%3Atrue%2C%22unpause%22%3Atrue%2C%22untag%22%3Atrue%2C%22update%22%3Atrue%7D%2C%22type%22%3A%7B%22container%22%3Atrue%2C%22image%22%3Atrue%2C%22network%22%3Atrue%2C%22volume%22%3Atrue%7D%7D&since=1723408856&until=1723409156" spanID=e79ffa8d1dcb5e28 traceID=205ccc930d06ef008f438669adfb5476
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.771765893Z" level=debug msg="Calling HEAD /_ping" spanID=4f2d075dd4a0400c traceID=21856f2f9b609e0fb3a2a52b1043d29e
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.772422178Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=3a26bfbb7b1fc24f traceID=f3c0b52b8942cd5c4e8bc8351920a7e7
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.788422251Z" level=debug msg="Calling HEAD /_ping" spanID=0f6f9749b1a08ba9 traceID=1ab3cd94adaf21f047c020b1ce7745e3
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.788929997Z" level=debug msg="Calling GET /v1.45/volumes" spanID=c8a495d1fd86f2ea traceID=b5dca49bbbf96d0b0890a48e487ebfd3
Aug 11 20:40:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:40:56.788994350Z" level=debug msg=VolumeStore.Find ByType=service.CustomFilter ByValue=0x574ac89ee400 spanID=c8a495d1fd86f2ea traceID=b5dca49bbbf96d0b0890a48e487ebfd3
Aug 11 20:41:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:41:56.743228499Z" level=debug msg="Calling HEAD /_ping" spanID=95e57e5c062cd8ee traceID=5e640c297088ecf680f44c436634ed73
Aug 11 20:41:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:41:56.744289233Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=a830e435a40cd6ce traceID=6c81e3779707862b5c00d95c75ed27d5
Aug 11 20:41:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:41:56.745944306Z" level=debug msg="Calling HEAD /_ping" spanID=c0a977d9edc13a73 traceID=29642b4082dbd2896f5f00c392385ea6
Aug 11 20:41:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:41:56.746414526Z" level=debug msg="Calling GET /v1.45/volumes" spanID=bf782fec79f1aefe traceID=ad701ae9249fc2ecd0f36ffb165bf8fe
Aug 11 20:41:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:41:56.746471422Z" level=debug msg=VolumeStore.Find ByType=service.CustomFilter ByValue=0x574ac89ee400 spanID=bf782fec79f1aefe traceID=ad701ae9249fc2ecd0f36ffb165bf8fe
Aug 11 20:42:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:42:56.732364584Z" level=debug msg="Calling HEAD /_ping" spanID=ad8023e839b2d0cd traceID=82a73e48c416cc0fb4c8244594bbfe39
Aug 11 20:42:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:42:56.733339661Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=27078929cc725cae traceID=b355d9db7a1d95d481934b231eb7ff2c
Aug 11 20:42:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:42:56.743157268Z" level=debug msg="Calling HEAD /_ping" spanID=333779aaec3a2c1f traceID=0c4d022746f4988942933d9d7db84bcc
Aug 11 20:42:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:42:56.743595094Z" level=debug msg="Calling GET /v1.45/volumes" spanID=75dc505be98cb1b0 traceID=8e45cab7c38d808a157f509b1400c467
Aug 11 20:42:56 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:42:56.743637483Z" level=debug msg=VolumeStore.Find ByType=service.CustomFilter ByValue=0x574ac89ee400 spanID=75dc505be98cb1b0 traceID=8e45cab7c38d808a157f509b1400c467
Aug 11 20:43:46 ip-10-0-4-201 dockerd[685]: time="2024-08-11T20:43:46.292337945Z" level=debug msg="Calling HEAD /_ping" spanID=b98e2047ef4077b8 traceID=90aae443153223e1ebfe5092ab5cc689
After looking into the docker API docs and the testcontainers source code, I found the source of my confusion. I was setting the timeout as if it were in milliseconds, because that is the behavior described in the testcontainers docs; however, the parameter is actually in seconds per the docker API documentation. That also explains the hang above: the t=500 intended as 500 ms was treated as a 500-second grace period. Passing a value in seconds instead of milliseconds resolves the issue. The example below passes a value of 1 to the container stop() call:
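In testcontainers terms, the call that produced the log below is essentially (container here is the started container from my test suite):

```ts
// "container" is a StartedTestContainer returned by GenericContainer.start().
// The timeout is passed straight through to the Docker API, which interprets
// it as seconds: SIGTERM is sent immediately and SIGKILL follows ~1 s later.
await container.stop({ timeout: 1 });
```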
Aug 11 22:29:08 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:08.662078441Z" level=debug msg="Calling POST /containers/773069cb6abceffb8baf1a82507c7afcf5179da62f897cfc7d5a4ef56487eeec/stop?t=1" spanID=994dbd89a44beb45 traceID=034b95db5c22f606bdd5a91af629d07e
Aug 11 22:29:08 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:08.662173811Z" level=debug msg="form data: {\"t\":1}" spanID=994dbd89a44beb45 traceID=034b95db5c22f606bdd5a91af629d07e
Aug 11 22:29:08 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:08.662217308Z" level=debug msg="Sending kill signal 15 to container 773069cb6abceffb8baf1a82507c7afcf5179da62f897cfc7d5a4ef56487eeec"
Aug 11 22:29:09 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:09.275659211Z" level=debug msg="Calling HEAD /_ping" spanID=e8ec504c05e8fc17 traceID=b87fca4ca1eab03402920e04a71e3cbd
Aug 11 22:29:09 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:09.276104543Z" level=debug msg="Calling GET /v1.45/containers/json?all=1" spanID=b4b0664e3172783d traceID=981da7b4211974808414f1521eac30bd
Aug 11 22:29:09 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:09.675265709Z" level=info msg="Container failed to exit within 1s of signal 15 - using the force" container=773069cb6abceffb8baf1a82507c7afcf5179da62f897cfc7d5a4ef56487eeec spanID=994dbd89a44beb45 traceID=034b95db5c22f606bdd5a91af629d07e
Aug 11 22:29:09 ip-10-0-4-201 dockerd[685]: time="2024-08-11T22:29:09.675310993Z" level=debug msg="Sending kill signal 9 to container 773069cb6abceffb8baf1a82507c7afcf5179da62f897cfc7d5a4ef56487eeec"
As a result, the documentation should be updated to correctly call out the stop timeout as being in seconds.
Expected Behaviour
The testcontainers-node documentation correctly identifies the stop timeout parameter as being in seconds rather than milliseconds.
Actual Behaviour
The documentation indicates that stop() takes milliseconds, which causes badly behaved containers (those that do not respect SIGTERM) to hang seemingly indefinitely. In reality, they are waiting out a timeout 1000 times longer than intended: for example, a stop({timeout: 10000}) meant as 10 seconds becomes a 10,000-second (nearly three-hour) grace period.
Testcontainer Logs
See the docker API logs above.
Steps to Reproduce
In a Linux environment with any recent version of docker and testcontainers (a consolidated sketch follows these steps):
1. Configure the docker daemon in debug mode so that API calls are echoed.
2. Instantiate a badly behaved container such as OpenSearch 2.13: const container = await new GenericContainer("opensearchproject/opensearch:2.13.0").start()
3. Stop the container with a timeout value of 5: await container.stop({timeout: 5});
4. Monitor the docker daemon logs and observe that the container is killed 5 seconds after SIGTERM (signal 15) is sent to it.
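Putting the steps together, the repro is roughly the following (a sketch, assuming a standard testcontainers import; not copied verbatim from my test suite):

```ts
import { GenericContainer } from "testcontainers";

// OpenSearch 2.13 is "badly behaved" here in the sense that it does not exit
// promptly on SIGTERM, which makes the stop timeout behaviour visible.
const container = await new GenericContainer("opensearchproject/opensearch:2.13.0").start();

// Per the current testcontainers docs this reads as a 5 ms timeout, but the
// value is forwarded to the Docker API as seconds: the daemon sends SIGTERM,
// waits ~5 s, then force-kills the container with SIGKILL.
await container.stop({ timeout: 5 });
```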
Environment Information
Operating System: Ubuntu 20.04
Docker Version: 26.1.1
Node version: 18.17.1
Testcontainers version: 10.8.1
Thanks @vbaier-mitll for the detailed issue. Would you be willing to raise a PR to fix it? Should be a case of updating the parameter name in types and interfaces, and updating the containers page in the docs