
Insufficient timeout for opening K8S logstream #883

Open · Cl0udius opened this issue Oct 19, 2023 · 5 comments

Labels: help wanted (Extra attention is needed)

Comments

Cl0udius commented Oct 19, 2023

Hi,

this is a follow-up to issue #683.
We have noticed that the 1-second sleep between retries is sometimes not sufficient and leads to job loss when using AWX:

time.Sleep(time.Second)

Would it be possible to make this value configurable instead of keeping it fixed at 1 second? An environment variable that we can set on the pod could do the job, with the current one second as the default.
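For illustration, a minimal sketch of what such an override could look like in Go. The variable name `RECEPTOR_KUBE_LOG_RETRY_SLEEP` and the helper `retrySleep` are hypothetical names for this sketch, not part of receptor:

```go
package main

import (
	"log"
	"os"
	"time"
)

// retrySleep returns the delay between log-stream retry attempts.
// It defaults to the current hard-coded one second, but can be
// overridden via a hypothetical RECEPTOR_KUBE_LOG_RETRY_SLEEP
// environment variable holding a Go duration, e.g. "5s" or "500ms".
func retrySleep() time.Duration {
	if v := os.Getenv("RECEPTOR_KUBE_LOG_RETRY_SLEEP"); v != "" {
		if d, err := time.ParseDuration(v); err == nil && d > 0 {
			return d
		}
		log.Printf("invalid RECEPTOR_KUBE_LOG_RETRY_SLEEP %q, falling back to 1s", v)
	}
	return time.Second
}

func main() {
	time.Sleep(retrySleep()) // replaces the fixed time.Sleep(time.Second)
}
```

With something like this in place, setting the variable in the pod spec (e.g. `RECEPTOR_KUBE_LOG_RETRY_SLEEP=5s`) would lengthen the gap between retries without rebuilding the image.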

fosterseth (Member) commented

I'd be in favor of being able to override this with an env variable.

fosterseth added the help wanted label and removed the needs_triage label on Oct 25, 2023
chadmf commented Oct 25, 2023

I endorse this!

JoelKle commented Sep 19, 2024

We have the same problem as @Cl0udius. Our k3s environment is under load from time to time, and the one-second timeout with 5 retries leads to job loss.

@fosterseth
This issue is almost a year old. Can I move forward and open a PR for it?
We would implement this feature the same way as the env var RECEPTOR_KUBE_CLIENTSET_RATE_LIMITER: https://github.com/ansible/receptor/blob/devel/pkg/workceptor/kubernetes.go#L1275
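For reference, the same env-var pattern could also cover the retry count. A self-contained sketch, where `RECEPTOR_KUBE_LOG_RETRY_COUNT` and `openLogStream` are assumed names for illustration only:

```go
package main

import (
	"errors"
	"log"
	"os"
	"strconv"
	"time"
)

// openLogStream stands in for the real call that opens the K8S log stream.
func openLogStream() error { return errors.New("stream not ready") }

func main() {
	// Defaults match the current behavior: 5 attempts, one second apart.
	retries := 5
	if v := os.Getenv("RECEPTOR_KUBE_LOG_RETRY_COUNT"); v != "" { // hypothetical name
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			retries = n
		}
	}
	var err error
	for i := 0; i < retries; i++ {
		if err = openLogStream(); err == nil {
			return
		}
		time.Sleep(time.Second) // would become the configurable delay
	}
	log.Fatalf("giving up after %d attempts: %v", retries, err)
}
```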

aeter added a commit to aeter/receptor that referenced this issue Oct 3, 2024
aeter (Contributor) commented Oct 3, 2024

Together with @JoelKle, we agreed that I would send a PR with a possible fix for this. Thanks! (The PR is already referenced.)
@JoelKle - please confirm when you have time that everything is OK. Thanks!

JoelKle commented Nov 18, 2024

Yes, we worked together with @aeter and agreed that he would submit the pull request.

aeter added a commit to aeter/receptor that referenced this issue Nov 21, 2024
aeter added a commit to aeter/receptor that referenced this issue Dec 10, 2024
AaronH88 pushed a commit that referenced this issue Dec 11, 2024