timescaledb-tune wrapper CPU heuristic off in GKE/k8s #173

Open
remingtonc opened this issue Jul 9, 2021 · 0 comments

The heuristic in 001_timescaledb_tune.sh does not tune according to the Kubernetes resource limits, at least in GKE (Google Kubernetes Engine). With a CPU request of 1.5 and a limit of 3, /sys/fs/cgroup/cpuset/cpuset.effective_cpus still reports the node's full CPU range, so the tuning is based on more CPUs than the container is actually allowed to use.

# tune heuristic
$ cat /sys/fs/cgroup/cpuset/cpuset.effective_cpus
0-7
$ cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-7
# resource request
$ cat /sys/fs/cgroup/cpu/cpu.shares
1536
# resource limits
$ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
300000
$ cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
$ cat /sys/fs/cgroup/cpu/cpu.stat
nr_periods 295975
nr_throttled 23928
throttled_time 2144160036200

It seems the tuning should derive the CPU count from /sys/fs/cgroup/cpu/cpu.cfs_quota_us and /sys/fs/cgroup/cpu/cpu.cfs_period_us rather than the cpuset, whenever the quota is more constrained than the cpuset. (Here the quota allows 300000 / 100000 = 3 CPUs, while the cpuset 0-7 exposes 8; the cpu.stat output above shows the container is already being throttled.)
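A minimal sketch of that derivation, assuming the cgroup v1 paths from the transcript above (sample values are hardcoded here for illustration; the real wrapper would read them from /sys/fs/cgroup):

```shell
#!/bin/sh
# Sample values, as observed in the GKE transcript above.
quota=300000      # cpu.cfs_quota_us  (-1 means "no quota set")
period=100000     # cpu.cfs_period_us
cpuset="0-7"      # cpuset.effective_cpus

# CPUs allowed by the cpuset: expand a list like "0-7" or "0-3,6".
cpuset_cpus=$(echo "$cpuset" | awk -F',' '{
  n = 0
  for (i = 1; i <= NF; i++) {
    if (split($i, r, "-") == 2) n += r[2] - r[1] + 1
    else n += 1
  }
  print n
}')

# CPUs allowed by the CFS quota, rounded up.
if [ "$quota" -gt 0 ]; then
  quota_cpus=$(( (quota + period - 1) / period ))
else
  quota_cpus=$cpuset_cpus
fi

# Tune to the more constrained of the two.
if [ "$quota_cpus" -lt "$cpuset_cpus" ]; then
  cpus=$quota_cpus
else
  cpus=$cpuset_cpus
fi
echo "derived CPU count: $cpus"
```

With the values from this issue, the quota (3 CPUs) wins over the cpuset (8 CPUs), matching the container's actual limit.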

This is easily worked around by setting the TS_TUNE environment variables to match the limits, which is simple enough, but it could be more automatic. :)
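For reference, the manual workaround looks roughly like this in a pod spec (a sketch; TS_TUNE_NUM_CPUS and TS_TUNE_MEMORY are the tuning variables the wrapper reads, and the values are assumed to mirror the container's limits):

```yaml
containers:
  - name: timescaledb
    image: timescale/timescaledb:latest-pg13
    resources:
      requests:
        cpu: "1500m"
      limits:
        cpu: "3"
    env:
      - name: TS_TUNE_NUM_CPUS
        value: "3"        # keep in sync with resources.limits.cpu
```

Keeping the env var and the limit in sync by hand is exactly the duplication that an automatic quota-aware heuristic would remove.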

@erimatnor erimatnor transferred this issue from timescale/timescaledb-docker Nov 23, 2021
@erimatnor erimatnor transferred this issue from timescale/timescaledb-tune Nov 23, 2021