Describe the bug
On k8s, the io-engine container hangs whilst trying to flock /dev/hugepages.
To Reproduce
Set up your system with a default hugepage size of 1GiB and allocate 2MiB hugepages.
Expected behavior
Should work :)
Additional context
The problem is that we end up with both 1Gi and 2Mi hugepages mounted on /dev/hugepages.
/ # mount | grep huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=1024M)
nodev on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
EAL init is not able to cope with this and hangs as soon as it tries to flock /dev/hugepages.
I think the fix is to ensure we mount 1Gi and 2Mi hugepages on separate mount points, for example:
- name: hugepage
  mountPath: /dev/hugepages-2MiB
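For context, here is a minimal sketch of how the two sizes could be kept apart (the container name, volume names and paths are illustrative, not the chart's actual values; the emptyDir mediums are the standard Kubernetes HugePages-<size> mediums):

```yaml
# Sketch only: names and paths are illustrative, not the current helm-chart values.
containers:
  - name: io-engine
    volumeMounts:
      - name: hugepage-2mi
        mountPath: /dev/hugepages-2MiB   # 2MiB pages on their own mount point
      - name: hugepage-1gi
        mountPath: /dev/hugepages-1GiB   # 1GiB pages kept separate
volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi
  - name: hugepage-1gi
    emptyDir:
      medium: HugePages-1Gi
```

With each size on its own hugetlbfs mount, EAL only ever sees a single page size behind the directory it tries to flock.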
In fact, today the helm chart is not very flexible, as it hard-codes 2MiB hugepages. We should add some variables to allow choosing which hugepage size to use...
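As a rough, hypothetical shape for such values (names invented for illustration; they would need to match the chart's existing conventions):

```yaml
# Hypothetical values.yaml entries -- not the current chart's API.
io_engine:
  hugepages:
    size: 2Mi      # which hugepage size to mount and request (e.g. 2Mi or 1Gi)
    amount: 2Gi    # total hugepage memory for the io-engine pod
```

The deployment template could then derive both the mountPath (e.g. /dev/hugepages-2MiB) and the corresponding hugepages-2Mi or hugepages-1Gi resource request from these values.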