DHCP bridge driver when DHCP server/client on the same interface #1156
Adding an option for this seems backwards, as it just complicates things for no real reason. Maybe we should have the proxy join the netns of each container and make it send the packets from the container's interface, like a regular DHCP client would do.
Yes, that is the key. Whatever stack you use,
That is ideal! Though I am not sure how the netavark proxy can join every container simultaneously...
You do not really have to. I have not looked into how the dhcp lib opens its sockets, but namespace-wise all that needs to be done is: setns() into the container's netns, open the socket there, then setns() back. The socket then always stays associated with the original namespace it was opened in, AFAIK. I never tried it with SOCK_RAW, though we use this trick to open the AF_NETLINK socket to configure the interfaces in the namespace, so I would assume it works for all socket types. Then we keep one open socket fd per container, which is manageable.
On my setup, the dhcp server and the dhcp client (the netavark dhcp proxy) are on the same bridge interface. The kernel eventually dropped the UDP packets because they were sent by the host to itself. I need a new way to specify the interface that the netavark proxy uses to send its UDP packets.
My current patch adds a new field in the network options, options.dhcp_interface="veth_bind_to_bridge_interface", just like options.mode=unmanaged. But I'd like to get ideas from your side. /cc @Luap99
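For context, here is a sketch of how the proposed field might sit in a bridge network configuration. Only `options.mode` and `options.dhcp_interface` come from the discussion above; the surrounding keys and values are illustrative, and `dhcp_interface` is the author's unmerged patch, not an existing option:

```json
{
  "name": "mynet",
  "driver": "bridge",
  "network_interface": "podman1",
  "options": {
    "mode": "unmanaged",
    "dhcp_interface": "veth_bind_to_bridge_interface"
  }
}
```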