DHCP bridge driver when DHCP server/client on the same interface #1156

Open · xhebox opened this issue Jan 8, 2025 · 3 comments

Comments

@xhebox commented Jan 8, 2025

In my setup, the DHCP server and the DHCP client (netavark dhcp-proxy) sit on the same bridge interface. The kernel eventually drops the UDP packets, because they are sent from the interface to itself.

I need a new way to specify which interface the netavark proxy should use to send its UDP packets.

My current patch adds a new field, options.dhcp_interface="veth_bind_to_bridge_interface", much like options.mode=unmanaged. But I'd like to hear ideas from your side.
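For illustration, the proposed option would be set like any other netavark bridge option. A hypothetical invocation from the podman side (dhcp_interface is the field this patch proposes, not an existing option; br0, veth_dhcp0, and mynet are placeholders):

```
# hypothetical: dhcp_interface is the new option proposed above
podman network create --driver bridge --interface-name br0 \
    --ipam-driver dhcp \
    -o mode=unmanaged -o dhcp_interface=veth_dhcp0 mynet
```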

/cc @Luap99

@Luap99 (Member) commented Jan 8, 2025

Adding an option for this seems backwards, as it just complicates things for no real reason. The point is that this should work out of the box. I haven't looked deeply into how we do the DHCP handshake, but most likely the issue is that we send a broadcast packet, which of course is not sent back out the same interface it was sent from.

Maybe we should have the proxy join the netns of each container and send the packets from the container's side of the interface, like a regular DHCP client would.

@xhebox (Author) commented Jan 8, 2025

> I haven't looked deeply into how we do the DHCP handshake, but most likely the issue is that we send a broadcast packet, which of course is not sent back out the same interface it was sent from.

Yes, that is the key. Whatever socket type you use, SOCK_RAW or SOCK_DGRAM, broadcasting between a bridge and its child interfaces just won't work.

> Maybe we should have the proxy join the netns of each container and send the packets from the container's side of the interface, like a regular DHCP client would.

That would be ideal! Though I am not sure how the netavark proxy can join every container simultaneously...

@Luap99 (Member) commented Jan 8, 2025

> That would be ideal! Though I am not sure how the netavark proxy can join every container simultaneously...

You do not really have to. I have not looked into how the DHCP lib opens its sockets, but namespace-wise all that needs to be done is:

setns(container)
socket(SOCK_RAW)
setns(host)

The socket then stays associated with the namespace it was opened in, AFAIK. I have never tried it with SOCK_RAW, but we use this trick to open the AF_NETLINK socket that configures the interfaces in the namespace, so I would assume it works for all socket types.

So then we keep one open socket fd per container, which is manageable.
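A minimal sketch of this pattern in C, assuming the proxy knows the container's PID (the socket_in_netns helper, the placeholder PID, and the AF_PACKET/SOCK_RAW choice are illustrative, not netavark's actual code):

```c
/* Minimal sketch of the setns() trick: open a socket inside a container's
 * network namespace, then switch back to the host namespace. Compile with
 * gcc; running setns() requires CAP_SYS_ADMIN. */
#define _GNU_SOURCE
#include <arpa/inet.h>      /* htons */
#include <fcntl.h>          /* open, O_RDONLY, O_CLOEXEC */
#include <linux/if_ether.h> /* ETH_P_IP */
#include <sched.h>          /* setns, CLONE_NEWNET */
#include <stdio.h>          /* perror */
#include <sys/socket.h>     /* socket, AF_PACKET, SOCK_RAW */
#include <unistd.h>         /* close */

/* Open a raw socket inside the network namespace at ns_path, then return
 * to the caller's original namespace. The returned fd keeps sending and
 * receiving through the namespace it was created in. */
static int socket_in_netns(const char *ns_path)
{
    int sock = -1;
    /* Keep a handle to the current (host) netns so we can switch back. */
    int host_ns = open("/proc/self/ns/net", O_RDONLY | O_CLOEXEC);
    int container_ns = open(ns_path, O_RDONLY | O_CLOEXEC);
    if (host_ns < 0 || container_ns < 0)
        goto out;

    if (setns(container_ns, CLONE_NEWNET) < 0) /* enter container netns */
        goto out;

    /* A DHCP client typically needs a packet socket, since it must talk
     * before the interface has an IP address. */
    sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IP));

    if (setns(host_ns, CLONE_NEWNET) < 0) {    /* switch back to host */
        if (sock >= 0)
            close(sock);
        sock = -1;
    }
out:
    if (host_ns >= 0)
        close(host_ns);
    if (container_ns >= 0)
        close(container_ns);
    return sock;
}

int main(void)
{
    /* "12345" is a placeholder container PID. */
    int fd = socket_in_netns("/proc/12345/ns/net");
    if (fd < 0) {
        perror("socket_in_netns");
        return 1;
    }
    /* fd now lives in the container's netns even though the process runs
     * in the host netns; one such fd would be kept per container. */
    close(fd);
    return 0;
}
```

The key property is that a socket fd stays pinned to the netns it was created in, so the proxy can hold one such fd per container while the process itself keeps running in the host namespace.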
