Hi folks, hope your weekend is going well.

So I have put myself into a situation. I have a home server with Docker installed, running fine so far. In my home network I have multiple networks for different purposes. The whole network stack looks like this: OPNSense — Switch — Ubuntu Server

The server is connected to a switch port with PVID 100 and runs on vlan0.100. Now my goal is to move some Docker containers to other VLANs. To accomplish that, I set up vlan0.101 and vlan0.102 on my server as interfaces, each with its own IP and default gateway on its subnet (e.g. 192.168.101.10). Next, I set up macvlans for my Docker containers. Then I set the switch port to also allow tagged traffic, but kept it on PVID 100. Finally, on my OPNSense I changed the host entry for my server from 192.168.100.10 to include all 3 IPs, so homeserver = 192.168.100.10, 192.168.101.10, 192.168.102.10.
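For reference, the VLAN interfaces I describe above boil down to something like this (a simplified iproute2 sketch; in reality it's done via Netplan, and the NIC name is just an example):

```shell
# vlan0.101: tagged VLAN 101 on top of the physical NIC (name assumed)
ip link add link eno1 name vlan0.101 type vlan id 101
ip addr add 192.168.101.10/24 dev vlan0.101
ip link set vlan0.101 up
# Same again for vlan0.102 with 192.168.102.10/24.
# Caveat: a second "default gateway" on vlan0.101/102 competes with the
# one on vlan0.100; Linux only uses one default route unless you set up
# policy routing, which can silently break outbound (internet) traffic.
ip route add default via 192.168.101.1 dev vlan0.101
```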

This setup seems to work fine for internal network, however no services are reachable from the outside (internet) anymore.

My first question is: Am I thinking correctly about this? Or is this over-engineered bs at this point and there is a better way to put docker containers on different subnets.

Second question is: Any ideas what’s breaking the internet access?

Thanks for the help in advance :D

EDIT: I have not changed the VLAN of any container yet

  • buedi@feddit.org · 2 days ago

    I can’t see your full setup / config from here, but a) you are not overengineering that. Using VLANs to segment networks is a very good practice. And although neither Docker nor Podman allows macvlan when running rootless, my gut feeling tells me that segmenting my network takes priority over running rootless, because I think attack vectors from traversing networks are much more common than breaking out of a container into the host. But that is just my gut feeling. b) I think I run here what you want to achieve, so I will try to explain what I did.

    My Setup is similar to yours: OPNsense (OpenWRT before that), a Switch that is capable of VLANs, and an Ubuntu Server with a single NIC that hosts all the Compose stacks.

    1. You already configured your VLANs in OPNsense, so I will just mention that I created mine via Interface -> Devices -> VLAN on the LAN Interface of my OPNsense and then used the Assignments to finally make them available. On the OPNSense each one gets a static IP from the respective Network I defined for the VLAN.
    2. On the Docker Host, in Netplan I configured the single NIC I have as a Bridge. I cannot remember if that was strictly necessary or if I was just planning ahead in case I add a 2nd NIC later, so I would not have to reconfigure the whole networking again. Of course that Bridge sits in my LAN, and the Netplan config looks like this:
    network:
      ethernets:
        eno1:
          dhcp4: no
      version: 2
      bridges:
        br0:
          addresses:
          - 192.x.x.3/24
          nameservers:
            addresses:
            - 192.x.x.x
            search:
            - my.lan
            - local
          routes:
          - to: default
            via: 192.x.x.1
          interfaces:
            - eno1
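    To roll that config out safely, something along these lines should work (a sketch; netplan try reverts automatically if you lock yourself out and do not confirm):

```shell
# Validate the config and apply it with automatic rollback on timeout
sudo netplan try
# Check that the bridge is up with the expected address
ip -br addr show br0
# Show the bridge and its enslaved NIC
bridge link show
```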
    
    3. Now that the Docker Containers can use the VLANs, I had to create Docker Networks as macvlan like this:
    docker network create -d macvlan --subnet=192.x.10.0/24 --gateway=192.x.10.1 -o parent=br0.10 vlan10
    docker network create -d macvlan --subnet=192.x.20.0/24 --gateway=192.x.20.1 -o parent=br0.20 vlan20
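    If that worked, Docker will have created the tagged subinterfaces itself; a quick way to check (a sketch, using the names from the commands above):

```shell
# The macvlan networks should now exist:
docker network ls --filter driver=macvlan
# ...and Docker auto-created br0.10 / br0.20 as 802.1Q subinterfaces:
ip -d link show br0.10
ip -d link show br0.20
```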
    
    4. Now for a Container to make use of those Networks, you have to define them as External in the Compose Stack like this:
    services:
      my-service:
        image: blah
        ...
        networks:
          vlan10:
    
    networks:
      vlan10:
        name: vlan10
        external: true
    

    In 4. you have the option to not define an ipv4_address in the networks section; then Docker will just pick its own addresses when the containers start. Letting OPNsense assign IP addresses dynamically in such a VLAN is something that did not work for me. So either you let Docker pick the IPs when starting a stack, or you define the IP addresses in the stack. If you do the latter, you have to do it for every stack that ever joins that VLAN; otherwise Docker might pick an IP that you already assigned manually, and that stack will not start.
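    For completeness, the static-IP variant of the stack from step 4 might look like this (the address is just an example inside the vlan10 subnet):

```yaml
services:
  my-service:
    image: blah
    networks:
      vlan10:
        ipv4_address: 192.x.10.50   # example; must be unique across all stacks

networks:
  vlan10:
    name: vlan10
    external: true
```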

    I also wanted to have some services running directly in the LAN via Docker. This setup is a bit more involved and requires you to create a shim network; otherwise the Docker Host itself will not be able to reach Containers running in the LAN Network. This was the case for my Pi-hole, for example, which needed an IP in my LAN Network and had to be reachable by the Docker Host itself too. There is a very good post about macvlan and shim networks in this blog: https://blog.oddbit.com/post/2018-03-12-using-docker-macvlan-networks/
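    The shim trick from that blog post boils down to giving the host its own macvlan interface on the same parent, because macvlan children cannot talk to the host through the parent interface directly. Roughly like this (names and addresses are examples, not my actual config):

```shell
# Host-side macvlan "shim" on the same parent as the Docker network
ip link add macvlan-shim link br0 type macvlan mode bridge
# A spare LAN address reserved for the host's shim (keep it out of DHCP)
ip addr add 192.x.x.250/32 dev macvlan-shim
ip link set macvlan-shim up
# Route the container's LAN IP via the shim so host <-> container works
ip route add 192.x.x.60/32 dev macvlan-shim
```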

    I hope this helps. Do not give up. Segmenting your Networks is important, especially if you plan to publish some services over the Internet.

    • zo0@programming.devOP · 2 days ago

      Thanks, that’s a great write-up. One thing I didn’t understand, however: in your Docker macvlan commands you set the parent to br0.10 and br0.20. Where are those parents defined?

      Maybe I misunderstood the macvlan documentation, but what I did was define a VLAN in the server’s Netplan (vlan0.100) and set the macvlan parent to that vlan0.100. Is that not how it’s supposed to be done?

      • buedi@feddit.org · 2 days ago

        The .10 or .20 just tells Docker to create that specific subinterface automatically. In my example, ip link will show new interfaces called br0.10 and br0.20 after creating the macvlan networks for VLAN IDs 10 and 20. You do not need to adjust your Netplan config when doing it like that. I would even assume that you are not allowed to also define VLAN IDs 10 and 20 in Netplan in that case; I would expect that to cause issues. Also see https://docs.docker.com/engine/network/drivers/macvlan/ in the 802.1Q trunk bridge mode section.
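        Both variants side by side, the way I read those docs (a sketch, using the names from my earlier example and from yours):

```shell
# Option A (what I do): pass a dotted parent; Docker creates the
# 802.1Q subinterface br0.10 on the fly:
docker network create -d macvlan \
  --subnet=192.x.10.0/24 --gateway=192.x.10.1 \
  -o parent=br0.10 vlan10

# Option B (what you did): pre-create the VLAN interface yourself
# (e.g. vlan0.100 in Netplan) and pass that existing interface as parent:
docker network create -d macvlan \
  --subnet=192.168.100.0/24 --gateway=192.168.100.1 \
  -o parent=vlan0.100 vlan100
```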

        There are probably multiple ways to do all of this, but this is how I did it, and it has worked for me for a few years now without touching it again. All VLANs are separated from each other and no VLAN has access to the LAN side. Everything is forced to go through tagged VLANs via the Switch to the Firewall, where I then create rules to allow / deny traffic from / to all my networks and the Internet.

        For me, this setup is very simple to re-implement should my Host go down: no special configuration in Netplan is needed, just create the Docker Networks and start up my stacks again.