There are certain steps that almost every container runtime implements in order to establish networking in containers, and they are broadly the same across runtimes: 1) creating a network namespace, 2) creating a virtual cable (a veth pair) and attaching one end to the network namespace and the other to a bridge interface, 3) assigning an IP address inside the namespace, 4) bringing the interfaces up, and 5) enabling NAT (IP masquerade)/port forwarding. A program was developed that performs all the steps defined above, and it was called bridge. bridge can work with almost every container runtime there is. E.g. whenever k8s or rkt creates a container, it calls the bridge program to configure networking for that container:
bridge add <containerID> <namespace>
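The five steps above can be sketched with iproute2/iptables commands. This is a readable sketch, not a real implementation: the `run` wrapper prints each command instead of executing it (so no root is needed), and the names ns1, veth0/veth1, v-net-0, and 10.244.1.0/24 are illustrative assumptions.

```shell
#!/bin/sh
# `run` prints each command instead of executing it, so this sketch can
# be read and traced without root privileges.
run() { echo "$@"; }

CONTAINER_NS=ns1
run ip netns add "$CONTAINER_NS"                    # 1) create the network namespace

run ip link add veth0 type veth peer name veth1     # 2) create the virtual cable (veth pair),
run ip link set veth1 netns "$CONTAINER_NS"         #    attach one end to the namespace
run ip link set veth0 master v-net-0                #    and the other end to the bridge

run ip -n "$CONTAINER_NS" addr add 10.244.1.2/24 dev veth1   # 3) assign an IP address

run ip -n "$CONTAINER_NS" link set veth1 up         # 4) bring the interfaces up
run ip link set veth0 up

run iptables -t nat -A POSTROUTING -s 10.244.1.0/24 -j MASQUERADE   # 5) NAT / IP masquerade
```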
Other programs like bridge can be developed, but one needs to make sure they work smoothly with every container runtime. This is where CNI (Container Network Interface) comes into the picture. CNI defines a set of rules that every bridge-like program (plugin) and every container runtime must adhere to in order to stay compatible. So every plugin developed in alignment with CNI will work with every container runtime that aligns with CNI, and vice versa.
CNI rules for networking program(plugin) —
1) It must support ADD/DEL/CHECK operations to add/delete/check a container's network attachment.
2) It must support parameters such as the container ID and the network namespace path.
3) It must manage IP address assignment to the pods.
4) It must return results in a specific (JSON) format.
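Per the CNI specification, the runtime selects the operation through the CNI_COMMAND environment variable, passes the container details in other CNI_* variables, and pipes the JSON network configuration to the plugin on stdin. The sketch below prints that invocation rather than executing it; /opt/cni/bin and /etc/cni/net.d are the conventional paths, and the container ID abc123 is made up.

```shell
#!/bin/sh
# Sketch of the CNI invocation contract. Printed, not executed, since the
# plugin binary and a real netns would be needed to run it for real.
invoke_plugin() {
  # $1 = ADD/DEL/CHECK, $2 = container ID, $3 = netns path
  echo "CNI_COMMAND=$1 CNI_CONTAINERID=$2 CNI_NETNS=$3 CNI_IFNAME=eth0 /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf"
}

invoke_plugin ADD abc123 /var/run/netns/abc123
invoke_plugin DEL abc123 /var/run/netns/abc123
```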
CNI rules for container runtime —
1) It must create a network namespace for each container.
2) It must identify the network the container must attach to.
3) It must invoke the networking program (plugin) when a container is ADDed or DELeted.
4) It must supply the network configuration to the plugin in JSON format.
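Rule 4 in practice: below is a minimal JSON network configuration for the bridge plugin with host-local IPAM, of the kind a CNI-compliant runtime reads (conventionally from /etc/cni/net.d/). The network name, bridge name, and subnet are illustrative; the file is written to the current directory here so the sketch needs no privileges.

```shell
#!/bin/sh
# Write an illustrative CNI network configuration file. A real runtime
# would place this under /etc/cni/net.d/ and feed it to the plugin.
cat > 10-bridge.conf <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
EOF
```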
CNI ships with a few supported plugins, such as bridge, VLAN, IPVLAN, and MACVLAN, plus IPAM plugins such as DHCP and host-local. There are also third-party plugins such as Weaveworks (Weave Net), Flannel, Cilium, VMware NSX, Infoblox, etc.
IPAM (IP Address Management) is the administration of DNS and DHCP, the network services that assign and resolve IP addresses for machines in a TCP/IP network. Simply put, IPAM is a means of planning, tracking, and managing the Internet Protocol address space used in a network.
Container runtimes and orchestrators such as rkt, k8s, and Mesos align with the CNI rules. But Docker doesn't.
Docker has its own set of standards, called CNM (Container Network Model), which aims to solve networking challenges with a slightly different approach. Due to these differences, CNI plugins do not natively integrate with Docker, i.e. we cannot use them like
docker run --network=cni-bridge nginx . But there is still a way to integrate them. Create the container with
--network=none and then manually invoke the bridge plugin by running
bridge add <containerID> <namespace> .
This is how k8s does it when it creates containers using the Docker runtime. It creates the container with
--network=none and then invokes the CNI networking plugin to take care of the rest of the networking configuration.
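That workaround can be sketched as follows. The commands are printed rather than executed (both docker and the plugin need a real host), and the container name web and pid 12345 are made-up examples.

```shell
#!/bin/sh
# `run` prints each command instead of executing it.
run() { echo "$@"; }

# 1) start the container with networking disabled
run docker run -d --name web --network=none nginx

# 2) docker keeps the container's netns at /proc/<pid>/ns/net; find the pid
run docker inspect -f '{{.State.Pid}}' web

# 3) hand that namespace to the bridge plugin by hand
run bridge add web /proc/12345/ns/net
```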