This post shows how I set up a Kubernetes 1.20 cluster with the NSX-T Container Plugin. All the namespaces will use the same T1 Gateway, and the stateful services will be applied to this gateway.

Bill of materials:

  • Ubuntu Server 20.04.3 LTS
  • Kubernetes 1.20
  • NSX-T 3.1
  • NSX container plugin 3.1
  • vSphere 7.0.3

Prepare NSX-T

I have NSX-T running in Policy mode and a T0 Gateway set up with BGP configured to my ToR switch.

Management Connection

I created a VLAN segment called nsx-k8-management-vlan30 for management access to the VMs. This provides north/south connectivity and access to the Kubernetes API; a standard vDS/vSS port group would also suffice. Internet access is enabled on this network because it will be used for Kubernetes installation, updates, management and, in my case, pulling containers from Docker Hub.

Overlay Connection

Next, I created a T1 Gateway K8s-Cluster1-t1 connected to the T0 Gateway and enabled all the options under Route Advertisement.

Then, I created an overlay segment nsx-k8-transport and connected it to the T1 Gateway above. The second vNIC of the VMs will be connected to this segment; NCP uses it to tunnel the overlay network.

IP address pools

Next, I created an IP address block called k8-container-network for the container networking. This is a private range since it will not leave the environment.

I created another pool called k8-external-network for external networking. These addresses are routable in my environment, as the container networks of the namespaces will be NAT'ed behind these IPs.

Prepare the Ubuntu VMs

I am running Ubuntu Server 20.04.3 LTS with OpenSSH Server, 2 vCPUs, 4 GB RAM and 2 vNICs. One vNIC is connected to the management network and the other to the overlay segment. I will be running one master and two worker nodes. These tasks are common to all virtual machines, so I performed them on one VM and then cloned it to two more. After cloning, I set the hostnames on all three nodes.

Disable SWAP.

  • Deactivate swap by typing: sudo swapoff -v /swap.img
  • Remove the swap file entry from the /etc/fstab file
  • Finally, delete the swap file: sudo rm /swap.img (a consolidated sketch of these steps follows below)
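A consolidated sketch of the steps above, assuming the default swap file is /swap.img (adjust the sed pattern if your /etc/fstab entry differs):

    sudo swapoff -v /swap.img                # deactivate the swap file
    sudo sed -i '/\/swap\.img/d' /etc/fstab  # remove the swap entry so it does not return after a reboot
    sudo rm /swap.img                        # delete the swap file itself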

Install additional packages needed.

harjinder@k8s-master:~$ sudo apt-get install docker.io open-vm-tools apt-transport-https python linux-headers-$(uname -r)

Check Docker status: harjinder@k8s-master:~$ sudo systemctl status docker

Enable the docker service: harjinder@k8s-master:~$ sudo systemctl enable docker.service

Make the current user member of the docker group: harjinder@k8s-master:~$ sudo usermod -a -G docker $USER
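The group change only applies to new login sessions, so either log out and back in or start a subshell with the group active before running docker without sudo:

    newgrp docker        # opens a shell with the docker group applied
    docker ps            # should now work without sudo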

Install Kubelet

Kubelet is the primary node agent. It runs on every node and manages the containers that Kubernetes creates on that node.

harjinder@k8s-master:~$ sudo touch /etc/apt/sources.list.d/kubernetes.list

harjinder@k8s-master:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

harjinder@k8s-master:~$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list

harjinder@k8s-master:~$ sudo apt-get update

The latest version of Kubernetes supported by NSX-T Container Plugin 3.1.2 is 1.20, so I looked up the exact patch version and installed it next.

harjinder@k8s-master:~$ apt-cache show kubelet | grep "Version: 1.20"
Version: 1.20.15-00

harjinder@k8s-master:~$ sudo apt-get install -y kubelet=1.20.15-00 kubeadm=1.20.15-00 kubectl=1.20.15-00
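Optionally, the three packages can be pinned so a routine apt-get upgrade does not move past the release supported by NCP; a minimal example:

    sudo apt-mark hold kubelet kubeadm kubectl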

Upload NSX-T plugin to the VM

I downloaded the plugin, extracted it and uploaded it to the VM. Then I loaded the image into the local Docker repository.

harjinder@k8s-master:~$ docker load -i nsx-ncp-ubuntu-3.1.2.17855682.tar 
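A quick check that the load worked is to list the local images; the exact repository and tag come from the tar file, so they may differ slightly from what is shown later in this post:

    docker images | grep nsx-ncp    # the NCP image should appear in the local image list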

These were all the common tasks for the VMs. 

Once this was done, I cloned the "k8s-master" VM to two more VMs and named them "k8s-worker1" and "k8s-worker2". All VMs were given a static IP on the management network, and the hosts file on each node was updated to map the node names to their respective IP addresses.
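A minimal sketch of those per-node steps; the worker addresses below are placeholders (only the master IP 192.168.30.221 appears elsewhere in this post), so substitute your own management IPs:

    # on each clone, set a unique hostname
    sudo hostnamectl set-hostname k8s-worker1

    # on every node, map the node names to their management IPs (example addresses)
    echo "192.168.30.221 k8s-master"  | sudo tee -a /etc/hosts
    echo "192.168.30.222 k8s-worker1" | sudo tee -a /etc/hosts
    echo "192.168.30.223 k8s-worker2" | sudo tee -a /etc/hosts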


Tag Segment Ports

NCP needs to know the VIF ID of the vNIC that connects each VM to the overlay segment. This is how the container VIFs are linked to the correct host VIFs, and it is achieved by tagging the segment ports in the format below.

  • Tag: <node name>, Scope: ncp/node_name
  • Tag: <cluster name>, Scope: ncp/cluster

So the tags for my master node look like this

  • Tag: k8s-master, Scope: ncp/node_name
  • Tag: k8s-cluster1, Scope: ncp/cluster

Similarly, I tagged the worker node ports with their respective tags.

Update the Configuration file

The plugin (NCP) folder contains a file called ncp-ubuntu-policy.yaml. This file needs to be updated with the relevant configuration and copied to the master node. The updates I made for my lab are below; some sections appear twice because the file carries separate ConfigMaps for NCP itself and for the nsx-node-agent.

    [coe]
    adaptor = kubernetes
    cluster = k8s-cluster1
    loglevel = ERROR
    node_type = HOSTVM

    [nsx_v3]
    policy_nsxapi = True
    nsx_api_managers = 192.168.20.23
    nsx_api_user = admin
    nsx_api_password = VMware1!VMware1!
    insecure = True
    subnet_prefix = 28
    use_native_loadbalancer = True
    pool_algorithm = ROUND_ROBIN
    service_size = SMALL
    container_ip_blocks = k8-container-network
    external_ip_pools = K8-external-pool
    top_tier_router = k8s-cluster1-t1
    single_tier_topology = True
    overlay_tz = 1b3a2f36-bfd1-443e-a0f6-4de01abc963e

    [k8s]
    apiserver_host_ip = 192.168.30.221
    apiserver_host_port = 6443
    ingress_mode = nat

    [k8s]
    apiserver_host_ip = 192.168.30.221
    apiserver_host_port = 6443
    ingress_mode = nat

    [coe]
    adaptor = kubernetes
    cluster = k8s-cluster1
    loglevel = ERROR
    node_type = HOSTVM

    [nsx_node_agent]
    ovs_bridge = br-int
    ovs_uplink_port = ens224

The block below appears six times in the configuration. Update each occurrence with the details of the NCP image that was loaded into the local Docker repository earlier.

      initContainers:
        - name: nsx-ncp-bootstrap
          # Docker image for NCP
          image: registry.local/3.1.2.17855682/nsx-ncp-ubuntu:latest
          imagePullPolicy: IfNotPresent
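A quick way to confirm every occurrence was updated is to list the image lines in the file; each one should now point at the image loaded earlier:

    grep -n "image:" ~/ncp-ubuntu-policy.yaml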

Create Kubernetes cluster

On the k8s-master node run the command: harjinder@k8s-master:~$ sudo kubeadm init

This command prints a join command, which I ran on both worker nodes to join them to the master.

harjinder@k8s-worker1:~$ sudo kubeadm join 192.168.30.221:6443 --token i2wh9x.ea6cgv5v6o14s3ir --discovery-token-ca-cert-hash sha256:2a2ee5d14c336807eab19c8e5e6a744c201f94e3acac6e5dcac5000ea47973f3
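kubeadm init also prints the steps to make kubectl usable for the regular user on the master, and a fresh join command can be generated if the token has expired by the time the workers are ready. A short sketch of both:

    # on the master: copy the admin kubeconfig for the current user (as printed by kubeadm init)
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # if the original token has expired, print a new join command for the workers
    sudo kubeadm token create --print-join-command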

Once this was done, I applied the ncp-ubuntu-policy.yaml file:

harjinder@k8s-master:~$ kubectl apply -f ~/ncp-ubuntu-policy.yaml

After a couple of minutes, all pods in the nsx-system namespace should show the status Running:

harjinder@k8s-master:~$ kubectl get pods --namespace nsx-system
NAME READY STATUS RESTARTS AGE
nsx-ncp-55866f44d-g757b 1/1 Running 1 12s
nsx-ncp-bootstrap-hdmhn 1/1 Running 6 12s
nsx-ncp-bootstrap-lsftw 1/1 Running 4 12s
nsx-ncp-bootstrap-qqqbx 1/1 Running 4 12s
nsx-node-agent-545g2 3/3 Running 3 12s
nsx-node-agent-885l5 3/3 Running 3 12s
nsx-node-agent-vzp2s 3/3 Running 3  12s
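Since NCP now provides the pod networking, the nodes should also report Ready; a quick sanity check (output will vary):

    kubectl get nodes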

Testing

To test the cluster, I created an nginx deployment with the config file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
    tier: frontend
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      tier: frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
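I saved this as nginx-deployment.yaml (the filename is just my choice) and applied it, then checked that the three replicas came up:

    kubectl apply -f nginx-deployment.yaml
    kubectl get pods -l app=nginx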

I also created a service that leverages the NSX-T load balancer, using the configuration file below:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-service
  name: nginx-service
spec:
  ports:
  - name: 8081-8080
    port: 8082
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
    tier: frontend
  type: LoadBalancer
status:
  loadBalancer: {}
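Saved as nginx-service.yaml (again, my filename) and applied. NCP allocates the EXTERNAL-IP from the external IP pool, and the service listens on port 8082:

    kubectl apply -f nginx-service.yaml
    kubectl get svc nginx-service          # note the EXTERNAL-IP assigned by NCP
    curl http://<external-ip>:8082         # should return the default nginx welcome page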

Once all the pods and the service are online, I can connect via port 8082.

I can also see the newly created load balancer in NSX-T.

