Kubernetes Cluster Master / Worker Nodes

Muhammad Ahmad

I am trying to create a Kubernetes cluster with 3 nodes: a master node, where I installed and configured kubeadm and kubelet and deployed my system (a web application developed in Laravel), plus worker nodes. The worker nodes joined the master without any problem. I deployed my system to PHP-FPM pods and created a service and a Horizontal Pod Autoscaler. This is my service:

NAME   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
php    LoadBalancer   10.108.218.232   <pending>     9000:30026/TCP   15h   app=php

These are my pods:

NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
qsinavphp-5b67996888-9clxp   1/1     Running   0          40m   10.244.0.4    taishan             <none>           <none>
qsinavphp-5b67996888-fnv7c   1/1     Running   0          43m   10.244.0.12   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-gbtdw   1/1     Running   0          40m   10.244.0.3    taishan             <none>           <none>
qsinavphp-5b67996888-l6ghh   1/1     Running   0          33m   10.244.0.2    taishan             <none>           <none>
qsinavphp-5b67996888-ndbc8   1/1     Running   0          43m   10.244.0.11   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-qgdbc   1/1     Running   0          43m   10.244.0.10   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-t97qm   1/1     Running   0          43m   10.244.0.13   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-wgrzb   1/1     Running   0          43m   10.244.0.14   kubernetes-master   <none>           <none>

The worker node is taishan, and the master is kubernetes-master. This is my nginx config, which sends requests to the PHP service:

server {
    listen 80;
    listen 443 ssl;
    server_name k8s.example.com;
    root /var/www/html/Test/project-starter/public;

    ssl_certificate     "/var/www/cert/example.cer";
    ssl_certificate_key "/var/www/cert/example.key";

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.php;
    charset utf-8;

    # if ($scheme = http) {
    #     return 301 https://$server_name$request_uri;
    # }

    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES25>
    ssl_prefer_server_ciphers on;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info  ^(.+\.php)(/.+)$;
        fastcgi_index            index.php;
        fastcgi_pass             10.108.218.232:9000;
        include                  fastcgi_params;
        fastcgi_param   PATH_INFO       $fastcgi_path_info;
        fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
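As a side note, hard-coding the Service's ClusterIP in `fastcgi_pass` works, but that IP changes if the Service is ever re-created. If nginx itself runs as a pod inside the same cluster, the Service's DNS name can be used instead (a sketch, assuming the default `cluster.local` domain and the `php` Service in the `default` namespace shown above):

```nginx
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info  ^(.+\.php)(/.+)$;
    fastcgi_index            index.php;
    # Resolved by cluster DNS; survives Service re-creation,
    # unlike the hard-coded ClusterIP 10.108.218.232
    fastcgi_pass             php.default.svc.cluster.local:9000;
    include                  fastcgi_params;
}
```

If nginx runs on the host outside the cluster, cluster DNS is not available and the ClusterIP (or the NodePort 30026) has to be used.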

The problem is that I have 3 pods on the worker node and 5 pods on the master node, but no requests go to the worker's pods; all requests go to the master. Both of my nodes are in Ready status:

NAME                STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kubernetes-master   Ready    control-plane,master   15h   v1.20.4   10.14.0.58    <none>        Ubuntu 20.04.1 LTS   5.4.0-70-generic   docker://19.3.8
taishan             Ready    <none>                 79m   v1.20.5   10.14.2.66    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   docker://19.3.8
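To confirm that the Service actually registers endpoints on both nodes, the endpoint list and pod placement can be compared (a diagnostic sketch; the label and Service name are taken from the manifests below):

```shell
# List the pod IPs registered as endpoints of the php Service
kubectl get endpoints php

# Cross-reference with the node each pod is scheduled on
kubectl get pods -l app=php -o wide
```

If the endpoint list contains the worker-node pod IPs (10.244.0.2-10.244.0.4 above) but those pods never receive traffic, the issue is in the pod network rather than in the Service.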

This is my kubectl describe service php result:

Name:                     php
Namespace:                default
Labels:                   tier=backend
Annotations:              <none>
Selector:                 app=php
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.108.218.232
IPs:                      10.108.218.232
Port:                     <unset>  9000/TCP
TargetPort:               9000/TCP
NodePort:                 <unset>  30026/TCP
Endpoints:                10.244.0.10:9000,10.244.0.11:9000,10.244.0.12:9000 + 7 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  Type    48m   service-controller  ClusterIP -> LoadBalancer

This is the YAML file I am using to create the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: php
  name: qsinavphp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
        - name: taishan-php-fpm
          image: starking8b/taishanphp:last
          imagePullPolicy: Never
          ports:
            - containerPort: 9000

          volumeMounts:
            - name: qsinav-nginx-config-volume
              mountPath: /usr/local/etc/php-fpm.d/www.conf
              subPath: www.conf
            - name: qsinav-nginx-config-volume
              mountPath: /usr/local/etc/php/conf.d/docker-php-memlimit.ini
              subPath: php-memory
            - name: qsinav-php-config-volume
              mountPath: /usr/local/etc/php/php.ini-production
              subPath: php.ini
            - name: qsinav-php-config-volume
              mountPath: /usr/local/etc/php/php.ini-development
              subPath: php.ini
            - name: qsinav-php-config-volume
              mountPath: /usr/local/etc/php-fpm.conf
              subPath: php-fpm.conf

            - name: qsinav-www-storage
              mountPath: /var/www/html/Test/qSinav-starter
          resources:
            limits:
              cpu: 4048m
            requests:
              cpu: 4048m

      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: qsinav-www-storage
          persistentVolumeClaim:
            claimName: qsinav-pv-www-claim
        - name: qsinav-nginx-config-volume
          configMap:
            name: qsinav-nginx-config

        - name: qsinav-php-config-volume
          configMap:
            name: qsinav-php-config

And this is my service YAML file:

apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php

  ports:
    - protocol: TCP
      port: 9000
  type: LoadBalancer

I am not sure where my error is, so please help me solve this problem.

Muhammad Ahmad

Actually, the problem was with the Flannel network: it was not able to make connections between the nodes. I solved it by installing the Weave plugin instead, which is working fine now, by applying this command:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
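After swapping the CNI plugin, cross-node pod connectivity can be re-verified before pointing nginx back at the Service (a hedged sketch; the pod name and IP are taken from the listings above, and it assumes the image ships a shell and `nc`):

```shell
# Exec into a pod on the worker (taishan) and open a TCP connection to a
# pod IP on the master; success means the overlay now routes between nodes
kubectl exec qsinavphp-5b67996888-9clxp -- sh -c 'nc -zv 10.244.0.12 9000'
```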
