Kubernetes
kubectl get pod shows READY 0/1 status
I am following a Kubernetes and MongoDB lab, but all of the pods stay in 0/1 state. What does this mean, and how do I get them ready, 1/1?
[root@master-node ~]# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
mongo-express-78fcf796b8-wzgvx       0/1     Pending   0          3m41s
mongodb-deployment-8f6675bc5-qxj4g   0/1     Pending   0          160m
nginx-deployment-64bd7b69c-wp79g     0/1     Pending   0          4h44m
kubectl get pod nginx-deployment-64bd7b69c-wp79g -o yaml
[root@master-node ~]# kubectl get pod nginx-deployment-64bd7b69c-wp79g -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-27T17:35:57Z"
  generateName: nginx-deployment-64bd7b69c-
  labels:
    app: nginx
    pod-template-hash: 64bd7b69c
  name: nginx-deployment-64bd7b69c-wp79g
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-deployment-64bd7b69c
    uid: 5b1250dd-a209-44be-9efb-7cf5a63a02a3
  resourceVersion: "15912"
  uid: d71047b4-d0e6-4d25-bb28-c410639a82ad
spec:
  containers:
  - image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-2zr6k
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-2zr6k
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-07-27T17:35:57Z"
    message: '0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: },
      that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
kubectl describe pod nginx-deployment-64bd7b69c-wp79g
[root@master-node ~]# kubectl describe pod nginx-deployment-64bd7b69c-wp79g
Name:           nginx-deployment-64bd7b69c-wp79g
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=nginx
                pod-template-hash=64bd7b69c
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/nginx-deployment-64bd7b69c
Containers:
  nginx:
    Image:        nginx:1.14.2
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2zr6k (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-2zr6k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m53s (x485 over 8h)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
It looks like you only have a single server in your K8s cluster. In a typical K8s cluster, the master (control plane) is kept separate from the servers that run the workloads. To enforce this, it carries a "taint", which is essentially a property that repels pods. Because of that taint, no pods can be scheduled on the master.
You can see this in the status.conditions.message element of the

kubectl get pod nginx-deployment-64bd7b69c-wp79g -o yaml

output:

message: '0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.'
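You can also confirm the taint directly on the node with kubectl; the node name master-node below is an assumption based on your shell prompt:

# Show the taints recorded on the node:
kubectl describe node master-node | grep -A 2 Taints

# Or list the taints of every node at once:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'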
Pods can define tolerations, which allow them to be scheduled onto nodes that carry the corresponding taint. The mechanism is described in detail in the documentation: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
The toleration configuration should look something like this (untested):
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
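Note that for a Deployment the tolerations belong under the pod template's spec, not at the top level of the manifest. A minimal sketch using the names from your nginx example (untested, same caveat as above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Tolerate the master taint so the pod can be scheduled on it.
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 8080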
In your case, it might be easier to use the approach mentioned in this SO question and specify an explicit

nodeName: master

element in your pod definition. This should bypass the taint mechanism and allow your pod to be scheduled. Another option is to remove the taint from the master node, as described here: https://stackoverflow.com/q/43147941
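Two caveats on these alternatives. For nodeName, the value has to match the real node name; judging by your shell prompt it is probably master-node rather than master, which you can verify with kubectl get nodes. Removing the taint uses kubectl taint with a trailing "-" on the taint key, which means "remove this taint":

# Verify the actual node name first:
kubectl get nodes

# Remove the master taint so regular pods can be scheduled on the node
# (assumes the node is named master-node):
kubectl taint nodes master-node node-role.kubernetes.io/master-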