Asked by: 小点点

Strimzi Kafka on Kubernetes on-premise bare metal


I have a Kubernetes cluster running on several on-premise (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my setup.

I tried to follow the tutorial on the quickstart page: https://strimzi.io/docs/quickstart/master/
It hangs at step 2.4. Creating a cluster, with my zookeeper pods reporting:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims

I normally use hostPath for my volumes, so I don't understand what is going wrong here…

Edit: I tried creating a StorageClass with Arghya Sadhu's commands, but the problem persists.
Description of my PVC:

kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding

And of my pod:

kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2

    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

3 Answers

Anonymous user

You need a PersistentVolume that satisfies the constraints of the PersistentVolumeClaim.

Use local storage. With a local storage class:

$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

You also need to make it the default StorageClass of the cluster so that the PersistentVolumeClaim can get its storage from it:

$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
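
Note that with the no-provisioner storage class nothing gets provisioned automatically, so the PersistentVolume itself still has to be created by hand. A minimal sketch of such a volume (the name, size, directory and node name node1 are placeholders you have to adapt to your cluster):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-local-pv          # placeholder name
spec:
  capacity:
    storage: 100Gi                  # must be at least what the PVC requests
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # must match the class used by the PVC
  local:
    path: /mnt/zookeeper            # directory that already exists on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1                   # replace with a real node name from 'kubectl get nodes'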

Anonymous user

I ran into the same problem on bare metal. I tried the storage class approach that @arghya-sadhu mentioned, but it still didn't work. I found out that the storage class you need is specifically the local type of storage mentioned here. In addition, for each replica you will need a separate storage class and a persistent volume backed by its own directory.
For example, the snippet below creates 3 replicas for both ZooKeeper and Kafka.
You need to replace "node2" with the name of the node that should hold the data. You can list your nodes with:

kubectl get nodes

Then you need to create a directory for each storage class, because they cannot share the same directory or you will get an error:

ssh root@node2
mkdir /mnt/pv0 /mnt/pv1 /mnt/pv2

After that, you can apply this snippet to create the storage classes and persistent volumes.
"storage-class-and-pv.yaml"

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-0
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-1
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-2
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-0
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-0
  local:
    path: /mnt/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-0
  local:
    path: /mnt/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-1
  local:
    path: /mnt/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-3
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-1
  local:
    path: /mnt/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-4
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-2
  local:
    path: /mnt/pv2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-5
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-2
  local:
    path: /mnt/pv2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
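
You can then apply the file and check that the volumes come up as Available before deploying the cluster (assuming you saved the snippet above under the file name used here):

kubectl apply -f storage-class-and-pv.yaml
kubectl get sc,pv    # the PVs should be listed with STATUS "Available"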


Once those are created, you can deploy your Kafka and ZooKeeper cluster. Just make sure to override the storage classes.
"sample-kafka-cluster.yaml"

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-sample-cluster
spec:
  kafka:
    version: 3.2.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.2"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 1Gi
        deleteClaim: true
        overrides:
        - broker: 0
          class: class-0
        - broker: 1
          class: class-1
        - broker: 2
          class: class-2
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 1Gi
      deleteClaim: true
      overrides:
        - broker: 0
          class: class-0
        - broker: 1
          class: class-1
        - broker: 2
          class: class-2
  entityOperator:
    topicOperator: {}
    userOperator: {}
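
You would then apply it in the namespace your Cluster Operator watches (my-kafka-project in the question) and wait for the pods to come up, for example:

kubectl apply -f sample-kafka-cluster.yaml -n my-kafka-project
kubectl get pods -n my-kafka-project -w    # the zookeeper and kafka pods should reach Running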

Anonymous user

Yes, in my opinion your Kubernetes cluster is missing something at the infrastructure level. You either need to provision PersistentVolumes for static assignment to the PVCs, or, as Arghya already mentioned, provide a StorageClass for dynamic provisioning.
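
Since the question mentions hostPath volumes, the static option can be as simple as a hostPath-backed PersistentVolume (only a sketch for a single-node or test setup; the name, size, class and path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv           # placeholder name
spec:
  capacity:
    storage: 100Gi                  # placeholder size, must cover the PVC request
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # must match the class requested by the PVC
  hostPath:
    path: /data/zookeeper           # directory on the node; placeholder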