Installing ingress-nginx-controller on Kubernetes


Inside a Kubernetes cluster, service discovery is provided by kube-dns.

Applications in a Kubernetes cluster can be exposed to external users with NodePort and LoadBalancer type Services. NodePort requires keeping track of port assignments and is only suited to small deployments.

Kubernetes provides another important resource object for exposing services to external users: Ingress.

An Ingress is the entry point for external access to a Kubernetes cluster: it forwards external requests to different Services inside the cluster, acting as a reverse-proxy load balancer in the style of nginx or haproxy. The Ingress controller works like a watcher: through kube-apiserver it observes Service and Pod changes in real time, combines those changes with the Ingress configuration, and updates the reverse-proxy load balancer, which is how service discovery is achieved. The most widely used ingress controllers include traefik and nginx-controller; traefik performs worse than nginx-controller but is simpler to configure. This article walks through installing nginx-controller with Helm.

Step 1: In the app catalog (application store), add a catalog named ingress-nginx, refresh, and wait for its state to become Active. Catalog URL: https://github.com/kubernetes/ingress-nginx.git

Default image repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
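
If the cluster is not managed through a Rancher catalog, the same chart can also be installed directly with the Helm CLI. The following is a minimal sketch, assuming Helm 3 and the official chart repository at https://kubernetes.github.io/ingress-nginx; the release name, namespace, and value overrides are illustrative and are not part of the catalog-based setup described in this article:

    # values.yaml -- a minimal override file for the ingress-nginx chart (illustrative).
    # Install with (Helm 3 syntax; release name and namespace are placeholders):
    #   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    #   helm repo update
    #   helm install ingress-nginx ingress-nginx/ingress-nginx \
    #     --namespace ingress-nginx --create-namespace -f values.yaml
    controller:
      replicaCount: 2        # two controller replicas instead of the default 1
      service:
        type: NodePort       # expose via NodePort instead of the default LoadBalancer
      metrics:
        enabled: true        # expose Prometheus metrics (metrics Service port 9913)
    defaultBackend:
      enabled: true          # serve a default 404 backend for unmatched requests

The chart's configurable parameters, their descriptions, and default values are listed below.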

| Parameter | Description | Default |
| --- | --- | --- |
| controller.image.repository | Controller image repository | quay.io/kubernetes-ingress-controller/nginx-ingress-controller |
| controller.image.tag | Controller image tag | 0.30.0 |
| controller.image.digest | Controller image digest | "" |
| controller.image.pullPolicy | Image pull policy | IfNotPresent |
| controller.image.runAsUser | User ID of the controller process | 101 |
| controller.containerPort.http | Port that listens for HTTP requests | 80 |
| controller.containerPort.https | Port that listens for HTTPS requests | 443 |
| controller.config | nginx ConfigMap entries | none |
| controller.configAnnotations | Annotations to add to the nginx ConfigMap | {} |
| controller.hostNetwork | Run the controller in the host's network namespace; do not set this to true when controller.service.externalIPs is set and kube-proxy is used, otherwise port 80 will conflict | false |
| controller.dnsPolicy | Pod DNS policy; when hostNetwork=true, change it to ClusterFirstWithHostNet | ClusterFirst |
| controller.dnsConfig | Custom pod dnsConfig | {} |
| controller.electionID | Election ID used for status updates | ingress-controller-leader |
| controller.extraEnvs | Extra environment variables for the controller pods | {} |
| controller.extraContainers | Extra containers to add to the controller pods | {} |
| controller.extraVolumeMounts | Extra volumeMounts for the controller container | {} |
| controller.extraVolumes | Extra volumes for the controller pods | {} |
| controller.extraInitContainers | Init containers run before the app container starts | [] |
| controller.healthCheckPath | Health check path | "/healthz" |
| controller.ingressClass | Name of the ingress class routed by this controller | nginx |
| controller.maxmindLicenseKey | Maxmind license key used to download the GeoLite2 databases; see the documentation on accessing and using the GeoLite2 databases | "" |
| controller.scope.enabled | Limit the scope of the ingress controller | false |
| controller.scope.namespace | Namespace to watch | "" |
| controller.extraArgs | Additional controller container arguments | {} |
| controller.kind | Install the controller as a Deployment or a DaemonSet | Deployment |
| controller.annotations | Annotations added to the Deployment/DaemonSet | {} |
| controller.autoscaling.enabled | If true, create a Horizontal Pod Autoscaler | false |
| controller.autoscaling.minReplicas | Minimum replicas when autoscaling | 2 |
| controller.autoscaling.maxReplicas | Maximum replicas when autoscaling is enabled | 11 |
| controller.autoscaling.targetCPUUtilizationPercentage | CPU utilization that triggers scaling | "50" |
| controller.autoscaling.targetMemoryUtilizationPercentage | Memory utilization that triggers scaling | "50" |
| controller.hostPort.enabled | Enable hostPort mapping for TCP/80 and TCP/443 | false |
| controller.hostPort.ports.http | HTTP hostPort when controller.hostPort.enabled is true | "80" |
| controller.hostPort.ports.https | HTTPS hostPort when controller.hostPort.enabled is true | "443" |
| controller.tolerations | Node taints to tolerate (requires Kubernetes >=1.6) | [] |
| controller.affinity | Node/pod affinities (requires Kubernetes >=1.6) | {} |
| controller.terminationGracePeriodSeconds | How long to wait before terminating a pod | 60 |
| controller.minReadySeconds | Seconds a pod must be ready during an update before the next pod is killed | 0 |
| controller.nodeSelector | Node labels for pod assignment | {} |
| controller.podAnnotations | Annotations added to the pods | {} |
| controller.podLabels | Labels added to the pod metadata | {} |
| controller.podSecurityContext | Security context policies for the controller pods | {} |
| controller.replicaCount | Number of controller replicas | 1 |
| controller.minAvailable | Minimum number of available pods in a multi-replica setup | 1 |
| controller.resources | Controller resource requests & limits | {} |
| controller.priorityClassName | Controller priorityClassName | nil |
| controller.lifecycle | Controller lifecycle hooks | {} |
| controller.service.annotations | Annotations for the controller Service | {} |
| controller.service.labels | Labels for the controller Service | {} |
| controller.publishService.enabled | If true, the controller publishes the Service's address in the Ingress status | false |
| controller.publishService.pathOverride | Override the default publish-service name | "" |
| controller.service.enabled | If disabled, the Service is not created; useful when controller.kind is DaemonSet and controller.hostPort.enabled is true | true |
| controller.service.clusterIP | Cluster IP of the Service ("-" passes an empty value) | nil |
| controller.service.externalIPs | List of external IPs for the controller Service; when controller.hostNetwork is true and kube-proxy is used, this causes a conflict on port 80 | [] |
| controller.service.externalTrafficPolicy | When controller.service.type is NodePort or LoadBalancer, setting this to Local enables source IP preservation | "Cluster" |
| controller.service.sessionAffinity | Session affinity; must be ClientIP or None | "" |
| controller.service.healthCheckNodePort | When controller.service.type is NodePort or LoadBalancer and controller.service.externalTrafficPolicy is Local, the health check is exposed on this port; if empty, a random port is allocated | "" |
| controller.service.loadBalancerIP | IP address of the load balancer | "" |
| controller.service.loadBalancerSourceRanges | List of IP CIDRs allowed to access the load balancer | [] |
| controller.service.enableHttp | Whether to open port 80 on the Service | true |
| controller.service.enableHttps | Whether to open port 443 on the Service | true |
| controller.service.targetPorts.http | targetPort mapped to the Ingress' port 80 | 80 |
| controller.service.targetPorts.https | targetPort mapped to the Ingress' port 443 | 443 |
| controller.service.ports.http | HTTP port of the Service | 80 |
| controller.service.ports.https | HTTPS port of the Service | 443 |
| controller.service.type | Type of controller Service to create | LoadBalancer |
| controller.service.nodePorts.http | nodePort mapped to the Ingress' port 80 over HTTP | "" |
| controller.service.nodePorts.https | nodePort mapped to the Ingress' port 443 over HTTPS | "" |
| controller.service.nodePorts.tcp | nodePorts mapped to entries from the TCP services ConfigMap | {} |
| controller.service.nodePorts.udp | nodePorts mapped to entries from the UDP services ConfigMap | {} |
| controller.livenessProbe.initialDelaySeconds | Delay before the liveness probe starts | 10 |
| controller.livenessProbe.periodSeconds | How often the liveness probe runs | 10 |
| controller.livenessProbe.timeoutSeconds | Liveness probe timeout | 5 |
| controller.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after a failure | 1 |
| controller.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after a success | 3 |
| controller.livenessProbe.port | Port the liveness probe listens on | 10254 |
| controller.readinessProbe.initialDelaySeconds | Delay before the readiness probe starts | 10 |
| controller.readinessProbe.periodSeconds | How often the readiness probe runs | 10 |
| controller.readinessProbe.timeoutSeconds | Readiness probe timeout | 1 |
| controller.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after a failure | 1 |
| controller.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after a success | 3 |
| controller.readinessProbe.port | Port the readiness probe listens on | 10254 |
| controller.metrics.enabled | If true, enable Prometheus metrics | false |
| controller.metrics.service.annotations | Annotations for the Prometheus metrics Service | {} |
| controller.metrics.service.clusterIP | Cluster IP of the Prometheus metrics Service | nil |
| controller.metrics.service.externalIPs | External IPs of the Prometheus metrics Service | [] |
| controller.metrics.service.labels | Labels for the metrics Service | {} |
| controller.metrics.service.loadBalancerIP | IP address of the load balancer | "" |
| controller.metrics.service.loadBalancerSourceRanges | List of IP CIDRs allowed to access the load balancer | [] |
| controller.metrics.service.servicePort | Prometheus metrics Service port | 9913 |
| controller.metrics.service.type | Type of the Prometheus metrics Service | ClusterIP |
| controller.metrics.serviceMonitor.enabled | If true, create a ServiceMonitor for the Prometheus Operator | false |
| controller.metrics.serviceMonitor.additionalLabels | Labels used to discover the ServiceMonitor | {} |
| controller.metrics.serviceMonitor.honorLabels | Keep the scraped metrics' labels when they collide with the target's labels | false |
| controller.metrics.serviceMonitor.namespace | Namespace of the ServiceMonitor resource | same namespace as the nginx ingress |
| controller.metrics.serviceMonitor.namespaceSelector | namespaceSelector specifying which namespaces to scrape | the Helm release namespace |
| controller.metrics.serviceMonitor.scrapeInterval | Prometheus scrape interval | 30s |
| controller.metrics.prometheusRule.enabled | If true, create PrometheusRules for the Prometheus Operator | false |
| controller.metrics.prometheusRule.additionalLabels | Labels used to discover the PrometheusRules | {} |
| controller.metrics.prometheusRule.namespace | Namespace of the PrometheusRule resource | same namespace as the nginx ingress |
| controller.metrics.prometheusRule.rules | Prometheus rules, in YAML format | [] |
| controller.admissionWebhooks.enabled | Create Ingress admission webhooks that validate ingress syntax | true |
| controller.admissionWebhooks.failurePolicy | Failure policy of the admission webhooks | Fail |
| controller.admissionWebhooks.port | Admission webhook port | 8443 |
| controller.admissionWebhooks.service.annotations | Annotations for the admission webhook Service | {} |
| controller.admissionWebhooks.service.clusterIP | Cluster IP of the admission webhook Service | nil |
| controller.admissionWebhooks.service.externalIPs | External IPs of the admission webhook Service | [] |
| controller.admissionWebhooks.service.loadBalancerIP | IP address of the load balancer | "" |
| controller.admissionWebhooks.service.loadBalancerSourceRanges | List of IP CIDRs allowed to access the load balancer | [] |
| controller.admissionWebhooks.service.servicePort | Admission webhook Service port | 443 |
| controller.admissionWebhooks.service.type | Type of the admission webhook Service | ClusterIP |
| controller.admissionWebhooks.patch.enabled | Enable the webhook certificate patch jobs | true |
| controller.admissionWebhooks.patch.image.repository | Image for the webhook integration jobs | jettech/kube-webhook-certgen |
| controller.admissionWebhooks.patch.image.tag | Image tag for the webhook integration jobs | v1.2.0 |
| controller.admissionWebhooks.patch.image.digest | Image digest for the webhook integration jobs | "" |
| controller.admissionWebhooks.patch.image.pullPolicy | Image pull policy for the webhook integration jobs | IfNotPresent |
| controller.admissionWebhooks.patch.priorityClassName | Priority class for the webhook integration jobs | "" |
| controller.admissionWebhooks.patch.podAnnotations | Annotations for the webhook job pods | {} |
| controller.admissionWebhooks.patch.nodeSelector | Node selector for the admission hook patch jobs | {} |
| controller.admissionWebhooks.patch.tolerations | Node taints/tolerations for the admission hook patch jobs | [] |
| controller.customTemplate.configMapName | ConfigMap containing a custom nginx template | "" |
| controller.customTemplate.configMapKey | ConfigMap key holding the nginx template | "" |
| controller.addHeaders | ConfigMap key:value pairs of headers added to responses sent to the client | {} |
| controller.proxySetHeaders | ConfigMap key:value pairs of headers added to requests sent to the backend | {} |
| controller.updateStrategy | Allows setting the RollingUpdate strategy | {} |
| controller.configMapNamespace | Namespace of the nginx ConfigMap | "" |
| controller.tcp.configMapNamespace | Namespace of the tcp-services ConfigMap | "" |
| controller.tcp.annotations | Annotations for the tcp-services ConfigMap | {} |
| controller.udp.configMapNamespace | Namespace of the udp-services ConfigMap | "" |
| controller.udp.annotations | Annotations for the udp-services ConfigMap | {} |
| defaultBackend.enabled | Use the default backend component | false |
| defaultBackend.image.repository | Default backend image repository | k8s.gcr.io/defaultbackend-amd64 |
| defaultBackend.image.tag | Image tag | 1.5 |
| defaultBackend.image.digest | Image digest | "" |
| defaultBackend.image.pullPolicy | Image pull policy | IfNotPresent |
| defaultBackend.image.runAsUser | User ID to run as (nobody by default) | 65534 |
| defaultBackend.extraArgs | Additional container arguments | {} |
| defaultBackend.extraEnvs | Additional environment variables | [] |
| defaultBackend.port | HTTP port | 8080 |
| defaultBackend.livenessProbe.initialDelaySeconds | Delay before the liveness probe starts | 30 |
| defaultBackend.livenessProbe.periodSeconds | How often the liveness probe runs | 10 |
| defaultBackend.livenessProbe.timeoutSeconds | Liveness probe timeout | 5 |
| defaultBackend.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after a failure | 1 |
| defaultBackend.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after a success | 3 |
| defaultBackend.readinessProbe.initialDelaySeconds | Delay before the readiness probe starts | 0 |
| defaultBackend.readinessProbe.periodSeconds | How often the readiness probe runs | 5 |
| defaultBackend.readinessProbe.timeoutSeconds | Readiness probe timeout | 5 |
| defaultBackend.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after a failure | 1 |
| defaultBackend.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after a success | 6 |
| defaultBackend.tolerations | Node taints to tolerate | [] |
| defaultBackend.affinity | Node/pod affinities (requires Kubernetes >=1.6) | {} |
| defaultBackend.nodeSelector | Node labels for pod assignment | {} |
| defaultBackend.podAnnotations | Annotations added to the pods | {} |
| defaultBackend.podLabels | Labels added to the pods | {} |
| defaultBackend.replicaCount | Number of default backend replicas | 1 |
| defaultBackend.minAvailable | Minimum number of available pods | 1 |
| defaultBackend.resources | Default backend resource requests & limits | {} |
| defaultBackend.priorityClassName | Default backend priorityClassName | nil |
| defaultBackend.podSecurityContext | Security context policies for the default backend pods | {} |
| defaultBackend.service.annotations | Annotations for the default backend Service | {} |
| defaultBackend.service.clusterIP | Cluster IP of the default backend Service | nil |
| defaultBackend.service.externalIPs | List of external IPs for the default backend Service | [] |
| defaultBackend.service.loadBalancerIP | IP address of the load balancer | "" |
| defaultBackend.service.loadBalancerSourceRanges | List of IP CIDRs allowed to access the load balancer | [] |
| defaultBackend.service.type | Type of default backend Service to create | ClusterIP |
| defaultBackend.serviceAccount.create | If true, create a service account for the default backend; only useful when a pod security policy must apply to the backend | true |
| defaultBackend.serviceAccount.name | Name of the default backend service account | |
| imagePullSecrets | Name of the Secret holding private registry credentials | nil |
| rbac.create | If true, create and use RBAC resources | true |
| rbac.scope | If true, do not create or use a ClusterRole and binding; set to true together with controller.scope.enabled=true to disable load-balancer status updates and fully scope the ingress controller | false |
| podSecurityPolicy.enabled | If true, create and use Pod Security Policy resources | false |
| serviceAccount.create | If true, create the service account | true |
| serviceAccount.name | Name of the service account | |
| revisionHistoryLimit | Number of old revisions kept to allow rollback | 10 |
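
As an example of how several of the parameters above interact, the sketch below runs the controller as a DaemonSet on the host network, a pattern often used on bare-metal clusters. This is illustrative only and is not the configuration used in this article; note the dnsPolicy and externalIPs caveats from the table:

    # values.yaml sketch: DaemonSet on the host network (illustrative only).
    controller:
      kind: DaemonSet                      # one controller pod per node
      hostNetwork: true                    # bind directly to the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # required for in-cluster DNS when hostNetwork is true
      service:
        enabled: false                     # skip the Service; hostNetwork handles the traffic
      # Per the table above, do not combine hostNetwork with controller.service.externalIPs
      # when kube-proxy is in use, or port 80 will conflict.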

After installation, three Service objects are created: ingress-nginx-controller-admission, ingress-nginx-controller, and ingress-nginx-controller-metrics. Their YAML is shown below; the RBAC-related resources are not covered in detail in this article.

ingress-nginx-controller-admission.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        helm.sh/chart: ingress-nginx-2.3.0
        io.cattle.field/appId: ingress-nginx
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:labels:
              .: {}
              f:app.kubernetes.io/component: {}
              f:app.kubernetes.io/instance: {}
              f:app.kubernetes.io/managed-by: {}
              f:app.kubernetes.io/name: {}
              f:app.kubernetes.io/version: {}
              f:helm.sh/chart: {}
              f:io.cattle.field/appId: {}
          f:spec:
            f:ports:
              .: {}
              k:{"port":443,"protocol":"TCP"}:
                .: {}
                f:name: {}
                f:port: {}
                f:protocol: {}
                f:targetPort: {}
            f:selector:
              .: {}
              f:app.kubernetes.io/component: {}
              f:app.kubernetes.io/instance: {}
              f:app.kubernetes.io/name: {}
            f:sessionAffinity: {}
            f:type: {}
        manager: Go-http-client
        operation: Update
        time: "2020-05-23T06:28:02Z"
      name: ingress-nginx-controller-admission
      selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx-controller-admission
    spec:
      ports:
      - name: https-webhook
        port: 443
        protocol: TCP
        targetPort: webhook
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}

ingress-nginx-controller-metrics.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        helm.sh/chart: ingress-nginx-2.3.0
        io.cattle.field/appId: ingress-nginx
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:labels:
              .: {}
              f:app.kubernetes.io/component: {}
              f:app.kubernetes.io/instance: {}
              f:app.kubernetes.io/managed-by: {}
              f:app.kubernetes.io/name: {}
              f:app.kubernetes.io/version: {}
              f:helm.sh/chart: {}
              f:io.cattle.field/appId: {}
          f:spec:
            f:ports:
              .: {}
              k:{"port":9913,"protocol":"TCP"}:
                .: {}
                f:name: {}
                f:port: {}
                f:protocol: {}
                f:targetPort: {}
            f:selector:
              .: {}
              f:app.kubernetes.io/component: {}
              f:app.kubernetes.io/instance: {}
              f:app.kubernetes.io/name: {}
            f:sessionAffinity: {}
            f:type: {}
        manager: Go-http-client
        operation: Update
        time: "2020-05-24T02:27:16Z"
      name: ingress-nginx-controller-metrics
      selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx-controller-metrics
    spec:
      ports:
      - name: metrics
        port: 9913
        protocol: TCP
        targetPort: metrics
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}

ingress-nginx-controller.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        field.cattle.io/publicEndpoints: '[{"port":30390,"protocol":"TCP","serviceName":"ingress-nginx:ingress-nginx-controller","allNodes":true},{"port":31792,"protocol":"TCP","serviceName":"ingress-nginx:ingress-nginx-controller","allNodes":true}]'
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        helm.sh/chart: ingress-nginx-2.3.0
        io.cattle.field/appId: ingress-nginx
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:field.cattle.io/publicEndpoints: {}
            f:labels:
              .: {}
              f:app.kubernetes.io/component: {}
              f:app.kubernetes.io/instance: {}
              f:app.kubernetes.io/managed-by: {}
              f:app.kubernetes.io/name: {}
              f:app.kubernetes.io/version: {}
              f:helm.sh/chart: {}
              f:io.cattle.field/appId: {}
          f:spec:
            f:externalTrafficPolicy: {}
            f:ports:
              .: {}
              k:{"port":80,"protocol":"TCP"}:
                .: {}
                f:name: {}
                f:port: {}
                f:protocol: {}
                f:targetPort: {}
              k:{"port":443,"protocol":"TCP"}:
                .: {}
                f:name: {}
                f:port: {}
                f:protocol: {}
                f:targetPort: {}
            f:selector:
              .: {}
              f:app.kubernetes.io/component: {}
              f:app.kubernetes.io/instance: {}
              f:app.kubernetes.io/name: {}
            f:sessionAffinity: {}
            f:type: {}
        manager: Go-http-client
        operation: Update
        time: "2020-05-23T07:32:13Z"
      name: ingress-nginx-controller
      selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx-controller
    spec:
      externalTrafficPolicy: Cluster
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      sessionAffinity: None
      type: NodePort
    status:
      loadBalancer: {}
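
Once the controller is running, external traffic is routed by creating Ingress resources that carry the nginx ingress class. A minimal sketch follows; the hostname, backend Service name, and port are placeholders to be replaced with your own:

    # example-ingress.yaml -- illustrative only; host, serviceName and servicePort are placeholders.
    apiVersion: networking.k8s.io/v1beta1   # API used in the controller 0.32.x era; newer clusters use networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
      annotations:
        kubernetes.io/ingress.class: nginx   # matches controller.ingressClass, so this controller picks it up
    spec:
      rules:
      - host: demo.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: demo-service      # in-cluster Service that receives the proxied traffic
              servicePort: 80

Requests for demo.example.com that reach the controller (for example through the NodePorts 30390/31792 shown in the ingress-nginx-controller Service above) are then proxied to demo-service, which is the service-discovery behavior described at the start of this article.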