Analyzing 503 Errors After Enabling mTLS in Istio



Background

To test Istio traffic management, two versions (v1 and v2) of two services, sleep and flaskapp (deployment files in the reference links), were deployed into an Istio mesh. sleep-v1 calls flaskapp via http://flaskapp/env/version; normally the responses alternate between v1 and v2, but instead the calls failed with 503 reset reason: connection failure. This post records the steps, symptom, analysis, and verification.



Steps

Deploy the sleep and flaskapp applications, with mTLS enabled mesh-wide and automatic sidecar injection enabled in the kangxzh namespace:

```shell
kubectl apply -f sleep.istio.yaml -n kangxzh
kubectl apply -f flask.isito.yaml -n kangxzh
# watch the pods come up
kubectl -n kangxzh get pod -w
flaskapp-v1-775dbb9b79-z54fj   2/2     Running           0          13s
flaskapp-v2-d454cdd47-mdb8s    2/2     Running           0          14s
sleep-v1-7f45c6cf94-zgdsf      2/2     Running           0          19h
sleep-v2-58dff94b49-fz6sj      2/2     Running           0          19h
```
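For reference, "mTLS enabled" here means mesh-wide mutual TLS. On Istio 1.1-era APIs this is typically expressed as a MeshPolicy plus a default DestinationRule in istio-system; the sketch below shows the usual pair of objects (not the exact files used in this setup, though the names line up with the default/ and default/istio-system entries in the tls-check output later in this post):

```yaml
# Sketch: mesh-wide mTLS for Istio 1.1-era APIs (illustrative, not the original files).
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # clients present Istio-issued certificates
```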



Symptom

From the sleep pod, issue an HTTP request to flaskapp with curl http://flaskapp/env/version:

```shell
export SOURCE_POD=$(kubectl get pod -l app=sleep,version=v1 -o jsonpath={.items..metadata.name})
# exec into sleep and issue the HTTP request
kubectl -n kangxzh exec -it -c sleep $SOURCE_POD bash
bash-4.4# curl http://flaskapp/env/version
# response
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```



Analysis

1. Check the flaskapp TLS configuration:

```shell
[root@kubernetes-master flaskapp]# istioctl authn tls-check flaskapp-v1-775dbb9b79-z54fj flaskapp.kangxzh.svc.cluster.local
HOST:PORT                                 STATUS     SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
flaskapp.kangxzh.svc.cluster.local:80     OK         mTLS       mTLS       default/         default/istio-system
```


STATUS OK shows that the flaskapp TLS configuration itself is correct.


Exec into the sleep pod's istio-proxy container and issue an HTTP request to flaskapp:

```shell
kubectl -n kangxzh exec -it -c istio-proxy $SOURCE_POD bash
# issue the request
curl http://flaskapp/env/version
# response
v1
```


2. The request from the istio-proxy container got a response. With mTLS enabled, a request issued directly from istio-proxy should need the Istio client certificates; since this plaintext request succeeded without any certificate, the next step was to inspect flaskapp's iptables rules:

```shell
# get the process ID of the flaskapp-v1 container
PID=$(docker inspect --format {{.State.Pid}} $(docker ps | grep flaskapp-v1 | awk '{print $1}' | head -n 1))
# inspect the NAT iptables rules in its network namespace
nsenter -t ${PID} -n iptables -t nat -L -n -v
# output
Chain PREROUTING (policy ACCEPT 477 packets, 28620 bytes)
 pkts bytes target     prot opt in     out     source               destination
  487 29220 ISTIO_INBOUND  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain INPUT (policy ACCEPT 487 packets, 29220 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 220 packets, 20367 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0   480 ISTIO_OUTPUT  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 220 packets, 20367 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain ISTIO_INBOUND (1 references)
 pkts bytes target     prot opt in     out     source               destination
   10   600 ISTIO_IN_REDIRECT  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80    # this dpt:80 rule was missing; it only appeared after the fix

Chain ISTIO_IN_REDIRECT (1 references)
 pkts bytes target     prot opt in     out     source               destination
   10   600 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            redir ports 15001

Chain ISTIO_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ISTIO_REDIRECT  all  --  *      lo      0.0.0.0/0           !127.0.0.1
    8   480 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner UID match 1337
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner GID match 1337
    0     0 RETURN     all  --  *      *       0.0.0.0/0            127.0.0.1
    0     0 ISTIO_REDIRECT  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain ISTIO_REDIRECT (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            redir ports 15001
```
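A quick way to check an iptables dump like the one above for the inbound redirect is to look for a dpt:<port> rule in the ISTIO_INBOUND chain. A small sketch, run here against a trimmed stand-in for the real `nsenter ... iptables -t nat -L -n -v` output rather than a live pod:

```shell
# Write a trimmed stand-in for the NAT table dump (the healthy, post-fix state).
cat > /tmp/nat_dump.txt <<'EOF'
Chain ISTIO_INBOUND (1 references)
 pkts bytes target             prot opt source      destination
   10   600 ISTIO_IN_REDIRECT  tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:80
EOF

# If the ISTIO_INBOUND chain has no dpt:80 rule, port 80 bypasses the sidecar.
if grep -A2 'Chain ISTIO_INBOUND' /tmp/nat_dump.txt | grep -q 'dpt:80'; then
  echo "port 80 is redirected to the sidecar"
else
  echo "port 80 bypasses the sidecar"
fi
```

On the broken pod above, the same check would have printed "port 80 bypasses the sidecar", which is exactly the 503's root cause.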


This proves that Envoy was not intercepting flaskapp's port 80 traffic. In other words, in step 2 above, sleep's istio-proxy reached flaskapp directly, without being forwarded through flaskapp's istio-proxy.


3. Only at this point did we inspect the flaskapp Deployment:

```yaml
...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flaskapp-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flaskapp
        version: v1
    spec:
      containers:
      - name: flaskapp
        image: dustise/flaskapp
        imagePullPolicy: IfNotPresent
        ports:
          - name: http
            containerPort: 80  # this containerPort declaration was missing
        env:
        - name: version
          value: v1
...
```


The official requirements page https://istio.io/docs/setup/kubernetes/additional-setup/requirements/ states:

Pod ports: Pods must include an explicit list of the ports each container listens on. Use a containerPort configuration in the container specification for each port. Any unlisted ports bypass the Istio proxy.
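That requirement suggests a simple pre-deploy sanity check: warn when a manifest's container spec declares no containerPort at all. A minimal sketch, run against a stand-in manifest written to a hypothetical path (/tmp/flaskapp-v1.yaml) that reproduces the broken spec above:

```shell
# Stand-in for the original (broken) container spec: no ports section.
cat > /tmp/flaskapp-v1.yaml <<'EOF'
    spec:
      containers:
      - name: flaskapp
        image: dustise/flaskapp
        imagePullPolicy: IfNotPresent
        env:
        - name: version
          value: v1
EOF

# Any port not listed via containerPort bypasses the Istio proxy.
if grep -q 'containerPort' /tmp/flaskapp-v1.yaml; then
  echo "ok: containerPort declared"
else
  echo "warning: no containerPort declared; inbound traffic will bypass the sidecar"
fi
```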


Meanwhile, kubectl describe on the flaskapp pod shows:

```shell
kubectl describe pod flaskapp-v1-6df8d69fb8-fb5mr -n kangxzh
# istio-proxy:
    ... (output trimmed)
    Args:
      ...
      --concurrency
      2
      --controlPlaneAuthPolicy
      MUTUAL_TLS
      --statusPort
      15020
      --applicationPorts
      ""  # empty
```


After adding containerPort: 80 to the Deployment, it shows:

```shell
istio-proxy:
    ... (output trimmed)
    Args:
      ...
      --concurrency
      2
      --controlPlaneAuthPolicy
      MUTUAL_TLS
      --statusPort
      15020
      --applicationPorts
      80
```

Note: since Istio 1.2 the same effect can also be achieved with the Pod annotation traffic.sidecar.istio.io/includeInboundPorts: a comma-separated list of inbound ports whose traffic is redirected to the sidecar, where * redirects all ports. Its default value is the Pod's containerPorts list. See the 1.2 release notes (see reference links).
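Applied to the pod template in this Deployment, the annotation-based approach might look like the following sketch (the annotation name comes from the release notes above; the surrounding fields mirror the flaskapp manifest):

```yaml
  template:
    metadata:
      annotations:
        # Redirect inbound traffic on port 80 to the sidecar,
        # even without a containerPort declaration (Istio 1.2+).
        traffic.sidecar.istio.io/includeInboundPorts: "80"
      labels:
        app: flaskapp
        version: v1
```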



Verification

Issue the request from sleep:

```shell
[root@kubernetes-master flaskapp]# kubectl -n kangxzh exec -it -c sleep $SOURCE_POD bash
bash-4.4# curl http://flaskapp/env/version
# response
v1
```


Issue the request from sleep's istio-proxy, first without and then with the certificates:

```shell
kubectl -n kangxzh exec -it -c istio-proxy $SOURCE_POD bash
# without certificates
istio-proxy@sleep-v1-7f45c6cf94-zgdsf:/$ curl http://flaskapp/env/version
# response
curl: (56) Recv failure: Connection reset by peer

# with certificates
istio-proxy@sleep-v1-7f45c6cf94-zgdsf:/$ curl https://flaskapp:80/env/version --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k
# response
v1
```



Reference links

  • https://github.com/fleeto/sleep

  • https://github.com/fleeto/flaskapp

  • https://istio.io/docs/setup/kubernetes/additional-setup/requirements/

  • https://preliminary.istio.io/about/notes/1.2/