Health Checking of Istio Services

The Kubernetes liveness and readiness probes documentation describes several ways to configure liveness and readiness probes:

  1. Command
  2. HTTP request
  3. TCP probe
  4. gRPC probe

The command approach works with no changes required, but HTTP requests, TCP probes, and gRPC probes require Istio to make changes to the pod configuration.

The health check requests to the liveness-http service are sent by the kubelet. This becomes a problem when mutual TLS is enabled, because the kubelet does not have an Istio-issued certificate, so the health check requests will fail.

TCP probe checks need special handling, because Istio redirects all incoming traffic to the sidecar, so all TCP ports appear open. The kubelet simply checks whether some process is listening on the specified port, so the probe will always succeed as long as the sidecar is running.
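
For example, without the rewrite, a container whose probe is declared with a TCP check like the sketch below (the port number is illustrative) would keep passing even if the application never opened that port, because the sidecar accepts the connection:

livenessProbe:
  tcpSocket:
    port: 8001
  initialDelaySeconds: 5
  periodSeconds: 5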

Istio solves both these problems by rewriting the application PodSpec readiness/liveness probe, so that the probe request is sent to the sidecar agent.

Liveness probe rewrite example

To demonstrate how the readiness/liveness probe is rewritten at the application PodSpec level, let us use the liveness-http-same-port sample.

First create and label a namespace for the example:

$ kubectl create namespace istio-io-health-rewrite
$ kubectl label namespace istio-io-health-rewrite istio-injection=enabled

And deploy the sample application:

$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liveness-http
  namespace: istio-io-health-rewrite
spec:
  selector:
    matchLabels:
      app: liveness-http
      version: v1
  template:
    metadata:
      labels:
        app: liveness-http
        version: v1
    spec:
      containers:
      - name: liveness-http
        image: docker.io/istio/health:example
        ports:
        - containerPort: 8001
        livenessProbe:
          httpGet:
            path: /foo
            port: 8001
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
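
The commands that follow identify the pod through a $LIVENESS_POD environment variable. One way to set it, assuming the deployment above, is:

$ export LIVENESS_POD=$(kubectl get pod -n istio-io-health-rewrite -l app=liveness-http -o jsonpath='{.items..metadata.name}')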

Once deployed, you can inspect the pod’s application container to see the changed path:

$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o json | jq '.spec.containers[0].livenessProbe.httpGet'
{
  "path": "/app-health/liveness-http/livez",
  "port": 15020,
  "scheme": "HTTP"
}

The original livenessProbe configuration is now stored under the new path in the sidecar container's ISTIO_KUBE_APP_PROBERS environment variable:

$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o=jsonpath="{.spec.containers[1].env[?(@.name=='ISTIO_KUBE_APP_PROBERS')]}"
{
  "name":"ISTIO_KUBE_APP_PROBERS",
  "value":"{\"/app-health/liveness-http/livez\":{\"httpGet\":{\"path\":\"/foo\",\"port\":8001,\"scheme\":\"HTTP\"},\"timeoutSeconds\":1}}"
}
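
To read this mapping more comfortably, you can, for example, extract just the value and pretty-print it with jq:

$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o jsonpath="{.spec.containers[1].env[?(@.name=='ISTIO_KUBE_APP_PROBERS')].value}" | jq .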

For HTTP and gRPC probes, the sidecar agent redirects the request to the application and strips the response body, returning only the response code. For TCP probes, the sidecar agent performs the port check itself, bypassing the traffic redirection.
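
If you want to observe this behavior yourself, one option (assuming the default agent status port of 15020) is to port-forward that port and request the rewritten path; while the application is healthy this should return a 200 response with an empty body:

$ kubectl -n istio-io-health-rewrite port-forward "$LIVENESS_POD" 15020:15020 &
$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:15020/app-health/liveness-http/livez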

The rewriting of problematic probes is enabled by default in all built-in Istio configuration profiles but can be disabled as described below.

Liveness and readiness probes using the command approach

Istio provides a liveness sample that implements this approach. To demonstrate it working with mutual TLS enabled, first create a namespace for the example:

$ kubectl create ns istio-io-health

To configure strict mutual TLS, run:

$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-io-health"
spec:
  mtls:
    mode: STRICT
EOF

Next, change directory to the root of the Istio installation and run the following command to deploy the sample service:

$ kubectl -n istio-io-health apply -f <(istioctl kube-inject -f samples/health-check/liveness-command.yaml)

To confirm that the liveness probes are working, check the status of the sample pod and verify that it is running:

$ kubectl -n istio-io-health get pod
NAME                             READY     STATUS    RESTARTS   AGE
liveness-6857c8775f-zdv9r        2/2       Running   0           4m
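
For reference, a command-style probe is declared with an exec action in the PodSpec, roughly like this sketch (the command and file path are illustrative, not necessarily what the sample uses):

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5

Because the command runs inside the application container rather than over the network, it is unaffected by mutual TLS or traffic redirection.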

Liveness and readiness probes using the HTTP, TCP, and gRPC approach

As stated previously, Istio uses probe rewrite to implement HTTP, TCP, and gRPC probes by default. You can disable this feature either for specific pods, or globally.

Disable the probe rewrite for a pod

You can annotate the pod with sidecar.istio.io/rewriteAppHTTPProbers: "false" to disable the probe rewrite option. Make sure you add the annotation to the pod resource because it will be ignored anywhere else (for example, on an enclosing deployment resource).

$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liveness-http
spec:
  selector:
    matchLabels:
      app: liveness-http
      version: v1
  template:
    metadata:
      labels:
        app: liveness-http
        version: v1
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "false"
    spec:
      containers:
      - name: liveness-http
        image: docker.io/istio/health:example
        ports:
        - containerPort: 8001
        livenessProbe:
          httpGet:
            path: /foo
            port: 8001
          initialDelaySeconds: 5
          periodSeconds: 5
EOF

This approach allows you to disable the health check probe rewrite gradually on individual deployments, without reinstalling Istio.
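
To check that the rewrite was skipped for such a pod, you can, for example, confirm that its liveness probe still points at the original path and port (run this in whatever namespace you deployed it to):

$ kubectl get pod -l app=liveness-http -o jsonpath='{.items[0].spec.containers[0].livenessProbe.httpGet}'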

Disable the probe rewrite globally

Install Istio using --set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=false to disable the probe rewrite globally. Alternatively, update the configuration map for the Istio sidecar injector:

$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e 's/"rewriteAppHTTPProbe": true/"rewriteAppHTTPProbe": false/' | kubectl apply -f -
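
For example, with an istioctl-based installation the flag mentioned above would be passed at install time:

$ istioctl install --set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=false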

Cleanup

Remove the namespaces used for the examples:

$ kubectl delete ns istio-io-health istio-io-health-rewrite