Istio is an open-source service mesh that provides traffic management, security, and observability for microservices running on Kubernetes. It uses Envoy sidecar proxies to intercept and manage all network traffic between services – giving you fine-grained control over routing, load balancing, mutual TLS, and telemetry without changing application code.
This guide walks through installing Istio 1.29 on an Amazon EKS cluster, deploying the Bookinfo sample application, configuring an AWS load balancer gateway, setting up traffic management with canary deployments, enforcing mTLS, and installing the full observability stack with Kiali, Grafana, and Jaeger. Istio 1.29 brings production-ready ambient mesh enhancements, CRL validation in ztunnel, and improved memory management for the control plane.
Prerequisites
Before starting, confirm you have the following in place:
- A running Amazon EKS cluster (Kubernetes 1.31 – 1.35 supported by Istio 1.29)
- kubectl configured and pointing to your EKS cluster
- AWS CLI installed and configured with permissions to create load balancers
- Helm 3 installed (optional, for observability addons)
- At least 3 worker nodes with 4GB RAM and 2 vCPUs each
- Cluster admin (cluster-admin RBAC) access
Verify your cluster is reachable and nodes are in Ready state:
kubectl get nodes
All nodes should show Ready status:
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-1-100.us-east-1.compute.internal   Ready    <none>   5d    v1.31.4-eks-abc1234
ip-192-168-2-200.us-east-1.compute.internal   Ready    <none>   5d    v1.31.4-eks-abc1234
ip-192-168-3-150.us-east-1.compute.internal   Ready    <none>   5d    v1.31.4-eks-abc1234
Step 1: Install istioctl CLI
The istioctl CLI is the primary tool for installing, managing, and debugging Istio. The easiest way to get it is the official download script, which fetches the release archive from the Istio GitHub releases page for you.
Download and extract Istio 1.29.1:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.29.1 sh -
Move into the Istio directory and add istioctl to your PATH:
cd istio-1.29.1
sudo cp bin/istioctl /usr/local/bin/
istioctl version --remote=false
The version output confirms the CLI is installed:
client version: 1.29.1
Run a pre-flight check to confirm your EKS cluster meets all Istio requirements:
istioctl x precheck
A clean result shows no issues found:
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
Step 2: Install Istio on the EKS Cluster
Istio ships with several configuration profiles. The demo profile is good for learning since it enables all features including egress gateway. The default profile is recommended for production EKS deployments – it installs the control plane (istiod) and an ingress gateway with sensible defaults.
The following table shows what each profile installs:
| Profile | Components | Use Case |
|---|---|---|
| default | istiod, ingress gateway | Production |
| demo | istiod, ingress + egress gateways | Learning / evaluation |
| minimal | istiod only | Control plane only |
Option A: Production profile (recommended for EKS)
Install Istio with the default production profile:
istioctl install --set profile=default -y
Option B: Demo profile (for testing)
If you want to explore all Istio features including egress gateway:
istioctl install --set profile=demo -y
The installation takes a couple of minutes. Once complete, verify that all Istio components are running in the istio-system namespace:
kubectl get pods -n istio-system
All pods should show Running status with all containers ready:
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7b6d6c7d8f-x9k2l   1/1     Running   0          60s
istiod-6f8c7b5d9f-m4vhp                 1/1     Running   0          75s
Verify the Istio installation is healthy:
istioctl verify-install
This command checks every Istio resource and confirms the installation matches the selected profile.
Step 3: Enable Automatic Sidecar Injection
Istio works by injecting an Envoy sidecar proxy into each pod. The sidecar intercepts all inbound and outbound traffic, enabling Istio to apply routing rules, security policies, and collect telemetry. You enable automatic injection by labeling the namespace.
Create a namespace for your application and enable sidecar injection:
kubectl create namespace bookinfo
kubectl label namespace bookinfo istio-injection=enabled
Verify the label was applied:
kubectl get namespace bookinfo --show-labels
The output confirms injection is enabled for the namespace:
NAME       STATUS   AGE   LABELS
bookinfo   Active   10s   istio-injection=enabled,kubernetes.io/metadata.name=bookinfo
Every pod deployed to this namespace will now automatically get an Envoy sidecar container.
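Injection can also be overridden per workload when the namespace-wide label is too coarse. As a sketch (the Deployment name and image here are hypothetical, for illustration only), setting the sidecar.istio.io/inject label to "false" on a pod template opts that workload out of injection even inside a labeled namespace:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker            # hypothetical workload, for illustration only
  namespace: bookinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
        sidecar.istio.io/inject: "false"   # skip the Envoy sidecar for this pod
    spec:
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sleep", "infinity"]
```

This is handy for batch jobs or utility pods that gain nothing from mesh features.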
Step 4: Deploy the Bookinfo Sample Application
Bookinfo is Istio’s standard demo application. It consists of four microservices – productpage, details, reviews (with three versions), and ratings. It is a good way to verify that Istio sidecar injection, traffic routing, and observability work correctly on your EKS cluster.
Deploy the Bookinfo application:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.29/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
Wait for all pods to reach Running status with 2/2 containers (application + sidecar):
kubectl get pods -n bookinfo
Each pod should show 2/2 ready containers, confirming the Envoy sidecar was injected:
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-5f4d8b7c6-k8rpz       2/2     Running   0          45s
productpage-v1-6b7f6dc9c-m2xnj   2/2     Running   0          44s
ratings-v1-7dc98c4c7-vq5lt       2/2     Running   0          45s
reviews-v1-75b9c4c8d-x4t7m       2/2     Running   0          44s
reviews-v2-86d6c4f8b-n9z8j       2/2     Running   0          44s
reviews-v3-5c4f7d9b8-w7r6d       2/2     Running   0          44s
Verify the services are created:
kubectl get svc -n bookinfo
Test the application is working by sending a request from inside the mesh:
kubectl exec -n bookinfo deploy/ratings-v1 -- curl -s productpage:9080/productpage | head -5
You should see HTML content from the product page, confirming service-to-service communication works through the Istio mesh.
Step 5: Configure Istio Gateway with AWS Load Balancer
To expose your mesh services externally on EKS, you configure an Istio Gateway resource. Because the istio-ingressgateway service is of type LoadBalancer, AWS automatically provisions a load balancer for it: a Classic Load Balancer by default, or a Network Load Balancer (NLB) when the service is annotated for one.
Use an AWS NLB instead of Classic LB
The default EKS behavior creates a Classic Load Balancer. For production, an NLB gives better performance and supports static IPs. Annotate the ingress gateway service to use an NLB. Note that the load balancer type is normally honored when the service is first created, so if a Classic Load Balancer was already provisioned you may need to recreate the service for the change to take effect:
kubectl annotate svc istio-ingressgateway -n istio-system \
service.beta.kubernetes.io/aws-load-balancer-type=nlb
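Alternatively, you can bake the annotation into the installation itself so the ingress gateway service is created with it from the start. A minimal IstioOperator overlay along these lines (the file name nlb-gateway.yaml is arbitrary) can be passed to istioctl install -f nlb-gateway.yaml:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
```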
Create the Gateway and VirtualService
Apply the Istio Gateway that listens on port 80 and routes traffic to the Bookinfo productpage:
kubectl apply -n bookinfo -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
EOF
Get the external hostname of the AWS load balancer:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://$INGRESS_HOST/productpage"
It takes 2-3 minutes for the AWS load balancer to provision and become healthy. Once ready, open the URL in your browser to see the Bookinfo product page served through the Istio mesh.
Verify the gateway is working from the command line:
curl -s -o /dev/null -w "%{http_code}" http://$INGRESS_HOST/productpage
A 200 response confirms the gateway and routing are configured correctly.
Step 6: Traffic Management with Destination Rules
Istio’s traffic management lets you control how requests are distributed across service versions. Destination rules define subsets (versions) of a service, and virtual services route traffic to specific subsets based on weights, headers, or other criteria.
First, apply destination rules that define the three versions of the reviews service:
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.29/samples/bookinfo/networking/destination-rule-all.yaml
Verify the destination rules were created:
kubectl get destinationrules -n bookinfo
You should see destination rules for all four Bookinfo services:
NAME          HOST          AGE
details       details       10s
productpage   productpage   10s
ratings       ratings       10s
reviews       reviews       10s
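For reference, the rule for reviews in that manifest essentially defines one subset per version pod label, which is what the routing examples in the rest of this guide target:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```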
Now route all traffic to reviews v1 (no stars):
kubectl apply -n bookinfo -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
EOF
Refresh the Bookinfo product page several times. Every request now goes to reviews v1 – you will see no star ratings. This confirms Istio traffic routing is active and overriding the default round-robin behavior.
Step 7: Install the Observability Stack – Kiali, Grafana, and Jaeger
Istio integrates with several observability tools out of the box. Kiali provides a service mesh topology dashboard, Grafana shows mesh metrics, and Jaeger handles distributed tracing. The Istio release includes sample manifests for all three, plus a demo Prometheus deployment they read metrics from. The sample addons deploy their own Prometheus instance; if you already run Prometheus on your EKS cluster, configure Kiali and Grafana to point at it instead of installing the sample.
Install Prometheus
Prometheus collects the metrics that Istio generates from Envoy sidecars. Install the Istio Prometheus addon:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.29/samples/addons/prometheus.yaml
Install Kiali
Kiali visualizes the service mesh topology, showing real-time traffic flow, error rates, and response times between services. Details on configuring Kiali are available in the Istio Kiali integration documentation.
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.29/samples/addons/kiali.yaml
Install Grafana
Grafana on Kubernetes provides pre-built dashboards for Istio mesh metrics, including request volume, latency, and error rates per service:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.29/samples/addons/grafana.yaml
Install Jaeger
Jaeger collects distributed traces across microservices, making it easy to track a single request as it travels through multiple services in the mesh:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.29/samples/addons/jaeger.yaml
Wait for all addon pods to be running:
kubectl get pods -n istio-system -l "app in (prometheus,kiali,grafana,jaeger)"
All observability pods should show Running status:
NAME                         READY   STATUS    RESTARTS   AGE
grafana-5f9b7d8c4-r7k2m      1/1     Running   0          30s
jaeger-7c8f6d9b5-x4p9n       1/1     Running   0          25s
kiali-6d8f7c5b4-m2n8j        1/1     Running   0          35s
prometheus-8b7d6c9f5-k3r7t   2/2     Running   0          40s
Access the dashboards
Use istioctl dashboard to open each tool. This sets up a port-forward and opens your browser:
istioctl dashboard kiali
For Grafana:
istioctl dashboard grafana
For Jaeger tracing:
istioctl dashboard jaeger
Generate some traffic to populate the dashboards by hitting the productpage endpoint several times:
for i in $(seq 1 50); do
curl -s -o /dev/null http://$INGRESS_HOST/productpage
done
After generating traffic, the Kiali dashboard shows the full service graph with live request flow between productpage, reviews, details, and ratings services. Grafana displays Istio’s pre-built dashboards with request rates, latencies, and error percentages. Jaeger shows individual request traces spanning all four microservices.
Step 8: Enforce Mutual TLS (mTLS) Across the Mesh
By default, Istio uses permissive mTLS – services accept both plaintext and encrypted traffic. For production, you should enforce strict mTLS so all service-to-service communication within the mesh is encrypted and authenticated. This prevents any unencrypted traffic from reaching your services.
Apply a mesh-wide strict mTLS policy:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
This policy is applied in the istio-system namespace, which makes it mesh-wide. Every service in the mesh will now reject plaintext connections.
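STRICT mode can also be relaxed on individual ports when a workload must keep accepting plaintext, for example a legacy health-check endpoint. A sketch using the portLevelMtls field of PeerAuthentication (the policy name, workload label, and port are hypothetical):

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: legacy-exception          # hypothetical policy name
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: legacy                 # hypothetical workload label
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: PERMISSIVE            # accept plaintext on port 8080 only
```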
Verify mTLS is enforced by checking the authentication status:
istioctl x describe pod $(kubectl get pod -n bookinfo -l app=productpage \
-o jsonpath='{.items[0].metadata.name}') -n bookinfo
The output shows mTLS details for the pod, confirming STRICT mode is active:
Pod: productpage-v1-6b7f6dc9c-m2xnj
Pod Revision: default
Pod Ports: 9080 (productpage), 15090 (istio-proxy)
--------------------
Service: productpage
Port: http 9080/HTTP targets pod port 9080
Effective PeerAuthentication:
Workload mTLS: STRICT
You can also apply mTLS per namespace if you need different policies for different environments. For example, to set strict mTLS only for the bookinfo namespace:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: bookinfo
spec:
  mtls:
    mode: STRICT
EOF
With strict mTLS in place, Istio automatically manages certificate rotation. Certificates are valid for 24 hours by default and rotated before expiry. No manual certificate management is needed.
Step 9: Canary Deployments with Istio Traffic Splitting
Canary deployments let you gradually shift traffic from one version of a service to another. Istio makes this straightforward with weighted routing in VirtualService resources – no changes to your application deployment or Kubernetes service definitions. If you manage your Istio deployments with ArgoCD, canary configurations can be version-controlled and applied automatically.
Start by sending 90% of traffic to reviews v1 and 10% to reviews v3 (the canary):
kubectl apply -n bookinfo -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v3
      weight: 10
EOF
Refresh the product page multiple times. About 1 in 10 requests will show red star ratings (v3), while the majority show no stars (v1). The Kiali dashboard displays the traffic split in real-time.
If the canary looks healthy, increase its traffic share to 50/50:
kubectl apply -n bookinfo -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
EOF
Once you are confident in the new version, route 100% of traffic to v3:
kubectl apply -n bookinfo -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
      weight: 100
EOF
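Rather than hand-editing the weights at each step, the ramp can be scripted. The sketch below is plain bash; the emit_vs helper and the 90/10, 50/50, 0/100 schedule are this guide's illustration, not an Istio tool. Pipe each generated manifest to kubectl apply -n bookinfo -f - between health checks:

```shell
#!/usr/bin/env bash
# Emit a reviews VirtualService manifest for a given stable/canary weight split.
# Usage: emit_vs <stable_weight> <canary_weight>
emit_vs() {
  local stable=$1 canary=$2
  cat <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: ${stable}
    - destination:
        host: reviews
        subset: v3
      weight: ${canary}
EOF
}

# The ramp schedule used in this guide: 90/10 -> 50/50 -> 0/100.
for canary in 10 50 100; do
  emit_vs $((100 - canary)) "${canary}"
  echo "---"
done
```

Between steps, check error rates in Kiali or Grafana before increasing the canary share.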
You can also route traffic based on HTTP headers for targeted testing. This sends all requests with the header end-user: test to v3 while everyone else gets v1:
kubectl apply -n bookinfo -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: test
    route:
    - destination:
        host: reviews
        subset: v3
  - route:
    - destination:
        host: reviews
        subset: v1
EOF
Header-based routing is useful for letting your QA team test a new version in production without affecting real users.
Conclusion
You now have Istio 1.29 running on your EKS cluster with sidecar injection, an AWS NLB-backed ingress gateway, strict mTLS encryption, traffic management with canary deployments, and a full observability stack. For production hardening, consider enabling CloudWatch logging for your EKS cluster, setting up Istio authorization policies for fine-grained access control, configuring rate limiting at the gateway, and using Istio’s circuit breaker features to protect against cascading failures.
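As a concrete starting point for the circuit-breaker hardening mentioned above, Istio exposes connection limits and outlier detection through DestinationRule. A sketch with illustrative, untuned thresholds; in practice you would merge this trafficPolicy into the existing reviews DestinationRule rather than create a second rule for the same host:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker     # hypothetical name, for illustration
  namespace: bookinfo
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100           # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 50   # limit queued requests
    outlierDetection:
      consecutive5xxErrors: 5         # eject an endpoint after 5 straight 5xx responses
      interval: 30s                   # how often endpoints are scanned
      baseEjectionTime: 60s           # minimum time an ejected endpoint stays out
```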