Install Istio Service Mesh on OpenShift 4.x

Istio is an open-source service mesh that provides traffic management, security, and observability for microservices running on Kubernetes and OpenShift. It works by injecting an Envoy proxy sidecar into each pod, intercepting all network communication between services without requiring any changes to application code.

This guide walks through installing Istio 1.29 on an OpenShift 4.x cluster using istioctl. We cover the full setup – from installing the CLI tool to deploying a sample application, configuring traffic management, enabling mutual TLS, and accessing the observability dashboards (Kiali, Jaeger, Prometheus). The steps apply to any supported Istio installation on OpenShift 4.14 and later.

Prerequisites

Before starting, make sure you have the following in place:

  • An OpenShift 4.x cluster (version 4.14 or later recommended) with at least 3 worker nodes
  • Cluster admin access via the oc CLI tool
  • At least 4 GB of RAM per worker node for Istio control plane components
  • The kubectl CLI installed and configured to access your cluster
  • Ports 15010, 15012, 15014, 15017, and 443 open between the control plane and worker nodes

Confirm you are logged into your OpenShift cluster with admin privileges before proceeding.

oc whoami

This should return your admin username or system:admin.

Step 1: Install istioctl CLI

The istioctl command-line tool is used to install, manage, and troubleshoot Istio. Download the latest stable release (1.29.1 at the time of writing) using the official download script, which fetches the release archive from GitHub.

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.29.1 sh -

Move the binary into your system path so it is available globally.

sudo mv istio-1.29.1/bin/istioctl /usr/local/bin/
sudo chmod +x /usr/local/bin/istioctl

Verify the installation by checking the version.

istioctl version --remote=false

The output should confirm version 1.29.1:

client version: 1.29.1

The download also includes sample applications and manifests in the istio-1.29.1/samples/ directory – we will use these later.

Step 2: Install Istio with IstioOperator on OpenShift

Before installing, run the pre-flight check to confirm your OpenShift cluster meets all requirements.

istioctl x precheck

If the check passes, you will see a message confirming the cluster is ready for Istio installation.

OpenShift uses Security Context Constraints (SCCs) that restrict pod permissions by default. Istio needs the anyuid and privileged SCCs. Create a dedicated namespace and configure the required permissions.

oc create namespace istio-system

Grant the necessary SCCs to the Istio service accounts.

oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
oc adm policy add-scc-to-group privileged system:serviceaccounts:istio-system

Now install Istio using the demo profile, which includes all core components plus the addons needed for testing. For production, use the default or minimal profile instead.

istioctl install --set profile=demo --set meshConfig.accessLogFile=/dev/stdout -y

The installation takes a few minutes. When complete, you should see output confirming all components are ready:

✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
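If you prefer a declarative, repeatable install, the IstioOperator approach from the step heading can be made explicit: the same flags map onto an IstioOperator manifest that you pass to istioctl install -f. A minimal sketch (the file name is arbitrary):

cat <<'YAML' > istio-demo.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: demo-install
  namespace: istio-system
spec:
  profile: demo
  meshConfig:
    accessLogFile: /dev/stdout
YAML
istioctl install -f istio-demo.yaml -y

Keeping this manifest in version control makes upgrades and cluster rebuilds reproducible.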

Verify all Istio pods are running in the istio-system namespace.

oc get pods -n istio-system

All pods should show a Running status with all containers ready:

NAME                                   READY   STATUS    RESTARTS   AGE
istio-egressgateway-7c4f8d57f-kx9vz    1/1     Running   0          2m
istio-ingressgateway-6b77f6d4c-lhj2p   1/1     Running   0          2m
istiod-5f8c9d5f7b-m7nbq                1/1     Running   0          2m

Step 3: Enable Sidecar Injection

Istio works by injecting an Envoy sidecar proxy into each application pod. The simplest way to enable this is by labeling the namespace where your applications run. The istio-injection=enabled label tells Istio to automatically inject sidecars into all new pods created in that namespace.

Create a namespace for the sample application and enable injection.

oc create namespace bookinfo
oc label namespace bookinfo istio-injection=enabled

On OpenShift, pods in the bookinfo namespace also need the anyuid and privileged SCCs for the sidecar and its init container to function correctly.

oc adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo
oc adm policy add-scc-to-group privileged system:serviceaccounts:bookinfo

Verify the label is set.

oc get namespace bookinfo --show-labels

You should see istio-injection=enabled in the labels column.
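Automatic injection can also be overridden per workload. To exclude a specific Deployment in a labeled namespace from getting a sidecar, set the sidecar.istio.io/inject label on its pod template (a fragment sketch – the Deployment name is illustrative and only the relevant fields are shown):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: "false"

Setting the label to "false" opts this workload out while the rest of the namespace keeps automatic injection.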

Step 4: Deploy the Bookinfo Sample Application

Istio ships with the Bookinfo sample application – a polyglot microservices app made up of four services: productpage, details, reviews (with three versions), and ratings. It is perfect for testing service mesh features.

Deploy the application using the manifests from the Istio download directory.

oc apply -f istio-1.29.1/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo

Wait for all pods to reach the Running state. Each pod should have 2/2 containers ready – the application container plus the Envoy sidecar.

oc get pods -n bookinfo

The output confirms that sidecar injection is working – notice the 2/2 ready column:

NAME                             READY   STATUS    RESTARTS   AGE
details-v1-5f4d584748-wqmfj      2/2     Running   0          60s
productpage-v1-564d4686f-r8hgs   2/2     Running   0          60s
ratings-v1-686ccfb5d8-4khtj      2/2     Running   0          60s
reviews-v1-86896b7648-fzq4l      2/2     Running   0          60s
reviews-v2-b7dcd98fb-7xsnp       2/2     Running   0          60s
reviews-v3-5c5cc7b6d-g2v8c       2/2     Running   0          60s

Verify the application is working by making a request from inside the cluster.

oc exec -n bookinfo deploy/ratings-v1 -c ratings -- curl -s productpage:9080/productpage | grep -o "<title>.*</title>"

You should see the product page title confirming the app is responding:

<title>Simple Bookstore App</title>

Step 5: Configure Gateway and Virtual Service

To expose the Bookinfo application outside the cluster, configure an Istio Gateway and VirtualService. The Gateway defines the entry point (port and protocol), while the VirtualService routes incoming traffic to the productpage service.
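For reference, the sample manifest creates resources along these lines (a simplified sketch – the real file contains additional URI matches, and the gateway port number can vary between Istio releases):

apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080

The selector binds the Gateway to the default ingress gateway pods, and the VirtualService attaches itself to that Gateway by name.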

oc apply -f istio-1.29.1/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo

Verify the gateway and virtual service were created.

oc get gateway,virtualservice -n bookinfo

The output should show both resources:

NAME                                  AGE
gateway.networking.istio.io/bookinfo-gateway   30s

NAME                                          GATEWAYS               HOSTS   AGE
virtualservice.networking.istio.io/bookinfo   ["bookinfo-gateway"]   ["*"]   30s

Get the external address of the Istio ingress gateway. On OpenShift, this is typically exposed as a LoadBalancer service or via an OpenShift Route. If your cloud assigns a hostname rather than an IP (as AWS does), query .status.loadBalancer.ingress[0].hostname instead of .ip in the first command below.

export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
echo "http://${INGRESS_HOST}:${INGRESS_PORT}/productpage"

If your cluster does not have a LoadBalancer (bare-metal or on-premises), use a NodePort instead.

export INGRESS_HOST=$(oc get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
echo "http://${INGRESS_HOST}:${INGRESS_PORT}/productpage"

Open the URL in your browser – you should see the Bookinfo product page with book details and reviews.

Step 6: Configure Traffic Management

Istio provides fine-grained traffic control through DestinationRules and VirtualServices. This lets you route traffic between service versions, configure retries, and implement circuit breaking – all without modifying application code. If you are familiar with Kubernetes Services and Ingress, Istio extends those concepts with L7 (application-layer) traffic management.

Apply Destination Rules

First, create DestinationRules that define the available versions (subsets) for each service.

oc apply -f istio-1.29.1/samples/bookinfo/networking/destination-rule-all.yaml -n bookinfo

Verify the destination rules are in place.

oc get destinationrules -n bookinfo

You should see rules for all four services:

NAME          HOST          AGE
details       details       10s
productpage   productpage   10s
ratings       ratings       10s
reviews       reviews       10s
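As a reference for how subsets work, the reviews entry in that manifest maps each version label on the pods to a named subset, roughly:

apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

VirtualServices can then reference these subset names when routing.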

Route All Traffic to Reviews v1

By default, Istio round-robins traffic across all versions. To route all traffic to reviews v1 (no star ratings), apply this routing rule.

oc apply -f istio-1.29.1/samples/bookinfo/networking/virtual-service-all-v1.yaml -n bookinfo

Refresh the product page several times – you should only see reviews without stars, confirming all traffic goes to v1.
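The reviews entry in that manifest simply pins the route to the v1 subset defined by the DestinationRule, roughly:

apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1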

Route Based on User Identity

Istio can route traffic based on HTTP headers. This example sends user “jason” to reviews v2 (black star ratings) while everyone else stays on v1.

oc apply -f istio-1.29.1/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml -n bookinfo

Log in as “jason” on the product page (no password needed) and you will see black star ratings. Log out or use a different name and the stars disappear.
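Under the hood, the rule matches on the end-user header that the product page forwards after login, falling through to v1 for everyone else – roughly:

apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

Rules are evaluated in order, so the header match must come before the catch-all route.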

Configure Retries and Timeouts

Add retry policies and timeouts to handle transient failures. Create a custom VirtualService for the ratings service with retry logic.

cat <<'YAML' | oc apply -n bookinfo -f -
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: ratings-retry
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
    timeout: 10s
YAML

This configuration retries failed requests to the ratings service up to 3 times, with a 2-second timeout per attempt and a 10-second overall timeout.

Enable Circuit Breaking

Circuit breaking prevents cascading failures by limiting the number of concurrent connections and requests to a service. Apply a DestinationRule with circuit breaker settings.

cat <<'YAML' | oc apply -n bookinfo -f -
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        h2UpgradePolicy: DEFAULT
        http1MaxPendingRequests: 50
        http2MaxRequests: 100
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50
YAML

This configuration limits TCP connections to 100, pending HTTP/1.1 requests to 50, and ejects unhealthy endpoints after 5 consecutive 5xx errors for 60 seconds.

Step 7: Set Up Observability – Kiali, Jaeger, and Prometheus

Istio integrates with several observability tools out of the box. The demo profile does not install them automatically, so deploy the addons from the Istio samples directory. If you are already running Prometheus on your cluster, Istio can use your existing instance.

oc apply -f istio-1.29.1/samples/addons/ -n istio-system

This deploys Kiali (service mesh dashboard), Jaeger (distributed tracing), Prometheus (metrics collection), and Grafana (metrics visualization). Wait for all addon pods to be ready.

oc rollout status deployment/kiali -n istio-system --timeout=120s

When the rollout completes, you should see the success message:

deployment "kiali" successfully rolled out

Verify all observability pods are running.

oc get pods -n istio-system -l 'app in (kiali,jaeger,prometheus,grafana)'

All pods should show Running status with all containers ready:

NAME                          READY   STATUS    RESTARTS   AGE
grafana-6c5dc6df6c-vz8xq      1/1     Running   0          90s
jaeger-db6bdfcb4-sq7kn        1/1     Running   0          90s
kiali-5f6dbf4d5c-lk8r2        1/1     Running   0          90s
prometheus-67f6764db9-t2bwj   2/2     Running   0          90s

Step 8: Enable Mutual TLS

Mutual TLS (mTLS) encrypts all traffic between services in the mesh and verifies the identity of both ends of every connection. Istio supports a permissive mode (accepts both plaintext and mTLS) and a strict mode (mTLS only). Permissive is the default and avoids breaking connections from workloads without sidecars; since every Bookinfo pod already has a sidecar, we can move straight to strict mode.

Apply a PeerAuthentication policy to enforce strict mTLS across the entire mesh.

cat <<'YAML' | oc apply -n istio-system -f -
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
YAML

Verify mTLS is active by checking the authentication policy.

oc get peerauthentication -n istio-system

The output confirms strict mTLS enforcement:

NAME      MODE     AGE
default   STRICT   10s

You can also verify that mTLS is working between services using istioctl.

istioctl x describe pod $(oc get pod -n bookinfo -l app=productpage -o jsonpath='{.items[0].metadata.name}') -n bookinfo

The output should show mTLS connections to all upstream services.

To apply mTLS to a specific namespace only (instead of mesh-wide), target the namespace directly.

cat <<'YAML' | oc apply -n bookinfo -f -
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: bookinfo
spec:
  mtls:
    mode: STRICT
YAML

Namespace-level policies override mesh-wide policies, giving you fine-grained control over which namespaces enforce strict mTLS.
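You can scope tighter still by adding a selector, which restricts the policy to workloads matching the given labels. A sketch targeting only the ratings workload (the policy name is arbitrary):

cat <<'YAML' | oc apply -n bookinfo -f -
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: ratings-strict
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: ratings
  mtls:
    mode: STRICT
YAML

Workload-level policies take precedence over namespace-level ones, just as namespace-level policies override the mesh-wide default.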

Step 9: Access the Kiali Dashboard

Kiali provides a real-time view of your service mesh topology, traffic flows, and health status. On OpenShift, you can expose it via an OpenShift Route for easy access.

oc expose service kiali -n istio-system

Get the route URL.

oc get route kiali -n istio-system -o jsonpath='{.spec.host}'

Open the returned URL in your browser to access the Kiali dashboard. You can also use port-forwarding if you prefer not to expose the service externally.

istioctl dashboard kiali

This opens Kiali at http://localhost:20001/kiali in your default browser.

Generate some traffic to see the mesh topology populate in Kiali. Run this command in a separate terminal.

for i in $(seq 1 100); do
  curl -s -o /dev/null "http://${INGRESS_HOST}:${INGRESS_PORT}/productpage"
done

In the Kiali dashboard, navigate to the Graph section and select the bookinfo namespace. You will see the traffic flowing between the productpage, details, reviews, and ratings services with real-time metrics.

Access Jaeger for distributed tracing using the same approach.

istioctl dashboard jaeger

Jaeger shows the full request path through all services, which is invaluable for debugging latency issues and understanding service dependencies in your mesh.

For Grafana dashboards with Istio-specific metrics, run the following.

istioctl dashboard grafana

Grafana comes pre-configured with Istio dashboards showing mesh traffic, control plane performance, and per-service metrics.

Conclusion

You now have Istio 1.29 running on OpenShift 4.x with sidecar injection, traffic management policies, mutual TLS encryption, and full observability through Kiali, Jaeger, and Prometheus. The Bookinfo sample application demonstrates how all these features work together in practice.

For production deployments, switch from the demo profile to the default profile, configure resource limits for the control plane, set up proper TLS certificates for ingress gateways, and integrate with your existing monitoring stack. Consider also setting up authorization policies to control which services can communicate with each other.
