Managing service meshes across multiple Kubernetes clusters gets complicated fast. Different mesh technologies have their own CLIs, configuration formats, and lifecycle quirks. Meshery steps in as a unified management plane that lets you provision, configure, and operate service meshes from a single interface. In this guide, we will walk through installing Meshery on a Kubernetes cluster and putting it to work managing Istio and running performance benchmarks.
What is Meshery?
Meshery is an open source, cloud-native management plane that provides lifecycle, configuration, and performance management for service meshes. It is a CNCF project that supports a wide range of service mesh technologies including:
- Istio – the most widely adopted service mesh
- Linkerd – lightweight, security-focused mesh
- Consul Connect – HashiCorp’s service mesh solution
- Kuma – built on Envoy, backed by Kong
- NGINX Service Mesh
- Open Service Mesh (OSM)
- Traefik Mesh
Rather than learning the CLI and API surface for each mesh independently, Meshery gives you a single web-based UI and API to manage them all. It uses an adapter architecture where each supported mesh has a dedicated adapter that translates Meshery operations into mesh-specific API calls. This design keeps the core platform lean while making it straightforward to add support for new meshes.
Prerequisites
Before starting, make sure you have the following ready:
- A running Kubernetes cluster (v1.23 or later) – minikube, kind, EKS, GKE, or AKS all work
- kubectl installed and configured to communicate with your cluster
- Helm 3.x installed on your workstation
- Cluster admin permissions (needed for CRD installation and namespace creation)
- At least 4 GB of available memory in your cluster for Meshery and its adapters
Verify your cluster is reachable before proceeding:
kubectl cluster-info
kubectl get nodes
Expected output should show your cluster endpoint and all nodes in Ready state.
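If you want to script this check (in CI, for example), a small sketch like the following flags any node that is not Ready. `nodes_not_ready` is a hypothetical helper name; in practice you would pipe real `kubectl get nodes` output into it:

```shell
# Print the name of every node whose STATUS column is not "Ready".
# Produces no output when all nodes are healthy.
nodes_not_ready() {
  awk 'NR > 1 && $2 != "Ready" { print $1 }'
}

# Usage against a live cluster:
#   kubectl get nodes | nodes_not_ready
```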
Step 1: Install the mesheryctl CLI
Meshery provides a dedicated CLI tool called mesheryctl that handles both local and Kubernetes-based deployments. Install it using the official install script:
Linux / macOS
curl -L https://meshery.io/install | PLATFORM=kubernetes bash -
macOS with Homebrew
brew install mesheryctl
Windows with Scoop
scoop install mesheryctl
After installation, verify the CLI is working:
mesheryctl version
You should see output showing the mesheryctl client version. The server version will appear once Meshery is deployed.
Step 2: Deploy Meshery to Your Kubernetes Cluster
With the CLI ready, deploy Meshery into your cluster. The mesheryctl system start command handles everything – it creates the namespace, deploys the server, the database, and the default set of adapters:
mesheryctl system start --platform kubernetes
This process takes a few minutes. You can monitor the rollout by watching the pods in the meshery namespace:
kubectl get pods -n meshery -w
Wait until all pods show Running status with all containers ready:
$ kubectl get pods -n meshery
NAME READY STATUS RESTARTS AGE
meshery-5d4b7c8f9-xk2lp 1/1 Running 0 2m
meshery-broker-6c8d9f7b4-mt9rn 1/1 Running 0 2m
meshery-meshsync-7f9d8c6b5-qw4jp 1/1 Running 0 2m
meshery-operator-8b7c6d5f4-hn3kz 1/1 Running 0 2m
meshery-istio-9c8d7e6f5-pl2kz 1/1 Running 0 2m
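Rather than eyeballing the table, you can script the readiness check. Here is a sketch that parses `kubectl get pods` output and prints any pod that is not fully ready; `pods_unready` is a hypothetical helper name:

```shell
# Print pods whose READY count is incomplete (e.g. 0/1) or whose
# STATUS is anything other than Running.
pods_unready() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print $1 }'
}

# Usage against a live cluster:
#   kubectl get pods -n meshery | pods_unready
```

`kubectl wait --for=condition=Ready pod --all -n meshery --timeout=300s` is a built-in alternative if you just want to block until the rollout finishes.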
Alternatively, you can deploy Meshery using Helm if you prefer more control over the configuration:
helm repo add meshery https://meshery.io/charts/
helm repo update
helm install meshery meshery/meshery --namespace meshery --create-namespace
Verify all services are running:
kubectl get svc -n meshery
Step 3: Access the Meshery Web UI
Meshery exposes a web-based dashboard for managing your service meshes. Forward the Meshery server port to your local machine:
mesheryctl system dashboard
This command automatically sets up port forwarding and opens the dashboard in your default browser. If you prefer manual port forwarding:
kubectl port-forward -n meshery svc/meshery 9081:9081
Then open http://localhost:9081 in your browser. On first access, you will be asked to choose an authentication provider; the local provider works without an external account, or you can sign in with a remote (cloud) account. Once logged in, the dashboard shows an overview of your cluster, connected adapters, and mesh status.
Step 4: Connect Meshery to Your Cluster
Meshery uses MeshSync to discover and watch resources in your cluster. If you deployed Meshery directly into the cluster, it should auto-detect the environment. Verify the connection from the CLI:
mesheryctl system check
This command validates that the Meshery server, adapters, and operator are running correctly. In the web UI, navigate to Settings > Environment and confirm your Kubernetes context appears with a green status indicator.
If the connection is not detected automatically, you can upload your kubeconfig through the UI under Settings > Environment > Add Cluster. Meshery will parse the contexts and let you select which cluster to manage.
Step 5: Deploy and Manage Istio Through Meshery
This is where Meshery shows its value. Instead of installing Istio manually with istioctl, you can deploy and manage it entirely through Meshery.
Deploy Istio
In the Meshery UI, go to Lifecycle > Service Meshes. Select Istio from the list. Click Deploy and choose your desired Istio profile (default, demo, minimal, or custom). Meshery’s Istio adapter handles the complete installation including the control plane, ingress gateways, and CRDs.
You can also deploy Istio from the CLI:
mesheryctl mesh deploy istio
Verify the Istio components are running:
kubectl get pods -n istio-system
You should see istiod, ingress gateway, and other Istio control plane components all in Running state.
Apply Sample Configuration
Meshery bundles sample applications and configurations you can deploy to test your mesh setup. From the UI, navigate to Lifecycle > Sample Applications and deploy the BookInfo sample. This creates a multi-service application with sidecar injection enabled, letting you verify that Istio’s data plane is working correctly.
You can also apply the sample from the CLI if you have exported the pattern file locally:
mesheryctl pattern apply -f bookinfo-pattern.yaml
Check the deployed application:
kubectl get pods -n default
kubectl get vs -n default
Step 6: Service Mesh Performance Testing
One of Meshery’s standout features is built-in service mesh performance benchmarking. This lets you measure the overhead your mesh adds to service-to-service communication – something that is critical for production planning.
Navigate to Performance > Profiles in the UI. Create a new performance test profile with these parameters:
- Concurrent requests: 5
- Queries per second: 100
- Duration: 60 seconds
- Load generator: fortio or wrk2
- Target URL: your application endpoint
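As a sanity check on the numbers a run reports, the request volume a steady-rate profile should generate is simply QPS multiplied by duration. A quick sketch, where `expected_requests` is a hypothetical helper:

```shell
# Total requests a steady-rate load test should issue:
# queries-per-second multiplied by duration in seconds.
expected_requests() {
  awk -v qps="$1" -v secs="$2" 'BEGIN { print qps * secs }'
}

# The profile above (100 QPS for 60 seconds) should report roughly:
#   expected_requests 100 60   # → 6000
```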
You can also run performance tests from the CLI:
mesheryctl perf apply --name "istio-baseline" \
--url http://productpage.default:9080/productpage \
--qps 100 \
--concurrent-requests 5 \
--duration 60s \
--load-generator fortio \
--mesh istio
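When comparing a run against a baseline (say, the same application before and after sidecar injection), the figure you usually want is the percentage increase in a latency percentile. A hedged sketch, where `overhead_pct` is a hypothetical helper and the two arguments are p99 latencies read from Meshery's stored results:

```shell
# Percentage increase of the second latency value over the first,
# e.g. baseline p99 vs. in-mesh p99, printed with one decimal place.
overhead_pct() {
  awk -v base="$1" -v mesh="$2" 'BEGIN { printf "%.1f\n", (mesh - base) / base * 100 }'
}

# Example: baseline p99 of 10 ms vs. in-mesh p99 of 12 ms:
#   overhead_pct 10 12   # → 20.0
```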
Meshery stores performance results and lets you compare runs side by side. This is useful for measuring the impact of mesh configuration changes, comparing different mesh implementations, or validating that an upgrade did not introduce latency regressions.
Step 7: Design Patterns and Configuration Management
Meshery includes a visual design tool called MeshMap (available in the cloud-hosted version and as an extension) that lets you compose and apply service mesh configurations graphically. Even without MeshMap, you can use Meshery’s pattern system to manage configurations as code.
Meshery patterns are declarative YAML files that describe the desired state of your mesh infrastructure. For example, a pattern might define an Istio VirtualService with specific traffic routing rules:
```yaml
name: canary-routing
services:
  productpage-vs:
    type: VirtualService
    namespace: default
    settings:
      hosts:
        - productpage
      http:
        - route:
            - destination:
                host: productpage
                subset: v1
              weight: 90
            - destination:
                host: productpage
                subset: v2
              weight: 10
```
Apply the pattern with:
mesheryctl pattern apply -f canary-routing.yaml
Patterns are version-controlled and shareable. The Meshery catalog at meshery.io/catalog provides a library of community-contributed patterns for common service mesh use cases like mutual TLS enforcement, circuit breaking, rate limiting, and canary deployments.
Understanding the Adapter Architecture
Meshery’s adapter architecture is what makes multi-mesh management practical. Each supported service mesh has a dedicated adapter that runs as a separate container in the meshery namespace. The architecture looks like this:
- Meshery Server – the core platform that serves the UI and API
- Meshery Operator – a Kubernetes operator that manages MeshSync and Meshery Broker
- MeshSync – discovers and watches Kubernetes resources, keeping Meshery’s internal state in sync with the cluster
- Meshery Broker – a messaging system (NATS-based) for communication between components
- Mesh Adapters – one per supported mesh (e.g., meshery-istio, meshery-linkerd), each exposing a standard gRPC interface
When you issue a command like “deploy Istio,” the Meshery server routes the request to the Istio adapter, which translates it into the appropriate Istio API calls. This abstraction means you can switch between meshes or manage multiple meshes simultaneously without learning each one’s native tooling.
Check which adapters are active:
mesheryctl system status
Meshery vs Direct Istio CLI Management
If you only run Istio and nothing else, you might wonder whether Meshery adds unnecessary complexity. Here is a practical comparison:
| Capability | istioctl (Direct) | Meshery |
|---|---|---|
| Install Istio | istioctl install --set profile=demo | UI click or mesheryctl mesh deploy istio |
| Configuration validation | istioctl analyze | Built-in validation with visual feedback |
| Performance testing | Requires separate tools (fortio, wrk2) | Integrated benchmarking with result history |
| Multi-mesh support | Istio only | 10+ meshes from one interface |
| Visual configuration | Not available | MeshMap designer |
| Lifecycle management | Manual upgrade process | Managed upgrades with rollback |
| Learning curve | Istio-specific knowledge required | Consistent interface across all meshes |
| GitOps integration | Through Istio Operator | Pattern-based with catalog support |
For single-mesh, single-cluster setups where the team already knows Istio well, istioctl works fine. Meshery becomes valuable when you are managing multiple clusters, evaluating different mesh technologies, need integrated performance testing, or want a UI-driven workflow for teams that are not comfortable with CLI-heavy operations.
Lifecycle Management
Meshery handles the full lifecycle of your service mesh deployments:
- Provisioning – deploy a mesh with a chosen profile and custom configuration
- Configuration – apply traffic management rules, security policies, and observability settings
- Upgrades – roll out new mesh versions with the ability to roll back if something breaks
- De-provisioning – clean removal of a mesh and all its components
To upgrade Istio through Meshery:
mesheryctl mesh deploy istio --version 1.22.0
To remove Istio completely:
mesheryctl mesh remove istio
Troubleshooting Common Issues
Meshery Pods Stuck in CrashLoopBackOff
Check the pod logs for the failing component:
kubectl logs -n meshery deployment/meshery --tail=50
kubectl logs -n meshery deployment/meshery-operator --tail=50
Common causes include insufficient memory (increase your node resources) and RBAC permissions (make sure your service account has cluster-admin or the required RBAC rules).
Adapter Not Connecting
If an adapter shows as disconnected in the UI, restart it:
kubectl rollout restart deployment/meshery-istio -n meshery
kubectl rollout status deployment/meshery-istio -n meshery
MeshSync Not Discovering Resources
MeshSync requires proper RBAC permissions to watch cluster resources. Verify the MeshSync service account has the right bindings:
kubectl get clusterrolebinding | grep meshery
kubectl describe clusterrolebinding meshery-operator
If bindings are missing, re-run the Meshery operator deployment:
mesheryctl system restart
Port Forwarding Drops Connection
If kubectl port-forward keeps disconnecting, consider using a NodePort or LoadBalancer service instead:
kubectl patch svc meshery -n meshery -p '{"spec": {"type": "NodePort"}}'
kubectl get svc meshery -n meshery
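After the patch, you need the assigned port to build the URL. `kubectl get svc meshery -n meshery -o jsonpath='{.spec.ports[0].nodePort}'` prints it directly; if you would rather work from the full JSON, here is a crude sed-based sketch. `node_port_from_json` is a hypothetical helper, and it assumes the first `nodePort` in the document is the one you want:

```shell
# Extract the first "nodePort" value from `kubectl get svc ... -o json` output.
node_port_from_json() {
  sed -n 's/.*"nodePort"[: ]*\([0-9][0-9]*\).*/\1/p' | head -n 1
}

# Usage against a live cluster:
#   kubectl get svc meshery -n meshery -o json | node_port_from_json
```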
Resetting Meshery
If things get into a bad state, you can do a clean reset:
mesheryctl system stop
mesheryctl system reset
mesheryctl system start --platform kubernetes
Cleaning Up
To remove Meshery and all its components from your cluster:
mesheryctl system stop --platform kubernetes
kubectl delete namespace meshery
Verify the namespace is gone:
kubectl get ns | grep meshery
Summary
Meshery fills a real gap in the service mesh tooling ecosystem. Instead of juggling separate CLIs, dashboards, and configuration formats for each mesh, you get a single management plane that handles provisioning, configuration, performance testing, and lifecycle management across all major mesh implementations. The adapter architecture keeps things modular – you only run adapters for the meshes you actually use. For teams running multiple meshes, evaluating new mesh technologies, or looking for a UI-driven management workflow, Meshery is a solid addition to the Kubernetes toolkit. The integrated performance testing alone makes it worth setting up, since it gives you repeatable benchmarks that help you make data-driven decisions about your mesh configuration.