- Much better isolation than namespaces
- Cheaper than separate "real" Kubernetes clusters
- More powerful thanks to its own API server
vCluster offers a unique developer experience for anyone developing against Kubernetes as a deployment target.
- When a large multi-tenant cluster reaches the scalability limits of Kubernetes, it can be split into vClusters that share the underlying cluster efficiently
- A new ingress controller can be tested inside a virtual cluster without affecting the operation of the host cluster
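As a sketch of the second use case: a disposable virtual cluster can be created, used for the ingress controller test, and deleted again, leaving the host cluster untouched. The cluster name, namespace, and Helm chart below are illustrative, assuming the vcluster CLI and Helm are installed:

```shell
# Create a virtual cluster inside its own namespace on the host cluster
vcluster create ingress-test --namespace team-ingress-test

# Connect to it; subsequent kubectl/helm commands now target the
# vCluster's own API server instead of the host cluster
vcluster connect ingress-test --namespace team-ingress-test

# Install the ingress controller candidate inside the vCluster only
helm install ingress-nginx ingress-nginx/ingress-nginx

# Tear everything down once the test is done
vcluster delete ingress-test --namespace team-ingress-test
```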
| Feature | NKE | vCluster |
| --- | --- | --- |
| Service Type LoadBalancer | ✅ | ✅ |
| Persistent Storage (RWO) | ✅ | ✅ |
| Ingress | ✅ | ✅ |
| Autoscaling | ✅ | ✅ |
| Argo CD Integration | ✅ | ✅ |
| NKE Machine Type | ✅ | ✅ |
| Dedicated Worker Nodes | ✅ | ✅ |
| Dedicated HA Control-Plane Nodes | ✅ | ❌ |
| Cluster Add-ons | ✅ | ❌ |
| Automatic Backup | ✅ | ❌ |
| Guaranteed Availability (SLA) | ✅ | ❌ |
| Cluster Fee | ✅ | ❌ |
| Fast Creation Time (< 2 min) | ❌ | ✅ |
| Cluster Admin | ❌ | ✅ |
As more developers become familiar with deploying to Kubernetes, the need for better isolation between tenants grows. Is access to a namespace in a shared cluster enough? For most use cases that simply deploy production apps, I would argue it is.
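For context, namespace-level tenant access is usually granted with a single RoleBinding, which limits the tenant to one namespace and excludes cluster-wide resources such as CRDs or nodes; the group and namespace names below are illustrative:

```shell
# Grant the (hypothetical) "team-a" group admin rights inside the
# "team-a" namespace only, using the built-in "admin" ClusterRole.
# The tenant cannot see or modify anything outside this namespace.
kubectl create rolebinding team-a-admin \
  --clusterrole=admin \
  --group=team-a \
  --namespace=team-a
```

A vCluster goes one step further: the tenant gets cluster-admin rights against the virtual cluster's own API server while still being confined to a namespace on the host.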