Why build an in-house Kubernetes platform?

Tom Whiston Jul 20, 2022

Having the reliability and mature APIs of global-scale cloud providers offers the possibility of a massive productivity increase over managing low-level infrastructure yourself. We have benefited directly from this at Nine with our first Kubernetes product, managed Google Kubernetes Engine, and we have no plans to stop working in the cloud or offering our Kubernetes stack there. So in an age of GKE (Google Kubernetes Engine), AKS (Azure Kubernetes Service) and EKS (Amazon Elastic Kubernetes Service), it might seem strange that a managed service provider, well accustomed to working in the cloud, would invest significant time and effort into building its own in-house Kubernetes offering, especially considering the technical complexity of such an undertaking. But since the Platform Team at Nine has spent the last two years doing just that, we thought it would be interesting to discuss why, and where we go from here.


Swiss Location

There is no getting around the fact that some of our customers, be it due to risk management, data protection or reasons of tradition, require their managed services to be located in fully Swiss-owned and operated data centres. Due to the multinational nature of cloud providers and the complex international laws around data privacy (for example the CLOUD Act), it is simply not possible to make the same privacy guarantees found in Swiss law when working in the cloud. Although previously hesitant sectors such as banking and insurance have begun to move to the cloud, a clear reluctance remains.

This therefore became a primary driver in the development of our in-house solution, Nine Kubernetes Engine: to offer customers using Kubernetes the same robust privacy and security protections of Swiss law that our other managed products enjoy.

Cost

Although it is possible to cost-optimise cloud usage, doing so often requires significant time- or spend-based resource commitments, which may not be suitable for every customer or use case. By building a Kubernetes offering in our own data centre, we have been able to leverage our high-density Nutanix infrastructure to bring down the per-minute cost of resource consumption for customer node pools without requiring any committed-use agreements. In addition, we have been able to offer, for the first time at Nine, CPU and RAM costs calculated with per-minute granularity instead of fixed monthly prices. This not only means a cheaper overall resource price compared to our cloud locations, but also allows customers to leverage features such as cluster auto-scaling with KEDA* to further cost-optimise their setups.
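To make the auto-scaling point concrete, here is a minimal sketch of how a workload could be scaled with KEDA, using the official Kubernetes Python client to create a ScaledObject. The Deployment name, namespace and threshold are illustrative assumptions for this sketch, not part of Nine Kubernetes Engine's documented configuration.

# A minimal KEDA autoscaling sketch, assuming the official `kubernetes` Python
# client and a KEDA installation in the cluster. The Deployment name "worker"
# and all thresholds are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "worker-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "worker"},  # the Deployment to scale
        "minReplicaCount": 1,
        "maxReplicaCount": 10,
        "triggers": [
            # Scale out when average CPU utilisation exceeds 75%.
            {"type": "cpu", "metricType": "Utilization", "metadata": {"value": "75"}},
        ],
    },
}

api.create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)

When load drops, KEDA reduces the number of replicas; combined with cluster auto-scaling this releases nodes, and with per-minute billing that saving shows up directly on the bill.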

We have also invested significantly in re-engineering the architecture of our Kubernetes service stack. Firstly, we have made additional components of our service stack opt-in. This allows customers to only deploy the services they are using and thus consume (and pay for) fewer compute resources. Secondly, we have moved a number of these services outside of customer clusters. This allows us to optimise not only the way that we deploy, configure, and maintain applications but also their cost-efficiency. Moving Nine services away from customer clusters also helps to minimise the potential for security or stability issues near customer application environments.

The result of these changes is that we have moved to a new costing model for Nine Kubernetes Engine, which combines a small fixed cluster fee and optional service fees with dynamic resource usage costs.
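As a rough illustration of how such a model composes, the sketch below adds a fixed cluster fee, opt-in service fees, and per-minute CPU and RAM usage. Every rate in it is a placeholder chosen for readability and does not reflect Nine's actual pricing.

# Toy breakdown of a fixed-fee-plus-usage costing model.
# All rates are placeholders for illustration, not Nine's actual prices.

CLUSTER_FEE_PER_MONTH = 50.0   # hypothetical fixed cluster fee (CHF)
SERVICE_FEES_PER_MONTH = 20.0  # hypothetical total for opt-in services (CHF)
CPU_RATE_PER_MINUTE = 0.0001   # hypothetical CHF per vCPU-minute
RAM_RATE_PER_MINUTE = 0.00002  # hypothetical CHF per GiB-minute


def monthly_cost(vcpu_minutes: float, gib_minutes: float) -> float:
    """Fixed fees plus per-minute resource usage."""
    usage = vcpu_minutes * CPU_RATE_PER_MINUTE + gib_minutes * RAM_RATE_PER_MINUTE
    return CLUSTER_FEE_PER_MONTH + SERVICE_FEES_PER_MONTH + usage


# Example: a node pool averaging 4 vCPUs and 16 GiB over a 30-day month.
minutes = 30 * 24 * 60
print(round(monthly_cost(4 * minutes, 16 * minutes), 2))

Because usage is billed by the minute, scaling a node pool down overnight or at weekends reduces the variable part of this sum immediately, rather than waiting for a monthly price to change.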

Self-Service

An additional goal in building an on-premises Kubernetes offering was for it to be our first product with a fully featured self-service interface. Our aim in implementing this was to allow customers to easily carry out common tasks which previously required interaction with a Nine engineer, and therefore waiting time. Self-servicing a Kubernetes cluster means that customers will be able to control their clusters and node pools, storage, additional service configurations, and user accounts.

This forms part of a larger goal at Nine to bring self-service to our product line, through both a browser-based GUI, Nine’s cockpit, and an API. This will not only make managing your services faster and easier, but also allows Nine’s service catalogue to be leveraged for automated and DevOps workflows. As this is a major change to how customers interact with our services, we will be rolling these features out incrementally, with the GUI for Nine Kubernetes Engine in general availability now. We expect the API to be generally available in Q4 of 2022, with additional products and services from across Nine’s portfolio added on an ongoing basis.

Future Products

Having a fully in-house, automatable and self-serviceable Kubernetes platform forms the basis for future product development at Nine, allowing us to explore concepts like Function as a Service or Namespace as a Service. It also allows us to choose where to run managed services, so that we can, where appropriate, take advantage of Kubernetes features such as auto-scaling to provide a better customer experience. In addition, having built a scalable and flexible self-service system on top of Kubernetes, we are able to bring our existing products and services into a self-service environment without significant engineering effort or customer disruption, making working with Nine faster, more convenient and more efficient for everyone.

If you would like to learn more about our Kubernetes offering please visit https://www.nine.ch/en/kubernetes or contact us.

*Available for Nine Kubernetes Engine in Q3 2022


 

If you would like further information regarding self-service, please contact Nine.

 

Tom Whiston

Strategic & Agile Consultant @ Nine
Find me on GitHub