Cisco Container Platform + HyperFlex = Stupid Easy Kubernetes

Liam Keegan
5 min read · Jun 18, 2020
Photo by Joe Caione on Unsplash

I refuse to use the stock photo of containers on a ship, so here’s a picture of a puppy instead. You’ll be this happy when you realize how easy it is to deploy k8s with CCP.

I’ve been spending time deep-diving into Cisco Container Platform (CCP), running on the Cisco HyperFlex HCI (HX) platform. If you need to deploy an enterprise-grade container strategy on-prem and in the cloud in a short amount of time without having to move Heaven and Earth to get it done, HX+CCP should be a serious contender.

What is CCP?

Cisco Container Platform lets you deploy and manage Kubernetes clusters on-premise and in the three major public clouds (AWS, Azure and GCP). It’s software, nothing more. But, as with all things open source, support and integration can be a major challenge.

So what?

Here’s why you care: out of the box, you get a production-ready, API-driven, k8s ecosystem that is fully supported by Cisco. Have an issue with Harbor? Call Cisco. Istio Service Mesh getting all funky on you? Call Cisco. Did your JSON blob coagulate and your YAML indentations are all nimblypimbly? Call Cisco.

When you combine CCP with the HyperFlex platform, you get full stack support, all the way from the bare metal drivers into the containers.

How do I deploy a CCP-managed Kubernetes cluster?

Now we’re in the big leagues. Let’s deploy a fully supported, enterprise-grade Kubernetes environment via the GUI.

Screen #1: Pick your provider (in this case, vSphere/ESXi running on the HX nodes), give it a name and description, then pick your k8s version.

Screen 1 is where you answer the hard questions of having to figure out a naming convention.

Screen #2: Configure VMware settings (nothing nuts here). Datacenter, cluster, datastore, etc. are all standard VMware settings.

Screen #3: Every cluster has master and worker nodes. In this case, I’m setting up three dual-CPU/16GB RAM master nodes, plus an additional three worker nodes. I don’t have GPUs installed here; otherwise, we could dedicate those as well. Other options include setting SSH credentials, NTP servers, and the pod IP range, importing custom registry CAs, and specifying insecure local test/dev registries.

Screen 3 is where you size your k8s environment.
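The sizing above is simple arithmetic. A quick sketch, assuming the three workers match the masters’ dual-CPU/16GB spec (the worker sizing is my assumption; size workers for your actual pod workload):

```python
# Back-of-the-envelope footprint for the cluster sized in Screen #3.
# Worker specs matching the masters is an assumption, not a CCP default.
masters, workers = 3, 3
vcpus_per_node, ram_gb_per_node = 2, 16

nodes = masters + workers
print(nodes * vcpus_per_node, "vCPUs,", nodes * ram_gb_per_node, "GB RAM")
# 12 vCPUs, 96 GB RAM
```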

Screen #4 is a confirmation screen. I’m not counting that. That’s it. Then we wait.

How long does it take?

From the time I hit Submit to having a working cluster was 8 minutes.

I selected the cluster in the GUI, clicked to download the kubeconfig file, and I’m off to the races.
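“Off to the races” here just means pointing your tooling at the downloaded file. A minimal Python wrapper for that (the kubeconfig filename is hypothetical):

```python
import subprocess

def kubectl_cmd(kubeconfig_path: str, *args: str) -> list:
    # Pin kubectl to the CCP-downloaded kubeconfig so we never
    # accidentally touch the default ~/.kube/config context.
    return ["kubectl", "--kubeconfig", kubeconfig_path, *args]

def kubectl(kubeconfig_path: str, *args: str) -> str:
    """Run kubectl against the downloaded kubeconfig and return stdout."""
    result = subprocess.run(kubectl_cmd(kubeconfig_path, *args),
                            capture_output=True, text=True, check=True)
    return result.stdout

# Example (filename is illustrative):
# print(kubectl("ccp-cluster1-kubeconfig.yaml", "get", "nodes"))
```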

It doesn’t matter if the Kubernetes cluster is on-premise, or in a public cloud provider — AWS, Azure or GCP. The deployment process is the same!

What about add-ons?

I’d like to use the k8s dashboard, as well as have monitoring via Grafana. Click the Install First Add-On button, select a component from the available options, and hit Go.

Here are the out-of-the-box supported add-ons (CCP 6.1):

That’s not just easy, that’s stupid-easy! Four steps and you’re ready to go!

Here are the k8s pods once CCP is done with its work. This shows a standard cluster with three masters and three workers, as well as monitoring and the k8s dashboard.

Out of the box, ready to rock and roll.

This is brain-dead simple and ready to hand over to your end users. From here, it’s pure Kubernetes — no special vendor lock-in, no proprietary crapware, just good old-fashioned Kubernetes, like Grandma used to make.

This seems like a lot of work. Time to AUTOMATE!

The magic of CCP is its API. Think about this — there’s absolutely nothing that I outlined here that couldn’t be automated from start to finish. Here’s how we do it:

We collaboratively develop a service catalog for your end users. For instance, we sit down and come up with the following templates:

  • On-premise dev/test cluster (1 master, 1 worker, 2vCPU/8GB RAM)
  • On-premise production small (3 masters, 3 workers, 2vCPU/16GB RAM)
  • On-premise production large (3 masters, 12 workers, 4vCPU/128GB RAM)
  • AWS production small (3 masters, 3 workers, m5.large)
  • AWS production large (3 masters, 12 workers, m5.4xlarge)

Then, we leverage the REST API in CCP to template these configurations. It’s nothing more than a JSON blob.
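Those catalog entries can live as plain data and get rendered into that JSON blob on demand. A sketch; the payload field names below are illustrative placeholders, not the actual CCP API schema (check the API reference for your CCP version):

```python
import json

# Service-catalog sizing templates from the list above.
TEMPLATES = {
    "onprem-dev":        {"masters": 1, "workers": 1,  "vcpus": 2, "ram_gb": 8},
    "onprem-prod-small": {"masters": 3, "workers": 3,  "vcpus": 2, "ram_gb": 16},
    "onprem-prod-large": {"masters": 3, "workers": 12, "vcpus": 4, "ram_gb": 128},
    "aws-prod-small":    {"masters": 3, "workers": 3,  "instance_type": "m5.large"},
    "aws-prod-large":    {"masters": 3, "workers": 12, "instance_type": "m5.4xlarge"},
}

def cluster_payload(name: str, template: str, owner: str) -> str:
    """Render the JSON blob we'd POST to the CCP REST API.

    Field names are placeholders, not the real CCP schema.
    """
    spec = TEMPLATES[template]
    return json.dumps({"name": name, "owner": owner, **spec}, indent=2)

print(cluster_payload("dev-cluster-01", "onprem-dev", "engineering"))
```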

Here’s the output from the k8s deployment I created.

If you have Jira, ServiceNow, or a random Excel workbook, your users simply enter a ticket. Once the workflow is approved, the API provisioning engine fills in the department/cost center/owner and sends the entire batch to CCP for production deployment. Once deployment is done, the provisioner grabs the kubeconfig, updates the ServiceNow ticket, and moves on to the next one.
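That ticket-to-cluster flow is a few lines of glue code. Everything below is hypothetical: `ccp_api` and `ticket_api` stand in for a CCP REST client and your ServiceNow/Jira client, and none of the method names come from a real SDK.

```python
def provision_from_ticket(ticket: dict, ccp_api, ticket_api):
    """Hypothetical glue between a ticketing system and CCP."""
    # Only act on approved workflow tickets.
    if ticket["state"] != "approved":
        return None

    # The provisioning engine fills in the business metadata.
    payload = {
        "name": ticket["cluster_name"],
        "template": ticket["template"],
        "cost_center": ticket["cost_center"],
        "owner": ticket["owner"],
    }

    cluster_id = ccp_api.create_cluster(payload)      # POST the JSON blob to CCP
    kubeconfig = ccp_api.get_kubeconfig(cluster_id)   # grab the kubeconfig
    ticket_api.attach(ticket["id"], "kubeconfig.yaml", kubeconfig)
    ticket_api.close(ticket["id"], note=f"Cluster {cluster_id} deployed")
    return cluster_id
```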

Sample automation workflow for CCP k8s deployment. Who doesn’t want that?

No human intervention. Happy end-users. How cool is that?

Who should look at this and how can I try it?

If you are a Cisco HyperFlex customer and Kubernetes is on your radar, you should definitely look at this. If you’re already a UCS customer and Kubernetes is on your radar, you should really think hard about looking at this. If Kubernetes is not on your radar, you might find the training and education valuable for your team!

We offer a fixed-cost (~$10k), POC/pilot program that’s designed to get the CCP platform up and running in your environment without any capital expenditure. We leverage the trial period on CCP to make sure you’re completely comfortable with what you’re getting prior to making any purchase, and give you a taste of that sweet automation goodness.

If you’re interested, please get in touch! Or, you can book a timeslot here.


Liam Keegan

Data center/security/collab hack, CCIE #5026, focusing on automation, programmability, operational efficiency and getting rid of technical debt.