I’ve been lucky enough to spend the last couple of days having a better look at Docker and vSphere Integrated Containers (VIC for short). So first of all, what are containers, and how do they fit into a vSphere world?
A container is an isolated instance of an application, along with any dependencies that application needs and the commands to get it up and running. Dependencies are pulled from a central online repository (or registry). For example, my application might be a PHP website running on an Ubuntu Linux distribution. All of these things are grouped into an image, and we can then create multiple containers from that image.
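As a rough sketch (the image and container names here are hypothetical), you build the image once and can then start as many containers from it as you like:

docker build -t my-php-site .   # the Dockerfile defines the Ubuntu base, the PHP dependencies and the startup command
docker run -d --name site1 my-php-site
docker run -d --name site2 my-php-site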
Up until now, organisations (and Developers) have typically installed Docker on a VM and started creating containers on that VM, with the team responsible for managing the VM having no visibility of that process. From an Ops perspective, the only evidence of this change is probably the proliferation of super-sized VMs – whereas in the “traditional” model we’d see lots of smaller VMs, each probably running a single application, these have been displaced by larger VMs running lots of applications in the form of containers.
Above: “Traditional” Architecture
Above: Containers Architecture
This presents a challenge in that the Ops team has no visibility of what is running inside that VM, and if there are issues (like performance) it’s not possible for them to understand the cause from anything closer than 30,000 feet.
Enter vSphere Integrated Containers! In a nutshell, VIC sits between Docker and vSphere and enables each container to run as its own VM, restoring the visibility that existed previously while still maintaining the speed and flexibility of a Docker environment. The main component of VIC is the Virtual Container Host (VCH), which basically translates Docker API calls into vSphere API calls – so Developers just see a Docker target or endpoint, and behind that is a vSphere environment, whether a host, cluster or resource pool.
Above: Containers Architecture with VIC
The first step to get VIC up and running is to deploy the VCH Appliance which comes as an OVA file.
Fill in the information for Appliance Security, IP Address and Registry Configuration to avoid running into problems later 😉
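As a rough sketch, if you’d rather script the OVA deployment than click through the deploy wizard, ovftool can do it. The file name below is a placeholder, and the appliance’s OVF properties (Appliance Security, IP Address, Registry Configuration) would be passed with --prop:key=value using the keys listed in the OVA’s descriptor:

ovftool --acceptAllEulas --name=vic-appliance --datastore=<Datastore> --network="VM Network" --powerOn <VIC_Appliance>.ova vi://root@<ESXi_Host>/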
Next we need to get the VIC binaries by going to https://<IP_Address>:9443 and downloading and extracting vic_1.1.0.tar.gz. Incidentally, the other files are plug-ins for the Web Client, but as I’m not using vCenter I don’t need them.
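For example, from a terminal (assuming the archive sits at the root of that URL; the -k flag skips validation of the appliance’s self-signed certificate):

curl -k -O https://<IP_Address>:9443/vic_1.1.0.tar.gz
tar -xzf vic_1.1.0.tar.gz
cd vic   # the archive extracts into a vic directory containing the vic-machine binaries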
The next task is to open the firewall port on the host from my MacBook using the binaries we just downloaded. I’m using vic-machine-darwin update firewall --target <ESXi_Host> --user root --thumbprint <Thumbprint> --allow. The thing to note here is that you need to supply the SSL thumbprint for the host – if you don’t supply the --thumbprint argument, the command output handily gives you the thumbprint to use.
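So a typical flow is to run the command once without the thumbprint, copy the value it reports, and then run it again:

./vic-machine-darwin update firewall --target <ESXi_Host> --user root --allow
./vic-machine-darwin update firewall --target <ESXi_Host> --user root --thumbprint <Thumbprint> --allow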
Once the firewall is open we can deploy the VCH by using vic-machine-darwin create --target <ESXi_Host> --user root --no-tlsverify --force. Note the details returned for DOCKER_HOST.
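If you lose that output, vic-machine’s inspect subcommand should report the same connection details again (same placeholders as before, with the default VCH name):

./vic-machine-darwin inspect --target <ESXi_Host> --user root --thumbprint <Thumbprint> --name virtual-container-host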
As a result of this, the ESXi host now has a virtual-container-host VM running as well as the appliance we deployed earlier. Note that the guest OS is Photon.
Now we should be able to run Docker commands against the VCH – docker -H <DOCKER_HOST> --tls info confirms the VCH is running.
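To save typing -H on every command, you can also export the endpoint as an environment variable, since the standard docker client picks up DOCKER_HOST automatically:

export DOCKER_HOST=<DOCKER_HOST>
docker --tls info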
The final test is to deploy a container. The first step is to pull an image into the VCH – docker -H <DOCKER_HOST> --tls pull busybox.
If all is well, we should be able to use docker run to create a container from this image and interact with it: docker -H <DOCKER_HOST> --tls run -it --name test1 busybox /bin/sh
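From a second terminal, docker ps against the same endpoint should list the running container:

docker -H <DOCKER_HOST> --tls ps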
And in the host client, test1 appears.
Finally, to clean up…
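A sketch of one way to tidy up: remove the test container via the Docker endpoint, then remove the VCH itself with vic-machine delete (placeholders as before):

docker -H <DOCKER_HOST> --tls stop test1
docker -H <DOCKER_HOST> --tls rm test1
./vic-machine-darwin delete --target <ESXi_Host> --user root --force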
All in a few seconds – Containers, vSphere Style! 🙂