Migrations – What, Why, How? Part 2

In my last post I talked about the what and the why and considered the planning element of a migration. This post focuses on the execution and the options which are available.

The How – Migrating

As mentioned previously, we’re fortunate that physical server numbers are lower than they once were, but there are usually still some kicking around (usually the ones which run some mission critical business function). The question then is how these are handled, which will vary depending on the situation. For example, is it a lift-and-shift exercise to quickly exit the data centre, is there a desire for transformation to reduce the operating costs associated with these servers, or can they simply be left in situ on the basis that they will be refreshed at a later date as part of the normal BAU process? Ultimately, this will dictate what products are required for the job – e.g. P2V agent-based products or something that just needs to handle virtual machines.

I remember years ago using SRM to migrate VMs from one data centre to another as part of a migration rather than a DR (which was its primary purpose at the time). It must have been common enough that VMware has built a migration tool out of it – HCX.

HCX is a real game-changer when it comes to migrations because it provides a lot of very useful functionality, some of which I’ll describe below. As a result, HCX has become the de facto standard for the (V2V) migrations that VMware PS are involved in. The first thing to note is that HCX requires NSX at the destination (it’s not mandatory at the source).

The first decision point is whether IP changes are acceptable or need to be avoided. HCX has a feature called Network Extension which can stretch a VLAN associated with a vSphere Distributed Port Group, or an overlay, from the source to the NSX-enabled destination. This allows VMs to be moved whilst maintaining their IP address and even MAC address, which is useful both for not breaking applications and for maintaining the network security profile. In most cases, maintaining IP addresses is the only game in town. Overlapping IP spaces, once a common problem with mergers and data centre consolidations, have become less of an issue in a software-defined networking world. An alternative to Network Extension is stretching networks at the physical layer (e.g. Cisco EVPN), but in my experience most tend to avoid this route. If IP addresses will change during migration then Network Extension is not necessary and workloads can simply be migrated between two independent networks.

After that, it’s a question of the migration approach itself. This largely boils down to three questions:

  1. Is the source environment vSphere based?
  2. Is an outage acceptable?
  3. Is scheduling necessary?

First of all, if you’re moving from a non-vSphere environment, you are limited to HCX OS Assisted Migration which uses an OS agent to replicate the machine image. This will involve a period of downtime when the replicated target takes over from the source.

If the source platform is vSphere 5.5 based (or later) you have rather more options.

HCX Bulk Migration – uses vSphere Replication to replicate the VMs, allows scheduling within a migration window and involves a brief outage (reboot) when the VMs are migrated (SRM, hello?). It supports up to 100 concurrent VM migrations. This method is in fact supported with vSphere 5.0 and above.

HCX Cold Migration – used for migrating VMs which are in a powered-off state and uses the NFC protocol for file copy.

HCX vMotion – the normal vMotion we all know and love; the VM remains online but, like Cold Migration, it moves one VM at a time without scheduling.

HCX Replication Assisted vMotion – similar to Bulk Migration in that it uses vSphere Replication to replicate the VMs ahead of time, but rather than a ‘failover’ it uses vMotion technology to sync the remaining delta at migration time (i.e. a very fast vMotion within the migration window). There are a number of caveats around this (Hardware Version 9, no physical RDMs etc.) so it’s important to validate these as part of your migration design.
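To make the decision flow above concrete, here is a minimal Python sketch of the selection logic. The function name and return strings are purely illustrative planning aids – this is not part of any HCX API.

```python
def choose_hcx_migration_type(source_is_vsphere, outage_acceptable,
                              needs_scheduling, powered_off=False):
    """Illustrative helper mapping the three questions (plus power state)
    to an HCX migration type. A planning sketch, not an HCX API call."""
    if not source_is_vsphere:
        # Non-vSphere sources are limited to agent-based replication.
        return "OS Assisted Migration"
    if powered_off:
        # Powered-off VMs are copied via the NFC protocol.
        return "Cold Migration"
    if outage_acceptable:
        # Brief reboot at cutover; supports scheduling and up to
        # 100 concurrent VMs.
        return "Bulk Migration"
    if needs_scheduling:
        # Zero downtime: pre-seeded replication plus a vMotion cutover.
        return "Replication Assisted vMotion"
    # Online, but one VM at a time with no scheduling.
    return "vMotion"
```

For example, a vSphere source that cannot tolerate an outage but needs a scheduled window points to Replication Assisted vMotion: `choose_hcx_migration_type(True, False, True)`.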

Posted in Architecture, SDDC

Tanzu Kubernetes Grid Integrated GA

A few weeks ago I posted an overview of the Tanzu portfolio. At the time Tanzu Kubernetes Grid Integrated (formerly PKS Enterprise) was the missing piece of the puzzle. Well, the good news is that TKGI is now GA and so joins TKG and TKG+ under the Tanzu umbrella, providing another means of running containers on vSphere (and beyond). At this time TKGI does not support VCF 4.0 or the Converged VDS v7. It does, however, support NSX-T 3.0 in beta and supports upgrades from PKS 1.7. See the Release Notes for full details.

https://docs.pivotal.io/tkgi/1-8/release-notes.html

Posted in Uncategorized

VVD 6.0.1 Released and Updated Matrix

Last week an updated version of VVD dropped. Version 6.0.1 (aligned to VCF 4.0.1) introduces vSphere 7.0b. Beyond that not too much has changed; NSX-T is updated from 3.0 to 3.0.1 and the associated vRLI Content Pack is also updated, but that’s about it.

I’ve updated my VVD version matrix which I have now made available as an MS Excel workbook as the table/image was getting a bit unwieldy. It can be accessed through this link (OneDrive).

Posted in SDDC

Migrations – What, Why, How? Part 1

One of the mainstays of my role is migration projects. I thought it would be worth doing a post on this to provide some food for thought for anyone contemplating this at the moment or in future.

The What and the Why

So first of all, what does a migration entail and why would you need to do it? Simply put, a migration is the movement of workloads from one place to another. These workloads could be virtual, physical, modern or legacy. In a VMware context, we are familiar with the concept of vMotion and DRS moving workloads within a cluster. For the purposes of this post, we’re thinking about migrating between platforms. There could be a number of reasons why you would need to do this. A data centre exit or consolidation programme is a typical one (data centres are expensive things and consolidating them can make a big impact on the IT and overall budgets of companies). The target could be another owned or leased data centre, or increasingly, it could be in the cloud. Another typical example is migrating from a legacy platform to a new one. In this case, the new platform could be in a different data centre or it could be in the next rack of the same data hall. Depending on the different architectures of the source and target platforms, this could be quite straightforward or very complex.

A typical platform might have a lifespan of 4 or 5 years. Some companies may opt to build a new platform alongside, directing new workloads towards the new platform and essentially closing the original, operating them in parallel on the basis that over time, the workloads on the legacy platform will be decommissioned and/or refreshed onto the new platform, with the original eventually dying out. The problem with this approach is that, in reality, the legacy platform will probably still be there when it comes time to do this all over again. And operating multiple, disparate platforms has an associated (high) cost. The alternative is going all in: migrating everything off the old platform and decommissioning it entirely. This is the optimal approach from an operating perspective as it means supporting only one, strategic platform which provides all the benefits across the board such as increased agility, speed, flexibility etc. That said, migration projects aren’t exactly cheap and easy either, so the decision can be difficult and finely balanced and requires proper analysis and a business case: the high (but often hidden) opex costs of running multiple platforms over a prolonged period versus the upfront and more visible capex costs of running a migration. In my experience, running hybrid or bi-modal operations acts as a drag on innovation.

The How – Planning

Planning is the most important part of any migration. Migrations in 2020 are so much easier than in times gone by as virtualisation has been widely adopted, and that additional abstraction layer makes mobility far easier than moving from a physical server to a virtual server and the operating system changes that entails. But that is only half of the story. Although moving a virtual machine in itself is not difficult, the complexity comes from moving hundreds or thousands of virtual machines which make up applications, and those applications have dependencies on other applications. And so on. Depending on the architecture and physical locality of the source and target, introducing latency (even a tiny amount) can cause havoc. It’s essential, therefore, to have a comprehensive plan of what needs to move and when. This may involve a combination of people (talking to application owners etc.) or technology or both.

On the technology side, organisations generally have things like Configuration Management Databases (with varying degrees of quality). There are also tools like RVTools and vROps but, unless you’re lucky, those aren’t going to help with real application dependencies because in the real world these things are generally poorly documented and critical applications can evolve over time. vRealize Network Insight can be really useful here because it can analyse actual network traffic flows between servers, as well as port usage, to build up a mapping of communications and dependencies. The end goal is to break the big bucket of workloads into smaller buckets, each consisting of one or more applications, which can be moved at the same time.
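As a rough illustration of that grouping step, the sketch below puts servers that exchange traffic (directly or transitively) into the same move bucket using connected components. The flow data and VM names are entirely hypothetical; a real export from a flow-analysis tool like vRealize Network Insight would obviously be far richer.

```python
from collections import defaultdict

def build_move_groups(flows):
    """Group servers into move buckets from observed traffic flows.

    `flows` is an iterable of (source_vm, dest_vm) pairs. VMs that
    communicate, directly or transitively, land in the same bucket so
    they can be migrated together. Sketch only; data is illustrative.
    """
    graph = defaultdict(set)
    for a, b in flows:
        graph[a].add(b)
        graph[b].add(a)

    seen, groups = set(), []
    for vm in graph:
        if vm in seen:
            continue
        # Walk the graph to collect one connected component.
        stack, component = [vm], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        groups.append(sorted(component))
    return sorted(groups)

# Hypothetical flows: a three-tier app and an independent CRM pair.
flows = [("web01", "app01"), ("app01", "db01"), ("crm01", "crmdb01")]
print(build_move_groups(flows))
# [['app01', 'db01', 'web01'], ['crm01', 'crmdb01']]
```

The three-tier servers fall into one bucket because breaking the chain mid-migration would introduce latency between tiers; the CRM pair can move in a separate window.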

These can then be added to a schedule which allows resourcing to be planned, change windows to be secured and all stakeholders to agree the plan, which can then be tracked and reported back to management. The risks and impact of issues and mistakes are extremely high and confidence in the project can easily be compromised. It’s crucial, therefore, to ensure that the analysis and planning is watertight.
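One simple way to picture how buckets land on a schedule is a greedy assignment into change windows with a VM cap per window (100 here, echoing the Bulk Migration concurrency limit). This is a toy sketch assuming weekly windows; the bucket names and sizes are made up.

```python
from datetime import date, timedelta

def schedule_buckets(buckets, start, max_vms_per_window=100, window_days=7):
    """Greedily assign migration buckets to recurring change windows.

    `buckets` is a list of (name, vm_list) pairs, already grouped by
    dependency. Each window is filled up to the VM cap before moving
    to the next. Illustrative only; real scheduling would also weigh
    resourcing, freezes and business calendars.
    """
    schedule, window, used = [], start, 0
    for name, vms in buckets:
        if used and used + len(vms) > max_vms_per_window:
            window += timedelta(days=window_days)
            used = 0
        schedule.append((name, window))
        used += len(vms)
    return schedule

# Hypothetical buckets of 60, 60 and 30 VMs starting in a weekly cadence.
buckets = [("wave-A", ["vm"] * 60), ("wave-B", ["vm"] * 60), ("wave-C", ["vm"] * 30)]
for name, window in schedule_buckets(buckets, date(2020, 9, 7)):
    print(name, window)
```

Here wave-B spills into the second window because 60 + 60 exceeds the cap, while wave-C fits alongside it.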

I’ll cover more of the how (specifically execution) in my next post.

Posted in Architecture, SDDC

vSphere 6.7 Support Extended

VMware announced today that general support for vSphere 6.7 has been extended by 11 months, to October 15, 2022; it had been due to go out of support this year. This is a great move and I know first hand how many customers have been forced to put intended, or even in-flight, projects on hold as part of their response to COVID-19. This announcement will provide some breathing space to plan the move to what in my view is a transformational release in vSphere 7.

https://blogs.vmware.com/vsphere/2020/06/announcing-extension-of-vsphere-6-7-general-support-period.html

 

Posted in vSphere

vExpert 2020 Applications Open – My Take

Each year there are two opportunities to apply for the vExpert program. Applications are now open for the second round of 2020 – a second chance for anyone who missed out first time around.

What is the vExpert Program?

As a reminder, the vExpert program is VMware’s global evangelism and advocacy program. It is designed to recognise those individuals (not companies) who promote and evangelise VMware’s products and brand. VMware is fortunate to have a very strong community; individuals who are willing to share information and support their peers, which is what this program is all about.


Posted in Certification, Community

VVD 5.1.2 Released

Overnight a new version of VVD was released on the VVD 5.1 release train. This is a small maintenance release with relatively few changes. At a glance these are:

  • ESXi updated to patch level ESXi670-202004002 (previously ESXi670-201912001)
  • vCenter Appliance updated to 6.7 Update 3g (previously 6.7 Update 3b)
  • vSAN updated to 6.7 Patch 02 (previously 6.7 Patch 01)
  • vRLI Content Pack for Linux updated to 2.0.1 (previously 2.0)
  • NSX-T updated to 2.5.1 (previously 2.5)

The only other thing to call out is that “starting with this version, VMware Cloud Builder is no longer updated for clean deployments of VMware Validated Design. You use VMware Cloud Builder to deploy VMware Validated Design 5.1.1 and then update the products in your SDDC to the versions in VMware Validated Design 5.1.2.”

Release notes can be found here.

Posted in SDDC

VMware Learning Zone – 6 Months Free Access

VMware are currently running a promotion whereby access to the VMware Learning Zone Premium Package is free for 6 months. You have up until November 6th 2020 to register. In these strange times when we’re all inside a lot more than we’d like, it’s well worth a look.

The Premium package includes on-demand video training across the VMware product range, as well as VCP and VCAP preparation materials. These take you through each of the objectives within the exam blueprint, so they’re a real help if you’re working towards a VMware certification.


Registration is via the link below:

https://mylearn.vmware.com/mgrReg/courses.cfm?ui=www_edu&a=one&id_subject=93848

 

Posted in Certification

VMware in the Public Cloud

Hot on the heels of my post last week on the announcement of the general availability of the Azure VMware Solution, yesterday it was announced that the Google Cloud VMware Engine is also now generally available. The VMware presence in the hyperscale public clouds is looking pretty impressive – Azure, Google Cloud, AWS, IBM, Oracle and Alibaba Cloud (for which the GA announcement seems to have slipped completely under the radar at the end of April).

These are based on the VMware Cloud Foundation stack that is (or should be) the default option on-prem too. A reminder that Cloud Foundation is a hyper-converged software solution built on vSphere, vSAN and NSX. This means a true extension of the SDDC from the on-premises data centre to the public cloud, with the additional value-add services they provide.


Posted in Public Cloud