The Cloud has been trending as the hottest business destination for a while now. It’s scalable and flexible, with the potential to reduce IT costs. In short, it’s the must-have accessory for the ‘modern organisation’.
Yet while firms rush to prove to customers that they are ‘cloud-ready’, ‘integrated’, and any number of other business buzzwords, many fail to consider the challenges of handing business-critical applications over to a third-party platform, not least the risk of undermining any financial savings through overspending on inaccurately sized cloud resources.
Far from being a like-for-like ‘lift and shift’, moving to the cloud demands far more forward planning and change management than many firms anticipate. Most companies approach migration by taking a simple inventory of their virtual machines, mapping each to the closest-fit instance type from their chosen provider, and firing them up. While enterprises should in theory be able to run their applications and workloads in this environment, and many view it as the ‘easy option’, this approach all but guarantees a business will pay far too much from day one.
A comprehensive stocktake of the demand profile of business workloads is a critical first step on the path to cloud migration. Firms must begin by right-sizing their estate and developing a thorough understanding of workload behaviour via detailed analytics. At the most basic level, this means knowing which machines can be downsized from core count, RAM and allocated storage, and which could even be switched off altogether.
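As a minimal sketch of that first stocktake, the snippet below flags machines as candidates for downsizing or for switching off based on utilisation figures. The VM records and thresholds (peak CPU and RAM under 20–25% suggesting a downsize, 30 idle days suggesting switch-off) are illustrative assumptions, not vendor rules.

```python
# Hypothetical right-sizing triage; thresholds are illustrative only.

def classify_vm(peak_cpu_pct, peak_ram_pct, days_idle):
    """Return a right-sizing recommendation for one machine."""
    if days_idle >= 30:
        return "switch off"      # no meaningful activity in a month
    if peak_cpu_pct < 20 and peak_ram_pct < 25:
        return "downsize"        # both resources heavily over-provisioned
    return "keep as-is"

# name -> (peak CPU %, peak RAM %, idle days); sample data for illustration
vms = {
    "app-01": (85, 70, 0),
    "batch-02": (12, 18, 0),
    "legacy-03": (1, 3, 45),
}

for name, (cpu, ram, idle) in vms.items():
    print(name, "->", classify_vm(cpu, ram, idle))
```

In practice the inputs would come from weeks of monitoring data rather than single figures, but the shape of the decision is the same.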
The next step is to understand the demand profile of each machine: which need to prioritise CPU, IO or fast memory access, and which are flexible in their CPU demands. The nuances don’t end there. Some machines require fixed reserved capacity, while others can be expected to burst for short periods. Some have very irregular demand profiles, characterised by long periods of inactivity followed by short bursts of intense activity; others focus almost exclusively on sustained performance, or generate far higher levels of IO than their peers.
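One way to separate steady, bursty and moderate profiles is to look at the shape of a machine’s utilisation time series. The classifier below is a sketch under assumed thresholds (mean above 60% as steady-high, a low mean with high peaks as bursty); the sample data is invented.

```python
# Sketch of classifying a machine's demand shape from CPU samples.
# Thresholds and data are illustrative, not provider guidance.
import statistics

def demand_profile(samples):
    """Classify a CPU-utilisation time series (percentages)."""
    mean = statistics.mean(samples)
    if mean >= 60:
        return "steady high - reserve fixed capacity"
    if mean < 15 and max(samples) > 70:
        return "bursty - suits burstable instances"
    return "moderate - general-purpose instance"

print(demand_profile([70, 75, 72, 68]))         # consistently busy
print(demand_profile([2, 3, 90, 1, 2, 3, 3]))   # mostly idle, short spikes
```

A real analysis would also weigh IO and memory metrics, but even this simple split changes which instance family a machine should be mapped to.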
Once a company has gathered all this information, it can optimise its environment for the right workload configuration and accurately plan its monthly cloud spend based on a right-sized estate. This means more accurate instance sizes and, in most cases, a lower bill.
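A back-of-envelope version of that monthly plan might look like the following. The instance sizes and hourly prices are made up for illustration; real planning would use the chosen provider’s current price list.

```python
# Hypothetical price table ($/hour) and fleet shapes, for illustration only.
PRICES = {"small": 0.05, "medium": 0.10, "large": 0.40}
HOURS_PER_MONTH = 730

def monthly_cost(fleet):
    """fleet: mapping of instance size -> count of machines."""
    return sum(PRICES[size] * n * HOURS_PER_MONTH for size, n in fleet.items())

as_mapped   = {"large": 10}              # naive like-for-like mapping
right_sized = {"medium": 6, "small": 4}  # after demand analysis

print(f"as mapped:   ${monthly_cost(as_mapped):,.2f}")
print(f"right-sized: ${monthly_cost(right_sized):,.2f}")
```

Even with invented numbers, the comparison illustrates the point of the article: the like-for-like mapping costs several times more per month than the right-sized estate.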
This process is, of course, far from static. Once in the cloud, businesses must continually monitor, manage and analyse their workloads. This is where an automated monitoring system comes into play. Firms that reach this stage often rely on a variety of vendors, each providing its own portal through which to view the cloud. While better than no oversight at all, this approach presents its own challenges: because each portal is unique and presents similar data in different ways, comparing them is like comparing apples with oranges.
A single, unified tool that can bring these sources together has the potential to save firms significant time and cost. By gathering data from across the entire business estate, analysing it and then modelling it against a variety of scenarios, firms can make informed decisions on how to optimise their estates. It can also cut costs and give businesses the ability to project growth plans, affording them the confidence that future needs will be identified and addressed before they become an issue. By monitoring the entire estate with a single aggregated, managed service, firms can meet customer demand, ensure continuity, protect their estates and save money all in one. Four birds, one stone.
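The core of such a tool is normalisation: translating each vendor’s schema into one common record format so figures can be compared like-for-like. The sketch below assumes two hypothetical portals with invented field names and units; it is not any real vendor’s API.

```python
# Normalising two hypothetical vendor feeds into a common record shape.

def from_vendor_a(row):
    # vendor A (hypothetical) reports CPU as a 0-1 fraction
    return {"vm": row["host"], "cpu_pct": row["cpu_frac"] * 100}

def from_vendor_b(row):
    # vendor B (hypothetical) reports a percentage under a different key
    return {"vm": row["name"], "cpu_pct": row["cpu_percent"]}

unified = (
    [from_vendor_a(r) for r in [{"host": "app-01", "cpu_frac": 0.42}]]
    + [from_vendor_b(r) for r in [{"name": "db-01", "cpu_percent": 61}]]
)

for record in unified:
    print(record["vm"], record["cpu_pct"])
```

Once every feed is reduced to the same fields and units, estate-wide analysis and scenario modelling become straightforward aggregation over one dataset.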
In today’s digital age, where DevOps and Cloud reign supreme and every word seems to have ‘tech’ tacked on the end, businesses need to make sure they aren’t just jumping on what’s trending. By pausing to critically analyse the needs of their business estate, they can make sure they are maximising their operational spend across their hybrid IT environments.