A Cost-Cutting Cloud Optimization Strategy for Your Skyrocketing Cloud Bills
With the business world shrouded in uncertainty, companies need to make a concerted effort to improve their cost-management processes, especially when it comes to cloud infrastructure and services.
There is a huge opportunity for organizations to reduce costs in this area. A massive number of cloud accounts are overspending on their infrastructure and services, resulting in an estimated $17.6 billion in wasted cloud spend. Most cloud providers bill by the second or by the hour, so every second without a cost-cutting strategy is a second that adds to the bill.
By focusing on the low-hanging fruit, a business can start saving quickly and those savings will grow over time. Below are some strategies to begin, starting with the easiest and quickest approach and working through to the more long-term, complicated processes.
Achieving Observability Into Your Cloud Ecosystem
Before anything else, it’s important to set up some form of monitoring that tells you whether you’re on track. For example, AWS supports resource tagging, which lets you shed light on where the problems are, see who is over-provisioned or under-provisioned, and identify which applications cost the most to run.
While specialized products can provide deeper breakdowns with a bit of work, the vast majority of companies can get by with the cloud vendor’s own billing tools to create budgets, alerts, tags and resource divisions.
The more detailed the observability, the better. It helps to understand the ecosystem, find out how things are divided, and show how much money is being spent on every application in development, production or storage. Once you start applying cost-saving measures, you can leverage this high-level observability to assess their effectiveness over time.
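To make the idea of tag-based observability concrete, here is a minimal sketch of attributing spend to tags. The record structure and tag names are hypothetical stand-ins for rows from a cloud billing export; the key point is that untagged spend is surfaced rather than silently dropped.

```python
from collections import defaultdict

def spend_by_tag(cost_records, tag_key):
    """Aggregate spend per value of a given tag (e.g. 'app' or 'env').

    Records missing the tag are grouped under 'untagged' so that
    unattributed spend stays visible instead of disappearing.
    """
    totals = defaultdict(float)
    for record in cost_records:
        tag_value = record.get("tags", {}).get(tag_key, "untagged")
        totals[tag_value] += record["cost"]
    return dict(totals)

# Illustrative billing-export rows (hypothetical structure and values).
records = [
    {"resource": "i-0a1", "cost": 120.0, "tags": {"app": "checkout", "env": "prod"}},
    {"resource": "i-0b2", "cost": 45.5,  "tags": {"app": "checkout", "env": "dev"}},
    {"resource": "i-0c3", "cost": 80.0,  "tags": {"app": "search"}},
    {"resource": "vol-9", "cost": 12.0},  # untagged -> candidate for cleanup
]

print(spend_by_tag(records, "app"))
# {'checkout': 165.5, 'search': 80.0, 'untagged': 12.0}
```

The same grouping run on the `env` tag immediately shows how much spend has no environment assigned at all, which is usually the first thing to fix.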
Related Article: 4 Strategic Steps for Responsible Multi-Cloud Adoption
Hacking Your Cloud Computing Billing Model
Whichever cloud vendor you use, you need to understand how you’re being billed, what the alternative billing options are, and which option is the most appropriate for your business. There are several ways to “hack” these billing models to make the most of the service and keep costs to a minimum, tweaking the way you pay for cloud services to match your needs.
For instance, AWS bills for servers in several different ways, mostly based on computing capacity, so the cost difference for the same machine can sometimes be as much as 75%. One option is to lower the on-demand capacity of the main server and pay for reserved capacity that would cover any increases for the next few months. If there is no plan to scale capacity in the short to medium term, this solution is ideal: it’s invisible on the user side and doesn’t require you to change your whole fleet.
Planning capacity changes in advance is another billing hack. Try to estimate your product’s growth and understand how much computing capacity you’ll need in the future. Capacity can always be increased, but with better planning, it’s possible to drastically reduce costs.
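A quick back-of-the-envelope calculation shows why these billing choices matter. The hourly rates below are purely illustrative, not current vendor prices; the structure of the comparison is what carries over.

```python
def reserved_savings(on_demand_hourly, reserved_hourly, hours_per_month, months):
    """Compare cumulative cost of on-demand vs reserved pricing
    for the same machine running the same number of hours."""
    on_demand_total = on_demand_hourly * hours_per_month * months
    reserved_total = reserved_hourly * hours_per_month * months
    return on_demand_total, reserved_total, on_demand_total - reserved_total

# Illustrative rates: $0.20/h on demand vs $0.12/h with a one-year
# reservation, for an always-on instance (~730 hours/month).
on_demand, reserved, saved = reserved_savings(0.20, 0.12, 730, 12)
print(f"on-demand ${on_demand:.0f} vs reserved ${reserved:.0f}: save ${saved:.0f}")
```

Even a modest per-hour discount compounds into a meaningful annual saving, which is why matching the billing model to your actual usage pattern is one of the cheapest wins available.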
Automating the Simple Stuff
With organizations racing against the clock to create savings, automation can help massively when it comes to short-term cost reduction in the cloud.
In many cases, companies keep unnecessary computing capacity running when it’s not in use, often at a higher cost than the capacity that actually generates income. There’s almost no reason to run at full computing capacity 24/7, except on production servers or anything that generates revenue. With some simple automation, you can shut down development, quality assurance or user acceptance testing capacity when it’s not in use, reducing those active hours and lowering your bill.
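The selection logic for such a shutdown job can be kept very small. This is a sketch under the assumption that instances carry an `Environment` tag; the fleet data and environment names are hypothetical. Untagged instances are deliberately left alone.

```python
def instances_to_stop(instances, off_hours_envs=("dev", "qa", "uat")):
    """Pick instance IDs whose Environment tag marks them as non-production.

    Instances with no Environment tag are skipped: better to miss a
    saving than to stop something that might be production.
    """
    return [
        inst["id"]
        for inst in instances
        if inst.get("tags", {}).get("Environment", "").lower() in off_hours_envs
    ]

# Hypothetical fleet inventory.
fleet = [
    {"id": "i-prod-1", "tags": {"Environment": "prod"}},
    {"id": "i-dev-1",  "tags": {"Environment": "dev"}},
    {"id": "i-qa-1",   "tags": {"Environment": "QA"}},
    {"id": "i-myst-1"},  # untagged -> skipped
]

print(instances_to_stop(fleet))  # ['i-dev-1', 'i-qa-1']
```

In a real setup you would feed this list to the cloud SDK (for AWS, boto3’s `stop_instances`) on a nightly schedule, and run the mirror-image job to start the instances again each morning.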
Cloud storage cost is another area that lends itself well to automation. It should go without saying that the more you store, the more you pay, so implement a lifecycle policy that dictates how long to keep backups and which backups are worth keeping.
Most non-production environments don’t really need backups at all. It’s actually bad practice to keep historical snapshots of developer activity on particular servers, as they take up unnecessary space. Combined with a robust lifecycle policy, automation can rein in storage overuse and its costs.
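As an example, an S3-style lifecycle policy can be expressed as plain configuration. The retention periods, prefix and bucket name below are illustrative assumptions, not recommendations; the rule shape matches what boto3’s `put_bucket_lifecycle_configuration` expects.

```python
# Illustrative retention: move backups to cold storage after 30 days,
# delete them after 90. Prefix and periods are hypothetical choices.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "expire-old-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 90},
        }
    ]
}

# Applying it would look roughly like this (requires AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-backup-bucket",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_policy,
# )
```

Once a rule like this is in place, old backups age out automatically instead of accumulating as a silent line item on the bill.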
While this is only one possible use case of many, it shows that by implementing automation you can generate some impactful cost savings in a relatively short space of time.
Now that we’ve covered the easiest and quickest actions that generate immediate wins, let’s move on to the more involved, long-term strategies.
Related Article: How Automation Can Future-Proof Enterprises Against Major Disruption
Right-Sizing: Establish the Right Server Capacity
After automation, one of the most complicated but impactful cost-reduction activities is right-sizing, which is all about finding the correct capacity for your servers or storage.
Right-sizing involves performance testing to understand how much capacity will be required, then sizing the servers according to those results. These tests cost money and require an automated approach that can be adapted as the process evolves. Developers also need to be able to respond to test results on a continuous basis.
The process is expensive, complicated and demanding to perform, but the long-term payoff is a consistently more reasonable cloud services bill that reflects the true capacity your organization requires at any given time.
Architecting for the Cloud
Most cloud adoptions start with a “lift and shift,” where companies migrate everything from a data center to a similar environment in the cloud. While this approach gets things up and running without much thought, it doesn’t really serve to get the most out of the cloud model.
One of the main advantages of the cloud is being able to use on-demand capacity and services, which often requires companies to rethink their product’s architecture. For instance, a serverless approach lets you pay only for individual executions, rather than forking out for a full-blown server that runs around the clock.
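A rough cost model makes the pay-per-execution trade-off tangible. The rates below are illustrative ballpark figures in the style of Lambda-like pricing (per-request plus per-GB-second), not a quote, and the workload numbers are hypothetical.

```python
def monthly_serverless_cost(invocations, avg_ms, memory_gb,
                            per_million_requests=0.20,
                            per_gb_second=0.0000166667):
    """Rough monthly cost of a pay-per-execution workload.

    Prices are illustrative ballpark figures, not current vendor rates.
    """
    request_cost = invocations / 1_000_000 * per_million_requests
    compute_cost = invocations * (avg_ms / 1000) * memory_gb * per_gb_second
    return request_cost + compute_cost

# Hypothetical low-traffic API: 2M invocations/month, 120 ms avg, 0.5 GB.
serverless = monthly_serverless_cost(2_000_000, 120, 0.5)
always_on = 0.10 * 730  # illustrative $0.10/h instance, ~730 h/month
print(f"serverless ~${serverless:.2f} vs always-on server ~${always_on:.2f}")
```

The flip side, as the next paragraph notes, is that the same model can become far more expensive than a flat-rate server under heavy, sustained traffic, which is why limits and budgets matter.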
While the benefits are tempting, organizations must spend time planning and ensuring that their products are the right fit for a given cloud service; otherwise they run the risk of paying far more. It’s important to implement limiters, monitoring, alerts and budget configurations to keep those risks to a minimum. Architecting for the cloud is a huge undertaking: it forces you down to the design level of your applications to understand which services they really need, and it takes real commitment and a highly skilled team that can adopt a culture of cost-saving.
About the Author
Juan is living the DevOps life, and loves nothing more than helping others get to a place where they can do the same. As a solutions and cloud architect, software developer, entrepreneur and sysadmin, he understands what DevOps takes from a variety of perspectives. In his role at PSL, he works with different international clients to highlight the versatility and adaptability of a DevOps culture and how they can achieve it, no matter the circumstances.