Everything’s got a price.
Iosef Tarasov, the Russian mobster from John Wick, rightly said, “Everything’s got a price.” The same applies to cloud services. Every cloud provider’s console offers a palette of services with which they paint rainbows. But there is always a price for it. As a Cloud Practitioner / SRE / Infrastructure Consultant / DevOps Engineer / Whatever Your Job Title Is, you will always be drawn to the reduced operational overhead that cloud-managed solutions promise.
At this point, I would like to clarify that I am not advocating against cloud providers. The IT industry did a great job switching from on-premise infrastructure to the cloud, and that move has certainly reduced costs for companies. But now is no time to stop. Be consistently frugal; frugality is the parent of innovation. So below are the dos and don’ts of saving money on the cloud.
Someone else is paying
The typical boyfriend/girlfriend culture is to buy materialistic gifts for each other with their parents’ money. Gradually, as they grow up and start earning for themselves, they tend to avoid such gestures to express love. You should act mature too, and treat cloud costs as if they were coming out of your own pocket. Usually, we accept a solution while neglecting the infra/service costs involved, because we think it is the client/company who is going to pay, and they must have already thought about it or will think about it later. And not to mention, it will be a drop in the ocean. But you should be aware that every drop counts. Slowly and steadily, the costs keep piling up, and eventually a new stream of work arrives: “reduce the infra cost”. Cloud providers usually bill postpaid, not prepaid, much like credit cards. While using a credit card, you should anticipate what you will owe rather than be surprised by the bill at the end of the month. So, just as you look at the price column before ordering from a menu, do the same with cloud services.
Local vegetable sellers often throw in coriander leaves/chilies/curry leaves as freebies. They do so to make sure you buy vegetables from them regularly. It might well be that they are selling onions/potatoes at a slightly higher price, but you don’t see anything beyond the greenery. You end up paying more, just to get the freebies. Likewise, cloud providers often say that they are not charging anything for this or that, but once you are inside their ecosystem, you end up paying more in total. Let’s understand this by comparing Google Cloud Functions and AWS Lambda invocation pricing.
So you can see, for 2 million requests Google Cloud Functions was cheaper. But from around 3 million requests onwards, AWS Lambda wins.
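As a rough sketch of that crossover, here is the request-pricing arithmetic. The rates below are assumptions based on the published per-invocation pricing at the time of writing (Google Cloud Functions: first 2M invocations free, then $0.40 per million; AWS Lambda: first 1M requests free, then $0.20 per million); both providers also bill for compute time, which is ignored here.

```python
# Request-only cost in dollars, for a given number of invocations in millions.
# Free-tier and per-million rates are assumptions; check the current price pages.

def gcf_request_cost(millions: float) -> float:
    # First 2M invocations free, then $0.40 per million.
    return max(0.0, millions - 2) * 0.40

def lambda_request_cost(millions: float) -> float:
    # First 1M requests free, then $0.20 per million.
    return max(0.0, millions - 1) * 0.20

for m in (2, 3, 5, 10):
    print(f"{m}M requests: GCF ${gcf_request_cost(m):.2f}, "
          f"Lambda ${lambda_request_cost(m):.2f}")
```

At 2M requests Cloud Functions is free while Lambda already charges; by 5M requests Lambda comes out cheaper, which is the freebie effect in action.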
Our resources are our resources, none of your resources
Multiple teams in an organization will be using cloud services. Intuitively, the approach would be to not share any resources, to avoid stepping on each other’s toes. But this can cost more. For example, multiple teams may create their own VPCs, and within those VPCs, their own private/public subnets and NAT gateways. Good that you created separate subnets. But creating multiple NAT gateways is a costly choice; sharing NAT gateways across teams would reduce the cost drastically. There are many resources which, when shared, reduce cost without causing any disruption for the teams. Rather than creating separate DB instances, one could create multiple databases inside a single DB instance.
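A back-of-the-envelope calculation makes the NAT gateway point concrete. The hourly rate below is an assumption in the ballpark of typical us-east-1 pricing (~$0.045/hour per gateway), and data-processing charges are ignored for simplicity.

```python
# One NAT gateway per team vs. one shared gateway: hourly charges only.
HOURLY_RATE = 0.045      # assumed per-gateway hourly rate
HOURS_PER_MONTH = 730
teams = 5

own_gateways = teams * HOURLY_RATE * HOURS_PER_MONTH   # each team runs its own
shared_gateway = HOURLY_RATE * HOURS_PER_MONTH         # one gateway shared by all

print(f"{teams} separate gateways: ${own_gateways:.2f}/month")
print(f"1 shared gateway:    ${shared_gateway:.2f}/month")
```

With five teams, that is roughly $164/month versus $33/month before a single byte of data is processed, and the gap grows linearly with the number of teams.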
Take the shorter route
It takes more fuel and more time if you take a longer route to the office. Similarly, while a cloud-managed CI build agent saves you maintenance effort, it costs $X per minute, so there is an added incentive to optimize the time it takes to run your tests. You save not only time but also money. The per-minute rate for a Linux runner on GitHub is $0.008, and GitHub Free comes with 2,000 free build minutes per month. The equivalent for AWS CodeBuild on a general1.small instance is $0.005 per minute, with 100 free minutes per month. So how do you reduce your build times? Exploit the CI runner hardware. With GitHub, you get a 2-core CPU and 7 GB of memory.
- Write code that uses multiple cores through multi-threading (first, I will have to get better at writing multi-threaded code).
- Remove obsolete steps in CI. For example, delete tests that don't test anything.
- Optimize your build tools. For example, keep the Docker build context light.
- Use caching for build dependencies. Refer: AWS, GitHub.
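The first point above can be sketched in a few lines. This is a minimal, hypothetical example of sharding CPU-bound work (think: splitting a test suite) across the runner’s cores with Python’s standard library; `run_shard` is a stand-in for whatever actually runs a shard of your tests.

```python
# Spread CPU-bound shards across cores. GitHub-hosted Linux runners expose
# 2 cores, so a 2-way split is roughly the best case there.
from concurrent.futures import ProcessPoolExecutor

def run_shard(shard: list) -> int:
    # Stand-in for running one shard of the test suite; returns a
    # pretend failure count (multiples of 7 "fail" in this toy example).
    return sum(1 for case in shard if case % 7 == 0)

shards = [list(range(0, 50)), list(range(50, 100))]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        failures = sum(pool.map(run_shard, shards))
    print(f"failed: {failures}")
```

Most test frameworks have an off-the-shelf equivalent (e.g. pytest’s `-n` option via the pytest-xdist plugin), which is usually the easier route than hand-rolling the sharding.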
Buy in bulk
No brainer: the per-kg rate for a 10 kg bag of flour is less than for a 5 kg one. If you know you will consume 10 kg of flour in a month, go for the 10 kg packet; it will cost you less than two 5 kg packets. Likewise, it is cheaper to go for AWS Reserved Instances rather than On-Demand. You know the drill! First, make sure you can estimate your infra requirements upfront. Then look for bulk-buy discounts.
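To put a rough number on it, here is the reserved-vs-on-demand arithmetic. The rates are illustrative assumptions in the right ballpark for a small instance type: ~$0.0416/hour on-demand versus ~$0.026/hour effective for a 1-year, no-upfront reserved instance.

```python
# Annual cost of one always-on instance: on-demand vs. reserved.
# Both hourly rates are assumptions; check the current pricing pages.
ON_DEMAND_HOURLY = 0.0416
RESERVED_HOURLY = 0.026
HOURS_PER_YEAR = 8760

on_demand = ON_DEMAND_HOURLY * HOURS_PER_YEAR
reserved = RESERVED_HOURLY * HOURS_PER_YEAR
savings_pct = (1 - reserved / on_demand) * 100

print(f"on-demand: ${on_demand:.0f}/yr, reserved: ${reserved:.0f}/yr "
      f"({savings_pct:.0f}% saved)")
```

At those assumed rates the reserved instance saves around a third of the annual bill, which is why the upfront capacity estimate is worth the effort.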
Setup Billing Alerts
As soon as you pay somewhere with your credit/debit card, you get an alert on your phone. You might think it is futile, but it becomes a savior when an unintended transaction shows up: you quickly call the bank and inquire about the fraudulent transaction. The same applies to the cloud. You can set up billing alerts so that as soon as your estimated bill goes above your budget, you get an email alert. For my personal AWS account, I have set it to raise an alert as soon as the bill goes above $0. This way I am sure I am not going to be charged a single buck. By the way, this is the first and foremost thing to set up before running anything on the cloud. Better safe than sorry.
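As a concrete sketch, on AWS such an alert can be driven by a budget definition like the one below (the budget name and amount are placeholders; the field names follow the AWS Budgets API):

```json
{
  "BudgetName": "monthly-cost-guardrail",
  "BudgetLimit": { "Amount": "5", "Unit": "USD" },
  "BudgetType": "COST",
  "TimeUnit": "MONTHLY"
}
```

Saved as `budget.json`, this can be passed to `aws budgets create-budget --account-id <your-account-id> --budget file://budget.json --notifications-with-subscribers file://notifications.json`, where the notifications file lists the threshold and the email subscribers to alert. The same thing can also be clicked together in the Billing console.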
Enable Audit logs
While borrowing a book from a library, as well as while returning it, an entry has to be made somewhere, either on your library card or in a register. In the same way, resources in the cloud are like shared books: one needs to keep track of who created/deleted/updated what, and when. This helps you trace any anomalies in your cloud resource usage. If there is a drastic increase in DB costs, one can look up the audit logs to track who is creating so many DB resources and why. AWS CloudTrail enables you to log your AWS account activity.
Switch off after use
Just like the power discoms suggest, do not keep lights and fans running without need. Make sure you turn off/terminate resources that are no longer needed. Mostly this happens when someone has to run a spike: they provision some resources, run their spike, and even get the outcome, but the resources are kept around for demo purposes. And such resources just keep on lingering. But no issues, just nuke 🚀 them!!!
Also, you can have an AWS Lambda function run on a cron schedule that turns off your instances during weekends and off-hours. You can set different schedules for different environments. For example, run DEV/QA instances only during weekdays, 9–5. Run UAT/STAGING only when a release is scheduled. Keep PROD instances running 24/7.
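The scheduling decision such a Lambda would make can be sketched as a small pure function. The environment names and hours below are assumptions matching the example above; in a real handler you would wire the result up to `start_instances`/`stop_instances` on a boto3 EC2 client.

```python
# Decide whether instances for a given environment should be up right now.
from datetime import datetime

def should_run(env: str, now: datetime) -> bool:
    weekday = now.weekday() < 5           # Monday-Friday
    office_hours = 9 <= now.hour < 17     # 9-5
    if env in ("DEV", "QA"):
        return weekday and office_hours
    if env in ("UAT", "STAGING"):
        return False                      # started manually only when a release is on
    return True                           # PROD stays up 24/7

# Example: Saturday midday, only PROD should be running.
saturday_noon = datetime(2024, 6, 1, 12, 0)   # 2024-06-01 is a Saturday
print([env for env in ("DEV", "QA", "UAT", "PROD")
       if should_run(env, saturday_noon)])
```

Triggering this from an Amazon EventBridge cron rule every 15 minutes or so, and tagging instances with their environment, gives you the whole switch-off-after-use routine for free minutes of effort.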