Thanks to our awesome community, I'm very excited to announce that you can now use Infracost to get cloud cost estimates for Microsoft Azure in pull requests. Try it now, it's free and open source!
Cloud costs for engineering teams
Cloud costs have become so complex that the industry (ourselves included) has resorted to addressing breached budgets after the bill arrives. This is not the way it should be. It is like going shopping with no idea how much things cost until after your card has been charged.
We are on a mission to empower engineering teams to use cloud infrastructure economically and efficiently. We do this by fitting into the developer workflow via CI/CD integrations: we read the infrastructure-as-code project, pick up the parameters that have a price point, look up the prices for those configurations, and leave a comment in the pull request, like "This change will increase your cloud costs by 25%", with a detailed breakdown. This way, the whole team is aware of the cost implications of the change.
Today we are announcing that in addition to AWS and Google Cloud Platform, we have added support for Microsoft Azure. Not only has Azure support been requested by over 60 of our community members (https://github.com/infracost/infracost/issues/64), but we have seen a lot of growth from Microsoft in terms of the number of resources offered and enterprise adoption.
We have added support for over 65 Azure resources (and another 70 resources which are free), with many more planned. Visit our GitHub issues page and put a thumbs up on the resources you'd like covered and we will prioritize them.
But there is one more thing! We have also added support for Microsoft Azure DevOps Pipelines. This is in addition to our current supported CI/CD integrations such as GitHub Actions, GitLab CI, CircleCI, Bitbucket Pipelines, Atlantis and Jenkins.
Get started! We have made it super simple to get up and running:
1. brew install infracost # (docker, windows etc options available)
2. infracost register
3. az login # To set cloud creds, see note-1
4. infracost breakdown --path . # Run in your Terraform directory. We also have an example Azure Terraform file you can use to try it out.
I sat down with David Julia, who is the Head of Engineering at Eagle.io to talk about cost attribution, why it matters, who should care and how Eagle.io achieves this. We worked through their use-case, their tech stack, the tools they use, what worked and what did not work, and ultimately how they have achieved cloud cost attribution.
The following is a lightly edited transcript (from YouTube) of the introduction section of our chat:
Ali: Hey everyone, I have David Julia here who is the head of engineering at Eagle.io. David thank you very much for taking time to speak with me today. Would you like to introduce yourself?
David: Yeah absolutely. So, David Julia, head of engineering at Eagle.io. We do environmental IoT; we're an environmental IoT data platform, essentially. You connect up all sorts of devices to us. We're used in water monitoring, natural resource monitoring and various other things. We're used to monitor the cracks in Mount Rushmore, for example: how Mount Rushmore is splitting, how that's going, and whether they need to remediate anything. So a broad variety of use cases, with all these data loggers and sensors connected to the platform, and analytics and processing logic on top of that. A fun gig for me. I just started a couple of months ago; before that I was in software engineering consulting for a long time, about nine years at Pivotal Labs and then VMware.
There's a fun little project we're working on now, and I had the chance to jump into some cost attribution stuff as part of it, hence the conversation today.
Ali: Which cloud providers are you using and what is the setup?
David: We're a multi-tenant SaaS platform. The whole value proposition of Eagle.io is that, unlike AWS IoT Core or Azure IoT, which we think of as primitive building blocks (if you want to build your own solution, cool, go nuts with them), we're much higher up the stack. Essentially, engineering firms, big municipal water companies, state water companies and mining companies don't really want to mess around with all this code and IoT Core and all of the different stuff that's out there, which is really cool admittedly, but it's not core to their business. So they just hook up to us, and very quickly they're able to get data in, analyze it and alert on it. If you're monitoring a dam, for example, you want to know if the dam wall is failing; you don't really care about configuring microservices or anything like that. So we're a multi-tenant SaaS application: lots and lots of different users, but a lot of shared resources as well. That's kind of the layout. We're mostly AWS.
Ali: Can you tell us then in this context, what do you mean by cost attribution and why do cloud costs matter as part of that?
David: So why cost attribution? Well, it's pretty fundamental, especially when you're building a data-intensive application like ours, but also in other circumstances. You really just want to know: 1) can I guarantee that I'm going to make a good gross margin, and 2) if I add more customers, am I going to be adding more profit or not? Is it going to impact my gross margin positively or negatively? If that's a very big question mark, that's something to be nervous about, and that's actually the situation we found ourselves in. We were negotiating with a particular enterprise customer who wanted to more than 10x their usage of the platform, and they said: well, you've got to give us a discount for this to be financially feasible; we want to do this deal with you guys, but you need to make it a number of dollars we can actually do. We said, well, we should be able to do this, but our current pricing model doesn't scale for them. And if we give them too deep a discount, are we going to lose money on the thing? We didn't know: what are our costs, what's their usage, what's even driving our costs? So that's when we really sat down and said, okay, we need to figure this out.
There are two main reasons why people star GitHub projects:
Bookmarks: some people star GitHub repos to bookmark them for later use. For example, I can see the repos I've starred [1] and search within them for a keyword, sort them by how recently I starred them, or by how active the project has been recently.
Show support or appreciation: others star repos to show support or appreciation, similar to how "likes" are used on social media sites. This is a social signal, and it's very important in the very early stages of open source projects, acting as a feedback loop for project creators. Knowing that other people have seen the project and cared enough to click the Star button can motivate the creators to keep working on the project in those initial stages.
The latter is why I personally star projects. Regardless of whether I've used the project in the past, am using it just now, plan to use it, or just think it's a cool idea, I want the project creator to know that I like what they're doing. Terraform and Pulumi are projects that I recently starred to show support.
The main benefit of repo stars is creating confidence and a good first impression of the project. That in turn helps the project get users, and to a lesser extent contributors.
A 2018 academic research survey of over 700 developers found that "three out of four developers consider the number of stars before using or contributing to GitHub projects" [2]. GitHub stars are not the only metric that matters though. A project's activity level, for example its last release or commit, and its ease of use, for example the quality of its documentation, are also important factors in helping projects get users.
I say "to a lesser extent" as contributing, by creating a GitHub issue or submitting a pull request, requires significantly more effort than starring a repo. People who only star a repo are probably not yet active community members, but they might become active in the future. This is why the Orbit Model classifies them as Observers [3], as they can act as the top-of-funnel for growing users and contributors.
In addition to helping projects get users, GitHub stars can help the project creators meet investors who are familiar with open source. Early on in Infracost's journey, we were surprised to get cold emails from VCs congratulating us on our star count. After speaking with a few, it became clear that they either had systems in place to monitor stars [4], or had analysts who reviewed Trending Repos on GitHub for potential investment opportunities [5]. Some have gone even further. For example, the VC firm Runa Capital, who invested in Nginx and MariaDB, has started to track the fastest growing open source startups using GitHub stars and forks. Infracost was recently placed 5th on the ROSS Index [6].
A16Z's Martin Casado thinks that there is a big trend towards bottom-up strategies in business-to-business (B2B) software that will shape the entire B2B landscape in the next 10 years [7]. I wonder if, in the same way that social media influencers are changing how products are marketed and sold, GitHub influencers (someone with many GitHub followers) will change how enterprise software is marketed and sold? Developer Advocates are currently using Twitter and LinkedIn, but GitHub has a "follow" and a "status update" feature too. Will those remain as a simple way to get updates on code-related activities? Or could they be extended to enable GitHub influencers to post their demos, talks and blogs into the GitHub activity feed? Will companies be able to buy ads on GitHub and promote their open source projects?
Over to you - what have you learnt about GitHub stars, and how do you think they'll change in the future? I hang out on Twitter...
You can now do what-if analysis on AWS EC2 Reserved Instances (RIs), as we have added support for these in the Infracost usage file. The RI type, term and payment option can be defined as shown below to quickly get a monthly cost estimate. This works with aws_instance as well as aws_eks_node_group and aws_autoscaling_group, as they also create EC2 instances. Let us know how you'd like Infracost to show the upfront costs by creating a GitHub issue.
operating_system: linux # Override the operating system of the instance, can be: linux, windows, suse, rhel.
reserved_instance_type: standard # Offering class for Reserved Instances. Can be: convertible, standard.
reserved_instance_term: 1_year # Term for Reserved Instances. Can be: 1_year, 3_year.
reserved_instance_payment_option: all_upfront # Payment option for Reserved Instances. Can be: no_upfront, partial_upfront, all_upfront.
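For context, these parameters sit under the instance's resource path in the usage file. Here's a minimal sketch (the resource name is hypothetical; check our docs for the exact top-level keys):
version: 0.1
resource_usage:
  aws_instance.my_web_app:
    operating_system: linux
    reserved_instance_type: standard
    reserved_instance_term: 1_year
    reserved_instance_payment_option: all_upfront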
Our new Jenkins integration enables you to save an HTML page for each pipeline build, which shows the Infracost diff output. Check out this demo that uses Jenkins' Docker agent to run Infracost; the Jenkinsfile can be customized based on your requirements. The integration can also be used to fail a build if its cost estimate crosses a percentage threshold. This safety net is often used to ensure no one breaks the bank 😃
The infracost breakdown and infracost output commands show the monthly quantity, units, and monthly cost of resources by default. You can now use the new --fields flag to customize the columns shown in the table output to include price and hourly cost, or you can set it to only show the monthly cost if you prefer a simplified view (shown below). The HTML output format is being updated to support the same feature. The JSON output format will always include all fields.
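For example, the simplified view can be produced with something like the following (valid field names are listed in infracost breakdown --help, so double-check them there):
infracost breakdown --path . --fields monthlyCost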
The Infracost usage file enables you to define resource usage estimates using their resource path, e.g. storage for aws_dynamodb_table.my_table. This can be cumbersome for resource arrays, such as AWS CloudWatch Log Groups, since you'd have to define the array items individually.
We've addressed this issue by supporting the wildcard character [*] for resource arrays. Infracost applies the usage values individually to each element of the array (they all get the same values). If both a specific array element and [*] are specified for a resource, only the array element's usage is applied to that resource. This enables you to define default values using [*] and override specific elements using their index, as sketched below.
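A minimal sketch of a usage file using the wildcard (the resource name and usage values are hypothetical):
version: 0.1
resource_usage:
  aws_cloudwatch_log_group.my_logs[*]:
    storage_gb: 100
    monthly_data_ingested_gb: 50
  aws_cloudwatch_log_group.my_logs[0]:
    monthly_data_ingested_gb: 250 # Overrides the [*] default for the first element.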
Last week David Nunez (Documentation Manager at Stripe) and Stephanie Blotner (Technical Writer, Manager at Uber) sat down with me to review our docs. We also discussed feature-based vs task-based docs and how early-stage startups should think about technical documentation as a key part of their product.
David and Stephanie have around 20 years of experience between them, with a special focus on writing for a developer audience. I learned a lot from this session; I'm sharing the recording and transcripts here as I think it contains great advice that applies to other startups too.
The following is a lightly edited transcript of our meeting:
Ali: So today I've got David from Stripe and Stephanie from Uber. Maybe we can do some quick intros, then we're going to have a look at the Infracost docs, and I've got some questions I'm going to ask them. We'll focus on how early-stage startups should look at their docs as a function. David and Stephanie, thank you for taking time to meet with me!
Stephanie: Hi! I'm Stephanie Blotner, Technical Writer and Manager at Uber. I mainly write for a developer audience and just really love helping people learn through documentation.
David: I manage the docs team at Stripe and similarly have always obsessed over documentation. I mean, it's 8am, we're talking about docs, and I'm excited about it! My favorite thing is talking to startups about docs as a) they'll avoid a lot of problems later on if they think about docs early and b) it's fresh clay to mold. It's a lot easier than helping a big company undo years and years of decisions.
Ali: I'm going to share my screen [on the docs homepage], and maybe I'll just let you comment on it.
David: I like docs to be approachable. Even before you read or start to understand what the thing is, users coming to the docs still have the same subconscious tendencies as they did on the website: how pretty is it, what's the layout, is it simple. So the things that I like: the colors are easy on the eye, it has the branding, it's not "here's some auto-generated docs" that looks like another terminal. There's some good negative space. Layout-wise you could do a better job of directing people to the main pieces, almost like your main site, where you show what the product does in action, with some space, so the docs should play a similar complementary role. More of a docs home page than a getting started page. Show the top 3 things that you do really well, above the fold, and also a 1-line description. Don't try to do too much upfront, like install it, try it, pay us... just getting them to scroll down and browse is better.
Stephanie: Definitely agree that having a home page is important, so focus on one or two key paths that you want to funnel users through, for example highlighting the different CI/CD integrations. When it comes to the actual getting started page, I really like how concise and clean most of your documentation actually is, and that comes from you using headings, tables and lists, and not just a wall of text. In the installation section I really like having headings that are each a task the user should do. Moving onto the Usage section, I think you can follow the same model, where it's more prescriptive and tells people what to do. When I read it for the first time, it wasn't clear that this was the part telling me how to use Infracost with my project. I wish it was a little bit more bossy and told me: do this, do this. As a user that makes me feel more grounded and clearer about what's expected of me.
Ali: So what would success look like when they've reached the bottom of the page? If you're observing a few people looking at your docs, how do you know it's working? Would you want them to explore? Focus on one specific task so they get some value immediately?
David: To take a step back, look at the GitLab docs. I watched your video with Sid, so I thought, hah, maybe GitLab has good docs, and they do! If you scroll down, this covers the use-case and layout perfectly. You might not have 6 use-cases, you might have 1 or 3, but here they're showing some graphics and a 1-line use-case description, so users can start to map out whether this product is for them. They're drawing you in, and they're not asking you to do anything with this page. So to answer your question of what success looks like, that is a question for your company to answer. What do you want users to do? Is it really important for them to install it, or do you want them to do their first PR and see the product in action? Coz really you want the getting started page to focus on brand new users; expert users will use other parts of the docs.
Ali: So during demos, people smile when they see the pull request comment. And so far I've done the CTA very poorly.
David: Yeah, it's buried. Have you heard of any eye-tracking studies with docs? We're nerding out now, but Stephanie and I talk about this a lot. There are a lot of studies that show people read docs using an F pattern: they'll read the heading across, then scan down, see something that catches their eye, read again, then keep scanning and scanning. So you can imagine, if this is what you want the hook to be (the CTA to use CI/CD), it's very buried. Users might not even scroll, so you want it very early, and maybe re-use the same graphic from your home page so users notice it and are excited to go install it. What's the demo link on the CI/CD integrations page? The word Demo is eye candy for developers. That's what I'd highlight too.
Stephanie: I think the point about CI/CD being buried a bit raises an interesting point: when someone comes to a document and it's long or requires them to scroll, it's really helpful to set expectations upfront, like "by the end of reading this page you will have your own pull request that looks like this", and then have that cool screenshot or a link to the demo.
David: So you asked earlier, what does success look like? Constant user research is the only way to measure that. Have your high-level goals, like the MAUs (Monthly Active Users) that you pitch to investors, then do user interviews regularly. At Stripe we do that weekly, with all types of users, and it doesn't really need to be formal; offer a $10 gift card to new users. Do it every time you make a change, or set aside a regular weekly or monthly time for it. The development time it saves you pays back exponentially. That's how you can shape the docs over time and see if a page is working. So I would get a fresh user, ask them to open this page, see what they think, ask them to be honest, and let them talk through those first impressions. Then give them a task, so you can see what they do. It's easy to develop docs in a vacuum: these are my tools, my environment and my terminal, but users might have different setups.
Stephanie: The docs are actually part of the product. That's a key point.
David: It's often easier to do that vs product research, as it's just a document. And you can do it with sample docs, without even having a built product or working code. So docs are a great way to test upcoming things, they can simulate what they would do, so you can take that feedback and incorporate it into the product. We do that all the time and it helps us a lot.
Stephanie: It's also a great way to test out eye patterns, so if you share a link with someone and are watching their screen, you can immediately tell how they scroll, do they go all the way to the bottom, are they trying to click on stuff, are they lingering.
Ali: So far we have product docs, we don't have guides or use-cases. How do you think about that? Should we have both and split the focus?
Stephanie: It's really challenging when you're launching docs for the first time to know what to choose and what to write first. It's very cognitively intuitive to put up feature docs, and I'm super biased here, but I think task-based docs are the way to go, to really guide users towards what they're trying to accomplish. Users come to your docs because they're trying to learn something, they're trying to accomplish a task or goal, or they're really frustrated: something is broken and it's not working. The more you can guide them and reduce their cognitive overload, the better off you are. But it's definitely more work on the creator side, as you really have to understand who your users are and what they're trying to do.
David: I'm going to use stronger language and say that's the only way to do it properly, but it's also by far the hardest way to do it. A simpler way is to get your language and terminology mapped to how users talk about it. It's really easy to go with the awesome names you came up with for your features, but users are searching Google for how to save money on AWS; they're not searching for "use Infracost 2.0 cloud-saver". Write down 3 use-cases that you want Infracost to show up for on Google, and put those on your pages front and center. Then you can map out the rest of the docs. Information-architecture-wise, you should always think about users' problems and the tasks they'd need to do to solve them, and work backwards from that.
Ali: So for this next part, I thought it would be cool if we could zone out a bit and ignore Infracost docs, to talk about more generic questions. To begin, when do you think seed-stage startups should start to think about docs, and maybe more importantly what is important for them to consider?
Stephanie: I think you should consider docs as part of the product. The same care, consideration and thought you put into the product should go into docs. Docs are an incredible tool to improve adoption, reduce onboarding and support time, and build brand reputation; overall wonderful-ness. Don't worry too much about being comprehensive and trying to document all the things, as you haven't figured everything out yet. Maybe cover the use-cases, what this thing is, and a little about troubleshooting. People actually prefer less content, and gather feedback really early on docs.
David: Agreed, it's part of the product; you don't want to say do it first or do it last. If you think of docs as a way to increase adoption, or to reduce the time it takes for someone to start using your product versus a salesperson reaching out to them and them trialing it over a period of time, you're making a lot more money efficiently. It's better when a user feels smart and can figure out your product and use it without any human intervention. The user-acquisition costs and the user experience improve, so what else do you need?
Ali: In open source, contributors can come from a variety of experience levels and from all over the world. Should the core contributors own the docs?
Stephanie: Have the core team set up the docs, have principles for how you want the docs to be, and review contributions. One way I've seen it done really well is similar to how Infracost leaves a comment on PRs: you can get people to read a checklist, and have docs as one of the core items as they're making a pull request.
Ali: When you're submitting a PR, it's accepted that it obviously needs tests. Do you feel like it's the same with docs? Does it create a barrier? Coz in the early stages of an open source project you kinda want everyone to come check it out and contribute, so the discussions we've had with Alistair are sometimes like: let's just get them to contribute code and we'll write all the docs. That reduces the barriers, but I don't think it scales.
David: that definitely won't scale.
Stephanie: You want the core team to set up the docs framework and have standards for docs, so if you have tasks or use-cases, people can see how to document them. That way you're reducing the barrier, so people can just follow it to plug things in.
David: I think that's the right way to think about it. You don't want a high barrier, but you do want to document yourself out of those tasks. Find a lightweight style guide; the Microsoft Writing Style Guide is great, so use something like that and add a few things that are specific to Infracost. You don't want a ton of stuff. Also PR guidelines: what does a good PR review look like? Show by example, so have examples where you are giving people feedback on the docs, and put that in the runbook for a good PR review. The balance is that you want lots of contributions, but you also want quality contributions. So show the community how excited you are about good docs via contributions, or have a leaderboard or some way to call out good docs contributions. People don't usually contribute to open source docs as they think no one really cares. I've also found that in engineering teams, putting an incentive into the engineering career ladder helps; a simple thing is to say that part of being a senior engineer is documenting your code and reviewing other docs as part of the peer-review process. Setting the example from the founding team is important. I've been at companies where the executives don't even use grammar in their emails, and that sends a signal: I'm too busy to capitalize my words or use periods and commas. So why would anyone else obsess over the quality of their writing when the leadership doesn't? If you signal without being overbearing, people will see that and follow along.
Ali: That's awesome! In your recent separate blog posts I was reading about the importance of having a writing culture, and I was going to ask when startup founders should start thinking about that as part of the bigger company culture, but you've already answered that question. It's like: you set the tone as leaders of the company, and you can do a lot early on to encourage it and make it part of the process, versus something that's not really thought of from day 1. I don't have any other questions! Thank you very much. I don't know if you have any other tips, or any final thing I should go look at or think about?
David: I'd say resource-wise, Nielsen Norman Group is great. If you want to learn about F-patterns or information architecture, all the nitty-gritty, their search is pretty good; you'll find stuff dating back 20 or 25 years, up until recently when they have videos. So when you want to dive deep into something, that resource is great. My last tip would be: if you set the goal of having high-quality content and you know it's important, just know that you're not going to get there on day 1. Take little steps so you move in that direction, creating that habit and moving towards that goal. You'll then sometimes have huge breakthroughs and make big leaps, as long as you constantly keep working on it.
Stephanie: Exactly, perfect is the enemy of good, so don't put too much pressure on people, especially if it's their first time contributing to your code base and documentation; you don't want to be too hard on them.
Ali: That's awesome! Thank you both very much, David and Stephanie for your time. I'm going to put a lot of the stuff you said into practice and hopefully in 6 months time when you look at the Infracost docs, it'll be a lot better.
It's estimated that data centers consume around 1% of the global electricity supply [1], and this could increase to between 3% and 13% by 2030 [2]. The wider ICT (information and communications technology) ecosystem accounts for 2% of the world's carbon emissions, putting it on par with the entire aviation industry [3].
The cloud has revolutionized how we use data centers. With the democratisation of infrastructure provisioning, companies have been able to launch, iterate and scale many times faster than before.
There's no doubt that migrating from traditional on-premise data centers to the cloud is more environmentally friendly. Due to their scale and multi-tenancy, cloud providers can be a lot more efficient. By switching to the cloud, companies can provision 75% fewer servers for the same workload [2]. Between 2010 and 2018 the total global compute power of data centers increased five-fold. However, due to the energy efficiency improvements offered by the cloud, the amount of energy they consumed only grew by 6% [3].
The large cloud providers themselves all have sustainability initiatives. Google recently used their DeepMind AI to reduce the energy required for cooling their data centers by 40% [4]. In December 2020 Amazon announced they had become the world's largest corporate purchaser of renewable energy [5].
Despite these encouraging efforts we are seeing a growing problem of "cloud waste". In 2020 Infrastructure as a Service (IaaS) spend was $50bn, and it's estimated that $17bn of this spend is "cloud waste" [6]. The main causes of this are idle resources and over-provisioned resources.
With the rise of infrastructure as code, deploying to the cloud has become even more accessible. It's no longer only central teams who provision infrastructure; it's now the responsibility of every engineering team, and it's integrated directly into their workflows. Without visibility at this level, it's only a matter of time before a company suffers from cloud sprawl.
A highly requested feature was the ability to see the difference in cost between the current state and the planned state of Terraform projects in the CLI (we already have this feature in CI/CD integrations). Check it out by running infracost diff --help. We have also updated the CI output to make it easier to read.
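For example, a quick sketch of running it in a Terraform directory:
infracost diff --path . # Shows the monthly cost diff between current and planned state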
Usage-based resources, such as AWS Lambda or Google Cloud Storage, require estimated usage data so Infracost can show costs in the output. You can define these in a YAML file, called a usage file, and use that to get cost estimates for such resources.
Previously you had to create this file manually. You can now use the --sync-usage-file option to generate a new usage file or update an existing one from your Terraform project. This option is a safe sync: it adds any missing resources (with zeros for the usage estimates), it does not overwrite any lines that you have changed in the YAML, and it deletes any resources that are not used in the Terraform project.
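A sketch of generating and syncing the usage file (the file name is just a convention):
infracost breakdown --path . --sync-usage-file --usage-file infracost-usage.yml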
Inputs: a new --path flag has been introduced to replace the various methods of running Infracost. You can now simply point Infracost at the path of a Terraform directory, plan binary file, or plan JSON file and it'll just work. This lays some of the groundwork for supporting other IaC tools in the future.
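For example (the plan file names are hypothetical):
infracost breakdown --path /code/my-terraform-dir # Terraform directory
infracost breakdown --path plan.binary # saved Terraform plan binary
infracost breakdown --path plan.json # Terraform plan JSON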
Outputs: the dashes (-) in the output have been replaced with price descriptions such as Cost depends on usage: $0.20 per 1M requests so you can understand the pricing structure of usage-based resources such as AWS Lambda or Google Cloud Storage.
Config file: the config file has been updated to support infra-as-code repos that have multiple workspaces and projects. This enables you to combine projects into the same breakdown or diff output. So if a Terraform module or variable is used across workspaces/projects, you can quickly see the cost impact of changing it.
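A sketch of a config file covering two projects (treat the exact keys as illustrative and check the docs for the full schema):
version: 0.1
projects:
  - path: terraform/dev
    terraform_workspace: dev
  - path: terraform/prod
    terraform_workspace: prod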
We've updated the CI/CD integrations to add a new post_condition option so you can decide when you'd like cost comments to be shown in pull requests. Options include: always leave a cost comment, only comment when there is a change to the cost, or only comment when a percentage threshold has been reached (e.g. more than 5% increase or decrease in costs).
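As an illustration, in a GitHub Actions integration the option might be set via an environment variable like this (the exact syntax varies per integration, so check the relevant integration docs):
env:
  post_condition: '{"percentage_threshold": 5}' # Only comment when costs change by more than 5%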
We are working on adding Microsoft Azure to Infracost. This has two steps: the first is to add the prices to the Cloud Pricing API, and the second is to add the resources to the CLI. We have completed adding around 300,000 prices from Microsoft Azure to the Cloud Pricing API (step one), and now we're looking for volunteers to add resources to the CLI (step two) before the initial release. Please email email@example.com if you are an Azure user and would like to contribute (basic golang knowledge is required).
We also added support for the following cloud resources:
AWS: Elastic File System (EFS), EBS GP3 volumes, DX Connection, Route53 Health checks, RDS Serverless
Google: Memorystore Redis, Cloud Monitoring and Logging, Compute Images and Snapshots
Thanks for being part of the community! We are always looking forward to your feedback.
Recently we released a new infracost diff command, inspired by git diff. It shows a diff of monthly cloud cost estimates between the current and planned state of Terraform projects. At a high level this might seem like a simple exercise of subtracting the current state's cost estimate from the planned state's, but cloud costs are rarely that simple to deal with. Let's take a look at the following screenshot to understand some of the nuances.
The aws_instance is being changed, which reduces the cost by $125/month (from $743 to $618).
AWS EC2 has many different cost components, so to explain what caused the above change, we also flag the sub-resource that changed, ebs_block_device (the first attached block device). Underneath it, we show the cost component that caused the actual cost diff, Provisioned IOPS SSD Storage (io1); i.e. reducing the size of that volume can save $1,500/year. Those who have done this in production know it's not a one-click change, as you need to create a new EBS volume and copy over the data. What Infracost enables you to do is quickly tell how much such a change would save you, then decide if it's worth it.
A new aws_lambda_function is being added. Since we don't know how much it's going to be used, we can't show a cost estimate. But we can still show you the prices you'll be charged: $0.20 per 1M requests and a tiny amount per GB-second. This is a usage-based resource, so if you like you can create a YAML usage file to provide usage estimates and get a cost estimate, as shown below. It's hard to think in GB-seconds, so we let you input the average request duration, and we do the math to map that to GB-seconds based on the memory_size of your function and any rounding rules that AWS applies.
monthly_requests: 100000000 # Monthly number of requests.
request_duration_ms: 250 # Average duration of each request in milliseconds.
Finally, we show a summary at the bottom: the EC2 instance change reduces the cost by 17%, and you can use the above YAML file to do simple what-if analysis on the Lambda costs.
"Shift left" has become a popular buzzword for both Software Engineering and DevOps. It means introducing processes earlier in the software development cycle.
The "shift left" principle started with testing. In a traditional waterfall model testing is performed just before release. Shift left testing started it earlier by introducing practices such as Test-Driven Development (TDD) and Behaviour-Driven Development (BDD).
We are now seeing the "shift left" principle applied to other disciplines. Continuous delivery platforms allow engineering teams to deploy frequently and integrate a suite of tools into their cycle. The term DevSecOps has been coined. The idea behind it is to introduce security as early as possible in the software development cycle. This has given rise to a whole ecosystem of tools to help implement this practice. Companies like Snyk and Anchore integrate automated security scanning into DevOps workflows so teams can proactively find and fix vulnerabilities.
There's an argument that "shifting left" gives too much work and responsibility to engineering teams. This can be the case if the right tooling is not available.
"Shift left" isn't about performing one-off tasks earlier in the cycle, and this is where the name causes some confusion. It's about introducing processes and automation earlier and performing them continuously throughout development.
When a company introduces a "shift left" mentality it is important that it doesn't impact developer velocity. Tools that help here should fit into developers' workflows and show them the right level of information at the right time.
Currently, cloud costs aren't discussed until they become a problem. A common story: when cloud costs become a problem, companies set a top-down directive and form a team to reduce their spend by X%. They manage to fix the immediate pain, but after six to twelve months the problem returns.
Cloud costs aren't a one-off problem that can be solved. That's why it's inevitable that cloud costs will "shift left". Building a cost-aware engineering team is crucial to keep cloud bills under control.
That's why we built Infracost. We help engineering teams implement this culture of cost-awareness without impacting their velocity. You can integrate Infracost directly into your existing workflow to see cost information throughout your DevOps process. Check out our integrations for instructions.
The CLI now only runs terraform init if required since Terraform commands aren't the fastest in the world (init usually takes 20+ secs for me, but it depends on how many plugins you have). Furthermore, calls to the Cloud Pricing API have been switched from sequential to parallel. Infracost should run much faster than before.
Depending on your Terraform workflow, you'll run Infracost with different options. Things can get complicated when you have multiple projects in a repo, each requiring their own Terraform variables. For example, if you have two workspaces and want to see their total cost estimate, you would run something like this:
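Here's a sketch using the --terraform-workspace flag and the JSON totalMonthlyCost field (adjust the paths and project layout to your setup):
infracost breakdown --path /code --terraform-workspace dev --format json > dev.json
infracost breakdown --path /code --terraform-workspace prod --format json > prod.json
jq -s '[.[].totalMonthlyCost | tonumber] | add' dev.json prod.json # Sum the two monthly totals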
You can now get the monthly cost diff from the Infracost JSON output; e.g. the following shows that the monthly cost will increase by $1530 if the Terraform plan is applied. You can also get totalHourlyCost, or add --no-color=true --log-level=warn if you don't want the spinners/logs/color.
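As a sketch, something like this extracts it (the command format and field name may differ by CLI version, so inspect the JSON output to confirm):
infracost diff --path plan.json --format json --no-color=true --log-level=warn | jq '.diffTotalMonthlyCost'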
We also shipped support for the following cloud resources:
AWS: Config, ECS on EC2, EventBridge, Route 53 Resolver, CodeBuild
Google: Key Management Service (KMS), Google Cloud Functions
Azure: great progress is being made, stay tuned for exciting news soon
The usage file params for Google Cloud Functions are pretty cool; as shown below, you can define 3 simple params and we'll estimate the cost for you, no need for you to decode how function memory maps to GHz-seconds and rounding.
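A sketch of those params (the resource name is hypothetical, and the param names are from our usage-file reference, so double-check them there):
google_cloudfunctions_function.my_function:
  request_duration_ms: 300 # Average duration of each request in milliseconds.
  monthly_function_invocations: 10000000 # Monthly number of function invocations.
  monthly_outbound_data_gb: 100 # Monthly outbound data transfer in GB.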