April 2021 update - EC2 reserved instances and Jenkins integration!

Two big milestones to celebrate this month: Infracost now supports over 100 AWS and Google resources and we have over 100 people in our community Slack channel.

You can upgrade to the latest version (v0.8.6) to pick up the new features. If you are using v0.7 (or older), please follow the v0.8 migration guide.

📉 EC2 reserved instances#

You can now do what-if analysis on AWS EC2 Reserved Instances (RI), as we have added support for these in the Infracost usage file. The RI type, term and payment option can be defined as shown below to quickly get a monthly cost estimate. This works with aws_instance as well as aws_eks_node_group and aws_autoscaling_group, as they also create EC2 instances. Let us know how you'd like Infracost to show the upfront costs by creating a GitHub issue.

aws_instance.my_instance:
  operating_system: linux # Override the operating system of the instance, can be: linux, windows, suse, rhel.
  reserved_instance_type: standard # Offering class for Reserved Instances. Can be: convertible, standard.
  reserved_instance_term: 1_year # Term for Reserved Instances. Can be: 1_year, 3_year.
  reserved_instance_payment_option: all_upfront # Payment option for Reserved Instances. Can be: no_upfront, partial_upfront, all_upfront.
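
For example, a minimal run that picks up these estimates could look like this (adjust the paths to your own project):

infracost breakdown --path /code --usage-file infracost-usage.yml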

♾️ Jenkins integration#

Our new Jenkins integration enables you to save an HTML page for each pipeline build, which shows the Infracost diff output. Check out this demo that uses Jenkins' Docker agent to run Infracost; the Jenkinsfile can be customized based on your requirements. The integration can also be used to fail a build if its cost estimate crosses a percentage threshold. This safety net is often used to ensure no one breaks the bank 😃

Infracost Jenkins integration

⚙️ Customize output columns#

The infracost breakdown and infracost output commands show the monthly quantity, units, and monthly cost of resources by default. You can now use the new --fields flag to customize the columns shown in the table output to include price and hourly cost, or you can set it to only show the monthly cost if you prefer a simplified view (shown below). The HTML output format is being updated to support the same feature. The JSON output format will always include all fields.

Name                                                   Monthly Cost
aws_instance.web_app
├─ Instance usage (Linux/UNIX, on-demand, m5.4xlarge)       $560.64
├─ root_block_device
│  └─ Storage (general purpose SSD, gp2)                      $5.00
└─ ebs_block_device[0]
   ├─ Storage (provisioned IOPS SSD, io1)                   $125.00
   └─ Provisioned IOPS                                       $52.00
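
A simplified table like the one above could be produced with something like the following; the field identifier (monthlyCost here) is an assumption, so check infracost breakdown --help for the accepted values:

infracost breakdown --path /code --fields monthlyCost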

✳️ Array wildcards in usage file#

The Infracost usage file enables you to define resource usage estimates using their resource path, e.g. storage for aws_dynamodb_table.my_table. This can be cumbersome for resource arrays, such as AWS CloudWatch Log Groups, since you'd have to define the array items individually.

We've addressed this issue by supporting the wildcard character [*] for resource arrays. Infracost will apply the usage values individually to each element of the array (they all get the same values). If an array element (e.g. this[0]) and [*] are specified for a resource, only the array element's usage will be applied to that resource. This enables you to define default values using [*] and override specific elements using their index.

aws_cloudwatch_log_group.my_group[0]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[1]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[3]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
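
For comparison, the wildcard form described above lets you set the same values for every element of the array with a single entry:

aws_cloudwatch_log_group.my_group[*]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200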

⛅ New cloud resources#

We added support for the following cloud resources:

  • AWS: Redshift. The CPU-credit usage file params were improved for T2, T3 & T4 instances.
  • Google: Cloud SQL and Container Registry.

Thanks for being part of the community! We are always looking forward to your feedback, so please create GitHub issues here. We read every single one.

Infracost docs review with leaders from Stripe and Uber

Last week David Nunez (Documentation Manager at Stripe) and Stephanie Blotner (Technical Writer, Manager at Uber) sat down with me to review our docs. We also discussed feature-based vs task-based docs and how early-stage startups should think about technical documentation as a key part of their product.

David and Stephanie have around 20 years of experience between them, with a special focus on writing for a developer audience. I learned a lot from this session; I'm sharing the recording and transcripts here as I think it contains great advice that applies to other startups too.

The following is a lightly edited transcript of our meeting:

Ali: So today I've got David from Stripe and Stephanie from Uber. Maybe we can do some quick intros, then we're going to have a look at the Infracost docs and I've got some questions I'm going to ask them. We'll focus on how early-stage startups should look at their docs as a function. David and Stephanie, thank you for taking the time to meet with me!

Stephanie: Hi! I'm Stephanie Blotner, Technical Writer and Manager at Uber. I mainly write for a developer audience and just really love helping people learn through documentation.

David: I manage the docs team at Stripe and similarly just have always obsessed over documentation. I mean it's 8am, we're talking about docs, and I'm excited about it! My favorite thing is talking to startups about docs as a) they'll avoid a lot of problems later on if they think about docs early and b) it's fresh clay to mold. It's a lot easier than helping a big company undo years and years of decisions.

Ali: I'm going to share my screen [on the docs homepage], and maybe I'll just let you comment on it.

David: I like docs to be approachable. Even before you read or start to understand what the thing is, users coming to the docs still have the same subconscious tendencies as they did on the website: how pretty is it, what's the layout, is it simple. So the things that I like: the colors are easy on the eye, it has the branding, it's not like here's some auto-generated docs that look like another terminal. There's some good negative space. Layout-wise you can do a better job of directing people to the main pieces, almost like your main site, where you show what the product does in action and there's some space, so the docs should play a similar complementary role. More of a docs home page than a getting started page. Show the top 3 things that you do really well, above the fold, and also a 1-line description. Don't try to do too much upfront like install it, try it, pay us... just getting them to scroll down and browse is better.

Stephanie: Definitely agree that having a home page is important, so focus on one or two key paths that you want to funnel users through, for example highlighting the different CI/CD integrations. When it comes to the actual getting started page, I really like how concise and clean most of your documentation actually is, and that comes from you using headings, tables and lists, and not just a wall of text. In the installation section I really like having headings that are each like a task that the user should do. Moving on to the Usage section, I think you can follow the same model, where it's more prescriptive and tells people what to do. When I read this for the first time, it wasn't clear that this was the part telling me how to use Infracost with my project. I wish it was a little bit more bossy and told me: do this, do this. As a user that makes me feel more grounded and clearer on what's expected of me.

Ali: So what would success look like when you've reached the bottom of the page? If you're observing a few people looking at your docs, how do you know it's working? Would you want them to explore? Focus on one specific task so they get some value immediately?

David: To take a step back, look at the GitLab docs. I watched your video with Sid so I thought hah, maybe GitLab has good docs, and they do! So if you scroll down, this covers the use-case and layout perfectly. You might not have 6 use-cases, you might have 1 or 3, but here they're showing some graphics and a 1-line use-case description, so users can start to map out if this product is for them; so they're drawing you in, and they're not asking you to do anything with this page. So to answer your question of what does success look like, that is a question for your company to answer. What do you want users to do? Is it really important for them to install it, or do you want them to do their first PR and see the product in action? Coz really you want the getting started page to focus on brand new users; expert users will use other parts of the docs.

Ali: So during demos, people smile when they see the pull request comment. And so far I've done the CTA very poorly.

David: Yeah, it's buried. Have you heard of any eye-tracking studies with docs? Like, we're nerding out now but me and Stephanie talk about this a lot. There's a lot of studies that show people read docs using an F pattern, so they'll read the heading across, then scan down, and they'll see something that catches their eye, then they'll read again, then they'll keep scanning and scanning. So if this is what you want the hook to be (the CTA to use CI/CD), then it's very buried. Users might not even scroll, so you want that to be very early, and maybe re-use the same graphic from your home page so users notice it and are excited to go install it. What is the demo link on the CI/CD integrations? The word Demo is eye candy for developers. That would be what I'd highlight too.

Stephanie: I think the point about CI/CD being buried a bit raises an interesting point: when someone comes to a document, and it's long or requires them to scroll a bit, it's really helpful to set expectations upfront, like: by the end of reading this page you will be able to have your own pull request that looks like this, then have that cool screenshot or a link to the demo.

David: So you asked earlier, what does success look like? Constant user research is the only way to measure that. Have your high-level goals, like MAUs (Monthly Active Users) that you pitch to investors, then do user interviews regularly. At Stripe we do that weekly, with all types of users, and it doesn't really need to be formal. Like, offer a $10 gift card to new users. Do it every time you make a change, or have a regular weekly or monthly time you set aside for it. The development time they'll save you pays back exponentially. That's how you can shape things over time and see if a page is working. So I would get a fresh user, ask them to open this page and see what they think, ask them to be honest, and let them talk through those first impressions. Then give them a task, so you can see what they do. It's easy to develop docs in a vacuum - these are my tools, my environment and my terminal - but users might have different setups.

Stephanie: The docs are actually part of the product. That's a key point.

David: It's often easier to do that vs product research, as it's just a document. And you can do it with sample docs, without even having a built product or working code. So docs are a great way to test upcoming things, they can simulate what they would do, so you can take that feedback and incorporate it into the product. We do that all the time and it helps us a lot.

Stephanie: It's also a great way to test out eye patterns, so if you share a link with someone and are watching their screen, you can immediately tell how they scroll, do they go all the way to the bottom, are they trying to click on stuff, are they lingering.

Ali: So far we have product docs, we don't have guides or use-cases. How do you think about that? Should we have both and split the focus?

Stephanie: It's really challenging when you're launching docs for the first time to know what to choose and what to write first. It's very cognitively intuitive to put up feature docs, and I'm super biased here but I think task-based docs are the way to go, to really guide users towards what they're trying to accomplish. Users are coming to your docs because they're trying to learn something, or they're trying to accomplish a task or goal, or they're really frustrated because something is broken and it's not working. The more you can guide them and reduce their cognitive overload, the better off you are. But it's definitely more work on the creator side as you really have to understand who your users are and what they're trying to do.

David: I'm going to use stronger language and say that's the only way to do it properly, but it's also by far the hardest way to do it. A simpler way is to just get your language and terminology mapped to how users talk about it. It's really easy to go with the awesome names you came up with for your features, but users are really searching Google for how to save money with AWS. They're not searching for "use Infracost 2.0 cloud-saver". Write down 3 use-cases that you want Infracost to show up for on Google, and put those on your pages front and center. Then you can map out the rest of the docs. Information-architecture-wise, you should always think about their problems and the tasks they'd need to do to solve them, and work backwards from that.

Ali: So for this next part, I thought it would be cool if we could zoom out a bit and ignore the Infracost docs to talk about more general questions. To begin, when do you think seed-stage startups should start to think about docs, and maybe more importantly, what is important for them to consider?

Stephanie: I think you should consider docs as part of the product. The same care, consideration and thought you put into the product should go into docs. Docs are an incredible tool to improve adoption, reduce onboarding and support time, and boost brand reputation and overall wonderfulness. Don't worry too much about being comprehensive and trying to document all the things, as you haven't figured everything out yet. Maybe cover use-cases, what this thing is, maybe a little troubleshooting. People actually prefer less content. And gather feedback on docs really early.

David: Agree, it's part of the product; you don't want to say do this first or do it last. If you think of docs as a way to increase adoption, or to reduce the time it takes for someone to start using your product versus a sales person reaching out to them and them trialing it over a period of time, you're making a lot more money efficiently. It's better when a user feels smart and can figure out your product and use it without any human intervention. The costs (user acquisition) and the user experience improve, so what else do you need?

Ali: In open source, contributors can come from a variety of experiences and from all over the world. Should the core contributors own the docs?

Stephanie: Have the core team set up the docs, have principles for how you want the docs to be, and review contributions. One way I've seen it done really well is similar to how Infracost leaves a comment on PRs: you can get people to read a checklist, and have docs as one of the core items as they're making a pull request.

Ali: When you're submitting a PR, it's accepted that it obviously needs tests. Do you feel like it's the same with docs? Does it create a barrier? Coz in the early stages of an open source project, you kinda want everyone to come check it out and contribute, so the discussions we've had with Alistair are sometimes like: let's just get them to contribute code and we'll write all the docs. That way we reduce the barriers, but I don't think that scales.

David: That definitely won't scale.

Stephanie: You want the core team to set up the docs framework and have standards for docs, so if you have tasks or use-cases, people can see how to document them. And that way you're reducing the barrier, so people can just follow it to plug things in.

David: I think that's the right way to think about it. You don't want a high barrier, but you do want to document yourself out of those tasks. Find a light-weight style guide - the Microsoft Writing Style Guide is great - so use something like that and add a few things that are specific to Infracost. You don't want a ton of stuff. Also PR guidelines: what does a good PR review look like, show by example, so have examples where you are giving people feedback on the docs and put that in the runbook for a good PR review. The balance is that you want lots of contributions, but you also want quality contributions. So show the community how excited you are about good docs via contributions, or have a leaderboard or some way to call out good docs contributions. People don't usually contribute to open source docs as they think no one really cares. I've also found in engineering teams that putting an incentive in the engineering career ladder helps, like simply saying that as part of being a senior engineer you document your code and review other people's docs as part of the peer-review process. Setting the example from the founding team is important. I've been at companies where the executives don't even use proper grammar in their emails, and that sends a signal: I'm too busy to capitalize my words or use periods and commas. So why would anyone else obsess over the quality of their writing when the leadership doesn't? If you signal without being overbearing, people will see that and follow along.

Ali: That's awesome! In your recent separate blog posts I was reading about the importance of having a writing culture, and I was going to ask a question about when startup founders should start thinking about that as part of the bigger company culture; but you've already answered that question. It's like: you set the tone as leaders of the company, and you can do a lot early on to encourage it and make it part of the process versus something that's not really thought of from day 1. I don't have any other questions! Thank you very much - I don't know if you have any other tips or any other final thing I should go look at or think about?

David: I'd say resource-wise, Nielsen Norman Group is great. If you want to learn about F-patterns or information architecture, all the nitty gritty, their search is pretty good; you'll find stuff dating back 20 or 25 years, up until recently where they have videos. So I would say when you want to dive deep into something, that resource is great. My last tip would just be: if you set the goal of having high quality content and you know it's important, just know that you're not going to get there on day 1. Take little steps so you move in that direction, creating that habit and moving towards that goal. You'll then sometimes have huge breakthroughs and make big leaps, as long as you constantly keep working on it.

Stephanie: Exactly, perfect is the enemy of good, so don't put too much pressure on people, especially if it's their first time contributing to your code base and documentation - you don't want to be too hard on them.

Ali: That's awesome! Thank you both very much, David and Stephanie for your time. I'm going to put a lot of the stuff you said into practice and hopefully in 6 months time when you look at the Infracost docs, it'll be a lot better.

Impact of cloud waste on the environment

It's estimated that data centers consume around 1% of the global electricity supply [1] and this could increase to between 3% and 13% by 2030 [2]. The wider ICT (information and communications technology) ecosystem accounts for 2% of the world's carbon emissions, putting it on par with the entire aviation industry [3].

The cloud has revolutionized how we use data centers. With the democratisation of infrastructure provisioning, companies have been able to launch, iterate and scale many times faster than before.

There's no doubt that migrating from traditional on-premise data centers to the cloud is more environmentally friendly. Due to their scale and multi-tenancy, cloud providers can be a lot more efficient. By switching to the cloud, companies can provision 75% fewer servers for the same workload [2]. Between 2010 and 2018 the total global compute power of data centers increased five-fold. However, due to the energy efficiency improvements offered by the cloud, the amount of energy they consumed only grew by 6% [3].

The large cloud providers themselves all have sustainability initiatives. Google recently used their DeepMind AI to reduce the energy required for cooling their data centers by 40% [4]. In December 2020 Amazon announced they had become the world's largest corporate purchaser of renewable energy [5].

Despite these encouraging efforts, we are seeing a growing problem of "cloud waste". In 2020 Infrastructure as a Service (IaaS) spend was $50bn and it's estimated that $17bn of this spend is "cloud waste" [6]. The main causes of this are idle resources and over-provisioned resources.

With the rise of infrastructure as code, deploying to the cloud has become even more accessible. It's no longer only central teams who are provisioning infrastructure. It's now the responsibility of every engineering team, and it's integrated directly into their workflows. Without visibility at this level, it's only a matter of time before a company suffers from cloud sprawl.

We think it's important that, as engineers, we take responsibility to reduce this waste. We are looking at how we can integrate carbon emission estimates into Infracost. If you have expertise in this area, e.g. know what data sources we can use (as discussed in the GitHub issue), please reach out to us. We're interested in hearing what we can do to help.

  1. https://www.iea.org/reports/data-centres-and-data-transmission-networks
  2. https://www.mdpi.com/2078-1547/6/1/117
  3. https://www.nature.com/articles/d41586-018-06610-y
  4. https://blog.google/outreach-initiatives/environment/deepmind-ai-reduces-energy-used-for/
  5. https://press.aboutamazon.com/news-releases/news-release-details/amazon-becomes-worlds-largest-corporate-purchaser-renewable
  6. https://www.gartner.com/en/newsroom/press-releases/2019-11-13-gartner-forecasts-worldwide-public-cloud-revenue-to-grow-17-percent-in-2020

March 2021 update - new diff command and usage file automation!

March was busy as we added major new features and had Y Combinator's demo day, where Hassan (our CEO) delivered an awesome 60 second pitch on a Zoom call with hundreds of investors!

You can upgrade to the latest version (v0.8.3) to pick up the new features. If you are using v0.7 (or older), please follow the v0.8 migration guide.

🗒️ See diffs in CLI#

A highly requested feature was the ability to see the difference in cost between the current state and the planned state of Terraform projects in the CLI (we already have this feature in CI/CD integrations). Check it out by running infracost diff --help. We have also updated the CI output to make it easier to read.
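
For example, a minimal invocation against a Terraform directory might look like this (using the new path flag described later in this post):

infracost diff --path /code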

Infracost diff command

⚙️ Automated usage-based resource definitions#

Usage-based resources, such as AWS Lambda or Google Cloud Storage, require estimated usage data so Infracost can show costs in the output. You can define these in a YAML file, called a usage file, and use that to get cost estimates for such resources.

Previously you had to create this file manually. You can now use the --sync-usage-file option to generate a new usage file or update an existing one from your Terraform project. This option is a safe sync: it adds any missing resources (with zeros for the usage estimates), it does not overwrite any lines that you have changed in the YAML, and it deletes any resources that are not used in the Terraform project.

> infracost breakdown --sync-usage-file --usage-file infracost-usage.yml --path /code
[...]
> cat infracost-usage.yml
version: 0.1
resource_usage:
  aws_lambda_function.hi:
    monthly_requests: 0 # Monthly requests to the Lambda function.
    request_duration_ms: 0 # Avg duration of each request in milliseconds.

😃 Simplified inputs, outputs and config file#

We like it when things are made easy:

  • Inputs: a new path flag has been introduced to replace the various methods of running Infracost. You can now simply point Infracost to the path of a Terraform directory, plan binary file, or plan JSON file and it'll just work (see the sketch after this list). This lays some of the groundwork for supporting other IaC tools in the future.
  • Outputs: the dashes (-) in the output have been replaced with price descriptions such as Cost depends on usage: $0.20 per 1M requests so you can understand the pricing structure of usage-based resources such as AWS Lambda or Google Cloud Storage.
  • Config file: the config file has been updated to support infra-as-code repos that have multiple workspaces and projects. This enables you to combine projects into the same breakdown or diff output. So if a Terraform module or variable is used across workspaces/projects, you can quickly see the cost impact of changing it.
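
As a sketch of the inputs bullet above, the same command works against any of the three input types (the file names are placeholders):

infracost breakdown --path /code          # Terraform directory
infracost breakdown --path plan.binary    # saved Terraform plan file
infracost breakdown --path plan.json      # Terraform plan JSON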

🚀 New pull request comment options#

We've updated the CI/CD integrations to add a new post_condition option so you can decide when you'd like cost comments to be shown in pull requests. Options include: always leave a cost comment, only comment when there is a change to the cost, or only comment when a percentage threshold has been reached (e.g. more than 5% increase or decrease in costs).

⛅ New cloud resources#

We are working on adding Microsoft Azure to Infracost. This has two steps: the first is to add the prices to the Cloud Pricing API, then to add the resources to the CLI. We completed adding around 300,000 prices from Microsoft Azure to the Cloud Pricing API (step one), and now we're looking for volunteers to add resources to the CLI (step two) before the initial release. Please email ali@infracost.io if you are an Azure user and would like to contribute (basic golang knowledge is required).

We also added support for the following cloud resources:

  • AWS: Elastic File System (EFS), EBS GP3 volumes, DX Connection, Route53 Health checks, RDS Serverless
  • Google: Memorystore Redis, Cloud Monitoring and Logging, Compute Images and Snapshots

Thanks for being part of the community! We are always looking forward to your feedback.

Infracost diff - "git diff" but for cloud costs

Recently we released a new infracost diff command inspired by git diff. This shows a diff of monthly cloud cost estimates between the current and planned state of Terraform projects. At a high level this might seem like a simple exercise of subtracting the current state's cost estimate from the planned state's, but cloud costs are rarely that simple to deal with. Let's take a look at the following screenshot to understand some of the nuances.

Infracost diff command
  1. The aws_instance is being changed, which reduces the cost by $125/month (from $743 to $618).

  2. AWS EC2 has many different cost components, so to explain what caused the above change, we also flag the sub-resource ebs_block_device[0] that changed (the first attached block device). Underneath it, we show the cost component that caused the actual cost diff, Provisioned IOPS SSD Storage (io1); i.e. reducing the size of that volume can save $1500/year. Those who have done this in production know it's not a one-click change, as you need to create a new EBS volume and copy over the data. What Infracost enables you to do is quickly tell how much such a change would save you, then decide if it's worth it.

  3. A new aws_lambda_function is being added. Since we don't know how much it's going to be used, we can't show a cost estimate. But we can still show you the prices you'll be charged: $0.20 per 1M requests and a tiny amount per GB-second. This is a usage-based resource, so if you like, you can create a YAML file to provide usage estimates and get a cost estimate. It's hard to think in GB-seconds, so we enable you to input the average request duration and we'll do the math to map that to GB-seconds based on the memory_size of your function and any rounding rules that AWS applies.

    version: 0.1
    resource_usage:
      aws_lambda_function.hello_world:
        monthly_requests: 100000000 # Monthly number of requests.
        request_duration_ms: 250 # Average duration of each request in milliseconds.
  4. Finally, we show a summary at the bottom: the EC2 instance change reduces the cost by 17%, and you can use the above YAML file to do simple what-if analysis on the Lambda costs.
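
If you want to feed that usage file into the diff itself, something like the following should work (a sketch that assumes the diff command accepts the same --usage-file flag as breakdown):

infracost diff --path /code --usage-file infracost-usage.yml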

The new infracost diff command is used by our CI/CD integrations and is open source alongside the rest of Infracost. We look forward to hearing what you do with it via GitHub issues or our community Slack!

Cloud costs are shifting left

What is "shift left"?#

"Shift left" has become a popular buzzword for both Software Engineering and DevOps. It means introducing processes earlier in the software development cycle.

The "shift left" principle started with testing. In a traditional waterfall model testing is performed just before release. Shift left testing started it earlier by introducing practices such as Test-Driven Development (TDD) and Behaviour-Driven Development (BDD).

We are now seeing the "shift left" principle applied to other disciplines. Continuous delivery platforms allow engineering teams to deploy frequently and integrate a suite of tools into their cycle. The term DevSecOps has been coined. The idea behind it is to introduce security as early as possible in the software development cycle. This has given rise to a whole ecosystem of tools to help implement this practice. Companies like Snyk and Anchore integrate automated security scanning into DevOps workflows so teams can proactively find and fix vulnerabilities.

Can you shift too far left?#

There's an argument that "shifting left" gives too much work and responsibility to engineering teams. This can be the case if the right tooling is not available.

"Shift left" isn't about performing one-off tasks earlier in the cycle, and this is where the name causes some confusion. It's about introducing processes and automation earlier and performing them continuously throughout development.

When a company introduces a "shift left" mentality it is important that it doesn't impact developer velocity. Tools that help here should fit into developers' workflows and show them the right level of information at the right time.

Will cloud cost shift left?#

Currently cloud costs aren't discussed until they become a problem. A common story: when cloud costs become a problem, companies set a top-down directive and form a team to reduce their spend by X%. They manage to fix the immediate pain, but after six to twelve months the problem returns.

Cloud costs aren't a one-off problem that can be solved. That's why it's inevitable that cloud costs will "shift left". Building a cost-aware engineering team is crucial to keep cloud bills under control.

That's why we built Infracost. We help engineering teams implement this culture of cost-awareness without impacting their velocity. You can integrate Infracost directly into your existing workflow to see cost information throughout your DevOps process. Check out our integrations for instructions.

Feb 2021 update - faster runs, new resources and Atlantis!

Here's what we released in February - big thanks to the community contributors! You can upgrade to the latest version (v0.7.20) to pick up these goodies:

🚀 Speed improvements#

The CLI now only runs terraform init if required, since Terraform commands aren't the fastest in the world (init usually takes 20+ seconds for me, but it depends on how many plugins you have). Furthermore, calls to the Cloud Pricing API have been switched from sequential to parallel. Infracost should run much faster than before.

⚙️ Config file#

Depending on your Terraform workflow, you'll run Infracost with different options. Things can get complicated when you have multiple projects in a repo, each requiring their own Terraform variables. For example, if you have two workspaces and want to see their total cost estimate, you would run something like this:

terraform workspace select dev
infracost breakdown --path code --format json \
--terraform-plan-flags "-var-file=env.dev.tfvars" > dev.json
terraform workspace select prod
infracost breakdown --path code --format json \
--terraform-plan-flags "-var-file=env.prod.tfvars" > prod.json
infracost output --format table --path dev.json --path prod.json

You can now create an infracost.yml config file in your repo to describe your setup, then just run infracost --config-file infracost.yml.
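
As a rough sketch, such a config file could describe the two workspaces from the example above; the key names below are assumptions based on the CLI flags shown earlier, so check the config file docs for the exact schema:

version: 0.1
projects:
  - path: code
    terraform_workspace: dev
    terraform_plan_flags: -var-file=env.dev.tfvars
  - path: code
    terraform_workspace: prod
    terraform_plan_flags: -var-file=env.prod.tfvars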

🌎 Atlantis integration#

Infracost now integrates with Atlantis, which is a popular CI/CD tool that enables Terraform pull request automation.

🗒️ Diff functionality in JSON output#

You can now get the monthly cost diff from the Infracost JSON output; e.g. the following shows that the monthly cost is going to increase by $1530 if the Terraform plan is applied. You can also get totalHourlyCost, or add --no-color=true --log-level=warn if you don't want the spinners/logs/color.

infracost breakdown --path=. --format=json | jq '[.projects[].diff.totalMonthlyCost | select (.!=null) | tonumber] | add'
"+1530"

⛅ New cloud resources#

We also shipped support for the following cloud resources:

  • AWS: Config, ECS on EC2, EventBridge, Route 53 Resolver, CodeBuild
  • Google: Key Management Service (KMS), Google Cloud Functions
  • Azure: great progress is being made, stay tuned for exciting news soon

The usage file params for Google Cloud Functions are pretty cool; as shown below, you can define 3 simple params and we'll estimate the cost for you - no need for you to work out how function memory maps to GHz-seconds or what rounding applies.

google_cloudfunctions_function.my_function:
  request_duration_ms: 150 # milliseconds
  monthly_function_invocations: 10000000
  monthly_outbound_data_gb: 50

NAME                                MONTHLY QTY  UNIT         MONTHLY COST
google_cloudfunctions_function.hi
├─ CPU                                  800,000  GHz-seconds        8.0000
├─ Memory                               500,000  GB-seconds         1.2500
├─ Invocations                       10,000,000  invocations        4.0000
└─ Outbound data transfer                    50  GB                 6.0000
Total (USD)                                                        19.2500

As always, looking forward to your feedback (hello@infracost.io).

Show costs in self service portals easily

Over the last 10 years we’ve seen a lot of shadow IT with AWS being used as the infrastructure provider. The AWS bills were put on company credit cards and expensed. Individual business units could spin up the resources they needed and work in an incredibly agile manner. Unfortunately, for the enterprise as a whole this resulted in shadow spend, less control and security issues. To address these issues, central IT departments built “Self Service” portals with single sign-on (SSO). Business units could still spin up resources as needed, and the enterprise gained some visibility and control.

This caused a new issue. When successful shadow IT projects started consuming more resources, bills went up until the company credit card limits were reached. Then someone would have to go up the ranks to figure out what rules the business unit broke and how to solve it going forward. Self service portals addressed the credit card limit issue by enabling usage, and adding showback and chargeback via spend reports at the end of the month. Optimization was left to the end user to figure out after getting the bill.

A great way to help the end user optimize cloud costs before the showback/chargeback report is to let them see how much resources cost before they are launched. Multiple enterprises have achieved this with Infracost, and I want to share how:

Self Service Portal

1. Show costs in your self service portal to the end user#

The first step is to show cost estimates in your self service portal. If you are using Terraform to launch resources, this is easy to do by integrating with Infracost. Read our API documentation for more information. Infracost is free and open source.
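
As a minimal sketch with recent CLI versions, a portal backend could shell out to the CLI and read the JSON output; the jq path below mirrors the JSON structure used elsewhere in this blog and should be treated as an assumption to verify against the docs:

infracost breakdown --path /path/to/terraform --format json > costs.json
jq '[.projects[].breakdown.totalMonthlyCost | select(.!=null) | tonumber] | add' costs.json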

2. Set expectations in management and finance#

The next step is to set expectations by giving visibility of the cost estimates to management and finance. You can use our HTML output reports to achieve this.
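
For example, with recent CLI versions the JSON file from the previous step can be turned into an HTML report along these lines (a sketch; flags may differ by version):

infracost output --format html --path costs.json > report.html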

3. Monitor ongoing costs#

Finally, set up alerts and reports for ongoing costs. There are many companies who can help with this, including the cloud providers themselves.


Our users have connected their self service portal in a matter of hours. Their end users immediately begin seeing cost estimates. You can start using Infracost now, or email us for a deeper engagement with our team (hello@infracost.io).

Cloud cost alerts are too late: trigger notifications before launching

Most cloud providers enable users to set budget alerts on their actual cloud spend. This is a critical safety net as usage-based resources could incur a lot of cost (e.g. data transfer). There is also another safety net that companies should set up, and that is catching significant cost changes to their infrastructure before going live. For example, finding out how much increasing the RAM for a Lambda function costs before putting the new function into production. Usage estimates can also be considered during cost estimation.

Infracost is an open source tool that can be put into CI/CD pipelines (GitHub, GitLab, CircleCI, Bitbucket and Atlantis…) and will leave a comment with the cloud cost implications of changes to your infrastructure-as-code: "this change to your terraform file will increase your cloud bill by 25% next month".

In some cases, you may only want an Infracost comment when a threshold is reached. For example, if the cost implications of the change are minor (e.g. under 3%), then no comment is needed. We have now added this ability into Infracost - from our CI/CD integration docs, select your CI system, and set the percentage_threshold flag.

We hope Infracost can help your enterprise become more cost-aware when it comes to cloud spend, and maybe we can help reduce that 30% cloud waste! [1]

As always, if you have any feedback, add an issue to our GitHub repo, join our community Slack channel, or reach out to me directly on Twitter: @hassankhosseni.

[1] Flexera 2020 State of the Cloud Report

Terraform cloud costs directly from pull request to management

Last week I wrote about giving cloud cost estimates to DevOps teams via pull requests as they make changes to infrastructure components. The hope is to create a “Prius Effect” for cloud costs: it was observed that many Prius drivers would drive more efficiently simply because they were presented with immediate feedback on the Prius dashboard.

In this blog, I'd like to answer a question that a user posed to us:

We need to know when significant budget changes are expected so we can plan for them before the money is spent. Billing alerts help us react to unexpected changes but they still can be a surprise. My DevOps now have costs in pull requests. What about my team leads and managers? They can’t go through every pull request to see what the cost changes are.

When you have a single engineer with access to change infrastructure, a simple discussion about significant cost changes will suffice. Once your team grows to multiple engineers or multiple teams, a process can really help.

Infracost now has a new infracost report command which generates a table or HTML report from multiple Infracost JSON files. The output shows a breakdown of all the cost information in a straightforward format alongside tags. You can then upload these reports to AWS S3 and share them with management and team leads.

Example command:

infracost --terraform-dir /path/to/module1 --format json > module1.json
infracost --terraform-dir /path/to/module2 --format json > module2.json
infracost report --format html module*.json > report.html

This is the output you'd get in HTML format. Notice that the filename and all tags are shown:

Infracost output in HTML format

This is our first solution to this use-case. I'd love to hear your feedback so we can iterate on it and improve it! This is available now. Please see our Infracost Docs: Report section for usage instructions.

If you have any feedback add an issue to our GitHub repo, join our Community Slack channel, or reach out to me directly on Twitter: @hassankhosseni.