September 2021: Fetch usage from CloudWatch & less noisy GitHub comments

We released Infracost v0.9.9 recently; you can upgrade to use these features.

Fetch usage from CloudWatch#

When it comes to estimates for resources whose cost fully depends on usage (e.g. AWS S3 or Lambda), we have a usage file in which users can define how much of those resources they will use. This requires you to manually input the usage numbers. We're now experimenting with fetching the values from CloudWatch or other cloud APIs when --sync-usage-file is used. This enables you to quickly see what the last 30-day usage for those resources has been, and adjust it if needed. If the CLI can fetch the following values from CloudWatch, it will overwrite them in the usage file (see the example command after this list):

  • aws_dynamodb_table: data storage, read capacity and write capacity units
  • aws_lambda_function: function duration and requests
  • aws_instance, aws_autoscaling_group, aws_eks_node_group: operating system (based on the AMI)
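As an example, the sync can be run as part of a normal breakdown; the file name and path below are illustrative, and cloud credentials with read access to CloudWatch are needed:

# Fetch the last 30 days of usage from CloudWatch where available,
# update infracost-usage.yml, then use those values for the cost estimate:
infracost breakdown --path /path/to/terraform \
  --sync-usage-file --usage-file infracost-usage.yml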

Please use this GitHub discussion to let us know if you find this useful or have feedback.

Less noisy GitHub PR comments#

We have heard from some users that leaving a new comment every time something changes makes Infracost PR comments too noisy. We have added a new post_condition option, '{"update": true}', which should help. For our GitHub-based CI/CD integrations (GitHub Actions, Azure DevOps with GitHub repos, and CircleCI with GitHub repos), when a commit changes the cost estimate compared with earlier commits, the integration will create or update a PR comment (instead of leaving commit comments). The GitHub comments UI can be used to see when and what was changed in the comment. PR followers are only notified when the comment is created (not when it is updated), and the comment stays at the same location in the comment history.

This new post_condition is an addition to the existing ones: has_diff, always and percentage_threshold. Please use this GitHub discussion to tell us what you'd like to see in PR comments.
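For example, in CI the option can be passed to the integration roughly as follows; the exact mechanism (environment variable vs action input) depends on the integration, so treat this as a sketch and check the integration's docs for the precise syntax:

# Sketch only: assumes the CI integration reads a post_condition setting,
# as described in the integration docs; the value is the new option above.
export post_condition='{"update": true}'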

GitHub create or update comments

A few more improvements#

We merged 33 pull requests and closed 20 issues in September, I wanted to highlight a few of these improvements:

  • Added support for google_compute_instance_group_manager and google_compute_region_instance_group_manager, azurerm_bastion_host and azurerm_postgresql_flexible_server (v4).
  • Improved Terragrunt project detection logic.
  • Free resources can be seen when running the CLI with the --log-level debug flag; look for "Skipping free resource" lines. This helps users debug issues with supported resources.

Seed funding#

We announced our $2.2M seed funding from Sequoia, Y Combinator, SV Angel and Yun-Fang Juan as an angel investor. Bogomil Balkansky (our partner at Sequoia) wrote a blog about why they invested in Infracost.

A huge thank you to our amazing open source community of users and contributors for helping us reach this milestone.

Community#

The community has been writing some awesome content around Infracost, and I'd like to give them a shout-out here:

  • Joe (Solutions Architect at GitLab) recorded a video demo of how you can create cloud costs estimates with Terraform, Infracost, and GitLab CI.
  • Russ (Practice Manager of SRE & DevOps at N4Stack) wrote a detailed blog about tracking costs in Terraform using Infracost.
  • Thiago (Cloud DevOps Eng at Dock) wrote a great intro blog for our Brazilian community: estimativa de custo para o Terraform com Infracost.
  • Hassan, our co-founder, went on The Production-First Mindset podcast to talk about the complexity of cloud pricing and our open source approach.

Up next!#

We're focusing on usability improvements, for example by generating the usage file with commented-out 0 values and skipping 0-usage line items in the breakdown output. Check out our project board to see what else is in flight.

We're glad to see our community growing; it's been awesome to hear your feedback on features and discuss how you're handling cloud costs as part of your workflows. Message me on Slack or Twitter if you have any questions!

Announcing Infracost’s seed fundraising from Sequoia, Y Combinator & SV Angel

I’m very excited to announce that Infracost has raised $2.2M in seed funding from Sequoia, Y Combinator, SV Angel and Yun-Fang Juan as an angel investor.


People are often surprised to learn that the DevOps engineers and SREs who launch cloud resources are never shown how much these resources are going to cost until they are charged for them. That is like going to a supermarket with no price tags and no checkout, then being told that your card will be charged later. This is not fair, because when bills exceed budgets, those same DevOps engineers and SREs are asked to fix it.

The way we solve this problem is to show engineering teams how much the resources, and the specific options they have selected, cost. This happens within their workflow, before anything goes to production.


Ali, Alistair and I launched Infracost in late 2020 as an open source project, and have gained over 4,000 GitHub stars, with a community who are helping direct the roadmap as well as contributing code. We currently track over 3 million price points from AWS, Google Cloud and Microsoft Azure; have support for popular CI/CD systems such as GitHub Actions, GitLab CI, CircleCI, Bitbucket Pipelines, Jenkins; and support Terraform, with more IaC tools coming soon.

There are many super interesting problems we still need to solve: supporting more types of cloud resources, cloud providers and charges; custom pricing discounts for large enterprises; and usage-based resources and their consumption estimates (e.g. how much data transfer will go through these resources, so we can calculate the cost), just to name a few. Just check out our GitHub issue board. With that, I’d love to invite you to join us as a founding engineer.

I want to thank our amazing open source community of users and contributors for helping us reach this milestone. I hope to see you around 😊

Hassan, Ali, Alistair

August 2021: Currency conversion and Terragrunt support!

We released Infracost v0.9.7 recently; you can upgrade to use these features.

Currency conversion#

You can now use infracost configure set currency to set your preferred currency (e.g. EUR, BRL or INR). Any ISO 4217 currency code should work, for example use XAG to see how much Silver you're spending on the cloud 😂 The environment variable INFRACOST_CURRENCY can be used to set the currency in CI/CD pipelines. Cloud vendors usually publish prices in USD so the costs will be converted from USD to your preferred currency using the current exchange rate when the CLI is run.
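For example:

# Set the preferred currency once for your machine:
infracost configure set currency EUR

# Or set it per-run, e.g. in CI/CD pipelines:
export INFRACOST_CURRENCY=EUR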

The new infracost configure command can also be used to get/set your API key, and for users who are self-hosting the Cloud Pricing API, setting your API Endpoint.

Improved Terragrunt support#

We have spent a lot of time improving our support for Terragrunt, and we'd love for you to try this out and give us more feedback on how to improve it further. Previously, Terragrunt users had to set INFRACOST_TERRAFORM_BINARY and specify their Terragrunt modules in the Infracost config file manually. These steps are no longer needed as Terragrunt projects are now automatically detected when passed in via the --path flag. You can read more about this initial set of improvements for Terragrunt in our docs. Please open a GitHub issue with feedback and suggestions.
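For example, pointing the CLI at a Terragrunt directory is now enough (the path is illustrative):

# Terragrunt projects under this path are detected automatically:
infracost breakdown --path /path/to/terragrunt/project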

Slack integration#

We have always focused on making it easy to see costs within your workflow. For some companies, Slack is part of that workflow and communication flow, so Slack integration is now supported by all of our CI/CD integrations: pull request comments can also be posted to a Slack channel.

A few more improvements#

  • When running infracost breakdown and infracost output, you can now use --fields all to output all available fields in the table or HTML output; the JSON format always includes all fields (see the example after this list).
  • We've added support for the following Azure Terraform resources: azurerm_active_directory_domain_service, azurerm_virtual_network_gateway and azurerm_private_endpoint.
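A quick example of the new flag (the path is illustrative):

# Show all available columns in the table output:
infracost breakdown --path /path/to/terraform --fields all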

Community#

We hit 4K GitHub stars and now have over 200 people on our community Slack chat! I want to give a warm welcome to new members, we're happy to see you all in Slack and have you share your ideas, feedback, and activity with us.

  • Bruno (SRE at iFood) made an awesome two-part video showing how to use the Infracost Atlantis integration in Portuguese: part 1, part 2.
  • Praveen (Lead Software Engineer at GS Lab) wrote a blog on how to deploy the Cloud Pricing API.
  • Florian (DevOps Engineer at Bluelight Consulting) wrote a blog on how to use Infracost with GitLab.
  • Russ (Practice Manager, SRE & DevOps at Node4) wrote a blog on how to use Infracost with Azure DevOps pipelines to warn users if they're making a change that dramatically increases the running costs of the deployment.

Finally, Tim joined our team as Principal Engineer, and I was interviewed by Secfi about employee stock options as we're hiring!

Up next!#

We're prototyping how we can populate the usage file with data from AWS CloudWatch for resources like aws_s3_bucket or aws_lambda_function. We'd love to hear your feedback via this issue.

We're glad to see our community growing; it's been awesome to hear your feedback on features and discuss how you're handling cloud costs as part of your workflows. Message me on Slack or Twitter if you have any questions!

Cloud Pricing API: 3M prices from AWS, Azure and GCP

The Cloud Pricing API is an open source GraphQL-based API that includes all public prices from AWS, Azure and Google; it contains over 3 million prices! The prices are automatically updated via a weekly job. You can use our hosted version or self-host (it should take less than 15mins to deploy).

We needed a multi-cloud pricing API that we could use to explore pricing data structures (e.g. what are the various price components for AWS EC2) and filter for specific prices for the Infracost CLI. The cloud vendor pricing APIs do not address these use-cases so we developed and open sourced the Cloud Pricing API, which can also be used independently of the Infracost CLI.

GraphQL is a natural fit for cloud pricing as it can model the JSON structure used by cloud vendors. This enables you to query nested JSON structures using vendor-specific parameters, and request only the attributes you need to be returned in the response. For example, you can find all prices that match an AWS EC2 m3.large instance in us-east-1 (over 400 prices), then explore the 30+ attributes that AWS returns to describe instances (e.g. clockSpeed or networkPerformance).

Usage#

Infracost runs a hosted version of this API that you can use:

  1. Register for an API key by downloading infracost and running infracost register.
  2. Pass the above API key using the X-Api-Key: xxxx HTTP header when calling https://pricing.api.infracost.io/graphql. The following example fetches the latest price for an AWS EC2 m3.large instance in us-east-1. More examples can be found here.
curl https://pricing.api.infracost.io/graphql \
-X POST \
-H 'X-Api-Key: YOUR_API_KEY_HERE' \
-H 'Content-Type: application/json' \
--data '
{"query": "{ products(filter: {vendorName: \"aws\", service: \"AmazonEC2\", region: \"us-east-1\", attributeFilters: [{key: \"instanceType\", value: \"m3.large\"}, {key: \"operatingSystem\", value: \"Linux\"}, {key: \"tenancy\", value: \"Shared\"}, {key: \"capacitystatus\", value: \"Used\"}, {key: \"preInstalledSw\", value: \"NA\"}]}) { prices(filter: {purchaseOption: \"on_demand\"}) { USD } } } "}
'

Concepts#

The API has two main types: Products and Prices. Each product can have many Prices. This simple, high-level schema provides the flexibility to model the exact values that the cloud vendor APIs return, while also offering useful top-level product filters. The values returned by the API are the same ones that the cloud vendors return in their APIs.

The main properties of Products are:

| Name | AWS examples | Microsoft Azure examples | Google Cloud Platform examples |
|------|--------------|--------------------------|--------------------------------|
| vendorName | aws | azure | gcp |
| service | AmazonEC2, AWSLambda, awskms | Virtual Machines, Functions, Azure DNS | Compute Engine, Cloud Functions, Cloud DNS |
| productFamily | Dedicated Host, Provisioned Throughput, Elastic Graphics | Compute, Storage, Databases | Compute Instance, License, Network |
| region | us-east-1, cn-north-1, us-gov-east-1 | eastus, uknorth, US Gov | us-east1, europe, australia-southeast1 |
| attributes (array of key-value pairs) | usagetype: UGE1-Lambda-Edge-Request, clockSpeed: 2.5 GHz | productName: Premium Functions, meterName: vCPU Duration | machineType: n2-highmem-64, description: Static Ip Charge |

The main properties of Prices are:

| Name | Description | Example |
|------|-------------|---------|
| USD | Price from the cloud vendor in the preferred ISO 4217 currency code (e.g. EUR, BRL or INR). For non-USD currencies, prices are converted from USD to the preferred currency at query time. | USD: 0.2810000000 |
| unit | Unit for the price | unit: Hrs |
| description | Any additional description | description: Upfront Fee |
| startUsageAmount | Start usage amount for the price tier, only applicable for tiered pricing | startUsageAmount: 0 |
| endUsageAmount | End usage amount for the price tier, only applicable for tiered pricing | endUsageAmount: 10000 |
| purchaseOption | Purchase option, varies between vendors | on_demand, reserved, spot, Consumption, preemptible |
| termPurchaseOption | Term of the purchase option | termPurchaseOption: All Upfront |
| termLength | Length of the purchase option term | termLength: 1yr |
| termOfferingClass | Offering class or type of the term | termOfferingClass: standard |
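To illustrate how Products and Prices fit together, here is a variation of the earlier query that also requests a few product attributes and extra price fields; the selected fields follow the properties listed above, but treat it as a sketch and use GraphQL introspection or the docs to confirm the exact schema:

curl https://pricing.api.infracost.io/graphql \
-X POST \
-H 'X-Api-Key: YOUR_API_KEY_HERE' \
-H 'Content-Type: application/json' \
--data '
{"query": "{ products(filter: {vendorName: \"aws\", service: \"AmazonEC2\", region: \"us-east-1\", attributeFilters: [{key: \"instanceType\", value: \"m3.large\"}]}) { productFamily attributes { key value } prices(filter: {purchaseOption: \"on_demand\"}) { USD unit description } } }"}
'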

What will you build?#

Whilst our main use-case for developing the Cloud Pricing API is the Infracost CLI, we're excited to see what the community does with this API. Please share your use-cases and issues with us on GitHub or Slack or email.

July 2021: self-hosted Cloud Pricing API

Whilst many Infracost CLI users connect to our hosted Cloud Pricing API (since no cloud credentials or secrets are sent to it), large enterprises that have restrictive security policies require self-hosting. Thus in July we focused on improving the self-hosting experience of the Cloud Pricing API. This is a GraphQL-based API that includes all public prices from AWS, Azure and Google; it contains over 3 million prices just now!

This is actually the third Cloud Pricing API our team has built; the complexity in cloud pricing has increased significantly in the last 10 years:

  • In 2010 (as part of PlanForCloud), we developed scrapers to fetch cloud prices as vendors didn't offer APIs then. There were over 10,000 prices at the time.
  • In 2015 (as part of RightScale), we developed a RESTful API. There were over 100,000 prices at the time.
  • In 2021 (as part of Infracost), we developed an open source GraphQL API for use with the Infracost CLI. As previously mentioned, it contains over 3,000,000 prices just now.

Improvements#

We made the following main improvements in July:

  1. The datastore was switched from MongoDB to PostgreSQL since all the major cloud vendors have hosted options for PostgreSQL. Previously, some of our users ran into compatibility issues with cloud vendors' hosted MongoDB versions.
  2. An official Helm chart was developed, and the existing Docker Compose file was also updated (see the sketch after this list). These deploy the Cloud Pricing API and a weekly cron job to update prices. By default a PostgreSQL pod/container is also deployed, but it can be configured to use an external PostgreSQL instance for high availability, e.g. AWS RDS or Azure Database for PostgreSQL.
  3. The time it takes to update prices was reduced from around 1.5 hours to 2 minutes.
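As a rough sketch, installing the chart looks something like the following; the repo URL, chart name and release name are assumptions here, so please follow the self-hosting docs for the exact commands and configuration values (such as the API key used to download prices, or an external PostgreSQL connection):

# Assumed Helm repo and chart names; see the self-hosting docs:
helm repo add infracost https://infracost.github.io/helm-charts/
helm repo update
helm install cloud-pricing-api infracost/cloud-pricing-api \
  --namespace cloud-pricing-api --create-namespace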
Deployment overview

The pricing DB dump is downloaded from Infracost's API as that simplifies the task of keeping prices up-to-date. We have created one job that you can run once a week to download the latest prices. This provides you with:

  • Fast updates: our aim is to enable you to deploy this service in less than 15mins. Some cloud vendors paginate API calls to 100 resources at a time, and making too many requests results in errors; fetching prices directly from them takes more than an hour.
  • Complete updates: we run integration tests to ensure that the CLI is using the correct prices. In the past, there have been cases where cloud vendors tweaked their pricing API data in ways that caused direct downloads to fail. With this method, we check that the pricing data passes our integration tests before publishing it, and everyone automatically gets the entire up-to-date dataset. The aim is to reduce the risk of failed or partial updates.

Over to you#

If you require self-hosting, please follow the self-hosting docs. If you had previously deployed the Cloud Pricing API, you can follow the migration guide.

Thanks for being part of the community! We look forward to hearing your feedback via GitHub issues -- you can also join our community Slack chat.

June 2021: HashiCorp partnership, Env0 and Spacelift integrations!

In June we focused on integrations and adding more resource coverage. You can upgrade to the latest version (v0.9.2) to pickup the new features. If you are using v0.8 please follow the v0.9 migration guide.

⚙️ Integrations#

Infracost already has CI/CD integrations with GitHub Actions, GitLab CI, Atlantis, Azure DevOps, CircleCI, Bitbucket Pipelines and Jenkins. This list keeps growing, so please let us know what you'd like to see added next. Infracost can now be used with the following infra-as-code management platforms too:

  • Terraform Cloud: we have partnered with HashiCorp to bring cloud cost estimates into Terraform Cloud's new RunChecks (currently in beta). Armon Dadgar, co-founder and CTO of HashiCorp, announced the partnership during HashiConf EU; screenshot below! If your company uses Terraform Cloud and would like to work with us and HashiCorp together on this integration, please reply to this email.

  • Env0: cloud cost estimates can easily be enabled in Env0. Our CEO Hassan, Env0's CEO Ohad Maislish, and Tim Davis are doing a webinar about cloud costs shifting left on 14th July. Sign up to listen in.

  • Spacelift: cloud cost estimates can also be enabled in Spacelift.

Infracost HashiCorp partnership

⛅ New cloud resources#

Infracost now supports over 200 Terraform resources across AWS, Azure and Google. Over 500 free resources have also been identified; these are not shown in the CLI output since they are free.

We added support for the following cloud resources:

  • AWS: Backup, EFS One Zone, Kinesis Firehose & Data Analytics, Neptune.
  • Google: BigQuery, Load Balancer, VPN tunnel.
  • Azure: Application Gateway, Application Insights, Automation, Cognitive Search, Event Hubs, Kubernetes Load Balancer & HTTP Application Routing, Load Balancer, Redis Cache.

⌨️ Shell completion#

Run infracost completion --help to see how you can generate shell completion scripts for Bash, Zsh, fish and PowerShell. Once enabled, you can use the usual double tab to see autocomplete and help text for commands and flags.
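For example, for Bash you can generate the script and load it in your shell; the --shell flag below follows the command's help text at the time of writing, so run infracost completion --help to confirm the options for your setup:

# Generate Bash completions and load them for the current session:
infracost completion --shell bash > /tmp/infracost.bash
source /tmp/infracost.bash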

Thanks for being part of the community! We look forward to hearing your feedback via GitHub issues; you can also join our community Slack chat.

Announcement: Azure cloud cost estimates in pull requests

Infracost Azure

Thanks to our awesome community, I'm very excited to announce that you can now use Infracost to get cloud cost estimates for Microsoft Azure in pull requests. Try it now, it's free and open source!


Cloud costs for engineering teams

Cloud costs have become so complex that the industry (ourselves included) started addressing issues around breached budgets after the bill arrived. This is not the way it should be. It is like going shopping and having no idea how much things cost till after your card has been charged.

We are on a mission to empower engineering teams to use cloud infrastructure economically and efficiently. We do this by fitting into the developer workflow via CI/CD integrations: reading the infrastructure-as-code project, picking up the parameters that have a price point, looking up the prices for those configurations, and leaving a comment in the pull request like "This change will increase your cloud costs by 25%" with a detailed breakdown. This way, the whole team is aware of the cost implications of the change.


Today's announcement

Today we are announcing that in addition to AWS and Google Cloud Platform, we have added support for Microsoft Azure. Not only has Azure support been requested by over 60 of our community members (https://github.com/infracost/infracost/issues/64), but we have seen a lot of growth from Microsoft in terms of the number of resources offered and enterprise adoption.

We have added support for over 65 Azure resources (and another 70 resources which are free), with many more planned. Visit our GitHub issues page and put a thumbs up on the resources you'd like covered and we will prioritize them.

But there is one more thing! We have also added support for Microsoft Azure DevOps Pipelines. This is in addition to our current supported CI/CD integrations such as GitHub Actions, GitLab CI, CircleCI, Bitbucket Pipelines, Atlantis and Jenkins.

Infracost Azure

Get started! We have made it super simple to get up and running:

1. brew install infracost # (docker, windows etc options available)
2. infracost register
3. az login # To set cloud creds, see note-1
4. infracost breakdown --path . # Run in your terraform directory. We also have an example Azure terraform file you can use to try it out.

For full details, see our Getting Started guide. From there, you can set up Azure DevOps Pipelines for the CI/CD integration.

note-1: Infracost does not need or access your cloud creds, however, Terraform needs this to create the plan file.

How Eagle.io achieves cloud cost attribution for their multi-tenant SaaS

I sat down with David Julia, who is the Head of Engineering at Eagle.io, to talk about cost attribution, why it matters, who should care, and how Eagle.io achieves it. We worked through their use-case, their tech stack, the tools they use, what worked and what did not work, and ultimately how they have achieved cloud cost attribution.

The following is a lightly edited transcript (from YouTube) of the introduction section of our chat:

Ali: Hey everyone, I have David Julia here who is the head of engineering at Eagle.io. David thank you very much for taking time to speak with me today. Would you like to introduce yourself?

David: Yeah absolutely, so David Julia, head of engineering at Eagle.io. We do environmental IoT, so we're an environmental IoT data platform essentially. You connect up all sorts of devices to us; we're used in water monitoring, natural resource monitoring, various other things. We are used to monitor the cracks in Mount Rushmore, for example, how Mount Rushmore is splitting, how that's going and whether they need to remediate anything, so you know, a broad variety of use cases: all these data loggers and sensors connected to the platform, and analytics and processing logic on that. A fun gig for me. I just started that a couple of months ago; before that I was in software engineering consulting for a long time, about nine years at Pivotal Labs and then VMware. A fun little project that we're working on now, and I had the chance to jump into some cost attribution stuff as part of it, hence the conversation today.

Ali: Which cloud providers are you using and what is the setup?

David: We're a multi-tenant SaaS platform, so the whole value proposition of Eagle.io is, unlike AWS IoT Core or Azure IoT, we think of those as primitive building blocks; if you want to build your own solution, cool, go nuts with them, but we're much higher up the stack than that. Essentially, for engineering firms or big municipal water companies, state water companies, mining companies, they don't really want to mess around with all this code and IoT Core and all of this different stuff that's out there, which is really cool admittedly, but not core to their business, so they just hook up to us and very quickly they're able to get data in, analyze it, alert on it. If you're monitoring a dam, for example, you want to know if a dam wall is failing; you don't really care about configuring microservices or anything like that. So we're a multi-tenant SaaS application, lots and lots of different users, but a lot of shared resources as well. So that's kind of the layout. We're mostly AWS.

Ali: Can you tell us then in this context, what do you mean by cost attribution and why do cloud costs matter as part of that?

David: So why cost attribution? Well, it's pretty fundamental, especially when you're building a data intensive application like ours, but also in other circumstances. You really just want to know: 1) can I guarantee that I'm going to make a good gross margin, and 2) if I add more customers, am I going to be adding more profit or not, is it going to be impacting my gross margin positively or negatively? If that's a very big question mark, that's something to be nervous about, and that's actually the situation we found ourselves in. We were negotiating with a particular enterprise customer who wanted to essentially more than 10x their usage of the platform, and so they said, well, you've got to give us a discount for this to be financially feasible; we want to do this deal, but we need to make it in the realm of dollars that we can actually do. We said, well, we should be able to do this, but our current pricing model doesn't scale for them, or maybe it's just the numbers; but if we give them too deep of a discount, are we going to lose money on the thing? We didn't know what our costs were, what their usage was, or what was even driving our costs. So that's when we really sat down and said, okay, we need to figure this out.

Watch the whole interview on Infracost's YouTube channel. Subtitles have also been enabled.

If you are interested in working in environmental IoT, reach out to David on Twitter!

GitHub stars matter! Here is why

As Infracost has hit 3,000 GitHub stars 🎉, I wanted to share some thoughts as to why GitHub stars matter.

Why do people star repos?#

There are two main reasons why people star GitHub projects:

  1. Bookmarks: some people star GitHub repos to bookmark them for later use. For example I can see the repos I've starred1 and search within them for a keyword or sort them by how recently I starred them, or how active the project has been recently.

  2. Show support or appreciation: others star repos to show support or appreciation, similar to how "likes" are used in social media sites. This is a social signal, and it's very important in the very early stages of open source projects, acting as a feedback loop for project creators. Knowing that other people have seen the project and cared enough to click on the Star button can create motivation for the creators to continue working on the project initially.

The latter is why I personally star projects. Regardless of whether I've used the project in the past, using it just now, plan to use it, or think it's a cool idea, I want the project creator to know that I like what they're doing. Terraform and Pulumi are projects that I recently starred to show support.

Benefits of repo stars#

The main benefit of repo stars is creating confidence and a good first impression of the project. That in turn helps with the project getting users, and to a lesser extent contributors.

A 2018 academic research survey of over 700 developers found that "three out of four developers consider the number of stars before using or contributing to GitHub projects"2. GitHub stars are not the only metric that matters though. A project's activity level, for example its last release or commit, and its ease of use, for example the quality of its documentation, are also important factors in helping projects get users.

I say "to a lesser extent" as contributing, by creating a GitHub issue or submitting a pull request, requires significantly more effort than starring a repo. People who only star a repo are probably not yet active community members, but they might become active in the future. This is why the Orbit Model classifies them as Observers3, as they can act as the top-of-funnel for growing users and contributors.

In addition to helping projects get users, GitHub stars can help the project creators meet investors who are familiar with open source. Early on in Infracost's journey, we were surprised to get cold emails from VCs congratulating us on our star count. After speaking with a few, it became clear that they either had systems in place to monitor stars4, or had analysts who reviewed Trending Repos on GitHub for potential investment opportunities5. Some have gone even further. For example, the VC firm Runa Capital, who invested in Nginx and MariaDB, has started to track the fastest growing open source startups using GitHub stars and forks. Infracost was recently placed 5th on the ROSS Index6.

Infracost GitHub stars

Future of GitHub stars#

A16Z's Martin Casado thinks that there is a big trend towards bottom-up strategies in business-to-business (B2B) software that will shape the entire B2B landscape in the next 10 years7. I wonder if in the same way that social media influencers are changing how products are marketed and sold, GitHub influencers (someone with many GitHub followers) will change how enterprise software is marketed and sold? Developer Advocates are currently using Twitter and LinkedIn, but GitHub has a "follow" and a "status update" feature too. Will those remain as a simple way to get updates on code-related activities? Or could they be extended to enable GitHub influencers to post their demos, talks and blogs into the GitHub activity feed? Will companies be able to buy ads on GitHub and promote their open source projects?

Over to you - what have you learnt about GitHub stars, and how do you think they'll change in the future? I hang out on Twitter...


  1. https://github.com/alikhajeh1?tab=stars, this is a public page, so you can see the repos that any GitHub user has starred.

  2. H. Borges and M. Tulio Valente, "What's in a GitHub Star? Understanding Repository Starring Practices in a Social Coding Platform," Journal of Systems and Software, vol. 146, pp. 112–129, 2018.

  3. The Orbit Model is implemented via the Orbit product, which can be used to measure and grow open source communities.

  4. Openbase helps developers choose the right JavaScript package with more languages coming soon. See the React page to get an idea of the kinds of metrics they collect.

  5. https://github.com/trending, Infracost has hit the Go trending page a few times.

  6. https://runacap.com/ross-index/, Infracost was placed 5th in the fastest-growing open-source startups in Q4 2020.

  7. Growth, Sales, and a New Era of B2B

April 2021 update - EC2 reserved instances and Jenkins integration!

Two big milestones to celebrate this month: Infracost now supports over 100 AWS and Google resources and we have over 100 people in our community Slack channel.

You can upgrade to the latest version (v0.8.6) to pickup the new features. If you are using v0.7 (or older) please follow the v0.8 migration guide.

📉 EC2 reserved instances#

You can now do what-if analysis on AWS EC2 Reserved Instances (RI), as we have added support for these in the Infracost usage file. The RI type, term and payment option can be defined as shown below to quickly get a monthly cost estimate. This works with aws_instance as well as aws_eks_node_group and aws_autoscaling_group, as they also create EC2 instances. Let us know how you'd like Infracost to show the upfront costs by creating a GitHub issue.

aws_instance.my_instance:
  operating_system: linux # Override the operating system of the instance, can be: linux, windows, suse, rhel.
  reserved_instance_type: standard # Offering class for Reserved Instances. Can be: convertible, standard.
  reserved_instance_term: 1_year # Term for Reserved Instances. Can be: 1_year, 3_year.
  reserved_instance_payment_option: all_upfront # Payment option for Reserved Instances. Can be: no_upfront, partial_upfront, all_upfront.
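Once saved (e.g. as infracost-usage.yml), pass the usage file to the CLI to see the Reserved Instance estimate; the file name and path are illustrative:

# Apply the usage estimates, including the RI settings above, to the breakdown:
infracost breakdown --path /path/to/terraform --usage-file infracost-usage.yml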

♾️ Jenkins integration#

Our new Jenkins integration enables you to save an HTML page for each pipeline build, which shows the Infracost diff output. Check out this demo that uses Jenkins' Docker agent to run Infracost; the Jenkinsfile can be customized based on your requirements. The integration can also be used to fail a build if its cost estimate crosses a percentage threshold. This safety net is often used to ensure no one breaks the bank 😃

Infracost Jenkins integration

⚙️ Customize output columns#

The infracost breakdown and infracost output commands show the monthly quantity, units, and monthly cost of resources by default. You can now use the new --fields flag to customize the columns shown in the table output to include price and hourly cost, or you can set it to only show the monthly cost if you prefer a simplified view (shown below). The HTML output format is being updated to support the same feature. The JSON output format will always include all fields.

Name Monthly Cost
aws_instance.web_app
├─ Instance usage (Linux/UNIX, on-demand, m5.4xlarge) $560.64
├─ root_block_device
│ └─ Storage (general purpose SSD, gp2) $5.00
└─ ebs_block_device[0]
├─ Storage (provisioned IOPS SSD, io1) $125.00
└─ Provisioned IOPS $52.00

* Array wildcards in usage file#

The Infracost usage file enables you to define resource usage estimates using their resource path, e.g. storage for aws_dynamodb_table.my_table. This can be cumbersome for resource arrays, such as AWS CloudWatch Log Groups, since you'd have to define the array items individually.

We've addressed this issue by supporting the wildcard character [*] for resource arrays. Infracost will apply the usage values individually to each element of the array (they all get the same values). If an array element (e.g. this[0]) and [*] are specified for a resource, only the array element's usage will be applied to that resource. This enables you to define default values using [*] and override specific elements using their index.

aws_cloudwatch_log_group.my_group[0]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[1]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[3]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
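With the wildcard, the block above collapses to a single entry that applies the same values to every element of the array; a specific index can still be listed separately when it needs its own values:

aws_cloudwatch_log_group.my_group[*]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200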

⛅ New cloud resources#

We added support for the following cloud resources:

  • AWS: Redshift. The CPU-credit usage file params were improved for T2, T3 & T4 instances.
  • Google: Google SQL and Container Registry.

Thanks for being part of the community! We always look forward to your feedback, so please create GitHub issues here. We read every single one.