Announcing Infracost’s seed fundraising from Sequoia, Y Combinator & SV Angel

I’m very excited to announce that Infracost has raised $2.2M in seed funding from Sequoia, Y Combinator, SV Angel and Yun-Fang Juan as an angel investor.


People are often surprised to learn that the DevOps engineers and SREs who launch cloud resources are never shown how much those resources will cost until they are charged for them. That is like going to a supermarket with no price tags and no checkout, then being told that your card will be charged later. It is also unfair: when bills exceed budgets, the same DevOps engineers and SREs are asked to fix it.

We solve this problem by showing engineering teams how much their resources, and the specific options they have selected, will cost. This happens within their workflow, before anything goes to production.


Ali, Alistair and I launched Infracost in late 2020 as an open source project, and we have since gained over 4,000 GitHub stars, with a community that helps direct the roadmap as well as contributing code. We currently track over 3 million price points from AWS, Google Cloud and Microsoft Azure; support popular CI/CD systems such as GitHub Actions, GitLab CI, CircleCI, Bitbucket Pipelines and Jenkins; and support Terraform, with more IaC tools coming soon.

There are many super interesting problems we still need to solve, ranging from supporting more types of cloud resources, cloud providers and charges, to custom pricing discounts for large enterprises, to usage-based resources and their consumption estimates (e.g. how much data transfer will go through these resources, so we can calculate the cost), to name just a few. Just check out our GitHub issue board. With that, I'd love to invite you to join us as a founding engineer.

I want to thank our amazing open source community of users and contributors for helping us reach this milestone. I hope to see you around 😊

Hassan, Ali, Alistair

August 2021 update: Currency conversion and Terragrunt support!

We recently released Infracost v0.9.7; you can upgrade to use these features.

Currency conversion#

You can now use infracost configure set currency to set your preferred currency (e.g. EUR, BRL or INR). Any ISO 4217 currency code should work; for example, use XAG to see how much silver you're spending on the cloud 😂. The environment variable INFRACOST_CURRENCY can be used to set the currency in CI/CD pipelines. Cloud vendors usually publish prices in USD, so costs are converted from USD to your preferred currency using the current exchange rate when the CLI is run.

The new infracost configure command can also be used to get/set your API key and, for users who are self-hosting the Cloud Pricing API, to set your API endpoint.
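For example, setting these from a terminal might look like the sketch below; the currency command and INFRACOST_CURRENCY variable are described above, while the api_key and pricing_api_endpoint setting names are assumptions, so run infracost configure --help to confirm them:

infracost configure set currency EUR        # any ISO 4217 code should work
export INFRACOST_CURRENCY=EUR               # equivalent for CI/CD pipelines

# Setting names below are assumptions; check the configure command's help output
infracost configure set api_key YOUR_API_KEY_HERE
infracost configure set pricing_api_endpoint https://your-cloud-pricing-api.example.com   # self-hosted Cloud Pricing API users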

Improved Terragrunt support#

We have spent a lot of time improving our support for Terragrunt, and we'd love for you to try this out and give us more feedback on how to improve it further. Previously, Terragrunt users had to set INFRACOST_TERRAFORM_BINARY and specify their Terragrunt modules in the Infracost config file manually. These steps are no longer needed as Terragrunt projects are now automatically detected when passed in via the --path flag. You can read more about this initial set of improvements for Terragrunt in our docs. Please open a GitHub issue with feedback and suggestions.
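For example, pointing the CLI at a Terragrunt project (the path below is just a placeholder) should now work without any extra configuration:

infracost breakdown --path=path/to/terragrunt/project   # Terragrunt is auto-detected; no INFRACOST_TERRAFORM_BINARY or config file entries needed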

Slack integration#

We have always focused on making it easy to see costs within your workflow. For some companies, Slack is part of their workflow and communication flow. With that, Slack integration is now supported by all of our CI/CD integrations so pull request comments can also be posted to a Slack channel.

A few more improvements#

  • When running infracost breakdown and infracost output, you can now use --fields all to output all available fields in the table or HTML output; the JSON format always includes all fields (see the example after this list).
  • We've added support for the following Azure Terraform resources: azurerm_active_directory_domain_service, azurerm_virtual_network_gateway and azurerm_private_endpoint.
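For example, the new flag can be combined with the breakdown and output commands along these lines (the file names are placeholders, and the output invocation is a sketch of the usual save-JSON-then-format flow):

infracost breakdown --path . --fields all                                   # show every available column in the table
infracost breakdown --path . --format json > infracost.json
infracost output --path infracost.json --format html --fields all > report.html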

Community#

We hit 4K GitHub stars and now have over 200 people on our community Slack chat! I want to give a warm welcome to new members; we're happy to see you all in Slack and to have you share your ideas, feedback and activity with us.

  • Bruno (SRE at iFood) made an awesome two-part video showing how to use the Infracost Atlantis integration in Portuguese: part 1, part 2.
  • Praveen (Lead Software Engineer at GS Lab) wrote a blog on how to deploy the Cloud Pricing API.
  • Florian (DevOps Engineer at Bluelight Consulting) wrote a blog on how to use Infracost with GitLab.
  • Russ (Practice Manager, SRE & DevOps at Node4) wrote a blog on how to use Infracost with Azure DevOps pipelines to warn users if they're making a change that dramatically increases the running costs of the deployment.

Finally, Tim joined our team as Principal Engineer, and I was interviewed by Secfi about employee stock options as we're hiring!

Up next!#

We're prototyping how we can populate the usage file with data from AWS CloudWatch for resources like aws_s3_bucket or aws_lambda_function. We'd love to hear your feedback via this issue.

We're glad to see our community growing; it's been awesome to hear your feedback on features and discuss how you're handling cloud costs as part of your workflows. Message me on Slack or Twitter if you have any questions!

Cloud Pricing API: 3M prices from AWS, Azure and GCP

The Cloud Pricing API is an open source GraphQL-based API that includes all public prices from AWS, Azure and Google; it contains over 3 million prices! The prices are automatically updated via a weekly job. You can use our hosted version or self-host (it should take less than 15mins to deploy).

We needed a multi-cloud pricing API that we could use to explore pricing data structures (e.g. what are the various price components for AWS EC2) and filter for specific prices for the Infracost CLI. The cloud vendor pricing APIs do not address these use-cases so we developed and open sourced the Cloud Pricing API, which can also be used independently of the Infracost CLI.

GraphQL is a natural fit for cloud pricing as it can model the JSON structure used by cloud vendors. This enables you to query nested JSON structures using vendor-specific parameters, and to request only the attributes you need in the response. For example, you can find all prices that match an AWS EC2 m3.large instance in us-east-1 (over 400 prices), then explore the 30+ attributes that AWS returns to describe instances (e.g. clockSpeed or networkPerformance).

Usage#

Infracost runs a hosted version of this API that you can use:

  1. Register for an API key by downloading infracost and running infracost register.
  2. Pass the above API key using the X-Api-Key: xxxx HTTP header when calling https://pricing.api.infracost.io/graphql. The following example fetches the latest price for an AWS EC2 m3.large instance in us-east-1. More examples can be found here.
curl https://pricing.api.infracost.io/graphql \
  -X POST \
  -H 'X-Api-Key: YOUR_API_KEY_HERE' \
  -H 'Content-Type: application/json' \
  --data '{"query": "{ products(filter: {vendorName: \"aws\", service: \"AmazonEC2\", region: \"us-east-1\", attributeFilters: [{key: \"instanceType\", value: \"m3.large\"}, {key: \"operatingSystem\", value: \"Linux\"}, {key: \"tenancy\", value: \"Shared\"}, {key: \"capacitystatus\", value: \"Used\"}, {key: \"preInstalledSw\", value: \"NA\"}]}) { prices(filter: {purchaseOption: \"on_demand\"}) { USD } } }"}'

Concepts#

The API has two main types: Products and Prices. Each product can have many prices. This simple high-level schema provides the flexibility to model the exact values that the cloud vendor APIs return, while still offering useful top-level product filters. The values returned by the API are the same ones that the cloud vendors return in their APIs.

The main properties of Products are:

Name | AWS examples | Microsoft Azure examples | Google Cloud Platform examples
vendorName | aws | azure | gcp
service | AmazonEC2, AWSLambda, awskms | Virtual Machines, Functions, Azure DNS | Compute Engine, Cloud Functions, Cloud DNS
productFamily | Dedicated Host, Provisioned Throughput, Elastic Graphics | Compute, Storage, Databases | Compute Instance, License, Network
region | us-east-1, cn-north-1, us-gov-east-1 | eastus, uknorth, US Gov | us-east1, europe, australia-southeast1
attributes (array of key-value pairs) | usagetype: UGE1-Lambda-Edge-Request, clockSpeed: 2.5 GHz | productName: Premium Functions, meterName: vCPU Duration | machineType: n2-highmem-64, description: Static Ip Charge

The main properties of Prices are:

Name | Description | Example
USD | Price from the cloud vendor in the preferred ISO 4217 currency code (e.g. EUR, BRL or INR). For non-USD currencies, prices are converted from USD to the preferred currency at query time. | USD: 0.2810000000
unit | Unit for the price | unit: Hrs
description | Any additional description | description: Upfront Fee
startUsageAmount | Start usage amount for the price tier, only applicable for tiered pricing | startUsageAmount: 0
endUsageAmount | End usage amount for the price tier, only applicable for tiered pricing | endUsageAmount: 10000
purchaseOption | Purchase option; varies between vendors | on_demand, reserved, spot, Consumption, preemptible
termPurchaseOption | Term of the purchase option | termPurchaseOption: All Upfront
termLength | Length of the purchase option term | termLength: 1yr
termOfferingClass | Offering class or type of the term | termOfferingClass: standard

What will you build?#

Whilst our main use-case for developing the Cloud Pricing API is the Infracost CLI, we're excited to see what the community does with this API. Please share your use-cases and issues with us on GitHub, Slack or email.

July 2021 update: self-hosted Cloud Pricing API

Whilst many Infracost CLI users connect to our hosted Cloud Pricing API (since no cloud credentials or secrets are sent to it), large enterprises that have restrictive security policies require self-hosting. Thus in July we focused on improving the self-hosting experience of the Cloud Pricing API. This is a GraphQL-based API that includes all public prices from AWS, Azure and Google; it contains over 3 million prices just now!

This is actually the third Cloud Pricing API our team has built; the complexity in cloud pricing has increased significantly in the last 10 years:

  • In 2010 (as part of PlanForCloud), we developed scrapers to fetch cloud prices as vendors didn't offer APIs then. There were over 10,000 prices at the time.
  • In 2015 (as part of RightScale), we developed a RESTful API. There were over 100,000 prices at the time.
  • In 2021 (as part of Infracost), we developed an open source GraphQL API for use with the Infracost CLI. As previously mentioned, it contains over 3,000,000 prices just now.

Improvements#

We made the following main improvements in July:

  1. The datastore was switched from MongoDB to PostgreSQL since all the major cloud vendors have hosted options for PostgreSQL. Previously some of our users ran into compatibility issues with cloud vendors' hosted MongoDB versions.
  2. An official Helm chart was developed and the existing Docker compose file was also updated. These deploy the Cloud Pricing API and a weekly cronjob to update prices. By default a PostgreSQL pod/container is also deployed, but this can be configured to use an external PostgreSQL instance for high availability, e.g. AWS RDS or Azure Database for PostgreSQL.
  3. The time it takes to update prices was reduced from around 1.5 hours to 2 minutes.

Deployment overview#

The pricing DB dump is downloaded from Infracost's API as that simplifies the task of keeping prices up-to-date. We have created one job that you can run once a week to download the latest prices. This provides you with:

  • Fast updates: our aim is to enable you to deploy this service in less than 15 minutes. Some cloud vendors paginate API calls to 100 resources at a time, and making too many requests results in errors; fetching prices directly from them takes more than an hour.
  • Complete updates: we run integration tests to ensure that the CLI is using the correct prices. In the past, there have been cases where cloud vendors tweaked their pricing API data in ways that caused direct downloads to fail. With this method, we check that the pricing data passes our integration tests before publishing it, and everyone automatically gets the entire up-to-date dataset. The aim is to reduce the risk of failed or partial updates.

Over to you#

If you require self-hosting, please follow the self-hosting docs. If you had previously deployed the Cloud Pricing API, you can follow the migration guide.
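For illustration, a Helm-based deployment might look roughly like the sketch below; the chart repository URL, chart name and value key are assumptions from memory, so please treat the self-hosting docs as the source of truth:

helm repo add infracost https://infracost.github.io/helm-charts/   # repo URL is an assumption, check the docs
helm repo update
helm install cloud-pricing-api infracost/cloud-pricing-api \
  --set infracostAPIKey=YOUR_INFRACOST_API_KEY                      # chart name and value key are assumptions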

Thanks for being part of the community! We look forward to hearing your feedback via GitHub issues -- you can also join our community Slack chat.

June 2021 update - HashiCorp partnership, Env0 and Spacelift integrations!

In June we focused on integrations and adding more resource coverage. You can upgrade to the latest version (v0.9.2) to pick up the new features. If you are using v0.8 please follow the v0.9 migration guide.

⚙️ Integrations#

Infracost already has CI/CD integrations with GitHub Actions, GitLab CI, Atlantis, Azure DevOps, CircleCI, Bitbucket Pipelines and Jenkins. This list keeps growing, so please let us know what you'd like to see added next. Infracost can now be used with the following infra-as-code management platforms too:

  • Terraform Cloud: we have partnered with HashiCorp to bring cloud cost estimates into Terraform Cloud's new Run Checks feature (currently in beta). Armon Dadgar, co-founder and CTO of HashiCorp, announced the partnership during HashiConf EU; screenshot below! If your company uses Terraform Cloud and would like to work with us and HashiCorp on this integration, please get in touch.

  • Env0: cloud cost estimates can easily be enabled in Env0. Our CEO Hassan, Env0's CEO Ohad Maislish, and Tim Davis are doing a webinar on 14th July about shifting cloud costs left. Sign up to listen in.

  • Spacelift: cloud cost estimates can also be enabled in Spacelift.

Infracost HashiCorp partnership

⛅ New cloud resources#

Infracost now supports over 200 Terraform resources across AWS, Azure and Google. Over 500 free resources have also been identified; these are not shown in the CLI output since they are free.

We added support for the following cloud resources:

  • AWS: Backup, EFS One Zone, Kinesis Firehose & Data Analytics, Neptune.
  • Google: BigQuery, Load Balancer, VPN tunnel.
  • Azure: Application Gateway, Application Insights, Automation, Cognitive Search, Event Hubs, Kubernetes Load Balancer & HTTP Application Routing, Load Balancer, Redis Cache.

⌨️ Shell completion#

Run infracost completion --help to see how you can generate shell completion scripts for Bash, Zsh, fish and PowerShell. Once enabled, you can use the usual double tab to see autocomplete and help text for commands and flags.
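For example, enabling completion for Bash might look like this; the --shell flag is an assumption, so check infracost completion --help for the exact syntax in your version:

infracost completion --shell bash > ~/.infracost-completion.bash    # generate the Bash completion script
echo 'source ~/.infracost-completion.bash' >> ~/.bashrc             # load it in new shells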

Thanks for being part of the community! We look forward to hearing your feedback via GitHub issues; you can also join our community Slack chat.

Announcement: Azure cloud cost estimates in pull requests

Infracost Azure

Thanks to our awesome community, I'm very excited to announce that you can now use Infracost to get cloud cost estimates for Microsoft Azure in pull requests. Try it now, it's free and open source!


Cloud costs for engineering teams

Cloud costs have become so complex that the industry (ourselves included) started addressing breached budgets only after the bill arrived. This is not the way it should be. It is like going shopping with no idea how much things cost until after your card has been charged.

We are on a mission to empower engineering teams to use cloud infrastructure economically and efficiently. We do this by fitting into the developer workflow via CI/CD integrations: we read the infrastructure-as-code project, pick up the parameters that have a price point, look up the prices for the selected configurations, and leave a comment in the pull request like "This change will increase your cloud costs by 25%" with a detailed breakdown. This way, the whole team is aware of the cost implications of the change.


Today's announcement

Today we are announcing that in addition to AWS and Google Cloud Platform, we have added support for Microsoft Azure. Not only has Azure support been requested by over 60 of our community members (https://github.com/infracost/infracost/issues/64), but we have seen a lot of growth from Microsoft in terms of the number of resources offered and enterprise adoption.

We have added support for over 65 Azure resources (and another 70 resources which are free), with many more planned. Visit our GitHub issues page and put a thumbs up on the resources you'd like covered and we will prioritize them.

But there is one more thing! We have also added support for Microsoft Azure DevOps Pipelines. This is in addition to our current supported CI/CD integrations such as GitHub Actions, GitLab CI, CircleCI, Bitbucket Pipelines, Atlantis and Jenkins.

Infracost Azure

Get started! We have made it super simple to get up and running:

1. brew install infracost # (docker, windows etc options available)
2. infracost register
3. az login # To set cloud creds, see note-1
4. infracost breakdown --path . # Run in your terraform directory. We also have an example Azure terraform file you can use to try it out.

For full details, see our Getting Started guide. From there, you can setup Azure DevOps Pipelines for the CI/CD integration.

note-1: Infracost does not need or access your cloud credentials; however, Terraform needs them to create the plan file.

How Eagle.io achieves cloud cost attribution for their multi-tenant SaaS

I sat down with David Julia, who is the Head of Engineering at Eagle.io to talk about cost attribution, why it matters, who should care and how Eagle.io achieves this. We worked through their use-case, their tech stack, the tools they use, what worked and what did not work, and ultimately how they have achieved cloud cost attribution.

The following is a lightly edited transcript (from YouTube) of the introduction section of our chat:

Ali: Hey everyone, I have David Julia here who is the head of engineering at Eagle.io. David thank you very much for taking time to speak with me today. Would you like to introduce yourself?

David: Yeah absolutely, so David Julia, head of engineering at Eagle.io. We do environmental IoT, so we're an environmental IoT data platform essentially; you connect up all sorts of devices to us and we're used in water monitoring, natural resource monitoring, various other things. We are used to monitor the cracks in Mount Rushmore for example, how Mount Rushmore is splitting and how that's going and whether they need to remediate anything, so you know, a broad variety of use cases, all these data loggers and sensors connected to the platform with analytics and processing logic on that. A fun gig for me. I just started that a couple of months ago; before that I was in software engineering consulting for a long time, about nine years at Pivotal Labs and then VMware. A fun little project that we're working on now, and I had the chance to jump into some cost attribution stuff as part of it, hence the conversation today.

Ali: Which cloud providers are you using and what is the setup?

David: We're a multi-tenant SaaS platform so the whole value proposition of Eagle.io is unlike AWS IoT Core or Azure IoT, we think of those as primitive building blocks which if you want to build your own solution cool go nuts with them but we're much higher up the stack than that. Essentially for engineering firms or big municipal water companies, state water companies, mining companies that they don't really want to mess around with all this code and IoT Core and all of this different stuff that's out there, which is really cool admittedly, but not core to their business so they just hook up to us and very quickly they're able to get data in analyze it alert on it. If you're monitoring a dam for example you want to know if a dam wall is failing you don't really care about configuring microservices or anything like that. So we're a multi-tenant SaaS application lots and lots of different users but a lot of shared resources as well. So that's kind of the layout. We're mostly AWS.

Ali: Can you tell us then in this context, what do you mean by cost attribution and why do cloud costs matter as part of that?

David: So why cost attribution, well it's pretty fundamental especially when you're building a data intensive application like ours, but also in other circumstances you really just want to know 1) can I guarantee that I'm going to make good gross margin, and 2) if I add more customers am I going to be adding more profit or not is it going to be impacting my gross margin positively or negatively and if that's a very big question mark that's something to be nervous about and that's actually the situation we found ourselves in. We were negotiating with a particular enterprise customer who wanted to essentially more than 10x their usage of the platform and so they said well you got to give us a discount, for this to be financially feasible for us to do this and we want to do this deal you guys but we need to make it in the realm of dollars and that we can actually do. We said well we should be able to do this but our current pricing model doesn't scale for them or maybe it's just the numbers but if we give them too deep of a discount are we going to lose money on the thing like we don't know like what are our costs what's their usage what rights like what what's even driving our costs. So that's when we really sat down and said okay we need to figure this out.

Watch the whole interview on Infracost's YouTube channel. Subtitles have also been enabled.

If you are interested in working in environmental IoT, reach out to David on Twitter!

GitHub stars matter! Here is why

As Infracost has hit 3,000 GitHub stars 🎉, I wanted to share some thoughts as to why GitHub stars matter.

Why do people star repos?#

There are two main reasons why people star GitHub projects:

  1. Bookmarks: some people star GitHub repos to bookmark them for later use. For example I can see the repos I've starred [1] and search within them for a keyword, or sort them by how recently I starred them, or how active the project has been recently.

  2. Show support or appreciation: others star repos to show support or appreciation, similar to how "likes" are used in social media sites. This is a social signal, and it's very important in the very early stages of open source projects, acting as a feedback loop for project creators. Knowing that other people have seen the project and cared enough to click on the Star button can create motivation for the creators to continue working on the project initially.

The latter is why I personally star projects. Regardless of whether I've used the project in the past, using it just now, plan to use it, or think it's a cool idea, I want the project creator to know that I like what they're doing. Terraform and Pulumi are projects that I recently starred to show support.

Benefits of repo stars#

The main benefit of repo stars is creating confidence and a good first impression of the project. That in turn helps with the project getting users, and to a lesser extent contributors.

A 2018 academic research survey of over 700 developers found that "three out of four developers consider the number of stars before using or contributing to GitHub projects" [2]. GitHub stars are not the only metric that matters though. A project's activity level, for example its last release or commit, and its ease of use, for example the quality of its documentation, are also important factors in helping projects get users.

I say "to a lesser extent" as contributing, by creating a GitHub issue or submitting a pull request, requires significantly more effort than starring a repo. People who only star a repo are probably not yet active community members, but they might become active in the future. This is why the Orbit Model classifies them as Observers [3], as they can act as the top-of-funnel for growing users and contributors.

In addition to helping projects get users, GitHub stars can help the project creators meet investors who are familiar with open source. Early on in Infracost's journey, we were surprised to get cold emails from VCs congratulating us on our star count. After speaking with a few, it became clear that they either had systems in place to monitor stars [4], or had analysts who reviewed Trending Repos on GitHub for potential investment opportunities [5]. Some have gone even further. For example, the VC firm Runa Capital, which invested in Nginx and MariaDB, has started to track the fastest growing open source startups using GitHub stars and forks. Infracost was recently placed 5th on the ROSS Index [6].

Infracost GitHub stars

Future of GitHub stars#

A16Z's Martin Casado thinks that there is a big trend towards bottom-up strategies in business-to-business (B2B) software that will shape the entire B2B landscape in the next 10 years [7]. I wonder if in the same way that social media influencers are changing how products are marketed and sold, GitHub influencers (someone with many GitHub followers) will change how enterprise software is marketed and sold? Developer Advocates are currently using Twitter and LinkedIn, but GitHub has a "follow" and a "status update" feature too. Will those remain as a simple way to get updates on code-related activities? Or could they be extended to enable GitHub influencers to post their demos, talks and blogs into the GitHub activity feed? Will companies be able to buy ads on GitHub and promote their open source projects?

Over to you - what have you learnt about GitHub stars, and how do you think they'll change in the future? I hang out on Twitter...


  1. https://github.com/alikhajeh1?tab=stars, this is a public page, so you can see the repos that any GitHub user has starred.

  2. H. Borges and M. Tulio Valente, "What's in a GitHub Star? Understanding Repository Starring Practices in a Social Coding Platform," Journal of Systems and Software, vol. 146, pp. 112–129, 2018.

  3. The Orbit Model is implemented via the Orbit product, which can be used to measure and grow open source communities.

  4. Openbase helps developers choose the right JavaScript package with more languages coming soon. See the React page to get an idea of the kinds of metrics they collect.

  5. https://github.com/trending, Infracost has hit the Go trending page a few times.

  6. https://runacap.com/ross-index/, Infracost was placed 5th in the fastest-growing open-source startups in Q4 2020.

  7. Growth, Sales, and a New Era of B2B

April 2021 update - EC2 reserved instances and Jenkins integration!

Two big milestones to celebrate this month: Infracost now supports over 100 AWS and Google resources and we have over 100 people in our community Slack channel.

You can upgrade to the latest version (v0.8.6) to pick up the new features. If you are using v0.7 (or older) please follow the v0.8 migration guide.

📉 EC2 reserved instances#

You can now do what-if analysis on AWS EC2 Reserved Instances (RIs), as we have added support for them in the Infracost usage file. The RI type, term and payment option can be defined as shown below to quickly get a monthly cost estimate. This works with aws_instance as well as aws_eks_node_group and aws_autoscaling_group, as they also create EC2 instances. Let us know how you'd like Infracost to show the upfront costs by creating a GitHub issue.

aws_instance.my_instance:
  operating_system: linux # Override the operating system of the instance, can be: linux, windows, suse, rhel.
  reserved_instance_type: standard # Offering class for Reserved Instances. Can be: convertible, standard.
  reserved_instance_term: 1_year # Term for Reserved Instances. Can be: 1_year, 3_year.
  reserved_instance_payment_option: all_upfront # Payment option for Reserved Instances. Can be: no_upfront, partial_upfront, all_upfront.

♾️ Jenkins integration#

Our new Jenkins integration enables you to save an HTML page for each pipeline build, which shows the Infracost diff output. Check out this demo that uses Jenkins' Docker agent to run Infracost; the Jenkinsfile can be customized based on your requirements. The integration can also be used to fail a build if its cost estimate crosses a percentage threshold. This safety net is often used to ensure no one breaks the bank 😃

Infracost Jenkins integration

⚙️ Customize output columns#

The infracost breakdown and infracost output commands show the monthly quantity, units, and monthly cost of resources by default. You can now use the new --fields flag to customize the columns shown in the table output, for example to include price and hourly cost, or to show only the monthly cost if you prefer a simplified view (shown below). The HTML output format is being updated to support the same feature. The JSON output format always includes all fields.

Name                                                    Monthly Cost
aws_instance.web_app
├─ Instance usage (Linux/UNIX, on-demand, m5.4xlarge)        $560.64
├─ root_block_device
│  └─ Storage (general purpose SSD, gp2)                       $5.00
└─ ebs_block_device[0]
   ├─ Storage (provisioned IOPS SSD, io1)                    $125.00
   └─ Provisioned IOPS                                        $52.00

Array wildcards in usage file#

The Infracost usage file enables you to define resource usage estimates using their resource path, e.g. storage for aws_dynamodb_table.my_table. This can be cumbersome for resource arrays, such as AWS CloudWatch Log Groups, since you'd have to define the array items individually.

We've addressed this issue by supporting the wildcard character [*] for resource arrays. Infracost will apply the usage values individually to each element of the array (they all get the same values). If an array element (e.g. this[0]) and [*] are specified for a resource, only the array element's usage will be applied to that resource. This enables you to define default values using [*] and override specific elements using their index.

aws_cloudwatch_log_group.my_group[0]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[1]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[3]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
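With the wildcard, the same defaults can be written once. Based on the behaviour described above, an individually listed element (such as [3] in this sketch, with illustrative values) would use only its own values rather than the [*] defaults:

aws_cloudwatch_log_group.my_group[*]:
  storage_gb: 1000
  monthly_data_ingested_gb: 1000
  monthly_data_scanned_gb: 200
aws_cloudwatch_log_group.my_group[3]:
  storage_gb: 5000              # only these values apply to element 3; the [*] defaults are not merged in
  monthly_data_ingested_gb: 2000
  monthly_data_scanned_gb: 400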

⛅ New cloud resources#

We added support for the following cloud resources:

  • AWS: Redshift. The CPU-credit usage file params were improved for T2, T3 & T4 instances.
  • Google: Cloud SQL and Container Registry.

Thanks for being part of the community! We always look forward to your feedback, so please create GitHub issues here. We read every single one.

Infracost docs review with leaders from Stripe and Uber

Last week David Nunez (Documentation Manager at Stripe) and Stephanie Blotner (Technical Writer, Manager at Uber) sat down with me to review our docs. We also discussed feature-based vs task-based docs and how early-stage startups should think about technical documentation as a key part of their product.

David and Stephanie have around 20 years of experience between them, with a special focus on writing for a developer audience. I learned a lot from this session; I'm sharing the recording and transcript here as I think it contains great advice that applies to other startups too.

The following is a lightly edited transcript of our meeting:

Ali: So today I've got David from Stripe and Stephanie from Uber. Maybe we can do some quick intros, then we're going to have a look at the Infracost docs, and I've got some questions I'm going to ask them. We'll focus on how early-stage startups should look at their docs as a function. David and Stephanie, thank you for taking time to meet with me!

Stephanie: Hi! I'm Stephanie Blotner, Technical Writer and Manager at Uber. I mainly write for a developer audience and just really love helping people learn through documentation.

David: I manage the docs team at Stripe and similarly just have always obsessed over documentation. I mean it's 8am, we're talking about docs, and I'm excited about it! My favorite thing is talking to startups about docs as a) they'll avoid a lot of problems later on if they think about docs early and b) also it's fresh clay to mold. It's a lot easier than helping a big company undo years and years of decisions.

Ali: I'm going to share my screen [on the docs homepage], and maybe I'll just let you comment on it.

David: I like docs to be approachable, even before you read or start to understand what the thing is, users that are coming to the docs still have the same subconscious tendencies as they did on the website. How pretty is it, what's the layout, is it simple, so I think the things that I like: the colors are nice on the eye, it has the branding, it's not like here's some auto-generated docs that look like another terminal. There's some good negative space. Layout wise you can do a better job of directing people to the main pieces, almost like your main site, where you show what the product does in action, there's some space, so the docs should play a similar complementary role. More of a docs home page than a getting started page. Show the top 3 things that you do really well, above the fold, and also a 1-line description. Not trying to do too much upfront like install it, try it, pay us... just getting them to scroll down and browse it is better.

Stephanie: Definitely agree that having a home-page is important, so focusing on one or two key paths that you want to funnel users through, for example highlighting the different CI/CD integrations. When it comes to the actual getting started page, I really like how concise and clean most of your documentation actually is, and that comes from you using headings, tables and lists, and not just a wall of text. In the installation section I really like having headings that are each like a task that the user should do. Moving onto the Usage section, I think you can follow the same model, where it's more prescriptive and tells people what to do. When I read this for the first time, it wasn't clear that this was the part telling me how to use Infracost with my project. I wish it was a little bit more bossy, and told me do this, do this, so as a user it makes me feel more grounded and clearer about what's expected of me.

Ali: So what would success look like when you've reached the bottom of the page. If you're observing a few people looking at your docs, how do you know it's working? Would you want them to explore? Focus on one specific task so they get some value immediately?

David: To take a step back, look at the GitLab docs, I watched your video with Sid so I thought hah maybe GitLab has good docs, and they do! So if you scroll down, this covers the use-case and layout perfectly, you might not have 6 use-cases, you might have 1 or 3, but here they're showing some graphics, a 1-line use-case description, so users can start to map-out if this product is for them, so they're drawing you, and they're not doing anything with this page. So to answer your question of what does success look like, that is a question for your company to answer. What do you want users to do? Is it really important for them to install it, or do you want them to do their first PR and see the product in action? Coz really you want the getting started page to focus on brand new users, expert users will use other parts of the docs.

Ali: So during demos, people smile when they see the pull request comment. And so far I've done the CTA very poorly.

David: yeah it's buried. Have you heard of any eye-tracking studies with docs? Like we're nerding out now but me and Stephanie talk about this a lot. There's a lot of studies that show people read docs using an F pattern, so they'll read the heading across then scan down and they'll see something that catches their eyes then they'll read again, then they'll keep scanning and scanning. So if you can imagine if this is what you want the hook to be (CTA to use CI/CD) then it's very buried. Users might not even scroll, so you want that to be very early, and maybe re-use the same graphic from your home page so users notice it and are excited to go install it. What is the demo link on the CI/CD integrations? The word Demo is eye candy for developers. That would be what I'd highlight too.

Stephanie: I think the point about CI/CD being buried a bit, raises an interesting point: when someone comes to a document, and it's long or requires them to scroll a bit, it's really helpful to put upfront and set expectations, like by the end of reading this page you will be able to do have you own pull request that looks like this then have that cool screenshot or a link to the demo.

David: So you asked earlier, what does success look like? Constant user research is the only way to measure that. Have your high-level goals, like MAUs (Monthly Active Users) that you pitch to investors, then do user interviews regularly. At Stripe we do that weekly, all types of users, and it doesn't really need to be formal. Like offer a $10 gift card to new users. Every time that you make a change, or have a regular weekly or monthly time you set aside for that. The time they'll save you in development time pays back exponentially. That's how you can shape it over time and see if this page is working. So I would get a fresh user, ask them to open this page and see what they think, ask them to be honest, and let them talk through those first impressions. Then give them a task, so you can see what they do. It's easy to develop docs in a vacuum, where these are my tools, my environment and my terminal, but users might have different setups.

Stephanie: The docs are actually part of the product. That's a key point.

David: It's often easier to do that vs product research, as it's just a document. And you can do it with sample docs, without even having a built product or working code. So docs are a great way to test upcoming things, they can simulate what they would do, so you can take that feedback and incorporate it into the product. We do that all the time and it helps us a lot.

Stephanie: It's also a great way to test out eye patterns, so if you share a link with someone and are watching their screen, you can immediately tell how they scroll, do they go all the way to the bottom, are they trying to click on stuff, are they lingering.

Ali: So far we have product docs, we don't have guides or use-cases. How do you think about that? Should we have both and split the focus?

Stephanie: It's really challenging when you're launching docs for the first time, to know how to choose and what to write first. It's very cognitively intuitive to put up feature docs, and I'm super-biased here but I think task-based docs are the way to go, to really guide users towards what they're trying to accomplish. Users are coming to your docs because they're trying to learn something, or they're trying to accomplish a task or goal, or they're really frustrated, something is broken and it's not working. The more you can guide them, the more you can reduce the cognitive overload for them, the better off you are. But it's definitely more work on the creator side as you really have to understand who your users are and what they're trying to do.

David: I'm going to use stronger language and say that's the only way to do it properly, but it's also by far the hardest way to do it. A simpler way to do it is to just get your language and terminology mapped to how users are talking about it; it's really easy to go with awesome names you came up with for your features, but users are really searching Google on how to save money with AWS. They're not like searching "use Infracost 2.0 cloud-saver". Write down 3 use-cases that you want Infracost to show up on Google for, and put that on your pages front and center. Then you can map out the rest of the docs. Information-architecture wise, you should always think about their problems and the tasks they'd need to do to solve them, and work backwards from that.

Ali: So for this next part, I thought it would be cool if we could zone out a bit and ignore Infracost docs, to talk about more generic questions. To begin, when do you think seed-stage startups should start to think about docs, and maybe more importantly what is important for them to consider?

Stephanie: I think you should consider docs as part of the product. The same care and considerations and thoughts you put into the product should go into docs. Docs are an incredible tool to improve adoption, reduce onboarding and support time, brand reputation, overall wonderful-ness. Don't worry too much about being comprehensive, and trying to document all the things as you haven't figured out everything. Maybe use-cases, what this thing is, maybe a little about troubleshooting. People prefer less content actually, and gather feedback really early on docs.

David: Agree, it's part of the product, you don't want to say do this first or do it last. If you think of docs as a way to increase adoption, or reduce the time it takes for someone to start using your product vs a sales person reaching out to them and them trialing it over a period of time, you're making a lot more money efficiently. It's better when a user feels smart, and they can figure out your product and use it without any human intervention. The costs (user acquisition) and the user experience improves, so what else do you need?

Ali: In open source, the contributors can be from a variety of experiences and from all over the world; should the core contributors own the docs?

Stephanie: Have the core team set up the docs, have principles for how you'd want docs to be, and review contributions. One way I've seen it done really well is similar to how Infracost leaves a comment on PRs: you can get people to read a checklist, and have docs as one of the core things as they're making a pull request.

Ali: When you're submitting a PR, it's accepted that it obviously needs tests. Do you feel like it's the same with docs? Does it create a barrier? Coz in the early stages of an open source project, you kinda want everyone to come check it out and contribute, so the discussions we've had with Alistair are sometimes like, let's just get them to contribute code and we'll write all the docs. So we can reduce the barriers, but I don't think that scales.

David: that definitely won't scale.

Stephanie: You want the core team to setup the docs framework and have standards for docs, so if you have tasks or use-cases, people can see how to document it. And that way you're reducing the barrier, so people can just follow it to plug things in.

David: I think that's the right way to think about it. So you don't want a high barrier, but do want to document yourself out of those tasks. Find a light-weight style guide, like the Microsoft Writing Style Guide is great, so use something like that and add a few things that are specific to Infracost. You don't want a ton of stuff, and also PR guidelines: what does a good PR review look like, show by example, so have examples where you are giving people feedback on the docs and put that in the run book for a good PR review. So the balance is you want lots of contributions, but also you want quality contributions. So show the community how excited you are about good docs via contributions, or have a leader board or some way to call out good docs contributions. People don't usually contribute to open source docs as they think no one really cares. I've also found that in engineering teams, putting an incentive in the engineering career ladder helps, like a simple thing to say as part of being a senior engineer you can document your code and review other docs as part of the peer-review process. Setting the example from the founding team is important. I've been at companies where the executives don't even put grammar into their emails, and that sends a signal that I'm too busy to capitalize my words, use periods and commas, and so why would anyone else obsess over the quality of their writing when the leadership doesn't. If you signal without being overbearing, people will see that and follow along.

Ali: That's awesome! In your recent separate blog posts I was reading about the importance of having a writing culture, and I was going to ask a question about when startup founders should start thinking about that as part of the bigger company culture; but you've already answered that question. It's like: you set the tone as leaders of the company and you can do a lot early-on to encourage and make it part of the process vs something that's not really thought of from day 1. I don't have any other questions! Thank you very much - I don't know if you have any other tips or any other final thing I should go look at or think about?

David: I'd say resource-wise, Nielsen Norman Group is great. If you want to learn about F-patterns or information architecture, all the nitty gritty, their search is pretty good; you'll find stuff dating back 20 or 25 years, up till recently when they have videos. So I would say when you want to dive deep into something, that resource is great. My last tip would just be: if you set the goal of having high quality content and you know it's important, just know that you're not going to get there on day 1, just take little steps so you move in that direction, creating that habit and moving towards that goal. You'll then sometimes have huge breakthroughs and make big leaps as long as you constantly keep working on it.

Stephanie: Exactly, perfect is the enemy of good, so don't put too much pressure on people, especially if it's their first time contributing to your code base and documentation - you don't want to be too hard on them.

Ali: That's awesome! Thank you both very much, David and Stephanie for your time. I'm going to put a lot of the stuff you said into practice and hopefully in 6 months time when you look at the Infracost docs, it'll be a lot better.