Snowflake has quickly become a favorite data warehousing platform, and for good reason. That popularity leads many teams to jump in, only to realize later that their Snowflake spend is climbing faster than expected.

However impressive the platform is from a tech standpoint, managing Snowflake costs often becomes a sticking point. That’s where Snowflake cost optimization comes in.

In this guide, we’ll explore how Snowflake pricing works and how you can stay in control without giving up performance or flexibility.

What is Snowflake cost optimization?

Snowflake cost optimization is the practice of understanding how your storage, compute, and cloud services usage turns into charges, and then tuning that usage so you only pay for work that actually delivers value. The first step is knowing what you’re billed for.

Snowflake cost components

Snowflake’s architecture has 3 layers: 

  1. Storage
  2. Compute (which Snowflake calls virtual warehouses)
  3. Cloud services

These layers are billed separately, and Snowflake pricing is usage-based, meaning you get charged for what you actually use.

Storage

Every file, table, and backup adds up in Snowflake. Snowflake charges a monthly fee based on the average amount of storage used over the month, and the data is stored in compressed format. Depending on the kind of data you’re working with, say, a bunch of raw CSVs versus more compact file types, compression can significantly lower your Snowflake storage costs.
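
To see where that storage is going, Snowflake’s ACCOUNT_USAGE views can help. Here’s a rough sketch that averages compressed storage per database over the past month (the date range and rounding are just illustrative):

SELECT
    database_name,
    ROUND(AVG(average_database_bytes) / POWER(1024, 4), 3) AS avg_tb,          -- average compressed storage
    ROUND(AVG(average_failsafe_bytes) / POWER(1024, 4), 3) AS avg_failsafe_tb  -- Fail-safe storage
FROM snowflake.account_usage.database_storage_usage_history
WHERE usage_date >= DATEADD(month, -1, CURRENT_DATE())
GROUP BY database_name
ORDER BY avg_tb DESC;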

Compute layer

Virtual warehouses are basically compute clusters that run your queries and handle your data loads. They scale independently, and you can have more than one running at a time. The key thing to know here: compute is billed in Snowflake credits. Spin up a warehouse and you start spending credits; pause it and it stops costing you. Sounds fair, but it also means you’ve got to stay on top of usage.
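
One way to stay on top of it is to create warehouses that suspend themselves when idle. A minimal sketch (the warehouse name is hypothetical):

CREATE WAREHOUSE IF NOT EXISTS reporting_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60          -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE         -- wake up automatically when the next query arrives
  INITIALLY_SUSPENDED = TRUE;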

Cloud services layer

This layer handles all the coordination across the platform, such as authentication, metadata management, and query optimization. It also runs on Snowflake credits, but the cost here is usually a smaller percentage compared to compute. Still, it adds up if you’ve got a lot going on, especially with serverless features in play.

Speaking of credits, let’s clarify this: a Snowflake credit is a unit that measures usage. 

One credit = one unit of usage. Simple. How quickly you burn them depends mostly on warehouse size and how long it runs. You’re charged credits whenever you’re running a virtual warehouse, leveraging cloud services, or tapping into Snowflake’s serverless features.
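
If you want to see where your credits are actually going, the ACCOUNT_USAGE metering views break it down. A rough sketch for the last 30 days:

SELECT
    service_type,                      -- e.g. warehouse compute, serverless features, cloud services
    SUM(credits_used) AS total_credits
FROM snowflake.account_usage.metering_daily_history
WHERE usage_date >= DATEADD(day, -30, CURRENT_DATE())
GROUP BY service_type
ORDER BY total_credits DESC;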

One more component Snowflake charges for is data transfer between cloud regions or providers. This applies if you’re using features like external tables or exporting data from Snowflake to a data lake.

These Snowflake cost components vary depending on whether you’re on Amazon Web Services (AWS), Microsoft Azure, or Google Cloud (GCP), and the pricing structure for that is a bit more granular. 

Look at the tables below for Snowflake data transfer charges for AWS, Azure, and GCP:

AWS pricing guide
[AWS pricing guide: Snowflake data transfer charges]
Azure pricing guide
[Azure pricing guide: Snowflake data transfer charges]
GCP pricing guide
[GCP pricing guide: Snowflake data transfer charges]
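
To see how much data is actually leaving your account, you can query the transfer history. A rough sketch over the last 30 days:

SELECT
    target_cloud,
    target_region,
    transfer_type,
    ROUND(SUM(bytes_transferred) / POWER(1024, 3), 2) AS gb_transferred
FROM snowflake.account_usage.data_transfer_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY target_cloud, target_region, transfer_type
ORDER BY gb_transferred DESC;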

An example of how Snowflake calculates cost

Note: This example is courtesy of Snowflake.

Suppose we have a customer using Snowflake Capacity Standard Service with Premier Support in the U.S. 

They do 3 main things:

  1. Load data nightly using a small virtual warehouse.
  2. Support 8 users working 10 hours a day, 5 days a week, using a medium virtual warehouse.
  3. Store 4 TB of compressed data on Snowflake.

Data loading costs

Warehouse used: Small Standard Virtual Warehouse
Rate: 2 credits per hour
Usage: 2.5 hours daily for 31 days/month
Monthly credits: 2 credits/hour × 2.5 hours/day × 31 days = 155 credits/month

User activity costs

Users: 8 users
Warehouse used: Medium Standard Virtual Warehouse
Rate: 4 credits per hour
Usage: 10 hours/day, 20 workdays/month
Monthly credits for users: 4 credits/hour × 10 hours/day × 20 days = 800 credits/month
Total monthly credits (users + loading): 800 + 155 = 955 credits/month

Storage costs

Data stored: 4 TB compressed
Rate: $23 per TB/month
Annual storage cost: 4 TB × $23 × 12 months = $1,104/year

Virtual warehouse cost

Credits used per year: 955 credits/month × 12 = 11,460 credits/year
Rate per credit: $2 (with a 5% discount, × 0.95)
Annual compute cost: 11,460 × $2 × 0.95 = $21,774/year

Total annual cost

Storage: $1,104
Virtual warehouse: $21,774
Grand total: $22,878 per year
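
To sanity-check an estimate like this against what your account is actually burning, you can total credits per warehouse. A rough sketch for the trailing 12 months:

SELECT
    warehouse_name,
    SUM(credits_used) AS credits_last_year
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD(year, -1, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_last_year DESC;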

8 best practices and techniques for optimizing your Snowflake spend and reducing costs

1. Disable automatic clustering on tables that are barely touched

While automatic clustering can improve query performance, it runs on serverless compute. This means it racks up Snowflake credits whether anyone’s actually using the table or not. 

If the table is only getting hit a few times a week, that background compute activity is just silently chipping away at your budget. This is where smart Snowflake cost optimization begins. 

Look for tables with automatic clustering enabled that barely get queried, say, fewer than 100 times per week. Ask yourself if these tables are part of a disaster recovery setup or being shared with another account. If not, it’s probably safe to hit pause.
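
To find the candidates, check how many credits automatic clustering has burned per table recently. A rough sketch for the last 30 days; cross-check the top entries against how often those tables are actually queried:

SELECT
    database_name,
    schema_name,
    table_name,
    SUM(credits_used) AS clustering_credits
FROM snowflake.account_usage.automatic_clustering_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY database_name, schema_name, table_name
ORDER BY clustering_credits DESC;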

To suspend automatic clustering, run:

ALTER TABLE your_table_name SUSPEND RECLUSTER;

This one step alone can help reduce Snowflake costs tied to unnecessary background compute.

2. Drop materialized views that don’t pull their weight

Materialized views store precomputed results so queries can run faster. 

But at the same time, they come with both storage and serverless compute costs to keep everything up to date. So if a materialized view is only being queried, say, ten times a week? You’re paying for upkeep that’s barely getting used.
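
A quick way to put a number on that upkeep is to total the serverless credits spent refreshing each materialized view, using the ACCOUNT_USAGE refresh history view. A rough sketch for the last 30 days:

SELECT
    database_name,
    schema_name,
    table_name AS materialized_view_name,
    SUM(credits_used) AS refresh_credits
FROM snowflake.account_usage.materialized_view_refresh_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY database_name, schema_name, table_name
ORDER BY refresh_credits DESC;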

It’s a solid move to suspend or remove any materialized views that aren’t actively helping performance. This falls right into the category of low-effort, high-impact Snowflake cost optimization. But again, double-check whether the view exists for data sharing or backup purposes before you go on a deletion spree.

To drop a materialized view, run:

DROP MATERIALIZED VIEW your_view_name;

3. Remove unused search optimization paths

Search optimization can speed up point lookups and analytical queries, but just like everything else in Snowflake, that speed boost doesn’t come free. 

These access paths require extra storage and compute resources to stay in sync with your data.

If Snowflake tells you that a particular search optimization path is being used fewer than ten times a week, it might be time to rethink things. Especially if you’re trying to reduce Snowflake costs without compromising your actual workloads.

You can remove search optimization with a simple command:

ALTER TABLE your_table_name DROP SEARCH OPTIMIZATION;

4. Clean out large tables that haven’t been touched in a week

Massive tables that sit there eating up storage without being queried at all in the past week don’t just inflate your Snowflake storage costs. They also clutter up your environment and slow down everything from data discovery to data warehouse optimization and management.
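
To spot the worst offenders, you can rank tables by how much active, Time Travel, and Fail-safe storage they’re holding. A rough sketch:

SELECT
    table_catalog,
    table_schema,
    table_name,
    ROUND(active_bytes / POWER(1024, 3), 2) AS active_gb,
    ROUND(time_travel_bytes / POWER(1024, 3), 2) AS time_travel_gb,
    ROUND(failsafe_bytes / POWER(1024, 3), 2) AS failsafe_gb
FROM snowflake.account_usage.table_storage_metrics
WHERE deleted = FALSE
ORDER BY active_bytes DESC
LIMIT 20;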

If a table isn’t serving any purpose (besides reminding you of a project from six months ago), drop it. But again, always check if it’s being used for recovery or data sharing before swinging the axe.

To delete a table, run:

DROP TABLE your_table_name;

5. Use transient or temporary tables for short-lived data

Have you ever created a permanent table just to delete it 12 hours later? 

When you’re dealing with short-lived data, using a permanent table doesn’t make sense. 

Snowflake charges extra for things like Time Travel and Fail-safe on permanent tables even if they don’t stick around long enough to need it.

Instead, go with a transient or temporary table. 

These are lighter-weight options that skip the fancy durability features and save you money in the process. 

It’s one of the simplest cost optimization techniques to implement, and it can have a real impact, especially in workflows where data turnover is high.

To create a transient table, use:

CREATE TRANSIENT TABLE your_table_name (…);

6. Allow multi-cluster warehouses to scale down

Multi-cluster warehouses can be incredibly powerful, especially when you’ve got a high volume of concurrent queries. 

But if you’ve locked the cluster count at a fixed number, say, 3 minimum and 3 maximum, you’re forcing Snowflake to keep all clusters running at all times, even when demand doesn’t justify it.

That’s wasted compute and wasted credits. Snowflake is built to scale, so let it. 

Lower the minimum cluster count so the warehouse can scale down during slower periods. It won’t affect performance during peak hours, but it’ll quietly reduce credit consumption when traffic drops.

To adjust the scaling behavior, run:

ALTER WAREHOUSE your_warehouse_name SET MIN_CLUSTER_COUNT = 1;

7. Reduce transaction lock wait times with batch updates

A sneaky Snowflake cost drain is when queries get blocked by transaction locks. 

This happens when multiple users run updates or merges on the same table at the same time. Each command locks the table, and while other queries are waiting, they’re still racking up cloud services credits. So even though nothing’s happening, you’re paying for the wait.

To avoid this, change how your updates work. Use batch inserts into temporary tables instead of single-row updates. Then run periodic merges from the temp table to the main one. This cuts down on locks and lets Snowflake handle things more efficiently.

For workflows that receive a steady stream of new data, consider using a scheduled task to handle updates at intervals, say, every 15 minutes, instead of processing every change as it comes in.
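
Here’s a minimal sketch of that pattern (the table, task, and column names are hypothetical): changes land in a staging table, and a scheduled task merges them into the main table every 15 minutes.

CREATE TRANSIENT TABLE IF NOT EXISTS orders_staging LIKE orders;

CREATE TASK IF NOT EXISTS merge_orders_task
  WAREHOUSE = etl_wh
  SCHEDULE = '15 MINUTE'
AS
  MERGE INTO orders AS t
  USING orders_staging AS s
    ON t.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET t.status = s.status
  WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (s.order_id, s.status);

ALTER TASK merge_orders_task RESUME;

Your loading job (or a follow-up step) can truncate orders_staging after each merge so the same rows aren’t processed twice.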

It’s a small shift, but it adds up fast. And it’s one of those Snowflake optimization techniques that improves both performance and billing.

8. Reduce the frequency and scope of cloning operations

Cloning in Snowflake saves a ton of resources compared to full copies. 

But if you’re cloning entire databases or schemas over and over again, that metadata usage starts to pile up. 

And since cloning relies on cloud services, doing it frequently means your costs quietly creep up.

So instead of cloning full environments, clone only what you actually need, maybe just a single table instead of an entire schema. 
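
For example, instead of cloning a whole database for a dev environment, clone just the one table a developer needs (names here are hypothetical):

CREATE TABLE dev_db.analytics.orders_clone
  CLONE prod_db.analytics.orders;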

Also, take a hard look at how often your teams are running these clones. If it’s part of an automated process, make sure it’s not firing more often than it needs to.

What is a KPI in Snowflake?

For Snowflake cost optimization, KPIs are your best friend. 

Snowflake exposes a bunch of performance and usage metrics that, when tracked together, paint a full picture. A KPI here is basically anything with a noticeable impact on credit usage, query speed, or system efficiency: think credits consumed per warehouse, query duration, and storage growth.

Snowflake performance index (SPI)

Snowflake Performance Index (SPI) is a macro-level view of how much performance has improved over time across typical customer workloads. 

It tracks millions of jobs every month to give a reliable baseline for measuring how well Snowflake is optimizing things under the hood. 

This tracking gradually surfaces improvements in areas like query execution, data ingestion speed, replication efficiency, and more.

The best part is that many of these performance gains happen automatically, so you benefit without needing to change your code or reconfigure anything.

How to pick a Snowflake FinOps and cost optimization tool

There are a lot of moving parts in Snowflake, and the right tool should help you control the chaos, not add to it.

Here’s how to make the right call:

Define your objectives

Before you even look at tool comparisons, figure out what you’re trying to solve. 

Do you need detailed cost visibility, like knowing exactly who or what is burning through credits, or are you looking for smart, automated recommendations that can flag optimization opportunities without you digging through logs for hours? 

Clarifying your goals will keep you from chasing features you don’t actually need.

Decide how much automation you want

Some tools give you suggestions. Others take action like auto-suspending idle warehouses, resizing compute, or flagging inefficient queries in real time. 

Ask yourself: are you comfortable letting the tool make changes, or do you prefer having the final say? The answer will help you zero in on a solution that fits your workflow.

Make sure the tool supports granular cost allocation

You want to be able to break down costs by user, warehouse, role, or even by specific workloads.

The more detail you have, the easier it is to hold teams accountable and find areas where spend can be trimmed.

Check for query performance tuning support

Look for tools that not only track slow or costly queries but also help you understand why they’re inefficient, whether it’s due to joins, filters, or warehouse sizing.

Prioritize customizable dashboards and reporting

Every stakeholder needs different data. Finance might want a monthly credit burn summary, while engineering needs real-time visibility into warehouse spikes.

The tool should let you build dashboards and reports that speak to your team’s needs without having to export everything to spreadsheets every week.

Evaluate ease of integration and scalability

Last but not least, think about how well the tool fits into your current stack. 

Does it integrate smoothly with your Snowflake environment? Can it handle your current workload and scale as you grow?

Some tools might look great for small setups but fall apart once things get complex.

Final thoughts

At the end of the day, Snowflake cost optimization comes down to visibility.

The more insight you have into your Snowflake usage, the better decisions you can make to keep costs in check.

That’s exactly where Ternary can help. 

Ternary gives your team the insights they need to manage Snowflake spend with confidence. 

FAQ

How much do Snowflake credits cost?

The cost of Snowflake credits depends on your chosen cloud provider, region, and pricing tier. Credits are consumed when using compute, cloud services, or serverless features.

What is the biggest contributor to Snowflake costs?

Compute is usually the biggest cost driver in Snowflake. Virtual warehouses are billed per second while they run (with a 60-second minimum each time they resume), and costs vary with warehouse size and workload.

Does Snowflake charge for storing old data?

Yes, Snowflake charges for storage based on the average compressed data stored per month. Keeping outdated or unused data increases your storage costs.