Your Snowflake Warehouses Are Probably Too Big

Nobody sets out to waste money on Snowflake. It happens quietly.

During setup, someone picks Medium because it seems reasonable. Or Large, because a
dashboard was slow that one time. The warehouse works. Queries finish. Nobody touches it
again.

Six months later, you’re burning double the credits you need to. Not because anything broke.
Because nobody went back to check.

We see this in almost every Snowflake environment we touch at DSC. Warehouse oversizing is
the single most common source of avoidable spend, and it hides in plain sight.

How it happens

When a team spins up a new warehouse, the priority is getting things working. Fair enough. But
“working” and “right-sized” are two very different things.

Here’s the pattern we see repeatedly:

A warehouse gets created at Medium or Large during implementation. Dashboards load.
Pipelines run. Everyone moves on to the next problem. The warehouse stays at that size
forever.

Nobody downsizes because nobody wants to be the person who broke the BI layer. And without
data on actual utilization, it feels like a gamble. So the default sticks.

The problem is that each warehouse size step doubles credit consumption. Medium to Large
doubles your credit burn. Small to Large quadruples it. If the workload doesn’t need that
compute, the difference is pure waste.

What it looks like in practice

This isn’t dramatic. It’s boring. That’s why it persists.

Queries finishing in 2 seconds on a Large that would finish in 3 seconds on a Small. A BI
refresh running hourly on Medium with one concurrent user. A transformation job that spikes for
30 seconds but holds a Large warehouse active for 10 minutes because auto-suspend is set to
the default 5 minutes.

None of these trigger alerts. All of them cost you money every single day.

Take one warehouse running 8 hours a day, 5 days a week, sized one step too large. That’s
roughly 160 hours a month of compute burning double the credits it needs. At Snowflake’s
standard pricing, you’re looking at hundreds to thousands of dollars in waste from a single
warehouse. Most environments have 5 to 15 of them.
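The arithmetic is simple enough to check yourself. A sketch, assuming a Large warehouse where a Medium would do, Snowflake’s published 8 vs. 4 credits per hour, and an illustrative $3.00 per credit (your contracted rate will differ by edition and region):

```sql
-- Hypothetical waste from one warehouse sized Large when Medium would do.
-- Credits/hour: Medium = 4, Large = 8 (Snowflake's published schedule).
-- $3.00/credit is an illustrative rate, not your contracted price.
SELECT
    8 * 5 * 4                  AS hours_per_month,   -- 8 h/day, 5 d/wk, ~4 wks
    (8 - 4) * 8 * 5 * 4        AS excess_credits,    -- one size step = double the burn
    (8 - 4) * 8 * 5 * 4 * 3.00 AS monthly_waste_usd; -- 160 h x 4 credits x $3 = $1,920
```

That is one warehouse. Multiply by however many are in your account.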

Why nobody catches it

It’s not a visibility problem. Snowflake gives you the data. It’s an ownership problem.

Engineering manages the pipelines. Analytics owns the dashboards. Finance sees the invoice.
Nobody owns warehouse efficiency. There’s no recurring review. No baseline for what
“right-sized” means for a given workload.

And when spend comes up in a planning conversation, the response is usually to look at query
optimization or storage cleanup. The warehouse sizes themselves rarely get questioned
because they’ve always been that way.

The fix is simpler than you think

You don’t need to rearchitect anything. You need three things:


Utilization data per warehouse. Look at AVG_RUNNING and AVG_QUEUED_LOAD from
WAREHOUSE_LOAD_HISTORY. If average running queries are consistently below 1.0 over 30
days, the warehouse is oversized.
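That check is a single query against the ACCOUNT_USAGE share. A minimal sketch, using the documented view and columns, with the below-1.0 threshold from above:

```sql
-- 30-day average load per warehouse. A warehouse whose average running
-- query count sits consistently below 1.0 is a downsizing candidate.
SELECT
    warehouse_name,
    AVG(avg_running)     AS avg_running,
    AVG(avg_queued_load) AS avg_queued
FROM snowflake.account_usage.warehouse_load_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY avg_running;
```

Low avg_running with near-zero avg_queued is the signature of an oversized warehouse: plenty of headroom, nothing waiting.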

A benchmark test. Pick your top 3 warehouses by spend. Drop each one size. Monitor for a
week. If queries still finish within acceptable thresholds and nothing queues, keep the smaller
size.
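Dropping a size is one statement, and just as easy to revert if queries start queueing. The warehouse name here is hypothetical:

```sql
-- Step the warehouse down one size; run again with 'MEDIUM' to revert.
ALTER WAREHOUSE bi_reporting SET WAREHOUSE_SIZE = 'SMALL';
```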

An auto-suspend check. The default is 5 minutes. For most ad-hoc and BI warehouses, 60
seconds is fine. That alone can cut idle credit burn by 20-30%.
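The same ALTER statement covers auto-suspend. The value is in seconds; the warehouse name is hypothetical:

```sql
-- Suspend after 60 seconds idle instead of the 5-minute default.
ALTER WAREHOUSE bi_reporting SET AUTO_SUSPEND = 60;
```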

This isn’t a one-time exercise. Workloads change. Dashboards stabilize. Models get more
efficient. What was the right size 6 months ago probably isn’t today.

What we built for this
