In recent years, data warehousing has seen tremendous change with the widely adopted move toward cloud solutions. But even among the cloud solutions on offer today, innovation is driving new architectures with new, unique characteristics.
Snowflake has been particularly successful at drawing customers’ attention; part of its success can surely be attributed to its storage architecture. In the following, we will take a closer look at Snowflake’s storage tier and lay out the various ways in which it enables Snowflake to function the way it does.
Snowflake is cloud-agnostic and runs on AWS, Azure and Google Cloud. For simplicity, we will only refer to AWS services, specifically S3, to illustrate the concepts; however, the other cloud offerings work with Snowflake in very much the same way.

Conceptual Differences

Snowflake Storage Image 1

Snowflake stores data in new ways. As opposed to traditional solutions, it offers something which might be labeled a “shared-nothing architecture”. Usually, this architecture implies that all the data has to be replicated, incurring high cost in the process. Snowflake’s approach is novel in the sense that while it does implement a shared-nothing architecture, the replication is opaque to the processing units.

Specifically, Snowflake uses AWS’s S3 storage service; processing is performed by so-called “Virtual Warehouses” (denoted by WH1 and WH2 in the illustration). Each warehouse uses a local cache on solid-state disks; this renders requests to S3 unnecessary in case the data was loaded before.

Snowflake Storage Image 2

For every query to be executed, the warehouse checks whether locally cached data covers the queried data; only missing data is requested from S3. Once data is received from S3, the local cache is populated and the result is sent back to the end user.

This is the first instance in which Snowflake differs from traditional solutions. By leveraging S3, Snowflake uses a high-performance, high-availability replicated data storage solution without introducing additional complexity into its architecture. Note that traditional solutions like Redshift or Spark require data shuffling among nodes to redistribute data to the nodes that need it. Because data location is part of other storage systems’ architecture, the distribution of data among the nodes (in DWH parlance, “partitioning”) has to be defined and maintained. This introduces both extra complexity and possible delays in query execution, since nodes are responsible for both shuffling data and executing the query at the same time. This very step is not necessary in Snowflake, reducing complexity and making query execution times less susceptible to delays from data shuffling. In short, Snowflake combines the advantages of shared-nothing (low maintenance and high performance) with those of shared-disk (a single source of truth).
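The cache-then-S3 lookup described above can be sketched in a few lines of Python. This is a deliberately simplified model, not Snowflake’s implementation; the names (`LocalCache`, `fetch_from_s3`, the `REMOTE` dict standing in for S3) are illustrative assumptions.

```python
# Simplified model of a virtual warehouse's local SSD cache in front of S3.
REMOTE = {"part-1": "rows 0-99", "part-2": "rows 100-199"}  # stands in for S3

class LocalCache:
    def __init__(self):
        self.store = {}        # partition id -> data held on local SSD
        self.s3_requests = 0   # counts round trips to remote storage

    def fetch_from_s3(self, key):
        self.s3_requests += 1
        return REMOTE[key]

    def get(self, keys):
        # Only partitions missing from the local cache are requested from S3.
        for key in keys:
            if key not in self.store:
                self.store[key] = self.fetch_from_s3(key)
        return [self.store[k] for k in keys]

cache = LocalCache()
cache.get(["part-1", "part-2"])  # both partitions fetched from S3
cache.get(["part-1"])            # served entirely from the local cache
print(cache.s3_requests)         # 2
```

Repeated queries over the same partitions therefore hit S3 only once, which is why warm warehouses answer recurring queries noticeably faster.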

Snowflake Storage Image 3
Figure 1 Data Access in Snowflake

Storage Format

Traditional databases like MySQL or PostgreSQL use row-oriented data formats, i.e. columns belonging to the same row reside in spatially contiguous memory. This format is ideal for transactional databases, where entities are usually requested as a whole and aggregations on columns are rarely run. In particular, updating columns depending on other columns’ values becomes trivial with this approach, since data access is kept local in those cases.

Snowflake Storage Image 4
Snowflake Storage Image 5

Like many other modern analytical databases, Snowflake uses a column-based storage format. That is to say, blocks of contiguous memory contain values of the very same column. In this way, aggregations can be implemented very efficiently and algorithms like MapReduce can easily be used for parallelization.

Furthermore, because blocks only cover individual columns, and hence data of the same distribution, more efficient compression is possible. While being a very interesting implementation detail, this architecture is opaque to the end user. Note how this is one of the many aspects in which Snowflake excels at simplicity; for other warehousing solutions like Redshift, column compression optimization and data skew are serious concerns that would need to be addressed here.
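To make the row-versus-column distinction concrete, here is a toy sketch (not Snowflake internals) laying the same table out both ways: aggregating a column touches a single contiguous block in the columnar layout, and each homogeneous block can be compressed on its own.

```python
# Toy illustration of row- vs. column-oriented layouts for the same table.
import json
import zlib

rows = [{"id": i, "country": "DE", "amount": i % 10} for i in range(1000)]

# Row-oriented: values of different columns interleave in one blob.
row_blob = json.dumps(rows).encode()

# Column-oriented: each block holds values of a single column only.
columns = {name: [r[name] for r in rows] for name in rows[0]}
col_blobs = {name: json.dumps(vals).encode() for name, vals in columns.items()}

# An aggregation reads exactly one contiguous block in the columnar layout.
total = sum(columns["amount"])

# Each homogeneous column block is compressed independently.
col_compressed = sum(len(zlib.compress(b)) for b in col_blobs.values())
row_compressed = len(zlib.compress(row_blob))
print(total, row_compressed, col_compressed)
```

Because every columnar block contains values from a single distribution (here, a constant country code and a small repeating set of amounts), compressors typically do well on them, which is the effect the paragraph above describes.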


Partitions are instrumental in understanding the impact of storage on performance in Snowflake. “Micropartitions” are the atomic blocks of data that Snowflake deals with. Uncompressed, these can span anything from 50 to 500 megabytes.
Each Micropartition carries a number of statistics on the column values it contains, among others the minimum and maximum value. When searching partitions, Snowflake will skip partitions for which it can ensure that no relevant data is contained within. For instance, if a certain partition only contains data for transactions in March 2020, then this data is irrelevant for queries on April 2020. This process is called “pruning” and is implemented using the aforementioned metadata.
Snowflake offers functionality to analyse the state of individual tables’ partitions by providing statistics on the degree of overlap between partitions and by exposing partition pruning statistics in the query profile.
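A minimal sketch of min/max-based pruning, under the simplifying assumption that each micropartition exposes only the minimum and maximum of a single date column (the class and function names are made up for illustration):

```python
# Minimal model of min/max-based partition pruning.
import datetime as dt
from dataclasses import dataclass

@dataclass
class Micropartition:
    min_date: dt.date   # per-partition metadata kept alongside the data
    max_date: dt.date
    rows: list

parts = [
    Micropartition(dt.date(2020, 3, 1), dt.date(2020, 3, 31), ["march rows"]),
    Micropartition(dt.date(2020, 4, 1), dt.date(2020, 4, 30), ["april rows"]),
]

def scan(partitions, lo, hi):
    """Skip any partition whose [min, max] range cannot overlap the filter."""
    return [p for p in partitions if not (p.max_date < lo or p.min_date > hi)]

# A query filtered on April 2020 never touches the March-only partition.
hit = scan(parts, dt.date(2020, 4, 1), dt.date(2020, 4, 30))
print(len(hit))  # 1
```

The key point is that pruning decides purely from metadata; the skipped partition’s data is never read from storage at all.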

By design, Snowflake is a database requiring little to no maintenance. What lifts the burden of manual optimization off users’ shoulders also deprives them of the ability to perform it. One of the very few optimization techniques available in Snowflake is the ordered insert of data. Inserting data ordered by the columns on which we filter enables Snowflake to prune partitions effectively, reduce the number of partitions scanned and thus reduce querying time. Due to immutability, a concept we are going to cover in a minute, partitions do not change once they have been created. In particular, it is not possible to change the order of data once it has been written to disk. Therefore, it is advisable to always insert data in a suitable order to enable Snowflake to perform these optimizations efficiently.
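A toy comparison illustrates why insert order matters. Assuming fixed-size partitions and the min/max pruning described above (both simplifications), ordered inserts produce non-overlapping value ranges, so a point query can prune all but one partition:

```python
# Effect of insert order on pruning: fixed-size partitions, min/max ranges.
def build_partitions(values, size=4):
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    return [(min(c), max(c)) for c in chunks]  # keep only (min, max) metadata

def partitions_scanned(parts, needle):
    # A partition must be scanned whenever its range could contain the value.
    return sum(1 for lo, hi in parts if lo <= needle <= hi)

data = [7, 1, 14, 3, 9, 0, 12, 5, 2, 11, 6, 13, 4, 10, 8, 15]

unordered = build_partitions(data)          # wide, overlapping ranges
ordered = build_partitions(sorted(data))    # narrow, disjoint ranges

print(partitions_scanned(unordered, 5), partitions_scanned(ordered, 5))  # 4 1
```

With unordered inserts every partition’s range straddles the searched value, so nothing can be pruned; with ordered inserts only a single partition remains.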

Micropartitions are immutable. In other words, partitions are never updated in place; rather, whenever an update to any row in a partition is to be applied, that partition is loaded, the data is updated and a new partition is written to disk. This implies that for any write access, regardless of how small and insignificant it is, an entire partition will be written to disk. In very much the same way, single INSERT statements will also trigger the creation of a Micropartition. Naturally, Snowflake tries to batch INSERTs up, but when issuing isolated INSERT statements, this may not always be possible. This yet again underlines the importance of batch processing for Snowflake: it is simply not built to deal with small amounts of data.
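The copy-on-write update cycle can be sketched as follows. This is a simplified model under the assumption that a partition is just a list of rows; it is not Snowflake’s actual storage code.

```python
# Immutable partitions: an UPDATE writes a full replacement partition and
# retires the old one instead of modifying it in place.
partitions = {1: [("a", 10), ("b", 20)]}   # active: partition id -> rows
retired = []                               # old versions kept on storage
next_id = 2

def update_row(part_id, key, new_value):
    global next_id
    old = partitions.pop(part_id)
    retired.append((part_id, old))         # the old partition is not deleted
    new_rows = [(k, new_value if k == key else v) for k, v in old]
    partitions[next_id] = new_rows         # entire partition rewritten
    next_id += 1

update_row(1, "a", 99)                     # a one-row change...
print(partitions)  # {2: [('a', 99), ('b', 20)]}  ...rewrote the whole partition
print(retired)     # [(1, [('a', 10), ('b', 20)])]
```

Note how the retired partition stays around: that retained history is precisely what features discussed below, such as Time Travel, build on.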

It is noteworthy that this design choice is critical to a number of features in Snowflake, first and foremost Time Travel. Making partitions immutable makes it very easy to provide historical data on tables. It also means that data overhead can be considerable in Snowflake and must be evaluated upfront. Every TRUNCATE on a table will retire all Micropartitions currently on that table. Those Micropartitions are garbage-collected on a regular basis, but will continue to incur cost until they exceed the time travel interval defined for that particular resource. Hence, usage of the TRANSIENT feature (which disables time travel altogether) should be considered for tables without a strong need to retain history.


Object cloning is quite a novel feature in the realm of data warehousing and unique to Snowflake. Due to the aforementioned architecture of immutable Micropartitions, tables are essentially just a list of Micropartitions. This implies that when a table is copied, unless data is changed, it is sufficient to create a new table with the same list of underlying Micropartitions. In particular, it is not necessary to physically copy data unless updates to a Micropartition are to be applied. This approach is commonly referred to as “copy-on-write” and often used in the context of lazy evaluation.
Due to this implementation, cloning of objects in Snowflake is very cost-effective; even databases with hundreds of objects and large amounts of data can be cloned in a matter of minutes. The bottleneck here is the sheer number of objects rather than the actual amount of data “copied” over.
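A sketch of why cloning is cheap, assuming (as above) that a table is merely a list of micropartition identifiers; the names and values are illustrative:

```python
# Zero-copy clone: copying a table copies only its partition list, and data
# is duplicated only when a partition is later changed (copy-on-write).
storage = {"p1": "jan data", "p2": "feb data"}  # shared micropartitions

prod = ["p1", "p2"]       # a "table" is just a list of partition ids
clone = list(prod)        # metadata-only copy; no partition data is moved

# Updating the clone writes one new partition; prod keeps referencing p2.
storage["p3"] = "feb data, fixed"
clone[1] = "p3"

print(prod)   # ['p1', 'p2']  -- untouched by the clone's update
print(clone)  # ['p1', 'p3']
```

Since the clone starts out as nothing more than a second list pointing at the same partitions, its creation cost scales with the number of objects, not with the volume of data, matching the observation above.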

With this feature at one’s disposal, new, previously infeasible approaches become possible. What previously could easily take a day now becomes a matter of minutes. For instance, it is now possible to quickly clone a production database for the purpose of system-testing new features. Likewise, databases can be treated very much like branches in a version control system, where all development environments derive, or clone, from a common master branch. All of these very interesting applications hinge on the immutability of Micropartitions.

Cost of a Brave New World

While Snowflake relies on AWS S3 for data storage, it manages the storage service itself and bills customers differently from the official AWS pricing. Currently, a terabyte of data runs at $45 per month for on-demand storage and about $25 per terabyte for capacity storage in the Frankfurt region on AWS. The latter is Snowflake’s name for “reserved capacity”, i.e. pricing for long-term commitments. Thus, the actual S3 pricing is only passed on to the customer if they choose the long-term commitment.
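As a quick sanity check on these numbers, the monthly difference for a hypothetical 10-terabyte deployment works out as follows (rates as quoted above; actual prices may change):

```python
# Monthly storage cost at the per-terabyte rates quoted in the text.
ON_DEMAND_PER_TB = 45  # USD per TB per month, on-demand
CAPACITY_PER_TB = 25   # USD per TB per month, capacity (reserved)

tb = 10
print(tb * ON_DEMAND_PER_TB, tb * CAPACITY_PER_TB)  # 450 250
```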

Note that it is in principle possible to keep data in one’s own S3 storage account and reference the data in Snowflake using the EXTERNAL TABLES feature. However, there are a number of downsides to this approach:
Firstly, data is then not managed in terms of Micropartitions, possibly incurring a high performance penalty. After all, the Snowflake query optimizer will not be able to apply any optimization based on Micropartition metadata.
Secondly, the data is read-only, so neither write nor update operations can be performed on those tables.
Thirdly, updates to external tables are passed to Snowflake via S3 event notifications. Using this architecture, it is quite hard to enforce strict database consistency (i.e. not just eventual consistency).

This article hopefully shed some light on the inner workings of Snowflake storage and how they support Snowflake in offering a well-performing warehousing solution. This article is part of a blog series on Snowflake covering a number of different aspects.
For more information on this topic, you can attend our regular online seminar for free or book a seminar on the topic with our experts.