When talking about PDW V2 performance we have to distinguish between the MPP approach, where one query is executed in parallel across the compute nodes, and the Clustered Columnstore Index. Hilmar already explained quite a lot about CCI in PDW, but I want to explain how the CCI works and what happens while querying.
First of all, you have to know that SQL Server 2012 introduced something called the Columnstore Index (CI), which provided a completely different approach to storing data compared to page compression. With a CI it was possible to read data many times faster and to reduce the amount of I/O several times over. Unfortunately, it also came with a huge downside: the CI was read-only and included only the columns specified. Of course it is possible to include all columns, but then you would effectively have the same table stored twice, in two different formats.
With SQL Server 2014 all of this changes, and since PDW is a really important product for Microsoft, the newly developed Clustered Columnstore Index (CCI), along with some other functionality such as the new cardinality estimator, was merged into the PDW V2 code.
Starting with V2 you decide whether a table is either a regular page-compressed table or a CCI table. You also have the ability to insert, update and delete data in the CCI, and that is what we are going to explain now.
Creating a CCI
Let's see how the process of creating a CCI works. First of all we have the table we want to store as a Clustered Columnstore Index. To do that, the table is split into so-called row groups. A row group is simply a horizontal cut through the table, and each row group contains up to 1,048,576 rows. After that, each row group is cut vertically into so-called segments, so that every column is stored separately with at most 1,048,576 values per segment. Once the table is cut into pieces, the compression takes place; since each segment holds different data, the achieved compression may vary from segment to segment.
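The two cuts described above can be illustrated with a minimal sketch. This is not the actual SQL Server implementation, just a conceptual model; the function names and the tiny example table are made up for illustration:

```python
# Conceptual sketch: split a table into row groups of at most MAX_ROWS rows
# (horizontal cut), then cut each row group into one segment per column
# (vertical cut). Not the real storage engine, just the idea.
MAX_ROWS = 1_048_576  # maximum number of rows per row group

def build_row_groups(rows, max_rows=MAX_ROWS):
    """Horizontal cut: yield chunks of up to max_rows rows."""
    for start in range(0, len(rows), max_rows):
        yield rows[start:start + max_rows]

def build_segments(row_group, columns):
    """Vertical cut: one segment (list of values) per column."""
    return {col: [row[col] for row in row_group] for col in columns}

# Tiny example with a pretend 3-row table and a lowered row-group size
table = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
groups = list(build_row_groups(table, max_rows=2))
segments = [build_segments(g, ["id", "name"]) for g in groups]
```

In the real engine each segment is then compressed individually, which is why the compression ratio can differ per segment.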
But how do inserts, updates, etc. work?
Updating a CCI
In SQL Server 2014, and therefore in PDW V2, Microsoft needed to change the previous CI to make it updatable. They fundamentally changed the way it works by adding two little helpers: the so-called Delete Bitmap and the Delta Store. With these adjustments we are now able to modify the CCI. And since the CCI, in contrast to the CI, now represents the actual data, the additional word "clustered" was added: the table itself is stored in this format.
The first thing we want to do is add data to the table. There are two scenarios which can occur. If you add just a single row, or a couple of rows, the data is held in the delta store (blue scenario). The delta store is just a regular row-store table which holds the data until enough rows have accumulated to convert it into the CCI format. The second scenario is a bulk insert: if a single insert reaches at least 102,400 rows, the data goes straight into a compressed row group of the CCI (red scenario).
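The routing decision for an insert can be sketched like this. A minimal model, not the real engine; the names and data structures are illustrative:

```python
# Conceptual sketch: small batches land in the delta store (row store),
# bulk batches go straight into a compressed row group.
BULK_THRESHOLD = 102_400  # minimum batch size for a direct compressed insert

def insert(batch, delta_store, compressed_row_groups):
    if len(batch) >= BULK_THRESHOLD:
        # red scenario: compress immediately into a new row group
        compressed_row_groups.append(list(batch))
    else:
        # blue scenario: keep the rows in the delta store for now
        delta_store.extend(batch)

delta, groups = [], []
insert([{"id": i} for i in range(10)], delta, groups)       # small -> delta store
insert([{"id": i} for i in range(102_400)], delta, groups)  # bulk -> compressed
```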
To delete data, the newly introduced Delete Bitmap gets involved. The delete bitmap stores, for each row in each row group, whether the row is deleted or still active, by marking it with 0 or 1. So instead of deleting the row within the CCI, which would mean decompressing the CCI, deleting the row and compressing the CCI again, the row only gets flagged as deleted. The "real" deletion takes place when the index is rebuilt. If the row was newly added and still sits in the delta store, it simply gets deleted there.
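The two delete paths can be sketched as follows. Again a conceptual model under simplified assumptions: rows are represented by plain keys, and the delete bitmap is modeled as a set of flagged keys:

```python
# Conceptual sketch: deleting a row in a compressed row group only flags it
# in the delete bitmap; a row still in the delta store is removed directly.
def delete_row(row_key, delete_bitmap, delta_store, compressed_rows):
    if row_key in compressed_rows:
        delete_bitmap.add(row_key)   # flag only; compressed data stays untouched
    else:
        delta_store.remove(row_key)  # plain row-store delete

def visible_rows(compressed_rows, delta_store, delete_bitmap):
    """A query sees compressed rows not flagged as deleted, plus the delta store."""
    return [r for r in compressed_rows if r not in delete_bitmap] + list(delta_store)

compressed = [1, 2, 3]
delta = [4, 5]
bitmap = set()
delete_row(2, bitmap, delta, compressed)  # compressed row: only flagged
delete_row(5, bitmap, delta, compressed)  # delta-store row: removed directly
```

The `visible_rows` helper also shows why queries stay fast: the flagged rows are simply filtered out at read time until an index rebuild physically removes them.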
Since we know how an insert as well as a delete works, it is pretty self-explanatory how an update works. If the data is already in the CCI, the row gets flagged as deleted in the delete bitmap and the new version is inserted into the delta store. If we are talking about a row that is still in the delta store, the update simply takes place there.
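Putting it together, an update is just a flagged delete plus a fresh insert. A minimal sketch under the same simplified assumptions, with rows modeled as key/value pairs:

```python
# Conceptual sketch: updating a row in a compressed row group = flag the old
# version in the delete bitmap + insert the new version into the delta store;
# a row still in the delta store is updated in place.
def update_row(row_key, new_value, delete_bitmap, delta_store, compressed_rows):
    if row_key in compressed_rows:
        delete_bitmap.add(row_key)        # old version flagged as deleted
        delta_store[row_key] = new_value  # new version lives in the delta store
    else:
        delta_store[row_key] = new_value  # plain in-place row-store update

compressed = {1: "old"}
delta = {2: "fresh"}
bitmap = set()
update_row(1, "new", bitmap, delta, compressed)    # row lives in the CCI
update_row(2, "newer", bitmap, delta, compressed)  # row lives in the delta store
```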
Cleaning up Delta Stores
At some point a delta store contains enough rows (1,048,576) to be closed, and a background process called the tuple mover picks up the closed delta stores and compresses them into the CCI. My colleague Hilmar Buchta wrote interesting articles about this, which you will find here
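The tuple mover's job can be sketched in the same conceptual model. In the real engine, closing a delta store and compressing it are separate steps performed in the background; this simplified sketch combines them:

```python
# Conceptual sketch: a delta store is "closed" once it is full; the tuple
# mover compresses every closed delta store into a new compressed row group,
# while open delta stores keep accepting inserts.
ROWGROUP_MAX = 1_048_576  # a delta store closes at this row count

def tuple_mover(delta_stores, compressed_row_groups, max_rows=ROWGROUP_MAX):
    """Compress every closed (full) delta store; return the ones still open."""
    still_open = []
    for store in delta_stores:
        if len(store) >= max_rows:
            compressed_row_groups.append(list(store))  # closed: compress it
        else:
            still_open.append(store)                   # open: keep as row store
    return still_open

# Tiny example with a lowered threshold so the effect is visible
groups = []
open_stores = tuple_mover([[1, 2, 3, 4], [5]], groups, max_rows=4)
```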