Michael Mukovskiy

We hear it a lot: writeback can only be used with a very small number of users, otherwise it is too slow.

Our experience tells a different story: writeback performance depends first of all on query performance (SELECTs) and on the writeback usage pattern (the frequency of COMMITs). Both can actually be influenced. Query performance should always be a subject of monitoring and optimization, and as for the usage pattern, we can recommend that users group more UPDATEs under a single COMMIT.
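A minimal MDX sketch of this grouping (the cube, measure and member names are made up; assume a small cube with just a Product and a Date dimension): the client keeps several leaf-level changes in the session and sends one COMMIT for all of them.

UPDATE CUBE [Sales]
SET ([Measures].[Amount],
     [Product].[Product].&[4711],
     [Date].[Month].&[201201]) = 1000

UPDATE CUBE [Sales]
SET ([Measures].[Amount],
     [Product].[Product].&[4712],
     [Date].[Month].&[201201]) = 2000

-- both deltas leave the session cache and reach the writeback table in one step
COMMIT TRANSACTION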

Here we leave out of scope writeback at high levels, which leads to a large distribution over the leaves. In that case Analysis Services needs substantial resources for the changes in the session cache (UPDATE), then for the BULK INSERTs into the writeback table and for the recalculation of the writeback caches of other sessions (COMMIT).
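For illustration, such a high-level write could look like this (names made up again): a single cell set at year level, which Analysis Services has to spread over all underlying leaf cells at COMMIT time.

-- one cell written at year level ...
UPDATE CUBE [Sales]
SET ([Measures].[Amount], [Date].[Year].&[2012]) = 1000000
    USE_EQUAL_ALLOCATION
-- ... becomes one row per underlying leaf cell in the writeback table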

For writeback on leaves (or not far from the leaves) Analysis Services has much less work to do.

From the performance point of view, writeback can be treated like a scenario with frequent cube processing, with its usual problems: database locks, empty caches and canceled queries. This is not some suboptimal design of Analysis Services, but simply the normal trade-off for a system serving SELECTs and UPDATEs at the same time.

The more writeback users there are, the more often COMMITs are issued, and the result is a degradation of the overall cube performance.

Let us consider a situation where a COMMIT starts while a long SELECT is running against the same AS database:

[Figure: a pending COMMIT waiting on a long-running SELECT and blocking new queries against the same AS database]

The COMMIT waits a maximum of 30 seconds (the default) for the long-running query, holding a lock on the whole AS database.
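The 30 seconds come from the ForceCommitTimeout server property (in milliseconds); the corresponding entry in msmdsrv.ini looks roughly like this:

<!-- how long a pending COMMIT waits for running queries before they are canceled -->
<ForceCommitTimeout>30000</ForceCommitTimeout>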

Not only are new SELECTs blocked, but also other UPDATEs, which gives other writeback users the impression of a "slow writeback"! Moreover, the long-running queries that get canceled (after the default 30 seconds) further hurt the user experience.

And this is what we usually see: long-running queries against the same AS database are the main cause of the "poor writeback performance".

Another point: between the first UPDATE and the COMMIT, the queries of a writeback user run against a "cold cache", because the session deltas have to be applied to the results!

So the two main points for optimization are: design your cube and queries so that they run in just a few seconds even with a "cold cache", and separate the cubes into several databases to restrict the scope of the locks.

Normally it is sufficient to diagnose the problem with SQL Server Profiler traces against the AS and SQL Server instances; on the AS side the Query Begin/End, Command Begin/End and Lock events are usually the interesting ones.

And yet another hint: the frequent INSERTs and DELETEs in the writeback table can cause substantial table fragmentation, which slows down the relational part and with it the whole writeback chain. It is not a bad idea to TRUNCATE the table from time to time (after its rows have been consolidated into the fact table).
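A small T-SQL sketch for this, assuming the default naming of the writeback table (here dbo.WriteTable_Sales in a relational database WB, both names made up):

USE WB

-- a cheap LIMITED scan to check the fragmentation of the writeback table
SELECT avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'WB'), OBJECT_ID(N'dbo.WriteTable_Sales'), NULL, NULL, 'LIMITED')

-- only after the writeback rows have been consolidated into the fact table
TRUNCATE TABLE dbo.WriteTable_Sales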