@tsteur opened this issue on March 8th 2015

When rendering a visualization, we always applied queued filters, computed processed metrics, etc. on all rows, i.e. before the generic filters were applied. So when a DataTable had 25k rows (which is not that many), we ran computeProcessedMetrics, applyQueuedFilter, ... on all 25k rows instead of only the 100 displayed rows. With this change, rendering a report with 25k rows became over 3 seconds faster on my server: the time spent filtering rows shrank from > 3 seconds to < 40ms.
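The effect of the change can be sketched as follows (a Python stand-in for Matomo's PHP; the row shape, metric names, and the 100-row limit are illustrative, not the actual DataTable API):

```python
def compute_processed_metrics(rows):
    """Expensive per-row work, e.g. deriving a conversion rate."""
    for row in rows:
        row["conversion_rate"] = row["conversions"] / row["nb_visits"]
    return rows

def apply_limit(rows, count=100):
    """Generic filter: keep only the rows that will be displayed."""
    return rows[:count]

rows = [{"nb_visits": i % 50 + 1, "conversions": i % 7} for i in range(25_000)]

# Before: processed metrics computed for all 25k rows, then truncated.
slow = apply_limit(compute_processed_metrics([dict(r) for r in rows]))

# After: truncate first, then compute processed metrics for 100 rows only.
fast = compute_processed_metrics(apply_limit([dict(r) for r in rows]))

assert slow == fast  # same displayed data, a fraction of the work
```

Since the processed metric here is computed per row, limiting first cannot change the displayed values, only the amount of work done.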

Also, before this refactoring, filters were applied in an essentially arbitrary order that differed from the order used in our system tests, so the displayed data could differ both from what we test and from the exported data. Apart from that, there was some duplicated filter-application code in the visualization. We now make sure to run the filters in the same order as the system tests do.
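A minimal sketch of the deterministic-pipeline idea (Python stand-in; the filter names and pipeline shape are illustrative assumptions, not Matomo's actual API):

```python
def sort_filter(rows):
    """Sort by a metric, descending."""
    return sorted(rows, key=lambda r: r["nb_visits"], reverse=True)

def limit_filter(rows):
    """Keep only the rows that will be displayed."""
    return rows[:2]

# The order is defined once, in one place, matching the system tests,
# instead of each visualization applying filters in an ad-hoc order.
PIPELINE = [sort_filter, limit_filter]

def render(rows):
    for f in PIPELINE:
        rows = f(rows)
    return rows

rows = [{"label": "a", "nb_visits": 3},
        {"label": "b", "nb_visits": 9},
        {"label": "c", "nb_visits": 5}]
print(render(rows))  # sort before limit: the top-2 rows, not an arbitrary pair
```

Running limit before sort would instead return an arbitrary pair of rows, which is exactly the kind of order-dependent discrepancy the refactoring removes.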

Note: some UI tests are failing, but only for rows that share the same value in the filter_sort_column. So it is not really a behavior change but rather a fix, since we now apply the filters in the correct order.
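Why ties in the sort column can shift: with a stable sort, rows that share the same key keep their incoming order, so the order produced by the filters that ran earlier decides how ties are displayed (Python's sorted() is stable; the column names here are illustrative):

```python
rows_a = [{"label": "x", "nb_visits": 5}, {"label": "y", "nb_visits": 5}]
rows_b = list(reversed(rows_a))  # same data, different incoming order

def by_visits(rows):
    # Stable sort: rows with equal nb_visits keep their relative order.
    return sorted(rows, key=lambda r: r["nb_visits"], reverse=True)

print([r["label"] for r in by_visits(rows_a)])  # ['x', 'y']
print([r["label"] for r in by_visits(rows_b)])  # ['y', 'x']
```

Changing when the sort runs in the pipeline therefore reorders only the tied rows, which matches the observation that only rows with equal filter_sort_column values render differently.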

@mattab commented on March 9th 2015

Looks good! :+1: let's test it on demo

This issue was closed on March 9th 2015