Category Archives: Performance

Mind the Concat() sort-weight

Summary: While looking into long expressions I noticed that the optional sort-weight argument has an impact on the distinctness of Concat(distinct…). Incorrect use of sort-weight can generate bloated expressions containing redundant code.

In my work tuning Qlik Apps I sometimes encounter very long expressions. An expression many thousands of characters long can be difficult to debug, and its goal hard to comprehend. To help in working with long expressions I’ve added an Expression histogram and an Expression Decoder feature to my QSDA Pro product. (These features are currently in beta, generally available in early Feb).

I’ve noted expressions of length greater than 50k across apps from different customers. What did these expressions have in common that made them so large?

  • They used the Concat() function in $() to dynamically generate a part of the expression.
  • They used the optional sort-weight argument of Concat() incorrectly.
  • They were much bigger than necessary — sometimes 100x — but the expanded expression worked as intended.

In the process of reviewing the expressions I learned something surprising. As a reminder here’s the syntax of the Concat function:

Concat({[SetExpression] [DISTINCT] [TOTAL [<fld {, fld}>]]} string[, delimiter[, sort_weight]])

We use the DISTINCT keyword to return the unique set of values from the string argument (usually a field). The Qlik documentation for DISTINCT says:

If the word DISTINCT occurs before the function arguments, duplicates resulting from the evaluation of the function arguments are disregarded.

https://help.qlik.com/en-US/sense/November2022/Subsystems/Hub/Content/Sense_Hub/ChartFunctions/StringAggregationFunctions/concat.htm

This means that the set of distinct values is the combinations of string and sort_weight (if used). Let me demonstrate with an example. Here’s a sample data table.
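A minimal inline-load sketch of such a table (values chosen here to reproduce the results below, since the original table isn’t reproduced in this text) might be:

Data:
LOAD * INLINE [
RecId, Dim, Weight
1, a, 1
2, b, 2
3, c, 3
4, c, 3
5, c, 3
];

// Unlinked data island with two values, used later as a sort-weight.
Island:
LOAD * INLINE [
IslandField
I1
I2
];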

For the expression: Concat(Dim, ',') we receive output "a,b,c,c,c".

Adding the DISTINCT keyword: Concat(DISTINCT Dim, ',') we now get "a,b,c".

Adding a non-distinct sort-weight argument: Concat(DISTINCT Dim, ',', RecId) we now get "a,b,c,c,c" again. More output than I expected. It’s a distinct list of the combinations of Dim and RecId.

Adding a distinct sort-weight argument: Concat(DISTINCT Dim, ',', Weight) we now get "a,b,c".

How about if we use an unlinked data island field for sort-weight? The Island field has two values.

Concat(DISTINCT Dim, ',', IslandField) returns "a,b,c,a,b,c". The item count is the product of the Dim and IslandField value counts. Remember this for later.

Ok, this is all very interesting but the behavior is super obvious and I would notice it if it came up in my App. What’s this got to do with ginormous expressions?

Developers sometimes use Concat along with Dollar Sign Expansion (DSE) to generate dynamic expression fragments. For example to ignore all fields from several tables in a set modifier:

Sum ({<
$(='[' & concat({<$Table={'Table1', 'Table2', 'Table3'}>}$Field,']=,[') & ']=')
>} Value)

Sometimes $(=Concat(...)) is used to build the list inside a Pick() or Match(). These types of expressions frequently have awkward syntax, including lots of “& chr(39) &” type stuff. Inevitably the expression gets built by copying and modifying an expression from elsewhere in the App. An expression that contains a sort-weight. A sort-weight that doesn’t get removed. It may be an island field or a field that has a many-to-one relationship to the string argument. The result is an expanded expression that works, but is larger than it needs to be. No one notices (unless they use QSDA Pro), because it’s the expanded expression that runs.

As a simple example, suppose the “ignore filter” expression above was supposed to generate something like "Quarter=,Month=,Year=". If I inadvertently use a sort-weight field that has 100 distinct values, the result will be repeated 100 times. The expression would still work, but it would be 100x larger than necessary.
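To make that concrete, here’s a sketch of the two fragments (table and field names are assumed for illustration). The first carries a leftover sort-weight and repeats each field name once per UserID value; the second drops it and produces each field name once:

// Bloated: UserID (assumed to be an unrelated island field) has no 1:1
// relationship to $Field, so the DISTINCT set becomes the product of
// $Field and UserID values.
$(='[' & Concat({<$Table={'Calendar'}>} DISTINCT $Field, ']=,[', UserID) & ']=')

// Fixed: remove the sort_weight (or use a field with a 1:1 relationship to $Field).
$(='[' & Concat({<$Table={'Calendar'}>} DISTINCT $Field, ']=,[') & ']=')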

I recently found an example where Concat was used to generate an If() function from data (very clever) that should have had 15 branches. But an unrelated sort-weight field of 95 values resulted in 1425 branches! It “worked” but did a lot of unnecessary calculation.

If you are a solo developer or working in a small team you may never encounter this issue. But if you are a consultant or maintaining legacy Apps you may stumble across it. I’ve been playing with ways to flag this condition in QSDA Pro. QSDA already flags data island expressions. I’m testing a new flag specifically for Concat().

My colleague Oleg Troyansky uses QSDA in his Performance Tuning session at the Masters Summit for Qlik. Live events will return in Fall 2023, but in the meantime you can attend individual workshops online during February through April. More information and schedule here.

Happy Qliking
-Rob


Expression as Left Side of Set Modifier

Can I use an expression as the “fieldname” in a Set Modifier? In “<x={y}>” can “x” be an expression?

I always believed the answer to be “No”, x may only be a field name. That understanding is reinforced by what I read in the help doc for “set modifier”.

Yesterday I was surprised to discover that in Qlik Sense, this is a valid expression that returns sales for year 2015.

sum({<"=Year(OrderDate)"={2015}>}Sales)

This also works:

sum({<"=left(Country)"={'M'}>}Sales)

This is all news to me. And very interesting.

I stumbled across this accidentally when using the Expression Editor > Set Analysis > Insert button. I had selections in the derived field OrderDate.autocalendar.Year. The set generated by the Insert tool was:

{<"=Dual(Year([OrderDate]),YearStart([OrderDate]))"={'2014','2015'}>}

That expression is the derived field definition that was specified in the script.

I have not yet formulated an opinion as to whether this is useful, or if there are any cautions or limitations when using it. I’m at the curious stage at this point and will look into it more, as well as read the comments I am sure will follow.

-Rob


How to Segment QVD Files

Summary: In this post I discuss when you may want to segment or “shard” QVD files and demonstrate some script patterns for producing and consuming segmented QVDs.

I recently received a question from a colleague who knew that I had done a lot of work with Qlik QVD files. He asked, “what’s the optimal size for a QVD file?”

My response was that in my experience there is no optimal physical size, but in many cases there are reasons to break a large QVD into multiple, smaller files. Dividing a large file into smaller files is called “segmenting” or “sharding”.

People generally start to think about segmenting QVDs when they encounter resource constraints while updating a QVD with an incremental load. In an incremental load, only new or changed rows are extracted from the database and merged with an existing QVD. This involves reading and writing the entire large QVD, which can use significant time and I/O resources. It also means that the process takes longer and longer as time marches on. Not pleasant.

Other reasons you may want to segment are consumption and archival patterns. It’s common to use the “latest n periods” of data in your active dashboard, for example the current 12 months. If the data is in a single large QVD, you have to roll off (delete) data older than 12 months. You can filter as you load, but that becomes an increasingly wasteful activity over time.
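For illustration, a filtered load of that kind might look like the sketch below (file and field names assumed). Note that the Where clause disqualifies the read from QVD-optimized mode, so the entire file is scanned on every reload:

Transactions:
LOAD *
From [Transactions.qvd] (qvd)
Where TransactionDate >= MonthStart(AddMonths(Today(1), -11));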

It’s likely that you will want to retain the older data for use in special projects or historical dashboards.

Given the example scenario above, it would make sense to create one QVD for each month. This will provide predictable performance for incremental updates as well as the dashboard load. Older data could be kept forever and any set of months could be accessed efficiently.

How do we perform this magic segmentation? Let’s assume an example QVD with these fields:

TransactionID: Primary key
TransactionDate: Original date of transaction
LastUpdate: Timestamp of last update to this row. Transactions may receive updates up to 60 days after creation.
other…: other fields such as Amount, Customer, etc

We want to create one QVD per month using the name “Transactions-YYYY-MM.qvd”. What determines which QVD a transaction is placed in? Is it the MonthStart(TransactionDate)? It depends…

The simplest technique is for the extract script to place everything loaded today into the current month QVD, regardless of the TransactionDate. The QVD name is assigned to a variable in the script using:

Let vQvdName = 'Transactions-' & Date(Today(1),'YYYY-MM') & '.qvd';
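For completeness, here’s a sketch of how that variable might then be used to merge the day’s extract (assumed to be already loaded into a table named Transactions) with the current-month QVD:

// Append rows from the existing current-month QVD that were not
// re-extracted today; today's version of a row wins via Exists().
If FileSize('$(vQvdName)') > 0 THEN
Concatenate (Transactions)
LOAD * From [$(vQvdName)] (qvd)
Where Not Exists(TransactionID);
ENDIF

Store Transactions into [$(vQvdName)] (qvd);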

When later loading 12 QVDs into the dashboard, load front (most current QVD) to back with the clause:

Where not Exists(TransactionID)

The Where clause will ensure that only the most current row for that TransactionID will be loaded.

This simple technique might be ok for most scenarios. But it’s not very robust, because it falls down when you want to do something like a full reload to add columns, or when data is loaded off schedule. It also would require that if we want to load something like 6 months from the middle, we have to be careful to include enough later QVDs to cover possible updates.

A more robust approach would be to store each transaction row in the QVD corresponding with its TransactionDate. Here is one script pattern to do just that. Our starting point for this script is that we have already extracted the new and changed rows to create table “Transactions”.

Step #1 is to collect the month values into a temp table:

TempMonths:
LOAD Distinct
MonthStart(TransactionDate) as TranMonth
Resident Transactions; 

Next we process each TranMonth value in a loop block. The block will build a temp table of rows for just one month and merge with any existing QVD.

For i = 1 to FieldValueCount('TranMonth')
Let vMonthName = Date(FieldValue('TranMonth', $(i)), 'YYYY-MM');
Set vQvdName = Transactions-$(vMonthName).qvd;


MonthTransactions:
NoConcatenate LOAD * Resident Transactions
Where MonthStart(TransactionDate) = FieldValue('TranMonth', $(i));


If FileSize('$(vQvdName)') > 0 THEN // If we have existing QVD
LOAD * From [$(vQvdName)] (qvd)
Where Not Exists(TransactionID);
ENDIF


Store MonthTransactions into [$(vQvdName)] (qvd);
Drop Table MonthTransactions;
Next i

Drop Table TempMonths, Transactions; 

The above segmenting script would support incremental reload, full reload or a load of any data in between.

So now we have many “Transactions-YYYY-MM.qvd” files. How do we load the current 12 months? Do we wake up early on the 1st of each month and quickly change the script? No. We create a dynamic script based on the current day.

For i = 0 to -11 step -1  // 12 Months
Let vMonthName = Date(AddMonths(Today(1), $(i)), 'YYYY-MM');
Transactions:
LOAD *
From [Transactions-$(vMonthName).qvd] (qvd);
Next i 

If we had built the QVDs using any technique that allowed for the possibility of duplicate TransactionID, we would add a guard of “Where not Exists()”.

...
From [Transactions-$(vMonthName).qvd] (qvd)
Where not Exists(TransactionID); 

What About Intraday High Volume Reloads?

In scenarios with Intraday loading and high transaction counts, I prefer to defer merging QVDs to off-peak times.

Let’s take an example scenario of a customer who generates approximately 10 million transactions per day, with peak hours creating about 2 million transactions. The main dashboard should be refreshed hourly for twelve hours each day and should contain the last 10 days of transactions. Of course all data should be kept around for various summary analyses and ad-hoc projects.

It makes sense to segment these QVDs by day. Our hourly incremental load will need to merge with — read and write — a fairly large daily QVD. Crucially, the load time gets longer as the day progresses and the QVD gets larger. And now I hear rumors of twice hourly reload. This pattern has a bad smell.

What to do? Let’s store the hourly incremental rows in an hourly QVD of their own. The dashboard will then pick up all hourly QVDs plus the required daily QVDs. Overnight, when we have some breathing room, we will run a script to consolidate the hourlies into a daily QVD.

The hourly incremental QVD is created like:

Let vQvdName = 'Hourly-Transactions-' & Timestamp(Now(1), 'YYYY-MM-DD-hh-mm-ss') & '.qvd'; 
Store Transactions into [$(vQvdName)] (qvd); 

Then the dashboard will load the new data using a wildcard load for the Hourly QVDs and a loop for the prior days:

// Load all Hourly QVDs
Transactions:
Load * From [Hourly-Transactions-*.qvd] (qvd);
// Load previous 9 days of Daily QVDs
For i = 1 to 9 // 9 Days
Let vDateName = Date((Today(1) -$(i)), 'YYYY-MM-DD');
Transactions:
LOAD * From [Transactions-$(vDateName).qvd] (qvd);
Next i 

Getting LastUpdate From a QVD

One of the steps in incremental loading is determining what “zzz” value to use in the SQL “Where LastUpdate >= zzz”. We need the high value from the last load. Some people store this information in a side file or variable. I think the most reliable approach is to get the high value from the existing QVD.

Getting Max(LastUpdate) from a very large QVD can take some time (how to do this the quickest is always an interesting pub question). My preferred technique is to store a new field “MaxLastUpdate” in the QVD and then read only the first row of the QVD to retrieve the value.

Getting and Joining Max(LastUpdate) should be fairly quick because we are only dealing with the incremental rows.

Transactions:
SQL Select * From db.transactions where LastUpdate >= foo;
Left Join (Transactions)
Load Max(LastUpdate) as MaxLastUpdate
Resident Transactions; 

The latest MaxLastUpdate value can then be retrieved by reading only the first row of the existing QVD. Here’s how it looks all together, using the example of monthly QVDs.

Let vMonthName = Date(Today(1), 'YYYY-MM');
TempMonth:
First 1 Load MaxLastUpdate
From [Transactions-$(vMonthName).qvd] (qvd);
Let vMaxLastUpdate = TimeStamp(Peek('MaxLastUpdate'), 'MM-DD-YYYY hh:mm:ss');
Drop Table TempMonth;


Transactions:
SQL Select * From db.transactions
where LastUpdate >= '$(vMaxLastUpdate)'; 


Left Join (Transactions)
Load Max(LastUpdate) as MaxLastUpdate
Resident Transactions; 


// Merge or segment with existing QVDs

I hope you found some useful tips in this article. No doubt you have some ideas of your own, feel free to add comments.

Want to learn more advanced scripting techniques? After 2 years of virtual sessions, the Masters Summit for Qlik is back with live events this fall. In September we’ll be in Madrid, Spain, and in November we’ll be in New Orleans, USA. If you want to take your Qlik skills to the next level, get access to all sorts of ready-to-use solutions and reference materials, and share experiences and maybe a few horror stories with your peers, then we hope to see you there!


If() Tips

Summary: I offer some tips for writing better-performing and easier-to-maintain syntax when using the Qlik If() function.

The Qlik If() function is very powerful and  frequently appears in Qlik Sense and QlikView apps.

Expressions using multiple If() functions can easily get out of hand and become difficult to maintain or debug, as well as poor performers.

In this post I’ll offer some advice on avoiding If() pitfalls and tips to write easier to understand expressions.

The Qlik syntax diagram for the If function is:

if(condition , then [, else])

That’s perfectly clear to most people, but I prefer to think of it more like:

if(condition , true result [, false result])

Tip#1: If() does not short circuit.

Both the true and false branches are calculated even though only one result will be used. For example:

If(Only(Currency) = 'LC', Sum(Sales), Sum([Sales LC]))

In this case both Sum() expressions will be calculated even though only one value will be utilized.  In most cases this behavior is not of concern and in many applications will perform very well.   However, a nested If() with many possible branches or a large data set may perform poorly.

For more on the short circuit issue see “How to Choose an Expression”.

 

Tip#2: Use indentation sparingly.

The true or false result may be an additional, “nested” If(), which is where we start to see some ugly syntax. Following traditional programming conventions, many people automatically indent the nested If() like this:

If(Sum(Sales) > 100000, 'Large',
    If(Sum(Sales) > 75000, 'Med', 
      If(Sum(Sales) > 50000, 'Demi',  'Small')
    )
)

Essentially, the expression above classifies into one of four values. I don’t think the indentation adds to readability, and it will lead you into “tab hell” when you get many possibilities. I prefer to write this expression as:

If(Sum(Sales) > 100000, 'Large'
,If(Sum(Sales) > 75000, 'Med' 
,If(Sum(Sales) > 50000, 'Demi'
,'Small'
)))

No indentation, all the closing right parens collected on one line at the end. Makes it very easy in the expression editor to see that you have the right number of parens.

The leading (vs trailing) commas are my personal preference. This makes it easier to comment out logic, and in my view the comma belongs to the If() that follows it, not the preceding If().

I think the above syntax makes it very easy to understand that I am choosing  one of four results, and what the rule is for each result.  Syntactically each If() is the else parameter of the preceding If().  I don’t think of the Ifs as “combined”, rather as “sequential”.

Do indent when you are using If() as the then parameter,  as shown in Tip#4 below.

 

Tip#3: Simplify by testing from high to low. 

The business rule that created this sample expression may have been stated to the Qlik developer like this:

“Classify sales of 0 to 50 000 as ‘Small’, 50 001 to 75 000 as ‘Demi’, 75 001 to 100 000 as ‘Med’, and above 100 000 as ‘Large’.”

The developer may faithfully translate the requirement into this expression.

If(Sum(Sales) > 0 and Sum(Sales) <= 50000, 'Small'
,If(Sum(Sales) > 50000 and Sum(Sales) <= 75000, 'Demi'
,If(Sum(Sales) > 75000 and Sum(Sales) <= 100000, 'Med'
,'Large'
)))

This returns the correct result. Testing from low to high values forces the use of “and” which makes the expression more complex than necessary and potentially slower to execute.  In my experience, testing from high to low, as in the Tip#2 example, yields a cleaner syntax.

 

Tip#4: Use “and” when you mean and.

Here’s a sample expression requirement:

When Sales > 1000 and Region=’US’, it’s “Mega US”. When Sales > 750 and Region = ‘UK’, it’s “Mega UK”. Otherwise it’s “General”.

I have seen this written as:

If(Sum(Sales) > 1000, 
    If(Region = 'US', 'Mega US'),
If(Sum(Sales) > 750, 
    If(Region = 'UK', 'Mega UK'), 
'General'))

While the “and” requirement may be satisfied with a then-if  nesting, I find it clearer with the “and” keyword.

If(Sum(Sales) > 1000 and Region = 'US', 'Mega US'
,If(Sum(Sales) > 750 and Region = 'UK', 'Mega UK' 
,'General'
))

What if the requirement for both US & UK were 1000? You could argue that this is a clear case for nesting, in that there is a shared condition, and perhaps it would be good practice not to repeat ourselves on the Sum(Sales).

If(Sum(Sales) > 1000, 
    If(Region = 'US', 'Mega US',
    If(Region = 'UK', 'Mega UK', 'General')), 
'General')

Notice  we needed to repeat the ‘General’ result to cover the null case.  So it’s not super clean, but it may be worth it to not repeat the sum(Sales) calculation.  Generally I find the performance difference between “and” and “nested if” to be insignificant and tend to favor whatever is the clearer syntax for the given requirement.

What about Pick(Match())? 

I’ve heard it occasionally claimed that a Pick/Match combination will run faster than a nested If.   The expression might look like this:

Pick(
    Match(
      -1
      ,Region= 'US' and Sum(Sales) > 1000
      ,Region= 'UK' and Sum(Sales) > 1000
      , -1
    )
,'Mega US', 'Mega UK','General')

In my own testing and reading I’ve never found any performance advantage to Pick/Match.  That said, sometimes the syntax is appealing.

One thing I don’t like about Pick/Match is the distance between the condition list  and the result list. It’s fairly easy to get the lists  mis-aligned as the expression grows.

I  wish Qlik had a Switch type function like:

Switch (
  condition1 : result1
  [,condition2 : result2, ...]  
  [: defaultResult]
)

 

Tip#5: Simplify by using Column(n) or Measure Name

If your If() refers to something that has already been calculated in the chart, you can use the Column(n) function to refer to the value of a measure/expression column. For example, in a color expression:

If(Column(2) > 0, Green(), Red())

This can be much neater than repeating the expression text and typically runs faster as well.

If you are on Qlik Sense May 2021 or later, you can use Master Measure names in the expression, like:

If([Total Sales] > 0, Green(), Red())

[Total Sales] need not be a measure in this chart.

Both QlikView and Qlik Sense also allow you to reference the Label of a measure/expression column in the chart. In most versions the syntax checker will declare this an error even though it calculates correctly. I tend to avoid the label technique due to this confusion.

 

Tip#6: Don’t use If() as a chart filter

Use If() when you want to dynamically select from two or more alternatives. If() should not be used simply to filter data, like this:

Sum(If(Region = 'EU' and CYTDFlag = 1, Sales))

Filtering is best done with Set Analysis. The same expression written with a Set:

Sum({<Region={'EU'}, CYTDFlag={1}>} Sales)

Set Analysis is much faster than If.  If you are new to Set Analysis, you might initially find the syntax more challenging than If.  But SA  is much more powerful than If and well worth mastering.

 

Tip#7:  Consider the other conditional functions. 

Alt() and Coalesce() can be a more compact and elegant approach to testing for nulls. Instead of:

If(IsNull(SalesRep), Manager, SalesRep)

use:

Coalesce(SalesRep, Manager)
// If you want to consider empty and 
// blank strings as Null:
Coalesce(EmptyIsNull(Trim(SalesRep)), Manager)

When testing against a list of values, use the Match() or WildMatch() functions instead of multiple If() or “or” tests.

If (Match(StateCode, 'VA', 'TN', 'FL', 'GA'), 'South',  'Other')
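For comparison, the same rule written as a chain of "or" tests (the form Match() replaces) might look like:

If(StateCode = 'VA' or StateCode = 'TN' or StateCode = 'FL' or StateCode = 'GA', 'South', 'Other')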

 

I hope you find these tips useful.  You can use my QSDA Pro tool to quickly filter and examine all the uses of the If() function in a Qlik Sense App, located on-prem or in SaaS.

-Rob

 


CubeTester

When working on Qlik Sense performance issues I frequently find I want to measure the performance of specific expressions. I might want to know how variations of an expression perform against each other. In a slow chart with many measures I want the calculation time for each measure individually, to focus my efforts. Or perhaps I’m just satisfying a general curiosity or trying to settle a bet.

You can measure the performance of expression variations by modifying the chart and measuring the overall chart response time with something like Chrome Add Sense or QS Document Analyzer. That approach can get kind of clunky, especially when you are focused on a subset of measures in the chart.

I prefer a more structured approach to testing expressions. The tool I reach for is CubeTester.

CubeTester is an open source Node.js command-line tool for testing the performance of Qlik HyperCubes (Dimensions and Measures). The test specification is written in a JSON file as either a HyperCubeDef or the “simplified” Dimension/Measure syntax.

Here’s a sample test written in simplified syntax that tests three variations of a cube (chart) containing one Dimension and three Measures.

I’ll run  CubeTester specifying the file that holds this test:

node index.js test tests/columns.json

And receive this output:

There is no significant difference in performance between the variations. Importantly, I can also see that all three return identical  total values as well.

CubeTester supports two commands:

  • test : Run tests.
  • extract: Extract app charts into a test file.

There are a number of options that can be specified on the command line or in the test definition. See the readme for more information on available options.

In addition to testing variations or trying out a theory, here are some other cases where I’ve used CubeTester.

  • When working with a mashup where my HyperCube exists only in code, there is no chart to test.
  • In a slow rendering chart I can test individual measures, combinations of measures and non-data expressions (like color expressions) to find the culprit.

Using CubeTester I can quickly try out ideas and document my progress as I work through an issue. I’ve made some interesting discoveries!

Some notes:

  • Testing against a server uses certificates for authentication.  (Pull request welcome if you want more auth options).
  • Make sure you specify “wss” when using a server endpoint, e.g.
    wss://your.server:4747
  • You’ll need to test with enough data to get calculation times of sufficient magnitude.  Two results of 5 milliseconds vs 7 milliseconds are not precise enough to draw conclusions from.
  • Calculation time is affected by the capacity of the target machine and what else is running. I recommend repeating tests until you see a stable pattern. Use the --repeat option and take the lowest result from each repeat.

CubeTester is free to use. Have fun!

-Rob

 


Creating Temporary Script Associations

Summary: I review using Join, Lookup() and ApplyMap() as script techniques  to calculate using fields from multiple tables. I ultimately recommend ApplyMap().

Qlik charts  can calculate values on the fly, using fields from multiple tables in the model.  The associative model takes care of navigating (joining) the correct fields together.  Our expression syntax doesn’t  identify what table a field exists in — the associative logic takes care of this detail for us.

We may want to calculate a measure, for example, “Net Amount”, applying a business rule requiring fields from several tables:

Our expression to calculate “Net Amount” might look like this:

Sum(Amount) - 
RangeSum(
  Sum(Quantity * UnitCost), 
  Sum(Amount * Discount), 
  Sum(Amount * SalesTaxRate), 
  Sum(Amount * ExciseTaxRate)
)

There may be cases (such as performance) where we want to pre-calculate “Net Amount” as a new field in the script.  In script, we don’t have the magic associative logic to assemble the fields.  When a script expression is used to create a new field, all fields must be available  in a single load statement.  This is straightforward when all the required fields are in the same table.  But what do we do when the fields come from multiple tables?

Here are three approaches to solving the problem of calculating a new field using multiple tables in script.

  • Join
  • Lookup() function
  • ApplyMap() function

I’ll demonstrate deriving the same “Net Amount” calculation in the script.

JOIN

The Join option will require us to execute multiple joins to assemble the fields onto each Orders row and then finally do the calculation.  The script might look like this:

Left Join (Orders)
LOAD
 ProductId,
 UnitCost
Resident Products
; 
Left Join (Orders)
LOAD
 CustomerId,
 Discount,
 State
Resident Customers
; 
Left Join (Orders)
LOAD
 State,
 SalesTaxRate,
 ExciseTaxRate
Resident States
;

NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
   Quantity * UnitCost,
   Amount * Discount,
   Amount * SalesTaxRate,
   Amount * ExciseTaxRate
 ) as NetAmount
Resident Orders
;
// Drop the extra fields from Orders.
Drop Fields State, UnitCost, Discount, SalesTaxRate, ExciseTaxRate
From Orders
;

It’s a fairly good option.  It can be a lot of code depending on how many fields and tables we need to traverse. We need to be aware of “how many hops” between tables and may require intermediate joins (State field) to get to the final field (SalesTaxRate & ExciseTaxRate).

When using Join we need to ensure we have no duplicate keys that would mistakenly generate additional rows.

LOOKUP

Lookup() seems the most natural to me. It’s the least amount of code and it even sounds right: “look-up”.  It’s a one-to-one operation so there is no danger of generating extra rows.

It’s my least used option due to performance as we shall see.

Lookup takes four parameters  – a field to return, the field to test for a match, a match value to search for and the table to search.  Using Lookup() our script will look like this:

NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
   Quantity * Lookup('UnitCost', 'ProductId', ProductId, 'Products'),
   Amount * Lookup('Discount', 'CustomerId', CustomerId, 'Customers'),
   Amount * Lookup('SalesTaxRate', 'State', Lookup('State', 'CustomerId', CustomerId, 'Customers'), 'States'),
   Amount * Lookup('ExciseTaxRate', 'State', Lookup('State', 'CustomerId', CustomerId, 'Customers'), 'States')
 ) as NetAmount
Resident Orders
;

Note that for SalesTaxRate and ExciseTaxRate, the third parameter — the match value — is another Lookup() to retrieve the State. This is how we handle  multiple hops, by nesting Lookup().

It’s a nice clean statement that follows a simple pattern.  It performs adequately with small volumes of data.

Lookup() does have a significant performance trap in that it uses a scan to find a matching value. How long it takes to find a value therefore depends on where in the field the value is matched. If it’s the first value it’s very quick, the 1000th value much longer, and the 2000th value exactly twice as long as the 1000th. It’s a bit crazy-making that it executes in O(n) time, for which I prefer the notation U(gh).

APPLYMAP

I like to think of the ApplyMap() approach as an optimized form of Lookup().  We first build mapping tables for each field we want to reference and then use ApplyMap() instead of Lookup() in the final statement. Our script will look like this:

Map_ProductId_UnitCost:
Mapping
Load ProductId, UnitCost
Resident Products
;
Map_CustomerId_Discount:
Mapping
Load CustomerId, Discount
Resident Customers
;
Map_CustomerId_State:
Mapping 
Load CustomerId, State
Resident Customers
;
Map_State_SalesTaxRate:
Mapping 
Load State, SalesTaxRate
Resident States
;
Map_State_ExciseTaxRate:
Mapping 
Load State, ExciseTaxRate
Resident States
;
NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
   Quantity * ApplyMap('Map_ProductId_UnitCost', ProductId, 0),
   Amount * ApplyMap('Map_CustomerId_Discount', CustomerId, 0),
   Amount * ApplyMap('Map_State_SalesTaxRate', ApplyMap('Map_CustomerId_State', CustomerId, 0)),
   Amount * ApplyMap('Map_State_ExciseTaxRate', ApplyMap('Map_CustomerId_State', CustomerId, 0))
 ) as NetAmount
Resident Orders
;

The mapping setup can be a lot of code depending on how many fields are involved. But it’s well structured and clean.

In the final statement, we are “looking up” the value using ApplyMap() and it performs very quickly.  ApplyMap uses a hashed lookup so it does not matter where in the list the value lies, all values perform equally.

We can re-structure and simplify the mapping setup and subsequent use with a subroutine like this:

Sub MapField(keyField, valueField, table)
// Create a mapping table and set a variable named v<valueField>
// equal to the ApplyMap() string.
 [Map_$(keyField)_$(valueField)]:
 Mapping Load [$(keyField)], [$(valueField)]
 Resident $(table);
 Set [v$(valueField)] = ApplyMap('Map_$(keyField)_$(valueField)', [$(keyField)]);
End Sub

Call MapField('ProductId', 'UnitCost', 'Products')
Call MapField('CustomerId', 'Discount', 'Customers')
Call MapField('CustomerId', 'State', 'Customers')
Call MapField('State', 'SalesTaxRate', 'States')
Call MapField('State', 'ExciseTaxRate', 'States')

NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
 Quantity * $(vUnitCost),
 Amount * $(vDiscount),
 Amount * $(vSalesTaxRate),
 Amount * $(vExciseTaxRate)
 ) as NetAmount
;
LOAD
 *,
 $(vState) as State
Resident Orders
;

Note the use of the preceding load to handle the nested lookup of State.   You could also modify the Sub to handle some level of nesting as well.

I typically use the mapping approach as I find it always gives accurate results (with Join you must be careful of duplicate keys) and generally performs the best, and importantly, consistently.

Whether you are new to Qlik or an old hand I hope you found something useful in reading this far.

-Rob

 

 


Does Data Sort Order Impact Chart Calc Time?

Lately I’ve been digging into an old Qlik performance question. How much impact, if any, does the row order of Qlik data tables have on chart calc time? My experience is that for a chart or aggr() cube with a lot of dimension values, ordering of rows by dimension values can have a significant and measurable effect.

Mike Steedle of Axis Group blogged about the issue  a couple of years ago.  Mike’s post includes a useful subroutine to organize any table by a specific field.

I’ve added my own study and sample files on the topic in this QlikCommunity post.

Mike and I are working together on the next update to Qlik Sense Document Analyzer. Mike is keen on analyzing the data model and making useful recommendations. One of the optimization questions we are studying is whether it is possible to make a solid recommendation on data table organization.

I’m curious to hear what others have discovered on the topic.  Do you have any rules you follow in ordering table rows?   Any thresholds or object/expression scenarios where you find it’s worth the trouble to manage the ordering?

-Rob

 


AutoNumber vs AutoNumberHash128

Summary: AutoNumberHash128(A, B) runs about 30% faster than AutoNumber(A & '-' & B).

It’s a common practice to use the script AutoNumber() function to reduce the storage required for large compound keys in a Qlik data model. For example:

AutoNumber(A & '-' & B) as %KeyField

As a standard practice, we generally include a separator like '-' to ensure '1' & '11' does not get confused with '11' & '1'.

The AutoNumber process can add significant run time to a script with many rows.

I’ve always wondered what the AutoNumberHash128() function was good for.

AutoNumberHash128(A,B) as %KeyField

This function first hashes A & B and then autonumbers the result. The end result is the same as the first example given using AutoNumber().  I find the AutoNumberHash128 syntax a bit simpler as a separator is not required.

What surprised me is that the AutoNumberHash128() function runs faster.  Typically about 30% faster than a plain AutoNumber with a concatenated string parameter.

Why is it faster?  The difference is in the function used to create the single value to be autonumbered.  Hash128 is considerably faster than string concatenation (&).

AutoNumberHash128() can take any number of fields, but it does not have an “AutoId” parameter.  The “AutoId” (second parameter) in AutoNumber() is recommended to ensure we get sequential integers when autonumbering more than one key field.  Sequential integers are the most memory efficient storage for keys.

Don’t despair.  AutoNumberHash128() will use the “default” AutoId.  That is fine if you are autonumbering only one key.  If you are doing more than one key, use AutoNumberHash128() for your largest — most rows — key and use AutoNumber() with AutoId for the rest.  You will improve the script run time of one key.
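A sketch of that mixed approach might look like this (the table and key fields are assumed for illustration):

Facts:
LOAD
 // Largest key: AutoNumberHash128() for speed, using the default AutoId.
 AutoNumberHash128(OrderId, LineNo) as %OrderLineKey,
 // Smaller keys: AutoNumber() with an explicit AutoId so each key
 // gets its own sequence of sequential integers.
 AutoNumber(CustomerId & '-' & AddressId, 'CustAddrKey') as %CustAddrKey,
 Amount
From [Facts.qvd] (qvd);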

Another possible tradeoff when you have many large keys is to use AutoNumberHash128 for all keys and forgo the sequential integer optimization.  You will use only 8 bytes per key value which could be significantly less than the original string keys.

-Rob

Update 20 Sept 2022

Things have changed somewhat with the addition of the AutoNumber statement, which is the recommended method to autonumber keys. AutoId is no longer a problem. In my recent testing creating compound key fields, I still find that Hash128() is somewhat faster than the & operator. Here are the results. Option 4 creates the key with the & operator and the AutoNumber statement. Option 5 creates the key with Hash128() and the AutoNumber statement.
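For reference, a sketch of the Option 5 pattern might look like this (table, field and file names assumed):

Orders:
LOAD
 Hash128(OrderId, LineNo) as %OrderLineKey,
 OrderDate,
 Amount
From [Orders.qvd] (qvd);

// Convert the hash values to sequential integers at the end of the script.
AutoNumber [%OrderLineKey];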

 


Preceding Load Performance Update

Summary: Preceding load used to slow down your script, but no more. Beginning with QV Nov 2017, preceding load has no performance penalty.

I’ve posted several times about the elegance of preceding load.  I’ve also written about how preceding load can make your script run significantly slower.  Good news! Beginning with QV release Nov 2017 (12.20) the performance penalty has been eliminated.

To demonstrate  the improvement, let me start with  some test results from QV12.10 SR8,  prior to the improvement.

 

Test 0, the first bar, indicates the time in seconds to perform an optimized load of  a 20 million row QVD.  Test 1, which follows, is loading the same QVD but with the addition of two new calculated  fields in the same LOAD statement.  The calculations are trivial, so the increase in elapsed time is mostly due to the loss of the optimized load.

Test 2 creates the same calculated fields using preceding load and you can see the dramatic increase in elapsed time.  Test 5 adds a “LOAD *” to the preceding load stack and again shows a large increase in duration.

Tests 3, 4 & 6 repeat the same tests using Resident as the source instead of QVD. Once again, a significant increase in duration when preceding load is used.
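To make the test patterns concrete, here’s a sketch of the kind of load statements being compared (file and field names assumed; the real tests used a 20 million row QVD):

// Test 1 pattern: calculated fields in the same LOAD as the QVD read.
Facts:
LOAD *,
 Year(TransactionDate) as TranYear,
 Amount * 1.1 as AmountWithTax
From [Trans.qvd] (qvd);

Drop Table Facts; // each pattern was timed in isolation

// Test 2 pattern: the same calculations moved to a preceding load.
Facts:
LOAD *,
 Year(TransactionDate) as TranYear,
 Amount * 1.1 as AmountWithTax;
LOAD * From [Trans.qvd] (qvd);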

I’ve been running this same test suite for several years across multiple QV releases, different machines and varying datasets.  The results are generally the same.

The problem, as explained to me by Henric Cronström and confirmed by my own observations, is that the preceding load code  uses only a single processing thread.  So while tests 1 & 3 above will use multiple threads, tests 2,4,5,6 will use only a single thread.   One way to think of this is not that preceding load runs slower, but that non-preceding load runs faster.

I never did understand why Preceding-Resident ran slower than Preceding-QVD, but I no longer care!

Here I add test results (in red) for QV Nov 2017 SR1 (QV 12.20).

You can see optimized QVD (test 0)  is about the same.  Adding calculated fields (test 1) is  slightly better between releases.

What is really significant is there is no longer any increase when using preceding load.  Further,  Resident performs faster than QVD as I would expect. (Note both tests used an SSD drive).

This is all great news as there are many cases where preceding load can help make your code more maintainable and understandable.  I hated to choose between clarity and performance.

What about Qlik Sense?   I’ve confirmed that Feb 2018 Desktop exhibits the new “no-penalty” performance.  I don’t know about previous releases.

No reason to fear preceding load!

-Rob


Document Analyzer Batch Analysis

I’ve received several requests to provide a batch interface to the popular QV Document Analyzer tool, allowing multiple QVWs to be analyzed with a single command. It’s now available in the QV Document Analyzer bundle available for download here.

The script is a Windows cmd file. The usage from the command line is:

DaBatch.cmd somedir
where “somedir” is a directory of QVWs to be analyzed. Each QVW in the directory will be processed by Document Analyzer and the results will be saved for later review.
Before running, there are a number of configuration variables in DABatch.cmd you will want to review and modify as necessary.

 

REM *** Path to QV.exe executable ***

SET qvDir=C:\Program Files\QlikView\Qv.exe

This is the location of the QV Desktop executable. The provided value is the default location for most users and is typically appropriate as-is.

REM *** Path to DocumentAnalyzer.qvw. Note that v3.6 or later is required! ***

SET DaPath=DocumentAnalyzer_V3.6.qvw
Where is the Document Analyzer.qvw to be found?  Note that DA V3.6 or later is required by DABatch.

 

REM *** Directory to store DocumentAnalyzerResults QVDs and QVWs. Will be created if it doesn't exist ***

SET DaResultsDir=C:\temp\MyDaResults
Specify the directory where analysis results will be saved.  If this directory does not exist, it will be created.

 

REM *** Should the analyzer results be stored in a QVD (YES/NO)? ***
SET SaveResultsInQVD=YES
Do you want to save the DA results in a QVD for later analysis by the DaCompareTool.qvw?  The default of “YES” is usually appropriate here.   QVD result files include a timestamp so you will always get new files for each run. Change to “NO” if you don’t want result QVDs.

 

REM *** Should the analyzer results be stored in a QVW (YES/NO)? ***

SET SaveResultsInQVW=YES
If “YES”, a DA QVW will be saved for each analysis and named “DocumentAnalyzer_your_qvwname.qvw”.  If a file exists with this name, it will be overwritten. If you don’t want individual DA QVWs, change this variable to “NO”.

 

After launching DABatch, you will receive one prompt:
Analysis Title? <ENTER> for 'Baseline'
The prompt is requesting a title to be assigned to the Result QVDs that will be consumed by DaCompareTool.  To accept the default of “Baseline”,  press <Enter>.  Otherwise type a new value and press <Enter>.

If you have set “SET SaveResultsInQVD=NO” as a configuration option, the title value is irrelevant.  (Perhaps I should not prompt in that case; next version?).

While the script is running, Document Analyzer windows will be launched for each QVW and progress messages displayed. It’s best to keep your hands off the keyboard to get proper timings. A good time to get that coffee.

When execution is complete you’ll see a summary message.
*****************************************************
Batch Analysis complete. 3 QVWs analyzed.
*****************************************************

 

You can now review each “DocumentAnalyzer_your_qvwname.qvw” file or load the result QVDs into DaCompareTool.qvw for comparative analysis.

 

Please let me know in the comments section if you can think of enhancements that support your use case for DA batch analysis.

 

-Rob