Category Archives: QlikView

If() Tips

Summary:  I offer some tips for writing better performing and easier to maintain syntax when using the Qlik If() function. 

The Qlik If() function is very powerful and  frequently appears in Qlik Sense and QlikView apps.

Expressions using multiple If() functions can easily get out of hand and become difficult to maintain or debug, as well as poor performers.

In this post I’ll offer some advice on avoiding If() pitfalls and tips to write easier to understand expressions.

The Qlik syntax diagram for the If function is:

if(condition , then [, else])

That’s perfectly clear to most people, but I prefer to think of it more like:

if(condition , true result [, false result])

Tip#1: If() does not short circuit.

Both the true & false branches are calculated even when only one is possibly true.  For example:

If(Only(Currency) = 'LC', Sum(Sales), Sum([Sales LC]))

In this case both Sum() expressions will be calculated even though only one value will be utilized.  In most cases this behavior is not of concern and in many applications will perform very well.   However, a nested If() with many possible branches or a large data set may perform poorly.

For more on the short circuit issue see “How to Choose an Expression“.

 

Tip#2: Use indentation sparingly.

The true or false result may be an additional, “nested” If(), which is where we start to see some ugly syntax.  Following traditional programming conventions many people automatically indent the nested if like this:

If(Sum(Sales) > 100000, 'Large',
    If(Sum(Sales) > 75000, 'Med', 
      If(Sum(Sales) > 50000, 'Demi',  'Small')
    )
)

Essentially, the expression above classifies Sales into one of four values.  I don’t think indentation adds to the readability, and it will lead you into “tab hell” when you get many possibilities.  I prefer to write this expression as:

If(Sum(Sales) > 100000, 'Large'
,If(Sum(Sales) > 75000, 'Med' 
,If(Sum(Sales) > 50000, 'Demi'
,'Small'
)))

No indentation, all the closing right parens collected on one line at the end. Makes it very easy in the expression editor to see that you have the right number of parens.

The leading (vs trailing) commas are my personal preference.  This makes it easier to comment out logic and, in my view, the comma belongs to the If that follows it, not the preceding If.

I think the above syntax makes it very easy to understand that I am choosing  one of four results, and what the rule is for each result.  Syntactically each If() is the else parameter of the preceding If().  I don’t think of the Ifs as “combined”, rather as “sequential”.

Do indent when you are using If() as the then parameter,  as shown in Tip#4 below.

 

Tip#3: Simplify by testing from high to low. 

The business rule that created this sample expression may have been stated to the Qlik developer like this:

“Classify sales of 0 to 50 000 as “Small”, 50 001 to 75 000 as “Demi”, 75 001 to 100 000 as “Med” and above 100 000 as “Large”.

The developer may faithfully translate the requirement into this expression.

If(Sum(Sales) > 0 and Sum(Sales) <= 50000, 'Small'
,If(Sum(Sales) > 50000 and Sum(Sales) <= 75000, 'Demi'
,If(Sum(Sales) > 75000 and Sum(Sales) <= 100000, 'Med'
,'Large'
)))

This returns the correct result. Testing from low to high values forces the use of “and” which makes the expression more complex than necessary and potentially slower to execute.  In my experience, testing from high to low, as in the Tip#2 example, yields a cleaner syntax.

 

Tip#4: Use “and” when you mean and.

Here’s a sample expression requirement:

When Sales > 1000 and Region=’US’, it’s “Mega US”. When Sales > 750 and Region = ‘UK’, it’s “Mega UK”. Otherwise it’s “General”.

I have seen this written as:

If(Sum(Sales) > 1000, 
    If(Region = 'US', 'Mega US'),
If(Sum(Sales) > 750, 
    If(Region = 'UK', 'Mega UK'), 
'General'))

While the “and” requirement may be satisfied with a then-if  nesting, I find it clearer with the “and” keyword.

If(Sum(Sales) > 1000 and Region = 'US', 'Mega US'
,If(Sum(Sales) > 750 and Region = 'UK', 'Mega UK' 
,'General'
))

What if the requirement for both US & UK were 1000?  You could argue that this is a clear case for nesting, in that there is a shared condition, and perhaps it would be good practice to not repeat ourselves on the Sum(Sales).

If(Sum(Sales) > 1000, 
    If(Region = 'US', 'Mega US',
    If(Region = 'UK', 'Mega UK', 'General')), 
'General')

Notice  we needed to repeat the ‘General’ result to cover the null case.  So it’s not super clean, but it may be worth it to not repeat the sum(Sales) calculation.  Generally I find the performance difference between “and” and “nested if” to be insignificant and tend to favor whatever is the clearer syntax for the given requirement.

What about Pick(Match())? 

I’ve heard it occasionally claimed that a Pick/Match combination will run faster than a nested If.   The expression might look like this:

Pick(
    Match(
      -1
      ,Region = 'US' and Sum(Sales) > 1000
      ,Region = 'UK' and Sum(Sales) > 1000
      ,-1
    )
,'Mega US', 'Mega UK', 'General')

In my own testing and reading I’ve never found any performance advantage to Pick/Match.  That said, sometimes the syntax is appealing.

One thing I don’t like about Pick/Match is the distance between the condition list  and the result list. It’s fairly easy to get the lists  mis-aligned as the expression grows.

I  wish Qlik had a Switch type function like:

Switch (
  condition1 : result1
  [,condition2 : result2, ...]  
  [: defaultResult]
)

 

Tip#5: Simplify by using Column(n) or Measure Name

If your if() refers to something that has already been calculated in the chart, you can use the Column(n) function to refer to the value of a measure/expression column. For example, in a color expression:

If(Column(2) > 0, Green(), Red())

This can be much neater than repeating the expression text and typically runs faster as well.

If you are on Qlik Sense May 2021 or later, you can use Master Measure names in the expression like:

If([Total Sales] > 0, Green(), Red())

[Total Sales] need not be a measure in this chart.

Both QlikView and Qlik Sense also allow you to reference the Label of a measure/expression column in the chart. In most versions the syntax checker will declare this an error even though it calculates correctly. I tend to avoid the label technique due to this confusion.

 

Tip#6: Don’t use If() as a chart filter

Use If when you want to dynamically select from two or more alternatives.  If should not be used simply to filter data like this:

Sum(If(Region = 'EU' and CYTDFlag = 1, Sales))

Filtering is best done with Set Analysis. The same expression written with a Set:

Sum({<Region={'EU'}, CYTDFlag={1}>} Sales)

Set Analysis is much faster than If.  If you are new to Set Analysis, you might initially find the syntax more challenging than If.  But SA  is much more powerful than If and well worth mastering.

 

Tip#7:  Consider the other conditional functions. 

Alt() and Coalesce() can be a more compact and elegant approach to testing for nulls. Instead of:

If(IsNull(SalesRep), Manager, SalesRep)

use:

Coalesce(SalesRep, Manager)
// If you want to consider empty and 
// blank strings as Null:
Coalesce(EmptyIsNull(Trim(SalesRep)), Manager)

When testing against a list of values, instead of multiple If() or “or” conditions, use the Match() or WildMatch() functions.

If(Match(StateCode, 'VA', 'TN', 'FL', 'GA'), 'South', 'Other')
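
WildMatch() adds wildcard (* and ?) and case-insensitive matching. A quick sketch, with an invented ProductName field and patterns used only for illustration:

If(WildMatch(ProductName, '*deluxe*', '*premium*'), 'Premium Line', 'Standard')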

 

I hope you find these tips useful.  You can use my QSDA Pro tool to quickly filter and examine all the uses of the If() function in a Qlik Sense App, located on-prem or in SaaS.

-Rob

 

Share

Can I Update a Mapping Table?

Summary: Can you concatenate rows to a Mapping Table? The answer is “Yes” and I show you how.

Recently my Masters Summit colleague Oleg Troyansky asked me “What are the limits on updating an existing Mapping Table in Qlik script?”.  My immediate answer was “You can’t update a Mapping Table after it’s been created”.  Then Oleg showed me a script that surprised me — adding to a mapping table in a loop, relying on implicit concatenation.  The loop works, but what if we need to load from multiple sources?

When the contents of a mapping table come from multiple Load statements, I have always advised doing standard loads to a temp table and then a final Mapping Load from the temp table.
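
For reference, that temp-table pattern looks something like this (a minimal sketch with inline data):

TmpMap:
Load * Inline [
 from, to
 1, A
 2, B
 ];

Concatenate (TmpMap)
Load * Inline [
 from, to
 3, C
 ];

MapX:
Mapping Load from, to Resident TmpMap;

Drop Table TmpMap;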

Turns out you can concatenate to a mapping table, so maybe that temp table is unnecessary.  I can’t find any documentation on the topic, so here’s what I’ve found from experimentation.

We may wish to do something like this:

MapX:
 Mapping Load * Inline [
 from, to
 1, A
 2, B
 ];
 
Concatenate (MapX)
Mapping Load * Inline [
 from, to
 3, C
 ];

That script will fail with a “Table ‘MapX’ not found” error. You cannot directly reference a mapping table this way.

Interestingly, if we leave off the “Concatenate (MapX)”, it will concatenate and result in the desired mapping.  Implicit concatenation will kick in and the second load will add rows to the mapping table.  I’ve included some ApplyMap() code in this example so you can copy/paste and test for yourself.

MapX:
 Mapping Load * Inline [
 from, to
 1, A
 2, B
 ];
 
Mapping Load * Inline [
 from, to
 3, C
 ];

Data:
 Load RecNo() as Recid,
 ApplyMap('MapX', RecNo()) as Mapped
 AutoGenerate 5;

The resulting output looks like this, proving that 3/C has been added to the map.

Unlike implicit concatenation for standard tables, the fieldnames  need not be the same. This script will concatenate. Note the different fieldnames in the second load.

MapX:
 Mapping Load * Inline [
 from, to
 1, A
 2, B
 ];
 
Mapping Load * Inline [
 X, Y
 3, C
 ];

What if there is an intervening standard load?  Will concatenation occur?

MapX:
 Mapping Load * Inline [
 from, to
 1, A
 2, B
 ];
 
Fact:
 LOAD 1 as X AutoGenerate 1; 

Mapping Load * Inline [
 X, Y
 3, C
 ];

The answer is no, concatenation will not happen. The second Mapping Load will create a new invisible mapping table.

So if I  can’t name the mapping table in a Concatenate prefix, is there some other way to explicitly concatenate?  Turns out there is. This will work.

MapX:
 Mapping Load * Inline [
 from, to
 1, A
 2, B
 ];

Fact:
 LOAD 1 as X AutoGenerate 1;

MapX:
 Mapping Load * Inline [
 X, Y
 3, C
 ];

When naming the mapping table again with the same label, explicit concatenation will occur!  This is unlike a standard load, where repeating a label results in a new table with a “-1” suffix (when fieldnames don’t match).

In summary, you can add rows to a mapping table. Repeating the table label ensures you are adding rows, whether there is an intervening standard load or not.

Now you may be wondering …can I do an implied Join? I think not.

On Jan 20, 2021 I’ll be sharing more scripting tips and techniques in my “Advanced Scripting” on-line course as part of the “Masters Summit at Home” series.   Join me there for more on topics like creating reusable script and maintaining optimized loads.

Share

Data Browser Tricks

Summary: I demonstrate my latest “Data Browser” sheet for use in Qlik data modeling. Download here.

Today I want to share the latest version of my “Data Browser” sheet.

What’s a “Data Browser”? It’s my name for something you may already have: the ubiquitous “System Sheet”, sometimes called a “Profiler”, where you have listboxes or charts utilizing the system fields “$Field”, “$Table” and “$Rows”.  These fields are part of the shadow data model that holds useful metadata about the data loaded in your app.
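
A bare-bones version of such a sheet is just a listbox on $Field plus a straight table with $Table as the dimension and a row-count expression. A minimal sketch (expression only):

// Rows loaded per data model table, with $Table as the chart dimension
Sum($Rows)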

Some very useful example profile sheets have been shared in the community over the years.

You can download the latest QlikView and Qlik Sense versions of my Data Browser here.

For QV, I just copy the objects into my new app in one go. For QS, I either copy the charts one-by-one or, if I’m lucky, they are already in my template (who’s going to build the sheet copy extension?).

I’ll do this walk through with the QV version, the QS version is similar.

When I make selections in $Field I can see values and frequency counts in a listbox. For example, after selecting the “Sales” field, the values and frequency counts are shown, along with a histogram and some descriptive statistics about the field values.

I can select field values and drill into data using  this sheet or my application sheets. For example, I might want to select the outlier high Sales value, then select Product in $Field to find out what Product is associated with this Sale, clear the Sales field to see what other Sales look like for this product and so on.

There are several properties, such as Null Count (Information Density) and field alpha/num content ($tags), that I can get from the built-in Table Model Viewer.  The Viewer requires me to examine one field at a time.  In the chart below I get an overview of those properties of interest.  Because it’s a chart, I can use sorting and selecting to focus.

Something I don’t get from the Table Viewer is the difference between numeric count and text count for a field.  I like to surface this problem (highlighted in yellow) early in my modeling.

One of my favorite features is the “Value Association” chart below. It’s likely a favorite because it took me a long time to work out the expression!

In  Table Viewer we use “Subset Ratio” to understand where we have connected and un-connected data in our model.  Subset Ratio is limited to reporting the relationship between key fields. It can’t tell me how data field  values in this table associate to data field values in other tables.  Subset ratio, like other stats in the Viewer, does not respect selections. For example, if I select a specific Customer, how many SalesReps are linked to the Customer?

The chart below (highlights added) covers all these use cases and can also surface problems in the model.

I’ll start by selecting a field central to my model, in this case “OrderID”.  I then sort by the “Pct” column.  This column represents the percentage of field values associated to the selected field, “OrderID”.

In the green highlight, I’ve called out a Table “Sales” that has zero association with Orders. Something to look into.

In the orange callout, I can see that only 19% of my Employees are associated with Orders.  That might be a candidate for trimming the Employees table when loading.

In the red callout, I see something puzzling. Only 47% of my OrderDates are linked to Orders. That may be ok, I would need to review the data.  What looks not ok is that WeekDay Pct is also 47%. I would have expected something like 5/7 or 6/7. No fraction of weekdays would equal “47%”.  And there are 1092 values for WeekDay…something is off.

I’ll select WeekDay from the chart and examine the values in the listbox. Aha, the Weekday values were created incorrectly.  They were created with a Date() function instead of the correct WeekDay() function.
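
In script terms the bug looked something like this (OrderDate is an assumed field name, used only for illustration):

Date(OrderDate) as WeekDay      // wrong: one value per calendar date
// should have been:
WeekDay(OrderDate) as WeekDay   // correct: seven weekday values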

I used to hide the sheet before production but now I generally leave it in as I and others find it useful in production as well.

Besides the Table Viewer, another Qlik modeling tool I really like is Catwalk. If you haven’t used Catwalk I encourage you to check it out.  I won’t go into explaining Catwalk as it has an excellent tutorial built in.

Do you have any favorite profiling or modeling tools to use with Qlik?

-Rob

Share

Creating Temporary Script Associations

Summary: I review using Join, Lookup() and ApplyMap() as script techniques  to calculate using fields from multiple tables. I ultimately recommend ApplyMap().

Qlik charts  can calculate values on the fly, using fields from multiple tables in the model.  The associative model takes care of navigating (joining) the correct fields together.  Our expression syntax doesn’t  identify what table a field exists in — the associative logic takes care of this detail for us.

We may want to calculate a measure, for example, “Net Amount”, applying a business rule requiring fields from several tables:

Our expression to calculate “Net Amount” might look like this:

Sum(Amount) - 
RangeSum(
  Sum(Quantity * UnitCost), 
  Sum(Amount * Discount), 
  Sum(Amount * SalesTaxRate), 
  Sum(Amount * ExciseTaxRate)
)

There may be cases (such as performance) where we want to pre-calculate “Net Amount” as a new field in the script.  In script, we don’t have the magic associative logic to assemble the fields.  When a script expression is used to create a new field, all fields must be available  in a single load statement.  This is straightforward when all the required fields are in the same table.  But what do we do when the fields come from multiple tables?
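
For example, this naive attempt fails with a field-not-found error, because UnitCost is a field of the Products table, not Orders (a sketch based on the model used below):

Load
 OrderId,
 Quantity * UnitCost as ExtendedCost
Resident Orders
;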

Here are three approaches to solving the problem of calculating a new field using multiple tables in script.

  • Join
  • Lookup() function
  • ApplyMap() function

I’ll demonstrate deriving the same “Net Amount” calculation in the script.

JOIN

The Join option will require us to execute multiple joins to assemble the fields onto each Orders row and then finally do the calculation.  The script might look like this:

Left Join (Orders)
LOAD
 ProductId,
 UnitCost
Resident Products
; 
Left Join (Orders)
LOAD
 CustomerId,
 Discount,
 State
Resident Customers
; 
Left Join (Orders)
LOAD
 State,
 SalesTaxRate,
 ExciseTaxRate
Resident States
;

NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
   Quantity * UnitCost,
   Amount * Discount,
   Amount * SalesTaxRate,
   Amount * ExciseTaxRate
 ) as NetAmount
Resident Orders
;
// Drop the extra fields from Orders.
Drop Fields State, UnitCost, Discount, SalesTaxRate, ExciseTaxRate
From Orders
;

It’s a fairly good option.  It can be a lot of code depending on how many fields and tables we need to traverse. We need to be aware of “how many hops” between tables and may require intermediate joins (State field) to get to the final field (SalesTaxRate & ExciseTaxRate).

When using Join we need to ensure we have no duplicate keys that would mistakenly generate additional rows.
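
A quick way to check is to count rows per key before joining. A sketch, not part of the original script:

// Each ProductId should occur exactly once in Products;
// a KeyCount above 1 means the Left Join would multiply Orders rows.
KeyCheck:
Load ProductId, Count(ProductId) as KeyCount
Resident Products
Group By ProductId
;
// Inspect KeyCount in a debug run, then drop the table.
Drop Table KeyCheck;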

LOOKUP

Lookup() seems the most natural to me. It’s the least amount of code and it even sounds right: “look-up”.  It’s a one-to-one operation so there is no danger of generating extra rows.

It’s my least used option due to performance as we shall see.

Lookup takes four parameters  – a field to return, the field to test for a match, a match value to search for and the table to search.  Using Lookup() our script will look like this:

NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
   Quantity * Lookup('UnitCost', 'ProductId', ProductId, 'Products'),
   Amount * Lookup('Discount', 'CustomerId', CustomerId, 'Customers'),
   Amount * Lookup('SalesTaxRate', 'State', Lookup('State', 'CustomerId', CustomerId, 'Customers'), 'States'),
   Amount * Lookup('ExciseTaxRate', 'State', Lookup('State', 'CustomerId', CustomerId, 'Customers'), 'States')
 ) as NetAmount
Resident Orders
;

Note that for SalesTaxRate and ExciseTaxRate, the third parameter — the match value — is another Lookup() to retrieve the State. This is how we handle  multiple hops, by nesting Lookup().

It’s a nice clean statement that follows a simple pattern.  It performs adequately with small volumes of data.

Lookup() does have a significant performance trap in that it uses a sequential scan to find a matching value.  How long it takes to find a value therefore depends on where in the field the value is matched.  If it’s the first value it’s very quick, the 1000th value much longer, and the 2000th value exactly twice as long as the 1000th. It’s a bit crazy-making that it executes in O(n) time, for which I prefer the notation U(gh).

APPLYMAP

I like to think of the ApplyMap() approach as an optimized form of Lookup().  We first build mapping tables for each field we want to reference and then use ApplyMap() instead of Lookup() in the final statement. Our script will look like this:

Map_ProductId_UnitCost:
Mapping
Load ProductId, UnitCost
Resident Products
;
Map_CustomerId_Discount:
Mapping
Load CustomerId, Discount
Resident Customers
;
Map_CustomerId_State:
Mapping 
Load CustomerId, State
Resident Customers
;
Map_State_SalesTaxRate:
Mapping 
Load State, SalesTaxRate
Resident States
;
Map_State_ExciseTaxRate:
Mapping 
Load State, ExciseTaxRate
Resident States
;
NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
   Quantity * ApplyMap('Map_ProductId_UnitCost', ProductId, 0),
   Amount * ApplyMap('Map_CustomerId_Discount', CustomerId, 0),
   Amount * ApplyMap('Map_State_SalesTaxRate', ApplyMap('Map_CustomerId_State', CustomerId, 0)),
   Amount * ApplyMap('Map_State_ExciseTaxRate', ApplyMap('Map_CustomerId_State', CustomerId, 0))
 ) as NetAmount
Resident Orders
;

The mapping setup can be a lot of code depending on how many fields are involved. But it’s well structured and clean.

In the final statement, we are “looking up” the value using ApplyMap() and it performs very quickly.  ApplyMap uses a hashed lookup so it does not matter where in the list the value lies, all values perform equally.

We can re-structure and simplify the mapping setup and subsequent use with a subroutine like this:

Sub MapField(keyField, valueField, table)
// Create a mapping table and set a v<valueField> variable
// equal to an ApplyMap() string.
 [Map_$(keyField)_$(valueField)]:
 Mapping Load [$(keyField)], [$(valueField)]
 Resident $(table);
 Set [v$(valueField)] = ApplyMap('Map_$(keyField)_$(valueField)', [$(keyField)]);
End Sub

Call MapField('ProductId', 'UnitCost', 'Products')
Call MapField('CustomerId', 'Discount', 'Customers')
Call MapField('CustomerId', 'State', 'Customers')
Call MapField('State', 'SalesTaxRate', 'States')
Call MapField('State', 'ExciseTaxRate', 'States')

NetAmount:
LOAD
 OrderId,
 Amount - RangeSum(
 Quantity * $(vUnitCost),
 Amount * $(vDiscount),
 Amount * $(vSalesTaxRate),
 Amount * $(vExciseTaxRate)
 ) as NetAmount
;
LOAD
 *,
 $(vState) as State
Resident Orders
;

Note the use of the preceding load to handle the nested lookup of State.   You could also modify the Sub to handle some level of nesting as well.

I typically use the mapping approach as I find it always gives accurate results (with Join you must be careful of duplicate keys) and generally performs the best, and importantly, consistently.

Whether you are new to Qlik or an old hand I hope you found something useful in reading this far.

-Rob

 

 

Share

Num() — Script vs Chart

Summary: I review a subtle difference between using Num() in the script vs Num() in charts. I make mistakes so you don’t have to 🙂 

2020 marks my 13th year blogging about Qlik.  I’m still learning, and still making mistakes!  I’ll review a problem I created for myself recently, with the hope that you won’t have to repeat the exercise.

Here is a simplified example of the problem, enough to demonstrate the issue:  This is the starting data table.

We have some number of “Metric” and “Value” pairs, along with fields that describe how the data should be formatted for display.  The “Format” field contains a Qlik format specification.  Format may be used in a Num() function, as shown in the “Num(Value, Format)” measure in this table. Output is as expected, so far so good.
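
Recreated as an inline load, the two rows that matter for this story look roughly like this (values and format codes are illustrative, not the original data):

Data:
Load * Inline [
 Metric, Value, Format
 Customers per location, 4, 0
 Days to ship, 4, 0 days
 ];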

In my project, a requirement came up to format the Value in the script. I moved the Num() measure to the script like this:

Num(Value, Format) as ValueFormatted

I expected ValueFormatted to yield the same result as the chart measure. Adding “ValueFormatted” to the chart yields this:

The results are not the same.  ValueFormatted for “Days to ship” is incorrect. The other values are correct.

Let’s introduce another variation. This time I’ll sort the input when I create ValueFormatted.

Left Join(Data) 
Load 
    *, 
    Num(Value, Format) as ValueFormatted 
Resident Data 
Order by Metric Desc ;

Now “Customers per location” is incorrect and the other values are correct! What gives?

The Num() function returns a Dual() value.  Duals have both a string (display) and a numeric value.  When populating a data model field, Qlik will use a single string representation for a given numeric value in that field.  The string selected will be the first one encountered for that numeric value.

“Customers per location” and “Days to ship” shared the numeric value 4.  In the data model, one or the other string representation — “4” or “4 days” — will be used, depending on which one is created first.

To get the correct results in this scenario — that is, unique strings dependent on Format — add the Text() function to extract the string at runtime.

Text(Num(Value, Format)) as ValueFormatted

Resolution came of course after I took the advice of my colleague Barry  to “take a walk”.

I hope my story might save you some trouble. Happy scripting!

-Rob

Share

Chart Search in QlikView?

Summary: I demonstrate how to self-collect qvw metadata in a load script and use the metadata to implement a chart search feature in my qvw.

Qlik Sense has that cool chart search feature. Can we have the same in QlikView? Something where we can search for a keyword like “price” and see all the charts that have “price” in the title and on selection go directly to that chart?  Maybe searching chart Dimensions and Expressions as well?

In this downloadable example qvw, I’ve included script on the last tab to read xml metadata from the current qvw and build a table of chart titles linked to associated sheets.  When a title is selected, an Action (Document Properties, Triggers) assigned to the field  will go to the associated sheet.

Download the example and check out the script.  In the script you’ll notice some configuration options to include Dimensions and Expressions in the search.  They are set off in the example but feel free to play.

You’ll also notice in the script some code for mapping container objects to sheets. Unfortunately, the xml metadata does not contain this mapping, so it has to be added manually if you want it.  Objects outside containers can be mapped automatically.

For me this was a “just for fun” exercise, no one asked for it (although I thought I saw something on Qlik Community…).  Let me know if you make a useful implementation of it or if you improve the process.

-Rob

 

 

Share

Parsing Non-Standard Signs

Summary: I demonstrate using Num#() and Alt() functions to read numbers with non-standard signs.  Download link at bottom of post.

When reading from text files, the default Qlik interpretation of numeric sign syntax is as follows:

100:  No prefix, positive number
+65: “+” prefix, positive number
-110: “-” prefix, negative number

In the default interpretation a “-” suffix or “()” are not recognized as valid numbers and are loaded as text values.

120-
(200)

Sign indicators like “CREDIT” or “DEBIT” are by default unknown to Qlik and the value will be loaded as text.

300 CREDIT
400 DEBIT

In a Table Box, Chart Dimension or Listbox, numeric values are by default right aligned and text values are left aligned. This is a simple way to check what is text and what is numeric.

Aggregation functions, such as Sum(), treat text values as zero.  So a chart using the example numbers above would look like this:


I can utilize the num#() script function to tell Qlik how to read numbers that use signs other than the defaults. For example, to indicate that a trailing minus is used:

Num#(Sample, '0;0-') as Amount2

That takes care of “120-“.  But what about the other odd signs?  I can nest multiple num#() functions inside Alt() to test various patterns:

 Alt(
   Num#(Sample, '0;0-')
   ,Num#(Sample, '0;(0)')
   ,Num#(Sample, '0 CREDIT;0 DEBIT')
 ) as Amount3

The chart demonstrates that all values are correctly recognized as numbers.  They do retain their input values as the display format.


If I want to harmonize the display formats, I can add an outer Num() function to indicate the display format for all.

 Num(
   Alt(
     Num#(Sample, '0;0-')
     ,Num#(Sample, '0;(0)')
     ,Num#(Sample, '0 CREDIT;0 DEBIT')
   )
 ,'#,##0.00;(#,##0.00)') as Amount4

Downloadable QV & QS examples to accompany this post can be found here.

-Rob

Share

Making it Touchy

I like my apps to work like dog toys. When I poke them, they should “squeak”.  I find myself poking at screens with my mouse or finger wondering “What’s this?”.

When I build a QlikView app I try to make it “touchy”.  That is, if a curious user pokes at something on the screen I  provide  more information about that object in the form of a popup/tooltip.  For example, explain the meaning of a conditional warning icon:


In this post I’ll look at some options available in QlikView to provide touchy feedback to users of your app.  A qvw example to accompany this post is available for download here.

The Help Text property is available on every object’s Caption pane.  The value can be fixed text or an expression.


When Help Text is present in a chart, the  ?  icon will appear in the chart caption.  Touching the ? will display the Help Text in a tooltip.

 

Object Help Text is a good place to document the details of the chart and highlight anything special about this chart.  Try to anticipate questions  like “does this include…” or “where does the x number  come from..” and provide those answers in advance.

You can provide as much text as necessary and use newlines and indentation for formatting.  You cannot change the font face or color.
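
As an example, a Help Text expression might look like this (a made-up illustration; Chr(10) produces the newline):

='This chart excludes returns and internal orders.'
 & Chr(10) & 'Source: Orders table, last reloaded ' & Date(ReloadTime())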

When Help Text is used in a Text Object or a Button, a caption is not required.  If the caption is off for those objects, the tooltip will appear when hovering anywhere in the object.  This is useful, as seen in the icon example at the beginning of this post.

The Chart Dimension or Expression “Comment” property can provide a tooltip for individual columns in a text chart.  For Dimensions, any data model comment for that field will be used if the Dimension Comment is empty.


Using multiline Dimensions or Expressions in a text table provides  cell level popups:


(More about cell popups here)

Scatter charts can provide focus to points of interest. As a user I typically want to know more about specific points.  Using additional expressions with “Text as Popup” can provide rich detail, even new aggregations using different dimensions.

 

Check out the downloadable example “Make it Touchy”  for more tips including using transparent text objects to create tooltips anywhere.

-Rob

Share

qcb-qlik-sse, A General Purpose SSE

In QlikView we have the ability to add functions to the scripting language by writing VbScript in the document module (sometimes called the “macro module”).  Typical additions included regular expression matching and parsing, geo calculations, url encoding and the like.

Qlik Sense does not have the module feature, but both Sense and QlikView share a common feature, Server Side Extension (SSE).  SSE is typically positioned as a method to leverage an external calculation engine such as R or Python from within Qlik script or charts.  The Qlik OSS team has produced a number of SSE examples in various languages.

SSE seems to be a good fit for building the “extra” functions (such as regex) that I am missing in Sense.  The same SSE can serve both Sense and QlikView.

Installing and managing an SSE takes some effort, so to be clear, I don’t want to create a new SSE for every small function addition.  What I want is a general purpose SSE where I can easily add new functions, similar to the way QlikView Components does for scripting.

Miralem Drek has created a package, qlik-sse,  that makes for easy work of implementing an SSE using nodejs.  What I’ve done is use qlik-sse to create qcb-qlik-sse,  a general purpose SSE that allows functions to be written in javascript and added in a “plugin” fashion.

My motivating  principles for qcb-qlik-sse:

  • Customers set up the infrastructure — Qlik config & SSE task — once.
  • Allow function authors to focus on creating function and not SSE details.
  • Leverage community through a shared function repository.

I’ve implemented a number of functions already.  You can see the current list here. Most of the functions thus far are string functions, like RegexTest and HtmlExtract, that I have frequently implemented in QlikView modules or missed from other languages.

One of the more interesting functions I’ve implemented is CreateMeasure(), which allows you to create Master Measures from load script.  This is a problem I’ve been thinking about for some time and qcb-qlik-sse seemed to be a natural place to implement.

If you want to give qcb-qlik-sse a try, download or clone the project. Nodejs 8+ is required.  Some people report problems trying to install grpc using node 12, so if you are new to all this I recommend you install nodejs v10 instead of the latest v12.

If you are familiar with github and npm, you will hopefully find enough information in the readme(s) to get going. If not, here’s a quickstart.

  1. Install nodejs if not already present.  To check the version of nodejs on your machine, type at a command prompt:
    node --version
  2. Download and extract qcb-qlik-sse on the same machine as your Qlik desktop or server.
  3. From a command prompt in the qcb-qlik-sse-master directory install the dependent packages:
    npm install
  4. Configure the SSE plugin in Qlik.  The recommended prefix is QCB. If configuring in QlikView or Qlik Sense Desktop, the ini statement will be:
     SSEPlugin=QCB,localhost:50051
  5. Start the SSE:
     ./runserver.cmd

The “apps” folder in the distribution contains a sample qvf/qvw that exercises the functions.
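
Once the plugin is running, functions are called with the configured prefix, in both script and charts. For example, assuming RegexTest takes a value and a pattern (check the function docs for the actual signature; PostalCode is just an example field), a chart expression might look like:

QCB.RegexTest(PostalCode, '^\d{5}$')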

I’d love to get your feedback and suggestions on usage or installation.

-Rob

 

Share

A Common SSE Plugin Project

Summary: I introduce qcb-qlik-sse, a community Server Side Extension to share custom Qlik functions. 

At the Masters Summit for Qlik, I dive into several different methods of creating reusable script and custom functions.  In QlikView we have the ability to write custom functions using VbScript/Jscript in the qvw Module.

Custom functions have been useful for things like regular expressions, geo calculations, url encoding, encryption and others.  I’ll call them “edge functions” — some of us need them, some of us don’t.

Qlik Sense does not have the module facility. How can we satisfy the requirement for custom functions in Qlik Sense?  The Server Side Extension (SSE) facility can fill the need and is available to both Qlik Sense and QlikView.

An SSE Plugin runs as a separate task and provides communication with a Qlik Script or chart Expression via a TCP port. The same SSE Plugin can serve both QS and QV.

Anyone can write an SSE. The Qlik team provides the SSE base and you write a plugin that wires your new functions to Qlik. The new functions can be used in both Script and Charts.  A number of plugins have already been produced.

SSE seems to be the ideal place to provide a collection of edge functions.  Rather than a bunch of one-offs, I’m thinking a good idea would be to pool resources into a single effort that could be shared, much like QlikView Components did for Script.

I’ve implemented this idea as qcb-qlik-sse.  This server uses as its base the qlik-sse package created by Miralem Drek.

qcb-qlik-sse is written in javascript and runs in node.js.  At startup, the server scans its “/functions” directory and discovers what functions are available.  The general idea is that you can add a new function by creating a new js file.  You can remove functions you don’t want available by deleting the corresponding js files.

See what functions I’ve already implemented in the doc here. I’ve also provided an example qvf and qvw that exercise the functions.

If you want to try it out,  download the project and define the plugin to QS or QV as documented here.  You will also need node.js 8 or later installed.

Defined functions will show up in the suggestion list in both the script and expression editors.

 

If you want to add functions, some javascript skills are required.  Follow the directions in the readme and submit a PR.

I’ve labeled the project as “experimental” at this stage because I anticipate there could be some significant restructuring as I get feedback.

Let me know your ideas and if you find this useful!

-Rob

Want to kick the tires on reusable code and make your Qlik team more efficient? Come to the Masters Summit for Qlik, a three day advanced training event for Qlik Developers. 

 

 

Share