
The Now() Trap

The Now() function operates differently in script and charts. This post will highlight one difference that has tripped up even experienced developers.

Our example script requirement is to extract rows of data where the “TransactionDatetime” field value is within the last 24 hours. We’ll use the “Now()” function to establish the current time. Here’s the doc from the QV Ref Guide.

now( [ timer_mode ] )

Returns a timestamp from the system clock. The timer_mode may
have the following values:
  0 Time at script run
  1 Time at function call
  2 Time when the document was opened

Default timer_mode is 1. The timer_mode=1 should be used with
caution, since it polls the operating system every second and hence
could slow down the system.
 
Which timer_mode option do you choose? Hmm… you want 24 hours prior to the script execution, so it seems like “0” is the best option. Plus there is that scary warning about “1” slowing down the system.

The integer 1 represents a day — 24 hours — so you code a where clause in your LOAD like:
 
WHERE TransactionDatetime >= now(0) -1
 
During development, you reload several times. You examine the data selected and it seems to be working — you are getting data from the last 24 hours. It’s 4:00pm Tuesday afternoon and you promote this to the server and set a reload schedule of daily at 8:00am. A test run on the server shows the data selection is working as planned.
 
The document reloads on schedule at Wednesday 8:00am. A review of the app shows data going back to Monday at 4:00pm.  After Thursday morning’s reload the data range is Tuesday 8:00am to Wednesday 8:00am. What happened to the rest of Wednesday and all of Thursday?
 
The value for Now(0) is set at the end of script execution. When used in script, Now(0) returns the end time of the previous script execution, not the time of the current script execution.
 
So on Thursday morning Now(0) returns Wednesday 8:00am — the end of the last execution. That is not what we were looking for.  
 
In Charts, Now(0) returns the end of the latest script execution, which would be Thursday 8:00am.
 
Now(1) always returns the time when the function is executed — the “current” time. During development when reloads are frequent, script Now(0) is usually pretty close to Now(1) and you may not notice the difference. But when a document goes on a daily reload schedule, Now(0) is usually an entire day behind what you want!
 
The correct choice in script is generally Now(1), which returns the time when the function is actually executed, not a time related to previous reload.
 
To summarize: use Now(1) in script, Now(0) in charts.

If you need to establish a deterministic (consistent throughout the entire script) value for “Now” in script, set a variable at the beginning:
 
LET vScriptStart = Now(1);
 
Then use vScriptStart as the reference point in your script. It may be used in charts as well.
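For example, a minimal sketch of the pattern (the table and field names here are just placeholders; the Num() wrapper keeps the dollar-sign expansion numeric, avoiding locale-dependent timestamp formatting in the WHERE clause):

LET vScriptStart = Num(Now(1));  // one consistent "now" for the whole script

Transactions:
LOAD *
FROM Transactions.qvd (qvd)
WHERE TransactionDatetime >= $(vScriptStart) - 1;  // last 24 hours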

-Rob


Use cases for Generic Load

The Qlikview “Generic Load” is not frequently used and is therefore sometimes forgotten. Generic Load has some interesting applications and can be a useful item in your script toolbag.

Generic Load is the complement of “Crosstable Load”. In a loose sense, you could say that a Crosstable Load creates a Generic table and a Generic Load creates a Cross table.

Consider this table which contains a separate row for each Phase of a Project.
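Since the original screenshot is not reproduced here, a hypothetical ProjectTable of that shape (names and values are illustrative only) might be:

ProjectTable:
LOAD * INLINE [
Project, Phase, StartDate, EndDate
P100, 1, 1/5/2009, 2/13/2009
P100, 2, 2/16/2009, 4/30/2009
P200, 1, 3/2/2009, 3/27/2009
];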

Now suppose you want to flatten this table to a single row per Project. You have a variable number of Phases per Project. The resulting data model should look like this:

The model above can be accomplished with two GENERIC LOADs against the ProjectTable:
GENERIC LOAD Project, 'Start Phase ' & Phase, StartDate
RESIDENT ProjectTable;

GENERIC LOAD Project, 'End Phase ' & Phase, EndDate
RESIDENT ProjectTable;

Generic Load creates additional Qlikview tables. The additional tables cannot be avoided by adding a CONCATENATE or JOIN prefix. In the next example I’ll offer a technique to consolidate the tables.

Here’s another application of Generic. Consider this example table.
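Again, the screenshot is not reproduced here, so here is a hypothetical OrdersTable of the general shape described (values invented for illustration):

OrdersTable:
LOAD * INLINE [
Order, Status
1001, Open
1002, Shipped
1003, Open
1004, Billed
];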

Suppose you want to generate flag fields for each of the possible order statuses. The flags can be created with a single Generic Load.

Flags:
GENERIC LOAD Order, 'Status_' & Status, 1
RESIDENT OrdersTable;

The resulting data model now contains flags for each Order.

As mentioned previously, Generic Load creates additional tables. The table view after the above Generic Load is:

You can see Generic makes a new table for each new field it creates. That’s fine if it doesn’t cause synthetic keys or other problems. If you want to merge the Flag fields into the fact table (OrdersTable), you can do it after the Generic Load with a bit of code like this:
FOR i = NoOfTables()-1 TO 0 STEP -1          // iterate backwards so DROP does not shift the remaining indices
  LET vTable = TableName($(i));              // table name at index i
  IF WildMatch('$(vTable)', 'Flags.*') THEN  // only the tables created by the Generic Load
    LEFT JOIN (OrdersTable) LOAD * RESIDENT [$(vTable)];
    DROP TABLE [$(vTable)];
  ENDIF
NEXT i


Here’s the table view after the Joins.

Generic load is not an everyday tool, but can prove useful in specific situations.

The qvw examples used in this post may be downloaded from here.

-Rob Wunderlich

Reusing expressions — chart column references

It’s often useful to avoid repeating lengthy expressions. Here’s one technique for reusing expressions.

Consider the chart below.  The Expression for the column labeled “Net” is:
  =sum(Quantity * Price) * (1-Discount)

Let’s add another column for “Commission”, defined  as 10% of Net. Instead of repeating the Net calculation, use the expression:

  =Net * .1

Qlikview allows referencing the value of another column in the same chart by name. The name is the value assigned to the column label. “Net” in this case is not a Field. It is a symbolic reference to an expression. It may be used in other expressions in the same chart, but not in places where a Field name is expected.

  =Net / 2         Allowed
  =above(Net)      Allowed
  =Sum(Net)        Not allowed

Column references can be very useful in Expression attributes. Let’s bold any Commission value over $200. We’ll use a Text Format expression of:
    =if(Commission > 200, '<B>')

giving this result:

-Rob


Best way to count keys? Don’t.

I was recently reviewing a problem where a chart Count() function produced differing results between QV 8.5 and 9. The field being counted was a key field. Counting a key field without using DISTINCT, especially a one-to-many key, can produce ambiguous results and should be avoided.

The recommended approach is to add a counter field and sum() that field. For example, in an Orders table, add the field:

1 as OrderCounter

and then count Orders with:

sum(OrderCounter)
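Put together, a minimal sketch of the pattern (table and field names are illustrative):

Orders:
LOAD
    OrderID,
    Customer,
    1 as OrderCounter    // one per order row; summing it counts orders
FROM Orders.qvd (qvd);

// Chart expression:
// =Sum(OrderCounter)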

Yes, count(OrderCounter) will work as well. The Qlikview literature still states that sum() is faster and preferable to count(). John Witherspoon recently showed me some tests that demonstrate count() being faster than sum() in Version 9, so it’s possible that recommendation should be examined if you are working on a very large application.

-Rob


Simplify with Preceding Load

Most QV script developers are introduced to “preceding load” as a LOAD that precedes an SQL SELECT. But a LOAD may also precede another LOAD, which can be a very useful tool.

Let’s review a typical preceding load.

Table1:
LOAD Customer, Sales, today(1) as LoadDate ;
SQL SELECT Customer, Country, Sales FROM SalesResults ;

  • The absence of a “FROM” or “RESIDENT” clause in the LOAD is what makes this a “preceding load”.
  • The SQL SELECT will be executed first. The results of the SELECT will be used as input to the LOAD statement.
  • Table1 will have three fields: Customer, Sales, LoadDate.
  • The Field “Country” will not be present in Table1 because “Country” is not repeated on the LOAD statement.
  • The field “LoadDate” does not exist in the SQL SELECT and is added by the LOAD.



Let’s look at an example of where preceding load can be useful. When loading data, you may need to use expressions to parse or cleanse data. For example, extracting a timestamp from a string in a text file.

Table1:
LOAD
timestamp(timestamp#(mid(@1:n,3,12), 'MMDDYYhhmmss')) as EventTime,
mid(@1:n, 17) as Event
FROM myfile.txt (fix, codepage is 1252);

What if you want additional time dimensions from the data? You could add expressions like:

date(date#(mid(@1:n, 3, 6), 'MMDDYY')) as EventDate
month(date(date#(mid(@1:n, 3, 6), 'MMDDYY'))) as EventMonth

The script would soon get messy with “paren-disease” and become harder to maintain. Preceding Load to the rescue.

Table2:
LOAD *,
floor(EventTime) as EventDate,
month(EventTime) as EventMonth,
year(EventTime) as EventYear,
hour(EventTime) as EventHour
;
LOAD
timestamp(timestamp#(mid(@1:n,3,12), 'MMDDYYhhmmss')) as EventTime,
mid(@1:n, 17) as Event
FROM myfile.txt (fix, codepage is 1252);

  • The syntax is greatly simplified by reusing the “parsed once” EventTime.
  • Table2 will contain six fields: Event, EventTime, EventDate, EventMonth, EventYear, EventHour.
  • The “*” in the top load includes the fields emitted by the bottom load: EventTime and Event.

Preceding Loads may also be stacked more than two deep as in this example.

Table2:
LOAD *,
if(match(EventMonth, 'Aug', 'Dec') OR weekday(EventDate) > 5, 'Holiday', 'Standard') as Rate;

LOAD *,
date(floor(EventTime)) as EventDate,
month(EventTime) as EventMonth,
year(EventTime) as EventYear,
hour(EventTime) as EventHour;

LOAD timestamp(timestamp#(mid(@1:n,3,12), 'MMDDYYhhmmss')) as EventTime,
mid(@1:n, 17) as Event
FROM myfile.txt (fix, codepage is 1252);

Preceding load is a useful tool to simplify the syntax of your script and make it easier to maintain.

-Rob


Understanding Join and Concatenate

The Qlikview script keywords JOIN and CONCATENATE can sometimes be used to tackle the same problem, but there are important differences that should be understood.


Examine the sample tables below. Note that they share one common field name, “Key”. Also note that Table1 has a Key value “3” that is not present in Table2.
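The screenshots are not reproduced here; tables of the shape described would be (illustrative values):

Table1:
LOAD * INLINE [
Key, A, B
1, A1, B1
2, A2, B2
3, A3, B3
];

Table2:
LOAD * INLINE [
Key, C, D
1, C1, D1
2, C2, D2
];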

JOIN will combine rows where the Key value matches. The keyword OUTER will also retain rows that do not match rows in the other table. Here’s what the merged table will look like after an outer join.

OUTER JOIN (Table1) LOAD * RESIDENT Table2;


Values A1 and C1, which were in different tables, now occupy the same row in the result table. The row with Key 3 has missing values for C & D, because there was no matching Key in Table2.


Creating a chart that uses “Key” for dimension will produce results similar to the Table Box above.



The important point is that values with the same Key value have been merged together into a single row. If value A1 is selected, note that values C1 & D1 remain associated (white). The set A1,B1,C1,D1 is indivisible.

Now let’s look at Concatenate. Concatenate appends the rows of one table to another. Concatenate never merges any rows. The number of rows in a concatenated table is always the sum of the rows from the two input tables. Here’s what our sample data will look like after Concatenate.

CONCATENATE (Table1) LOAD * RESIDENT Table2;

Rows with like Key values are not merged together. The rows from Table2 are simply appended to Table1. Because the tables have different fields, rows will have null values for the fields from the “other” table.

If the data is used to build a chart that utilizes the common field “Key” as dimension, the chart looks just like the JOINed table.

Let’s make the selection “A1” in Field A and see its impact on our visible charts and tables.

When A1 is selected, the association to C1 & D1 is lost and C/D values become null in both the Chart and Tablebox. We cannot select both A1 and C1. This is a different result than the JOINed example.

Let’s consider a more realistic example where we may choose between JOIN and CONCATENATE. Consider the two tables below. Note that only one BudgetAmount row is present for each Region-Year combination. In the Sales table, the SalesAmount is broken down by Department within Region.
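Hypothetical data of that shape (values invented for illustration):

Budget:
LOAD * INLINE [
Year, Region, BudgetAmount
2009, East, 1000
2009, West, 1500
];

Sales:
LOAD * INLINE [
Year, Region, Department, SalesAmount
2009, East, Shoes, 400
2009, East, Hats, 350
2009, West, Shoes, 700
];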

If we load both tables we can produce a chart using expressions like =Sum(BudgetAmount).

The Budget and Sales values have been summed correctly.

We then notice that we have an undesirable synthetic key, created by the Budget and Sales tables sharing the Year and Region fields. One approach to eliminate the synthetic key would be JOIN or CONCATENATE. But which one in this case?

Let’s try JOIN and see what the Chart looks like.

OUTER JOIN (Budget) LOAD * RESIDENT Sales;


The summed Budget numbers are incorrect!

A look at raw data of the joined table will identify the problem. The JOIN repeated the BudgetAmount value on each Department row.


Let’s try with CONCATENATE.

CONCATENATE (Budget) LOAD * RESIDENT Sales;


The numbers are now correct and we’ve accomplished the goal of eliminating the synthetic key.

A peek at the data in the Concatenated table will make it clear why the chart is now correct. There is only one BudgetAmount value for each Year-Region.

JOIN and CONCATENATE are both very useful and frequently used in Qlikview scripting. It’s important to understand the differences between them.

-Rob


BOQC: Flexible Interval Classification

Another post in the “Best of QlikCommunity” series.

In this QlikCommunity Forum thread the poster asked about using the class() function to create a dynamic dimension of 30-minute intervals in a chart. He wanted to format the class values as display-friendly time ranges.

My solution would have been to use mapping to format the classes to the desired display format. However, this would have been a lot of data entry for 48 intervals in a day.

John Witherspoon posted a more elegant solution utilizing a simple expression.

dual(time(floor(timestamp, 1/48), 'h:mm TT') & ' – ' &
     time(ceil(timestamp, 1/48), 'h:mm TT'),
     floor(frac(timestamp), 1/48))


Using John’s example, I was able to extend the idea to easily provide for a user selectable interval size.
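For instance, a sketch along those lines, assuming a hypothetical variable vIntervals that holds the number of intervals per day (e.g. 48 for 30-minute buckets):

dual(time(floor(timestamp, 1/$(vIntervals)), 'h:mm TT') & ' – ' &
     time(ceil(timestamp, 1/$(vIntervals)), 'h:mm TT'),
     floor(frac(timestamp), 1/$(vIntervals)))

The variable could then be driven by an input box or a select list, letting the user switch between, say, 15, 30 and 60 minute buckets.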

Read the thread for details.

-Rob


Analyzing Field Usage in a QVW

Wouldn’t it be nice to know what fields are “unreferenced” in your document? By unreferenced, I mean fields that are not used in Expressions, Dimensions or Keys. These are fields that, if removed from your document, would have no impact on the visible elements of the application.

Removing unused fields is sometimes important in addressing performance issues and generally makes your application easier to maintain and understand.

The Expression Overview dialog is great for finding where a field is used, but what about the unused fields? I don’t know of any QV supplied tool that can identify unused fields, so I created one myself. My tool is a QVW named “DocumentAnalyzer” and it’s available for download from:

http://robwunderlich.com/Download.html

First off, let me make it clear that this tool is imperfect. It’s difficult to do a precise field usage analysis from “outside” of the product. I hope that this work of mine will encourage (goad?) Qliktech into writing a field usage analysis tool within the QV product. I’ll be happy if my work becomes obsolete.

I’ll explain the usage and limitations of DocumentAnalyzer as well as identify some interesting code tidbits for anyone who may want to enhance or borrow from this app.

The app code itself consists of two pieces: 1) a Macro Module that extracts meta information from the document to be analyzed (the “target” document), and 2) a load script that processes the extracted metadata.


The Macro does a lot of filesystem access and requires System Access module security. If you have not allowed System Access, the macro will warn you and provide instructions for setting it.

Using DocumentAnalyzer is a two step process, driven by buttons in the Main sheet. The first step is to choose a target document. Pressing the button will bring up a standard windows file chooser dialog. If a file chooser dialog cannot be created, the user is instructed to type a filename directly in the input box.

Once a target document is selected, press the “Process Document” button. The Macro module will be invoked and extract the metadata to a series of files in your temp directory. After extraction, the load script will read the extracted files. At the end of the load script, the metadata files will be deleted.

The tool will open the target document and navigate through the screens. Keep your hands off the keyboard while this process runs. When the load script is complete, you’ll receive a msgbox like this. Press OK.

The first chart of interest can be found on the “Fields” sheet. The Field References chart lists each FieldName and indicates whether the FieldName was used as a Key, Dimension, In an Expression, Macro or Variable. FieldNames not referenced anywhere are highlighted in yellow. These are fields you might consider dropping from the document.

FieldNames that contain special characters are flagged in red. The usage of these fields cannot be accurately determined due to limitations of the parsing method I’m using.

The Exception sheet displays warnings you may want to consider about the data presented. Of particular interest is the “Unmatched Fields” listbox. Check out the Help in this listbox to understand its contents.

The Parsing Algorithm

The identification of field references is performed by the load script. The parsing is rather primitive, so I’ll provide some explanation here so you can understand its limitations. You can also find this information on the “About” sheet of DocumentAnalyzer.

Field names are discovered in expressions by replacing “special” characters with a delimiter and then parsing into words using the subfield() function. For example, the expression:

“Avg(Price * Quantity) * 1.05” is delimited to:
“Avg Price Quantity 1.05”
which is then parsed into four words: Avg, Price, Quantity, 1.05. The words are then matched against fieldnames. You will get a false match if a function name such as “Avg” is also used as a field name.

If you just want to use DocumentAnalyzer, no need to read further. If you want to learn something about the code, read on.
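Here is a minimal script sketch of that delimit-and-parse idea (table and field names are invented for illustration, and only a few special characters are shown; the real tool handles many more):

// One row per expression, e.g. loaded from the extracted metadata
Expressions:
LOAD * INLINE [
Expression
Avg(Price * Quantity) * 1.05
];

// Replace special characters with spaces, then let subfield()
// with no third argument generate one row per word. The preceding
// load filters out the empty words left by consecutive delimiters.
Words:
LOAD Expression, Word WHERE len(Word) > 0;
LOAD
    Expression,
    subfield(
        Replace(Replace(Replace(Expression, '(', ' '), ')', ' '), '*', ' '),
        ' ') as Word
RESIDENT Expressions;

Each Word can then be matched against the document’s list of field names; a word that is both a function name and a field name produces the false matches described above.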


The Code


The Macro writes the metadata to a series of files. The files are normally deleted at the end of the script.
If you want to keep the metadata files, comment out this line in the script (on the Cleanup tab):
LET x = DeleteFolder('$(f)'); // Delete the data files

The Macro Module is of a fair size — about 500 lines. If you examine the code, you’ll find some conditional code (IsInternal()) devoted to allowing the macro to run internally in a qvw or externally from cscript.exe. The entire macro module can be copied to an external file and run with cscript. I coded for “dual” execution environments because I get a better editor and debugger in the external environment. I do the development running externally and then paste the script into the QVW for final testing.

Meta information (Field names, Dimensions, Expressions, etc) is extracted from the target document using the QV API. Getting Dimension values was fairly easy; there are only a few API paths for the various object types.

Finding all the expressions was the most challenging part and took the most time to solve. There are many different properties where expressions may be used in sheet objects. They may also differ by release. I could not see discovering and writing all the API calls to extract every possible expression. I experimented with a number of approaches, including generating code from the API doc, but never got this to work correctly. The most promising approach was using the file export from the Expression Overview dialog. This gave me a complete list of expressions, but the exported file was not consistently usable. The export file is tab delimited, and if an expression uses tabs or newlines it can make the file impossible to navigate.
My eventual solution was to export the objects into XML using the WriteXmlPropertiesFile API method and then extract the expressions from the XML files. I first tried to get the expressions using load script, but found this too cumbersome. I settled on using XPath to extract the data I needed from the XML files. XPath is a sort of “query language” for XML. Where SQL returns a set of rows, XPath returns a set of XML elements. This required only a few XPath expressions to cover all the possible expressions.

Once I perfected the XPath method, I switched to doing the Dimension extraction this way as well.

In QV 8.5, Sheets do not have an XML representation, so Sheet expressions (background color, conditional show, etc) are extracted individually.

The Document Variables, Macro Module and Script are written to files as well. The Script is not currently processed by the load.
Contact me if you have any problems or questions on using the app. Contact information is on the “About” sheet.

Happy analyzing!
-Rob


BOQC: ApplyMap instead of Join

Today marks the first post of my “Best Of QlikCommunity” (BOQC) series, where I plan to highlight what I find to be particularly useful or interesting posts on the QlikCommunity Forums.

There are cases when the ApplyMap() function is a very useful alternative to Join. For some time I have noticed both Oleg Troyansky and Michael Nordstrom dropping this hint on QlikCommunity, but I never quite understood the power of the technique until a post Oleg made today:
http://community.qlikview.com/forums/p/16017/62222.aspx#62222

The original forum question was how to multiply two fields from different tables to derive a new calculated field. The tables share a common key field.

I usually would have approached this with two Joins. That approach works, but sometimes I don’t really want my data model to reflect Joined tables. I just want to do the calculation.
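A minimal sketch of the general technique (hypothetical table and field names; see the linked thread for Oleg’s actual solution):

// Map the key to the value needed from the other table
PriceMap:
MAPPING LOAD ProductID, Price
RESIDENT Products;

// Multiply without joining; the 0 is the default for unmatched keys
Sales:
LOAD *,
     Quantity * ApplyMap('PriceMap', ProductID, 0) as ExtendedAmount
RESIDENT SalesRaw;

The data model keeps its original shape, and the mapping table is dropped automatically at the end of the script.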

If you want to see the ApplyMap() solution, read the post linked to above. The thread explains it better than me repeating it here.


Effective Visual Communication

Now you own a brand new power saw (Qlikview). But do you know how to build a house?

A key feature of Qlikview is graphical representation of data. Do you have the skills to use that capability effectively? The challenges of effective visual communication are not unique to Qlikview. The principles of visual communication can be learned, and there are many resources available to learn from.

An important and useful work to me has been the book “The Visual Display of Quantitative Information” by Edward R. Tufte (ISBN 0961392142). In this book, Tufte defines and articulates many important principles of statistical graphical representation, including the “Lie Factor”, “Chartjunk”, and the “Data-Ink Ratio”. The principles of Chartjunk and the Data-Ink Ratio ask us to consider which of two given charts communicates more effectively.

Tufte has authored several more books that expand and illuminate these themes of quality and effectiveness in graphical communication. All of the books are beautifully printed. Tufte also teaches seminars and maintains a web site at: http://www.edwardtufte.com/tufte/

Another work I have found very useful is “Information Dashboard Design” by Stephen Few (ISBN 0596100167). This book applies visual communication principles from Few, Tufte and other researchers to the practical application of dashboard design. Few’s book analyzes the shortcomings of some poor designs and then goes on to enumerate a number of elements for effective designs. He closes the book with some examples of optimal designs.

Stephen Few also teaches public workshops. His schedule and blog can be found at http://www.perceptualedge.com/.

Lastly, the IBM Many Eyes project http://manyeyes.alphaworks.ibm.com/ is an ongoing source of inspiration and amusement.

-Rob