All posts by Rob Wunderlich

Extracting data using Microsoft Logparser

I’ve fielded several questions lately about loading Group membership information from Active Directory. The Active Directory sample in the Qlikview Cookbook uses AdsDSO and can load “single-valued” fields such as Name and Mail. AdsDSO will not load “multi-valued” fields such as “memberOf” or “member” — fields that define group members.

To read multi-valued fields, you’ll have to install some sort of tool to extract the data into a format QV can load. There are a number of free and commercial utilities available that will extract AD information into a text file.

My favorite tool for AD extracts is the free Microsoft Logparser. Google for it and you’ll find lots of information, as well as the download link. There is also a Logparser book and a forum available.

Logparser can read data from many different inputs — Active Directory, IIS logs, Windows Event logs, Registry — to name a few. Logparser can write to several different output formats, CSV being the most useful for QV.

Logparser uses a SQL syntax for its queries. Here’s an example:

logparser -objClass:Group "select cn, member into tmpAdGroups.csv from LDAP://mydomainController"

This Logparser query will create the output file “tmpAdGroups.csv”. The file will contain one row for each group (cn). The members of the group will be returned as a single field with the members separated by the pipe “|” character. The members are easily separated in the QV load using the QV subfield() function:

subfield(member, '|') as member
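Putting it together, here is a minimal load sketch. It assumes the tmpAdGroups.csv produced by the query above (columns cn and member); adjust the format spec to match your actual file:

Groups:
LOAD cn as GroupName,
     subfield(member, '|') as member  // one row per group member
FROM tmpAdGroups.csv (txt, embedded labels, delimiter is ',');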

Other uses I’ve found for using Logparser with Qlikview:

  • Extracting data from Windows Event logs.
  • Preprocessing IIS log files. The fields contained in an IIS log can vary between sites and may also change dynamically within the same physical file. Logparser can neutralize these differences and produce a common input for the QV load (see the sketch below).
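For example, a hypothetical normalizing extract — the field list here is illustrative only; choose the fields your sites have in common:

logparser -i:IISW3C -o:CSV "select date, time, cs-uri-stem, sc-status into iisCommon.csv from ex*.log"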

Logparser is a favorite tool of mine. I use it frequently for non-Qlikview tasks as well.

-Rob

Update 12/12/2008 I’ve published a complete example of using logParser to extract Group and User data from AD for loading into QV. The example is in version 9 of the Qlikview Cookbook available at http://robwunderlich.com/Download.html


Using the Evaluate API in Macros

A common application of QV macros is to make initial selections. These selections may be applied when the document is opened, a sheet is activated or a button is pressed. The selections usually involve some type of dynamic value, such as “today()”.

In the examples below, the VBS continuation character “_” is used to indicate the statement continues on the next line. You may type the statement on a single line if you wish and omit the “_”.

A typical macro to select yesterday’s date looks like this:

ActiveDocument.GetField("ShipDate").Select _
Date()-1

It’s all VBScript, but the functions used come from two different products. The ActiveDocument…Select is from the QV API, and the Date() function is a VBScript function. To make this work, you need some knowledge of VBS functions. In addition, there may also be a type mismatch — the VBS function returns a number and the Select expects a string. Fixing that issue requires adding a CStr() or CDbl() function.
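For example, a hedged version of the same selection with the explicit conversion added:

ActiveDocument.GetField("ShipDate").Select _
CStr(Date()-1)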

What if you want to make a more complex selection, like the weekday for today? Or a selection that requires knowledge of the data values, such as max(ShipDate)?

Wouldn’t it be easier if you could use QV Expressions to define the selection expression? You can, with the ActiveDocument.Evaluate API function. Here are some examples.

ActiveDocument.GetField("ShipDate").Select _
ActiveDocument.Evaluate("date(today(1)-1)")

ActiveDocument.GetField("Weekday").Select _
ActiveDocument.Evaluate("weekday(today(1))")

ActiveDocument.GetField("ShipDate").Select _
ActiveDocument.Evaluate("max(ShipDate)")

The Evaluate argument is any expression that can be evaluated by QV. So you can stick with the QV functions you are used to. Those functions are also more likely to produce the correct data type for your selection. And most importantly, you get easy access to the QV data — (“max(ShipDate)”).

You can even use QV search operators such as “>”. For example, to select the last 7 days:

ActiveDocument.GetField("ShipDate").Select _
ActiveDocument.Evaluate("'>' & date(today(1)-7)")

I hope you find this tip useful.

-Rob


The match() Function

In SQL, the “in” operator is commonly used to test if a value exists in a list of values. For example:

SELECT * FROM TABLE WHERE CODE IN ('a', 'b', 'f');

New QV developers often spend some time looking for QV’s equivalent of the “in” operator. It’s the match() function.

LOAD * WHERE match(CODE, 'a', 'b', 'f');

The match() function and its siblings mixmatch() and wildmatch() are documented in the “Conditional Functions” section of the Ref guide and the help:

match( s, expr1 [ , expr2, …exprN ] )
Compares the string s to a list of strings or string expressions. The result of the comparison is an integer indicating which of the comparison strings/expressions matched. If no match is found, 0 is returned. The match function performs a case sensitive comparison.

mixmatch() works just like match() except it does a case insensitive comparison.

wildmatch() is another form that can be particularly useful. wildmatch() allows (but does not require) the “?” and “*” wildcard characters in the match arguments.

wildMatch(text, '*error*')

will match:

“An error has occurred”
“Error processing account nnnn”

QV 8.5 provides a “like” operator that allows for testing against a single value with wildcards:
text like '*error*'

Wildmatch can test against multiple values:

wildMatch(text, '*error*', '*warning*')

The match() functions return a number indicating which of the comparison strings was found. You can nest this index number in a pick() function to do “wildcard mapping” as an alternative to nested if() functions.

pick(
    wildmatch(PartNo,
        '*99', 'P1586', '?15*', '?17*', '*'
    ),
    'Taxes', 'Premium Fuel', 'Fuel', 'Lubricant', 'Other')
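As a hedged illustration, here is the same expression inside a load script to derive a category field; the table, file and field names are hypothetical:

Parts:
LOAD PartNo,
     pick(
         wildmatch(PartNo, '*99', 'P1586', '?15*', '?17*', '*'),
         'Taxes', 'Premium Fuel', 'Fuel', 'Lubricant', 'Other'
     ) as PartCategory
FROM parts.csv (txt, embedded labels, delimiter is ',');

The final '*' argument acts as a catch-all, so every PartNo receives a category.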

-Rob


An Example is Worth a Thousand Thread Replies

There’s a lot of information being exchanged on the QlikCommunity Forum http://www.QlikCommunity.com these days. Customers and Consultants ask technical questions, and other Customers, Consultants and QT employees provide very useful answers. Today’s post is a tip on how to improve the chances of your Forum question being answered quickly and accurately.

Many back and forth replies to a forum thread are about clarifying the question. If possible, post a qvw file example with your question. (I can’t, my file is too big! The data is private! Keep reading for ways to handle these concerns).

Reasons to post an example qvw:

  • An example will help clarify your question. The Forum is conducted in English, but English is a second language for many, if not most, of the Forum users. An example will provide additional understanding of your question.
  • More likely to get an accurate and complete response. Many questions require the responders to fiddle with expression or script syntax. If I have a qvw to work with, I’m more likely to test my answer before posting it, saving you the trouble of learning that I forgot a comma in my recommended solution.
  • Time. Most Forum members answer questions on a volunteer basis and their time is limited. For myself, I can only take the time to answer a limited number of questions. I’m more likely to pick the questions that are clear and provide the data I need. If I have to code up my own test data to work on the problem, I’m less likely to respond.

Some of the reasons you may be reluctant to post your qvw — size and privacy.

The maximum attachment size allowed on the Forum is 1MB. You can make the example qvw smaller by using the QV Data Reduction feature.

  1. Make some selections to reduce the number of selected values in the qvw.
  2. From the menu bar, select File->Reduce Data ->Keep Possible Values.
  3. Use File->Save As to save the reduced copy under a new name.

    If you use “Save”, QV will still open the “Save As” dialog to help you remember not to overwrite the master copy.

You can protect the privacy of sensitive information, such as account numbers, revenue or customer names by using the QV Scrambling feature. In the menu bar, select Settings->Document Properties->Scrambling.

Here you can select a field to scramble and press the “Scramble” button to perform a random scrambling of the field so that no one can determine its original contents. Like values scramble to the same value, which maintains the value linkages.

In some cases, you may still be unable to post your qvw even with reduction and scrambling. Or it may make your example more clear to post the data inline with your question. In that case, post your example data in the question using comma delimited format, so it can easily be pasted to a LOAD INLINE. For example:

Accounts:
AccountNo, Name
1234, ABC Corp
4567, DEF Co

Transactions:
AccountNo, TranId, Amount
4567, 1, 2000
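Sample data posted in that format drops straight into a test script. For example:

Accounts:
LOAD * INLINE [
AccountNo, Name
1234, ABC Corp
4567, DEF Co
];

Transactions:
LOAD * INLINE [
AccountNo, TranId, Amount
4567, 1, 2000
];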

One last tip. Before you post, remember to search for existing answers to your question. In the past, search on the Forum was not so robust. But QT has recently added an embedded Google search feature. This is great! It supports the full range of Google search operators. Try it. The “Google Custom Search” link is available at the top of each Forum page.

Finally, don’t forget to mark your question as “solved” when you’ve received a satisfactory answer.

Happy posting!

-Rob


Loading Multiple Excel Sheets

Load from Excel is usually pretty straightforward, but sometimes you’ll need to load multiple sheets and make some determinations at runtime. Details such as sheet names may not be known at script creation time.

The QV statements “SQLTables” and “SQLColumns” may be used to discover information about the sheets and columns available in a workbook. Both of these statements require an ODBC connection. The ODBC connection may also be used to subsequently read the data, but I find a biff LOAD more convenient.

First make an OLEDB connection to the workbook:
CONNECT TO [Provider=Microsoft.Jet.OLEDB.4.0;Data Source=workbook.xls;Extended Properties="Excel 8.0;"];

Specify the workbook name, relative to the current directory, in the “Data Source=” parameter. This example uses a “DSN-less” connection. It does not require you to predefine an ODBC datasource.

The SQLTables statement returns a set of fields describing the tables in the currently connected ODBC datasource, in this case the workbook. A “Table” is an Excel sheet.

tables:
SQLtables;

Now I’ve got a list of sheets in the QV “tables” table. The field name that contains the sheetname is “TABLE_NAME”. I’ll loop through the set of TABLE_NAME values and load each one using a standard biff LOAD.

FOR i = 0 to NoOfRows('tables')-1
    LET sheetName = purgeChar(peek('TABLE_NAME', i, 'tables'), chr(39));
    Sales:
    LOAD *
    FROM workbook.xls (biff, embedded labels, table is [$(sheetName)]);
NEXT

Sheetnames that contain blanks will be surrounded by single quotes. The purgeChar() function above removes any single quotes that may be present in the sheetname.

What if I only want to load those sheets names whose name begins with “Sales”? Wrap the LOAD statement in an IF statement to test the sheetname:

IF wildmatch('$(sheetName)', 'Sales*') THEN
    LOAD .....
END IF

How about this case? I want to load any sheet that contains the three columns “Sales”, “Year” and “Quarter”:

columns:
SQLColumns; // Get list of columns

// Join list with columns of interest
RIGHT JOIN (columns) LOAD *;
LOAD * INLINE [
COLUMN_NAME
Quarter
Sales
Year
];

// Create a count of how many columns of interest each sheet has
selectSheets:
LOAD TABLE_NAME as SheetName, count(*) as count
RESIDENT columns
GROUP BY TABLE_NAME
;

// Keep only the SheetNames that have all 3 columns
RIGHT JOIN (selectSheets)
LOAD SheetName
RESIDENT selectSheets
WHERE count = 3
;


// Load the selected sheets
FOR i = 0 to NoOfRows('selectSheets')-1
    LET sheetName = purgeChar(peek('SheetName', i, 'selectSheets'), chr(39));
    LOAD ....
NEXT


You may wonder if you could use the Excel Driver instead of the Jet provider like this:

CONNECT TO [Provider=MSDASQL;Driver={Microsoft Excel Driver (*.xls)};DBQ=workbook.xls];

The connection will complete and you can use this connection for SQL SELECTs. However, when SQLTables is called, the connection will enumerate tables/columns for all the *.xls files in the current directory.

This provider uses the parameter “DefaultDir=” (default is the current directory) to control which directory is enumerated by SQLTables and SQLColumns calls; the DBQ parameter plays no part. You may find this useful as an alternative to a traditional “for each filelist…” loop for processing multiple files.
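For example, a hypothetical connection that enumerates all the .xls files in C:\data (the path is illustrative only):

CONNECT TO [Provider=MSDASQL;Driver={Microsoft Excel Driver (*.xls)};DefaultDir=C:\data];
tables:
SQLTables; // returns sheet information for every workbook in C:\data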

Complete text of the examples presented here can be found in the QV Cookbook at:
http://robwunderlich.com/Download.html


Using the Google Chart API inside Qlikview

I recently had a requirement to create a heat map of the US States and set about exploring if there was an easier way than creating a QV scatter plot mapped over an image.

I found the Google Chart API. With some help from the QlikCommunity forum, I got a fairly pleasing result.

The Google Chart API is provided free by Google. You pass in an http request with parameters that describe the data and desired layout, and a chart image is returned. I won’t cover the details of the API; it’s well documented at http://code.google.com/apis/chart/. Rather, I’ll share my experience integrating it with QV.

Here’s a screenshot of my results. The map will update as selections are made in the Sales chart. The generated map is not clickable. It’s just a static image.

A working QVW of the above may be found in the Qlikview Cookbook at:
http://robwunderlich.com/Download.html

The map is a Straight Table with a single expression. The expression is the http://… string used to generate the map. The representation for the expression is set to “Image”. Thanks to Tom on the forum for showing me this technique.

The OnAnySelect document event is used to trigger a “showMap()” macro that creates the variables needed for the http string. In a production application, you would want to be more selective and use field level events on the fields relevant to the chart.

In addition to chart layout parameters, the http string contains two parameters that describe the data.

  • chld= provides the list of states
  • chd= provides the data values for the states (see the example request below)
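To make this concrete, here is a hypothetical request URL; the parameter values are illustrative only, so check the API documentation for the exact syntax:

http://chart.apis.google.com/chart?cht=t&chtm=usa&chs=440x220&chld=CANYTX&chd=s:9tA

Here cht=t requests a map chart, chtm=usa selects the US map, and chld/chd carry three state codes with their simple-encoded values.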

The States and Values are associated by ordinal position in the respective lists. I could not find a way to keep the lists in sync by using QV expressions alone. The solution was to use the Sales chart as my “data source”. The macro walks the rows of the table to build two variables — the State codes and the Sales values. Here’s a snippet of the macro code:

set obj = ActiveDocument.GetSheetObject("CH01")
' Collect the locations (state codes) from the chart's first column
locations = ""
for RowIter = 1 to obj.GetRowCount-1
    locations = locations & obj.GetCell(RowIter,0).Text
next
' ... (a similar loop builds the encoded "values" string) ...
ActiveDocument.GetVariable("vValues").setContent values, false

Another area where VBScript was useful was in encoding the data values as specified by the API. I chose the “simple encoding” method. The Sales values are translated to relative values within a range of single characters defined by the API. The doc http://code.google.com/apis/chart/#simple provided a javascript encoding example which I converted to VBS.

The encoding algorithm requires that the maximum value of the dataset be known to properly spread the individual values across the relative range. To determine the maxvalue in the macro, I use the QV evaluate() function to “callback” to the QV expression language.

maxValue = ActiveDocument.Evaluate( _
    "max(aggr(sum(Sales), State))" )

Producing a chart with the Google API does have some downsides. The user must be connected to the internet and the chart will render slower than a native QV chart. It also does not provide for making selections in the chart and tooltip values like a QV chart does. But I found it to be a simple solution to my requirement. I hope that someday the QV product will provide regional maps as chart types.

Update October 3, 2008: Alistair on the QlikCommunity forum has posted an example of calling the Google Chart API without using macros, which I find to be the preferred method:

http://www.qlikcommunity.com/575/?tx_mmforum_pi1%5Baction%5D=list_post&tx_mmforum_pi1%5Btid%5D=3763&tx_mmforum_pi1%5Bfid%5D=9

The next update to the Qlikview Cookbook will include the “macro-less” technique.


Memory sizes for data types

An earlier post of mine When less data means more RAM discussed the ways in which storage (“Symbol” space) needed for field values can increase depending on how a field is loaded or manipulated. This generated some followup questions on the QlikCommunity forum about the optimal storage sizes for fields of various data types.

What’s presented below is information gleaned from the documentation, QT Support and experimentation. The numbers come from the document memory statistics file. I hope someone from QT will help me correct any errors.

QV fields have both an internal and external representation. There is a video “Datatype Handling in Qlikview” available on QlikAcademy that explores this subject. This post is concerned with the internal storage of fields.

Numbers

I’ve found that the storage size appears to be related to the number of total digits. Storage size in bytes, for various digit ranges:

1-10 digits, size=4
11 or more digits, size=13

The above sizes assume that the internal storage format is numeric, which is usually the case if loading from a database. Numbers loaded as text, such as from a text file or inline, may be stored as strings, which occupy different sizes.

Dates, Times and Timestamps

Different Database systems provide various degrees of precision in timestamps and I assume the ODBC driver is also involved with the exact value provided to QV during the load. QV times are the fractional part of a day, using up to 9 digits to the right of the decimal point.

– Best size for a Date, 4 bytes.
– Best size for a full Time, 13 bytes.
– Best size for a full Timestamp, 13 bytes.

These sizes can increase when the field is manipulated. Want to get the date portion of a timestamp? Don’t use

date(aTimestamp)

date() is a formatting function; it doesn’t “extract” the underlying date portion. In many cases it actually increases storage size because the result may be a string. Instead, use

floor(aTimestamp)

This will produce a 4-byte integer result.

A common technique for reducing the memory footprint of timestamps is to separate the timestamp into two fields, integer date and fractional time. You can further reduce the number of unique time values by eliminating the hundredths of seconds, or even eliminating the seconds if your application is ok with minute precision.
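Here is a hedged sketch of that split; the field and file names are hypothetical, and the round() step drops the seconds for minute precision:

LOAD floor(aTimestamp) as DateOnly,
     frac(round(aTimestamp, 1/24/60)) as TimeOnly
FROM data.qvd (qvd);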

Strings

Thanks to QT support for providing this detail on Strings.

“The representation is that each symbol has a pointer (4/8 bytes on 32/64-bit platform) + the actual symbol space. This space is the number of bytes (UTF-8 representation) + 2 (1 is a flag byte and 1 is a terminating 0) + 0, 4 or 8 bytes that store the numeric representation of the field.”

So on the 32bit version, a non-numeric string occupies 6 bytes more than the length of the string itself. A numeric string occupies 10 more bytes. For example:

“a” uses 7 bytes
“1” uses 11 bytes

The only way to reduce the string footprint is to reduce the number of unique values. This can be done by breaking the string into component parts if that makes sense in the application. For example, the first 3 characters of a 10 character product code may be a product class. Breaking the field into ProductClass and ProductNumber fields may reduce the number of unique values.
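For example, a minimal sketch of that split (field and file names are hypothetical):

LOAD left(ProductCode, 3) as ProductClass,
     mid(ProductCode, 4) as ProductNumber
FROM products.qvd (qvd);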

If the strings are keys that don’t need to be displayed, the autonumber() or autonumberhash128() functions can be used to transform the values to 4 byte integers. With these functions you can also get the “sequential integer optimization” which reduces the symbols space to zero.

I’ve found that concatenating fields inside autonumber(), as in autonumber(f1 & f2), can sometimes produce false duplicates: for example, ‘ab’ & ‘c’ and ‘a’ & ‘bc’ both concatenate to ‘abc’. Better to use autonumberhash128(f1, f2) instead, which seems to always produce correct results.
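A minimal sketch of the key transformation (table and field names are hypothetical):

Orders:
LOAD autonumberhash128(CustomerID, OrderDate) as %OrderKey,
     OrderAmount
FROM orders.qvd (qvd);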

Sequential Integer Optimization

For each field, QV maintains both a Symbol table — the unique values of a field — and a State array that tracks which values are selected. If the symbol values are consecutive integers, a very clever optimization takes place. The Symbol space is eliminated and the State array is used to represent both selection state and value. This is a very beneficial effect of using the autonumber functions.

The values need not begin at zero for the optimization to take place, they only need to be consecutive. A set of 5000 consecutive dates will occupy no Symbol space. Take one date out of the middle and the storage reverts to the standard 4 bytes for each date.

It’s not always necessary to be concerned about memory usage. But when it is, I hope this information proves useful.


64bit Implementation Experience

When I started using Qlikview, I mistakenly believed I would not need the 64bit version of Server. I thought that because my Analyzer users were using the QV Windows Client, the memory required to hold the document would come from the user’s machine. Wrong. When a document is opened from the server, the document is loaded into server memory.

The 32bit Server uses a single 2GB address space to contain all the currently loaded documents. When the number of users increased, and more importantly, the number of concurrent documents, the Server ran out of memory. This unfortunately causes a Server crash, taking all the users down, not just the user that pushed it over the limit. It became clear we needed the 64bit edition.

Upgrading the Server (QVS) to 64bit was easy. It immediately solved the memory issue and allowed for many documents to be used simultaneously with no problem.

QV Publisher (QVP) turned out to be a different story. I initially installed Publisher on the same machine as Server but immediately ran into a problem with the availability of 64bit ODBC drivers.

Any ODBC Driver used in 64bit Windows must be written as 64bit capable. I was using four ODBC data sources – IBM DB2, MS SQLServer, Lotus Domino and SAS. 64Bit SQLServer drivers are supplied with the OS. DB2 64bit drivers are available, but they can be expensive. The sticking point was that there were no 64bit drivers available for Lotus Domino and SAS.

My first step was to move Publisher to a 32bit machine. This turns out to be a recommended practice anyway – host Server and Publisher on different machines. But I also had an application in development that would require 64bit for a full reload. How would I reload this application when it moved to production? I expected I would see more of these applications that required 64bit for reload.

Publisher provides for defining multiple Execution Services (XS) on different machines. XS is the service that performs the reload process. The multiple XS’s can be viewed and managed from a single Publisher Control panel screen. This feature allowed me to define an additional XS on a 64 bit server.

My configuration now consists of three servers. A 64bit QVS, one 32bit QVP and one 64bit QVP. The 32bit QVP is loaded with all the ODBC drivers I need, the 64bit QVP has no drivers installed. The restriction in this configuration is that reloads on the 64bit QVP may only load QVDs and other non-ODBC datasources. In some cases, this may require a script to be split into two or more documents. Thus far, this restriction has proven to be only a minor inconvenience. The two reloads can be connected together by utilizing a RequestEDX task to trigger the second reload task.

We chose not to migrate the developer workstations to 64bit due to the limited availability of ODBC drivers and other software. Most of the applications that require 64bit for reload can still be developed on a 32bit machine by loading a limited number of records. We did set up a single shared 64bit workstation that can be used by any developer when they require 64bit.

Migrating QVS to 64bit provides the capacity to support many concurrent documents and users. If you plan to use the 64bit QVP, check on 64bit driver availability as part of your planning process.


When less data means more RAM

I attended Niklas Boman’s excellent Performance Tuning talk at Qonnections in Miami. One of his tuning recommendations was to reduce the number of rows and columns when possible. This will probably always have a positive impact on chart calculation time, but if done incorrectly, reducing the quantity of data can have an adverse impact on RAM usage.

Consider a QVD file with one million rows. The QVD was loaded from a database and contains two fields:

aNum – unique integers, 1M unique values.
aDate – dates distributed equally throughout 2000-2003, 1,460 unique values.


QV stores each of these values as integers, occupying 4 bytes of RAM each. Nice and compact.

Which of the following statements will create a QVW that uses more RAM? Statement A, which loads 1000K rows or Statement B, which loads only 750K rows?

Statement A:
// Load all 1,000,000 rows
LOAD * FROM qvdData.qvd (qvd);


Statement B:
// LOAD only 2001+ which should be 750,000 rows
LOAD * FROM qvdData.qvd (qvd)
WHERE year(aDate) > 2000;

Pat yourself on the back if you answered “B”. B will use more RAM! More RAM for less data? Why? Because “B” causes an unoptimized load which results in QV converting the Integer representations of the data to String representation.

QV can load QVDs in one of two modes – Optimized or Unoptimized (more in the Ref Guide). In an optimized load, the RAM image from the QVD is loaded directly into memory. An optimized load is indicated in the Loading message in the progress window. (Note to development: would be nice if the optimized message appeared in the log as well).

In unoptimized mode, the QVD image is “unwrapped” and the data processed discretely. This causes the internal formatting to be lost and the data is stored internally as Mixed. So each “aNum” that previously occupied 4 bytes now takes 9 bytes, and “aDate” now averages 18.96 bytes each.

It’s the WHERE clause that forces the unoptimized load. Generally, adding fields or anything that causes a field value to be examined will force an unoptimized load. Examples of unoptimized loads:

LOAD *, year(aDate) as Year FROM qvdData.qvd (qvd);
LOAD *, rowno() as rowid FROM qvdData.qvd (qvd);

Even a WHERE clause that does not reference any field will be unoptimized:
LOAD * FROM qvdData.qvd (qvd)
WHERE 1=1;

How can you tell how much RAM a field is using? “Document Settings, General, Memory Statistics” button will generate a .mem text file that contains a storage size for the “Symbols” (values) of each field. You can view the .mem file directly or load it into a QVW for processing. The 8.5 beta provides a “Qlikview Optimizer.qvw” for just this purpose. I’ve uploaded this file to the “Share Qlikviews” section of QlikCommunity if you don’t have 8.5.

Workarounds

I’ve found that I can usually “fix” the field by setting the desired format in the Document Properties and checking the “Survive Reload” box. You can also apply formats in the load script, but I find this tedious if I have more than a few fields. Here are some alternative workarounds.

To create additional fields, use a RIGHT JOIN after the optimized load.
Instead of:
LOAD *, year(aDate) as Year FROM qvdData.qvd (qvd);

Use:
tdata:
LOAD * FROM qvdData.qvd (qvd);
RIGHT JOIN LOAD DISTINCT *, year(aDate) as Year
RESIDENT tdata;


For a subset selection, version 8 allows an optimized load using where exists() if the exists clause refers to only a single field. This means you’ll have to generate the desired values before the load using the same field name. Something like this:

//Generate table of the dates we want 2001-2004

LET vStartDate=num(MakeDate(2001,1,1)-1);
LET vEndDate=num(MakeDate(2004,12,31));
DateMaster:
LOAD date($(vStartDate) + IterNo()) as aDate
AUTOGENERATE 1
WHILE $(vStartDate) + IterNo() <= $(vEndDate);

// Optimized load of the subset dates
tdata:
LOAD * FROM qvdData.qvd (qvd) WHERE exists(aDate);
DROP TABLE DateMaster; // No longer needed

In some cases, the above example will give you an additional optimization. Something I call the “sequential integer optimization” which I’ll discuss on another day.

Worrying about RAM is not always necessary and many times is not worth the effort, especially if it makes your script harder to follow. However, for large datasets, particularly in the 32bit environment, you may be forced to optimize RAM usage. Using the mem files allows you to identify the most productive candidates for tuning.

The QV Reference Guide points out that an optimized load will run faster than an unoptimized load. I think it would be useful to have brief discussion of the impact on RAM usage as well.
