
64bit Implementation Experience

When I started using QlikView, I mistakenly believed I would not need the 64bit version of Server. I thought that because my Analyzer users were using the QV Windows Client, the memory required to hold the document would come from the user’s machine. Wrong. When a document is opened from the server, it is loaded into server memory.

The 32bit Server uses a single 2GB address space to contain all the currently loaded documents. When the number of users increased, and more importantly the number of concurrent documents, the Server ran out of memory. This unfortunately caused a Server crash, taking down all the users, not just the one who pushed it over the limit. It became clear we needed the 64bit edition.

Upgrading the Server (QVS) to 64bit was easy. It immediately solved the memory issue and allowed for many documents to be used simultaneously with no problem.

QV Publisher (QVP) turned out to be a different story. I initially installed Publisher on the same machine as Server but immediately ran into a problem with the availability of 64bit ODBC drivers.

Any ODBC driver used by a 64bit process must itself be written as 64bit capable. I was using four ODBC data sources – IBM DB2, MS SQL Server, Lotus Domino and SAS. 64bit SQL Server drivers are supplied with the OS. DB2 64bit drivers are available, but they can be expensive. The sticking point was that there were no 64bit drivers available for Lotus Domino and SAS.

My first step was to move Publisher to a 32bit machine. This turns out to be a recommended practice anyway – host Server and Publisher on different machines. But I also had an application in development that would require 64bit for a full reload. How would I reload this application when it moved to production? I expected I would see more of these applications that required 64bit for reload.

Publisher provides for defining multiple Execution Services (XS) on different machines. XS is the service that performs the reload process. The multiple XSs can be viewed and managed from a single Publisher Control Panel screen. This feature allowed me to define an additional XS on a 64bit server.

My configuration now consists of three servers: a 64bit QVS, a 32bit QVP and a 64bit QVP. The 32bit QVP is loaded with all the ODBC drivers I need; the 64bit QVP has no drivers installed. The restriction in this configuration is that reloads on the 64bit QVP may only load QVDs and other non-ODBC data sources. In some cases, this may require a script to be split into two or more documents. Thus far, this restriction has proven to be only a minor inconvenience. The two reloads can be connected together by utilizing a RequestEDX task to trigger the second reload task.
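Splitting a script might look something like this (a sketch; the connection, table and file names are hypothetical). The first document reloads on the 32bit QVP, which has the drivers, and stages the data as a QVD:

// Extract.qvw -- reloads on the 32bit QVP, where the ODBC drivers are installed
ODBC CONNECT TO 'DominoSource';
Staging:
SQL SELECT * FROM SomeTable;
STORE Staging INTO StagingData.qvd (qvd);
DROP TABLE Staging;

The second document reloads on the 64bit QVP and touches nothing but the QVD:

// Transform.qvw -- reloads on the 64bit QVP, no ODBC required
Facts:
LOAD * FROM StagingData.qvd (qvd);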

We chose not to migrate the developer workstations to 64bit due to the limited availability of ODBC drivers and other software. Most of the applications that require 64bit for reload can still be developed on a 32bit machine by loading a limited number of records. We did set up a single shared 64bit workstation that can be used by any developer when they require 64bit.

Migrating QVS to 64bit provides the capacity to support many concurrent documents and users. If you plan to use the 64bit QVP, check on 64bit driver availability as part of your planning process.


When less data means more RAM

I attended Niklas Boman’s excellent Performance Tuning talk at Qonnections in Miami. One of his tuning recommendations was to reduce the number of rows and columns when possible. This will probably always have a positive impact on chart calculation time, but if done incorrectly, reducing the quantity of data can have an adverse impact on RAM usage.

Consider a QVD file with one million rows. The QVD was loaded from a database and contains two fields:

aNum – unique integers, 1M unique values.
aDate – dates distributed equally throughout 2000-2003, 1,460 unique values.


QV stores each of these values as integers, occupying 4 bytes of RAM each. Nice and compact.
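If you want to reproduce the experiment, a script along these lines (my reconstruction, not taken from the original post) should generate a comparable QVD:

// Build a 1M row test QVD: unique integers plus 1,460 distinct dates
qvdData:
LOAD
    RecNo() as aNum,
    date(MakeDate(2000,1,1) + mod(RecNo()-1, 1460)) as aDate
AUTOGENERATE 1000000;
STORE qvdData INTO qvdData.qvd (qvd);
DROP TABLE qvdData;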

Which of the following statements will create a QVW that uses more RAM? Statement A, which loads 1000K rows, or Statement B, which loads only 750K rows?

Statement A:
// Load all 1,000,000 rows
LOAD * FROM qvdData.qvd (qvd);


Statement B:
// LOAD only 2001+ which should be 750,000 rows
LOAD * FROM qvdData.qvd (qvd)
WHERE year(aDate) > 2000;

Pat yourself on the back if you answered “B”. B will use more RAM! More RAM for less data? Why? Because “B” causes an unoptimized load, which results in QV converting the integer representations of the data to string representations.

QV can load QVDs in one of two modes – Optimized or Unoptimized (more in the Ref Guide). In an optimized load, the RAM image from the QVD is loaded directly into memory. An optimized load is indicated in the Loading message of the progress window. (Note to development: it would be nice if the optimized message appeared in the log as well.)

In unoptimized mode, the QVD image is “unwrapped” and the data is processed discretely. This causes the internal formatting to be lost, and the data is stored internally as Mixed. So each “aNum” that previously occupied 4 bytes now takes 9 bytes, and “aDate” now averages 18.96 bytes each. For the one million unique “aNum” values alone, that is roughly 4MB of symbol space growing to about 9MB.

It’s the WHERE clause that forces the unoptimized load. Generally, adding fields or anything that causes a field value to be examined will force an unoptimized load. Examples of unoptimized loads:

LOAD *, year(aDate) as Year FROM qvdData.qvd (qvd);
LOAD *, rowno() as rowid FROM qvdData.qvd (qvd);

Even a WHERE clause that does not reference any field will be unoptimized:
LOAD * FROM qvdData.qvd (qvd)
WHERE 1=1;

How can you tell how much RAM a field is using? “Document Settings, General, Memory Statistics” button will generate a .mem text file that contains a storage size for the “Symbols” (values) of each field. You can view the .mem file directly or load it into a QVW for processing. The 8.5 beta provides a “Qlikview Optimizer.qvw” for just this purpose. I’ve uploaded this file to the “Share Qlikviews” section of QlikCommunity if you don’t have 8.5.
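The .mem file appears to be tab-delimited text with a header row, so loading it yourself is simple. Something like this should work (the file name is hypothetical, and the columns vary by version, so LOAD * is the safe choice):

// Load the memory statistics file for analysis
MemStats:
LOAD * FROM MyDocument.mem
(txt, embedded labels, delimiter is '\t');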

Workarounds

I’ve found that I can usually “fix” the field by setting the desired format in the Document Properties and checking the “Survive Reload” box. You can also apply formats in the load script, but I find this tedious if I have more than a few fields.
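Reapplying the formats in script might look something like this (a sketch; the format codes are assumptions and should match your own data):

tdata:
LOAD
    num(aNum, '#') as aNum,           // restore the integer format
    date(aDate, 'M/D/YYYY') as aDate  // restore the date format
FROM qvdData.qvd (qvd)
WHERE year(aDate) > 2000;

Here are some alternative workarounds.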

To create additional fields, use a RIGHT JOIN after the optimized load.
Instead of:
LOAD *, year(aDate) as Year FROM qvdData.qvd (qvd);

Use:
tdata:
LOAD * FROM qvdData.qvd (qvd);  // optimized load
RIGHT JOIN LOAD DISTINCT *, year(aDate) as Year
RESIDENT tdata;  // adds Year without sacrificing the optimized QVD load


For a subset selection, version 8 allows an optimized load using where exists() if the exists clause refers to only a single field. This means you’ll have to generate the desired values before the load using the same field name. Something like this:

// Generate a table of the dates we want, 2001-2004
LET vStartDate=num(MakeDate(2001,1,1)-1); // day before the start; IterNo() begins at 1
LET vEndDate=num(MakeDate(2004,12,31));
DateMaster:
LOAD date($(vStartDate) + IterNo()) as aDate
AUTOGENERATE 1
WHILE $(vStartDate) + IterNo() <= $(vEndDate);

// Optimized load of the subset dates
tdata:
LOAD * FROM qvdData.qvd (qvd) WHERE exists(aDate);
DROP TABLE DateMaster; // No longer needed

In some cases, the above example will give you an additional optimization. Something I call the “sequential integer optimization” which I’ll discuss on another day.

Worrying about RAM is not always necessary and many times is not worth the effort, especially if it makes your script harder to follow. However, for large datasets, particularly in the 32bit environment, you may be forced to optimize RAM usage. Using the mem files allows you to identify the most productive candidates for tuning.

The QV Reference Guide points out that an optimized load will run faster than an unoptimized load. I think it would be useful to have a brief discussion of the impact on RAM usage as well.
