
Will AI Replace Visual Analytics?

We just wrapped up a great Qlik Connect conference in Orlando. I was intrigued by the discussions of what AI may mean to the future of our industry. I found the panel with the AI Council especially stimulating, with some great minds sharing their research and perspectives. I was particularly struck by the common refrain that “we know AI will impact our businesses, but there is much unknown as to exactly what this will look like”.

I’m of the mind that if we don’t know, let’s start working through some thought experiments. One possibility that came up for me: Generative AI, specifically RAG and private Enterprise LLMs, could replace the visual analysis function of Qlik Apps.

In Qlik Apps we create Dashboard visualizations. These visualizations provide answers to known questions, e.g. “what is current total sales vs target?” We also generate reports, a broader, although still summary view of a fixed point in time, for metrics like financial health. Both of these functions use carefully curated and validated data and will continue to do so.

In my Qlik trainings I use a story where a Leader sees the Dashboard values are out of compliance and tasks a Manager or staff analyst to explain the discrepancy. Sales are not meeting target, why?

A well designed Qlik App contains sufficiently detailed data with additional sheets and charts — containing dimensions and detail beyond the dashboard sheet — to filter and sift the data and allow a user to answer the ad hoc “why” question.

Through interaction with the analysis charts, we may find that the contribution of one salesrep, “Joe”, is causing the non-compliance. If we take Joe out of the mix we are on target. It’s likely this type of insight — what dimension(s) are responsible for the non-compliance — can be determined by a machine agent, and a visual chart to support the determination can be generated on the fly. We are seeing that today.

Joe has been a stellar performer in previous periods. What has changed and what action should we take?

Could a Gen AI assistant be useful in this case? I shopped this scenario with a number of respected colleagues at the conference. One thread was “Joe’s spouse recently died and that understandably has affected his performance. A staff analyst could uncover this fact but an AI assistant would not”.

Could AI be aware of this event? There might be a record in the Enterprise HR system. Or public records such as obituary notices may be available. Or perhaps the staff analyst reached out to Joe via Teams and Joe shared this information. Could an AI assistant interview Joe and elicit the same information?

At the conference there was lots of talk about “trust”, which I understood to generally mean trust in the validity of the data. Should we also be talking about how we will ensure that employee, customer and partner interactions with Enterprise AI are consistent with privacy policies and regulations? If “AI” asks me for information, am I confident the information I provide will be properly categorized and subjected to relevant privacy and usage policies?

Can the AI assistant recommend an action plan? At Qlik Connect we saw how Qlik Answers can process unstructured documents like HR benefit policies. Could the assistant determine that Joe is eligible for company paid grief counseling services and draft an email to Joe offering help?

Another action may be to temporarily double team some of Joe’s accounts to relieve pressure on Joe and ensure that customers are getting optimal service. The AI assistant could identify appropriate salesreps based on product lines, region or other criteria.

What about “hallucinations”? That is, totally wrong answers from the AI assistant. I don’t see that as much of an issue. The assistant is not taking action directly; it is merely advising the leader. Staff analysts can “hallucinate” as well. I’ve been schooled a few times myself when I’ve delivered nonsensical answers. Leaders know their business. Also, the use of Gen AI does not necessarily eliminate the staff analyst role. It may be that the analyst is using the AI assistant and curating the results for the leader.

If Gen AI can identify causes, generate supporting visuals and recommend action, do we need analysis charts in the Qlik App? Do we need detail data in the app if AI is already looking at a broader set of data? My best understanding of RAG is yes, we will require some level of detail data to provide context. But does that data need to live in the Qlik app data model if we are not using it in charts?

What do you think? Will Gen AI kill off analysis charts? How do you see Generative AI changing what we build and deliver with Qlik?

-Rob


Reducing Qlik App Resources with QSDA

Longish post today where I show off the power of QSDA Pro to reduce Qlik App resources, saving you money and creating a more responsive experience for your users.

As a follow-on to my last post “Help! My Qlik Cloud Reload Exceeds Quota!” I’m going to dive deeper into the available QSDA features to reduce memory requirements of a Qlik App. As a bonus we’ll use QSDA to improve the performance of a chart.

There are many good reasons to optimize your Qlik Apps to a leaner size. Smaller apps tend to perform better and place less demand on your server resources. If you are a Qlik Cloud customer, or planning a move to Qlik Cloud, App memory size impacts your license tier — and therefore your direct monthly costs.

In this post our starting point is that I have identified the App “BikeShop Sales” as having a large (597 MB) footprint. I got this information from Qlik App Metadata Analyzer as shown below.

Qlik App Metadata Analyzer

I could also have identified this memory footprint in QSDA Pro when selecting this App from the list.

QSDA Applist

Let’s see how QSDA Pro can help. I’ll launch a new Analysis and give it the Description “Baseline”.

When the analysis completes, press “View” to see the results.

QSDA Analysis Summary page

In the Resources card I see that unused fields occupy 383 MB — almost 2/3 of the base data model! Unused fields are fields that are not used anywhere in the front end. They can be removed without impacting the contents of charts or the user experience.

I also see some other suggested savings in dropping unused tables (“All fields unused but keys — 439 KB”) and a 530 KB savings by AutoNumbering keys.

A fast and simple way to remove unused fields is to drop them at the end of the script with a “DROP Fields …” statement. AutoNumbering is also best done at the end of the script. Let’s implement the QSDA recommendations. Open the Qlik Data Load Editor for “BikeShop Sales” in another browser tab.

The QSDA Tools menu provides a handy Script Generator to generate recommended script statements. Select the script generator from the Tools menu.

QSDA Tools menu

In the Script Generator > AutoNumber tab select the top checkbox to select all recommended fields. Press the “Copy” button to copy the generated AutoNumber statement to your clipboard. Paste the statements at the end of your Qlik Script. Repeat for the Drop Fields and Drop Tables tabs. (I won’t use the Load Statement tab just yet).
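Your generated statements will list the actual fields and tables from your app; the pasted block ends up looking something like this sketch (the field and table names below are placeholders):

// End-of-script cleanup pasted from the QSDA Script Generator (illustrative names)
AutoNumber SalesOrderID, CustomerID, ProductID;          // convert key values to compact integers
Drop Fields rowguid, ModifiedDate, Comment, LegacyCode;  // fields not used anywhere in the front end
Drop Table SomeUnusedTable;                              // a table flagged as unused by QSDA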

Now that I have updated the script with the AutoNumber and Drop statements, Reload.

Reload with AutoNumber, Drop Fields, Drop Tables

Returning to the QSDA App Card, I can see that Base App Memory is now 213 MB. Quite the savings from 597 MB!

App card after reload

At this stage, I recommend running a new analysis to confirm that the script changes have not introduced any problems or new issues.

Note that Reload Peak Memory is unchanged, because we loaded the unused data before we dropped it. Reload memory has an impact on the performance of on-prem servers, and for Qlik Cloud, reload memory is a direct cost.

Can we use less reload memory by not loading the unused fields and tables? It may be that a field is used in the load script but is not used in the front end, so you have to examine your script to determine whether each unused field is safe to omit. This very clean script loads directly from QVDs, so the answer is clear.

I’ll start by commenting (or removing) the tables we don’t need instead of dropping them at script end. To identify the unused tables I can refer to the previously generated script, the Flags, or return to the Script Generator tool in the Baseline analysis.

What about Tables with both used and unused fields? Can I easily load just the used fields and omit the unused fields? QSDA Script Generator to the rescue again. Open the Script Generator > Load Statement in the Baseline analysis and sort on the “Unused Bytes” column. Select the SalesOrderDetail table.

A Load statement fieldlist for [SalesOrderDetail] will be created with unused fields commented out. You can exclude the unused fields entirely by unchecking the “Include Unused Fields” box. I’ll start with just the SalesOrderDetail table for now. Note that the other tables with big Unused Fields savings, Notes and Customers, are already being excluded completely.
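Here is a sketch of what the generated Load statement might look like; apart from [SalesOrderDetail] itself, the field names and the lib path are placeholders:

// Illustrative Script Generator output with unused fields commented out
[SalesOrderDetail]:
LOAD
    SalesOrderID,
    ProductID,
    OrderQty,
    LineSales
//  ,UnitPriceDiscount    // unused in the front end
//  ,rowguid              // unused in the front end
//  ,ModifiedDate         // unused in the front end
FROM [lib://QVDFiles/SalesOrderDetail.qvd] (qvd);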

Before reloading, I’ll remove the AutoNumber and Drop statements I added earlier. Many of those drops are no longer valid as I am not loading the fields at all. Here is the updated load script.

Updated script to exclude unused fields

After reloading, I return to the QSDA App Card. Reload Mem is reduced from 693 MB to 268 MB! I’ll run another analysis to see where things stand.

I can address the remaining unused fields with DROP Fields or using the Load Statement method. I will add the AutoNumber statements back in as well. I hope the process is clear enough now so I won’t show these remaining steps in detail here.

Let’s turn our attention to another great value of QSDA — Performance improvement. The “Calc Time Distribution” chart plots the calculation time of each chart in the App. I see I have an outlier that takes about 4 seconds to calculate.

What is this chart and why is it taking so long to calculate? Flip over to the Viz page and sort by the Calc Time column. The viz in question is a Table named “Salesperson Scorecard” on sheet “Dashboard” and it takes 4.142 seconds to calculate. Expanding the row shows the chart Dimensions and Measures and also exposes the “Viz Deconstruction” icon.

Viz Details

Clicking the Viz Deconstruct icon will calculate each chart Measure in turn and tell us what is taking so long to calc in this chart. We can see the total calc time is over four seconds and measures three and four take considerably longer than the first two. But why?

Let’s focus on this chart to get some insight into the cause and solution of the poor performance. Back on the Viz page, Right-click the Viz of interest and select “Filter to Selected”. This will filter our pages to only those objects — Dimensions, Expressions, etc — associated with this Viz.

Click over to the Expressions page and we’ll see that our long-running expressions have flags. Flags are the advice system of QSDA, identifying errors, bad practices and performance issues. Our two-second expression “Longest Deal” has a “Key Field used in Expression” flag.

Clicking the “i” icon next to the flag will open the Flag Details panel which contains a link to the help article for this flag.

In the help article we’re informed that using a key field in an expression can sometimes cause poor performance. The article goes on to advise using a non-key field instead, creating a new field if necessary. I’ll take this advice and create a new field in the SalesOrderHeader table:
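A sketch of that script change, assuming the key field is named SalesOrderID and the table is loaded from a QVD:

// Add a non-key copy of the order key to SalesOrderHeader (names and path are illustrative)
[SalesOrderHeader]:
LOAD
    *,
    SalesOrderID as SalesOrderRecord    // non-key field for use in chart expressions
FROM [lib://QVDFiles/SalesOrderHeader.qvd] (qvd);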

Reload and change the chart Measure to use the new field.

  Max(Aggr(Sum(LineSales), SalesOrderRecord))

Run a new analysis to check the calc time of the chart (and check we didn’t impact anything else!). In the summary we’ll see a significant decrease in total app calc time and we can drill into our chart to see what’s changed.

Whoa! That measure has gone from 2 seconds to 0.008 seconds.

But Rob, didn’t I just blow up my RAM by making a copy of this big field? Don’t guess, use QSDA to see exactly what the cost is.

Additional RAM of SalesOrderRecord field

The cost is 50 KB of RAM and no significant increase in Reload Mem. Was it worth it? I think so.

If you choose to put the new field on the SalesOrderDetail table instead of SalesOrderHeader, the QSDA results will quickly show you that this is a bad idea: RAM is much increased and performance is not as good.

QSDA Pro gives you the tools to:

  • Reduce your App memory and compute requirements.
  • Get focused advice and insight for improving your App.
  • Compare different solutions and measure the impact of change.
  • Track your progress and document the impact of your work.

I hope this post makes it clear why I say that QSDA Pro is “the tool that every Qlik developer needs”.

Do you have questions about QSDA Pro or want to schedule a demo for your team? Reach out to us.

If you’re attending Qlik Connect in Orlando, stop by the Motio booth on the show floor. I’ll be there (guy in the hat) to answer your questions and show you even more features of QSDA Pro that can help you create leaner, cleaner and faster Qlik Apps.

-Rob


Help! My Qlik Cloud Reload Exceeds Quota!

If you are a Qlik Cloud (SaaS) customer you may have seen this dreaded message during script reload. Ack! Ack!

Your quota may be different from 25 MB, but the problem remains the same. How do I modify this application to fit within my subscribed quota?

QSDA Pro V3.1 to the rescue! QSDA analyzes your app and provides recommendations and easy-to-use tooling to drop unused data and optimize the remaining data, potentially greatly reducing the size of your Qlik App.

QSDA needs a successfully loaded app for analysis, so our first step is to reload the app using the “Limited load” feature of the Qlik script debugger.

Click the Debug button in the Qlik script editor, check the Limited load option, accept the default of 100 rows and click “Run” to initiate the reload. The reload will come in under your quota. When the reload completes, press Ctrl-S to persist the data.

You don’t have a business-ready app, but this is enough for QSDA to scan all charts, expressions and master items to determine what data is actually required.

In QSDA Pro we now select our Cloud connection and then the app of interest, in this case “BikeShop Sales2”. Click the “Open” button to prepare the app for analysis. I will also uncheck the “Collect Calctime” option because chart calculation times are not meaningful with our limited data. Description is optional; in this case I’ve entered “Limited Load”.

Click the “New Analysis” button to launch the QSDA Pro analysis. The analysis may take a minute or two to complete depending on the capacity of your tenant and the current Qlik Cloud load. When the analysis is complete press the “View” button to see the results.

In the Summary view we see there are 89 unused fields. Unused fields are fields that are not used in any charts, master items or expressions. These are fields that can be dropped at the end of the script without impacting the user experience in the app. The actual storage numbers (KiB) are not meaningful because we only loaded 100 rows. The bar chart is useful in that it indicates about 2/3 of our data is unused. Hope!

QSDA provides a Script Generator in the Tools menu to generate a “DROP Fields …” script statement for unused fields. Select the script generator from the Tools menu.

In the Script Generator > Drop Fields tab select the top checkbox to select all recommended fields. Press the “Copy” button to copy the generated Drop Fields statement to your clipboard.
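The copied statement is a single Drop Fields statement listing every unused field; the field names in this sketch are placeholders:

// Illustrative Script Generator output -- your field list will differ
Drop Fields Comment, rowguid, ModifiedDate, InternalNotes, LegacyCode;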

Paste the Drop Fields statement at the end of your Qlik Script and reload.

Reload successful! Victory!

A subsequent QSDA analysis shows the app size has been reduced from the problematic 37 MB to 15 MB!

QSDA Pro can quickly and efficiently get your Qlik Apps below your Qlik Cloud quota by pruning unnecessary data. Don’t pay for more than you need.

In a follow up post I’ll walk through some additional QSDA features to help you reduce Qlik app footprint even further and improve chart response time to boot.

Download QSDA Pro and try it for yourself. Or reach out to us to learn more about license options or arrange a demo or POC for your team.

-Rob

Are you going to Qlik Connect? I’ll be at the Motio booth on the show floor ready to demo QSDA Pro or answer any questions you may have. Or just to say Hej 🙂


Formatting Intervals

In this post I’m going to show some tips for formatting intervals / durations in Qlik.

Datetimes in Qlik utilize a serial number that equals the number of days since December 30, 1899. The integer value represents the day and the fractional value represents the time of the day. The difference between two datetimes is an interval or duration.
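As a quick illustration (the date is arbitrary), the expression

Num(MakeDate(2024, 1, 1))

returns 45292, the number of days since December 30, 1899, and 45292.5 represents noon on that day.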

Qlik provides the Interval function to format numeric interval values as dual strings for display. Interval() may be used both in charts and script. The Interval function accepts an optional format string that specifies the output format.

An example interval numeric value could be 3.3844097222245, three days and about nine hours. The expression

Interval(3.3844097222245, 'hh:mm:ss')

will yield “81:13:33”. Note the 81 hours representing 3 days and 9 hours. We can exceed 24 hours because interval accumulates overflow in the largest unit. For example, we could show the duration as minutes using

Interval(3.3844097222245, 'm') 

which yields “4873”.

We can add the format character “D” (or “d”) to indicate days.

Interval(3.3844097222245, 'D hh:mm:ss')

will yield “3 09:13:33”.

Is it clear what this number string represents? The “:” may allow a user to intuit the hh:mm:ss. But what about the “3”? Can we make this clearer by adding some additional text to the output? (From now on I’ll use a field named “Duration” in my examples). Will this work?

Interval(Duration, 'D days hh:mm:ss')

No. We will get some funny looking output because the “d” and “s” in “days” will be interpreted as format characters.

The format pattern does not have a syntax to declare “this is a literal, not a format character”. We can work around this limitation by using a non-format character as a proxy and patching it up with a Replace function. I’ll use “a” to represent the word “days”.

Replace(Interval(Duration, 'd a, hh:mm:ss'), 'a', 'days')

You may have noticed that the values are now left justified. This is because Replace() returns a string, whereas Interval() returns a Dual value. We probably want to retain the Dual nature to allow for proper sorting. I’ll fix it by adding a Dual() function.

Dual(Replace(Interval(Duration, 'd a, hh:mm:ss'), 'a', 'days'), Duration)

How about if I want to show only days and hours?

The format string “d h” will show days and hours. Minutes and seconds will not display. However…the internal numeric value will remain unchanged. Generally not a problem, but may cause some duplicate lines if the value is used as a Dimension or filter. Because of this, I like to round the value to match my display.

Interval(Round(Duration, MakeTime(1)), 'd hh')

MakeTime(1) is the value of one hour. If you want to round to 4 hour segments use MakeTime(4). Instead of Round() you can also use Floor() or Ceil(). For example, to make 15 minute buckets use Floor(Duration, MakeTime(0,15))

Let’s put Round and Replace together to display clearly formatted days and hours. I’ll use “b” as a proxy for “hours”.

Dual(
  Replace(
  Replace(
   Interval(Round(Duration,MakeTime(1)), 'd a, h b')
  ,'a', 'days')
  ,'b', 'hours')
, Duration)

Almost there. I don’t like the “0 days”. How about we don’t display days if zero? Let’s make “days” conditional on Duration being >= 1.

Dual(
  Replace(
  Replace(
    Interval(Round(Duration,MakeTime(1)),
      if(Duration >= 1, 'd a, ', '') & 'h b')
  ,  'a', 'days')
  ,  'b', 'hours')
, Duration)

I like it. It has become a bit wordy. I may want to use it elsewhere in this app (or others), perhaps for a different field. I’ll make it reusable by assigning the expression to a variable with a parameter. Add this statement to the load script.

Set FormatDuration = Dual(
  Replace(
  Replace(
    Interval(Round($1,MakeTime(1)), if($1 >= 1, 'd a, ', '') & 'h b')
  , 'a', 'days')
  , 'b', 'hours')
, $1);

“$1” is a placeholder for the fieldname. The chart expression is now simply:

$(FormatDuration(Duration))
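Because the expression is parameterized, the same variable can format any duration field in the app. For example, with a hypothetical field named ResponseTime:

$(FormatDuration(ResponseTime))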

Looking for more tips & tricks? Have some of your own to share? Join us in September at the Masters Summit for Qlik in Vienna. In addition to our focused hands-on workshops on migrating, managing and developing in Qlik Cloud, we’ll be sharing lots of tips and techniques gleaned from years of Qlik experience.

-Rob


QSDA Release 2.6 Brings Customized Flag Levels

QSDA Pro 2.6 is available now and a new feature is Flag Levels. Each flag type is assigned a level of Info, Warning or Error with a matching color. Color helps you focus on what’s important.

Flag Levels may be customized and refined for your requirements using the new Admin > Project Config page.

QSDA 2.6 also adds three new flags:

“Calculated Dimension”. The default level for this flag is Warning.

“No Application Thumbnail”. The default level for this flag is Warning.

“Missing Theme”. The default level for this flag is Error. A missing theme can occur when moving an App from one Qlik environment to another.

Want to learn more about QSDA Pro? Whether you are QSDA Pro curious or just want a refresher on the capabilities and value of QSDA Pro for your Qlik development team, join me on 24 January, 2024 for “Accelerate Your Qlik Development Through Automation”, where I’ll walk you through the features of QSDA Pro and show you how QSDA can help you produce faster, leaner and cleaner Qlik Apps.

-Rob


Charts from the 1900 Paris Exhibition

Summary: For the data viz nerds among us, I share my fascination with a groundbreaking data viz project at the 1900 Paris Exhibition.

In Tim Harford’s excellent book “The Data Detective: Ten Easy Rules to Make Sense of Statistics” the author mentions a set of infographics produced by W.E.B. Du Bois and assistants for the 1900 Paris Exhibition. The “Exhibition of American Negroes” is a powerful collection of photographs and data visualization that provide a strong counter to racist propaganda and ignorance of the time.

I encourage you to explore the entire contents of the exhibit, but in this post I’m going to highlight some of the data visualizations. You can browse the collection here; the charts are on the first two 100-entry grids.

Du Bois and his Atlanta University students made 63 hand-drawn diagrams. The data visualizations were created on presentation cardboard and conceived for a temporary installation at the fair. The original diagrams still exist and have been displayed in various museums.

The hand drawn nature of the charts makes them feel very personal and they tell their stories very well. I expect that the team was pushing the boundaries of data viz knowledge and technique of the time.

This multivariate diagram is one of my favorites.

Link to larger size at Library of Congress

There are charts covering many aspects of African American life — professions, geography, income, property — and importantly showing change over time. This is a chart comparing marital status of African Americans to Germans. Why Germans? I’ll let you research that.

Link to larger image at Library of Congress

Measured against our current standards for data viz, we could make some criticisms about choices of shape and scale, but I don’t see any dishonesty. One of the design choices I found amusing was what to do when the bar size exceeded the available space. Wrap it!

Link to larger image at Library of Congress

I hope you find this bit of data viz history as fascinating as I do. I’ve provided some additional reading links below. Again, you can browse the collection at the Library of Congress. I’m curious to hear which visualizations you find most interesting and why.

Additional Reading

An excellent overview of the sociological context and impact of the exhibition: “W.E.B. Du Bois in Paris: The Exhibition That Shattered Myths About Black America” by Jacqueline Francis and Stephen G. Hall

WEbDuBois.org: The 1900 Paris Exhibition index. A list of sources and materials related to the exhibition.


Masters Summit for Qlik Cloud

Masters Summit New Qlik Cloud Track!

The Masters Summit Team is developing all new courses for 2024. The new content is focused on:

  • Qlik Cloud (perfect for clients who are on or moving to cloud already, or evaluating a migration).
  • New capabilities in the platform, like AutoML, Automation, etc.

Before we get too far down the road, we want your input to gauge interest.

We are opening pre-registration for our 2024 conferences:

Vienna, Austria – October 02-04 Pre-register for Vienna now

Philadelphia, USA – November 04-06 Pre-register for Philadelphia now

If you or one of your colleagues would be interested in attending one of these Summits, and you believe you can get approval for the expense, we ask that you kindly pre-register now.

We will not be taking any payment at this time and you should not make any travel arrangements.

Why pre-register?

If you pre-register on or before January 15, 2024, you can attend the event for a specially-discounted price of $1800 (that’s a 25% discount off our usual early-bird price).

This is your opportunity to help shape the future of Masters Summit events and attend at a deeply-discounted price. If we determine enough of you are interested in returning, based on pre-registration, we will move forward with our 2024 plans.

See the new Sessions below.

Managing your tenant

  • Setting up development and promotion flows
  • Approaches for managing data.
  • Monitoring your tenant

Migrating to Qlik Cloud

  • Planning your migration
  • Common pitfalls
  • Available tools, and how to use them.

Performance Tuning and Capacity Management

  • Best practices for performance and data scalability with the Qlik capacity licensing in mind.
  • Review opportunities in data modelling and visualizations to improve performance.
  • Review available tools for monitoring performance.

Real-world AutoML Applications

  • Understand machine learning predictive models and what business problems they can tackle.
  • Profiling data for ML, basic statistics, feature engineering, avoiding common pitfalls.
  • Evaluating and explaining your models using AutoML results
  • Deploying and monitoring your model

Enterprise Reporting

  • Building reports from existing sheets or objects
  • Output presentation-quality reports in various formats, including annotations.
  • Automating the distribution of reports to a variety of delivery channels.
  • Review existing report connector blocks.
  • Discover the self-serve options for subscription and alert-based reporting.

Qlik Automations

  • Automating Analytics Operations and Automation
  • Dynamic Actions and Business Process Integration
  • Going beyond: Qlik-CLI and the Qlik APIs

Qlik Cloud Integration

  • Understand what APIs are available, the necessary skill sets required to develop with them and how to correctly structure your project architecture.
  • See the art of the possible with a mix of real-world solutions and innovative demos.
  • Discover useful tools to help speed up development and reduce costs.
  • Get tips and tricks for the dark art of authorization and authentication in both client and server implementations.

We hope to see you in 2024.

Sincerely, The Masters Summit Team.


Using Attributes in QVD Files

Here’s a fun Friday post.

As I was updating my incremental load materials for the upcoming October/November Masters Summit for Qlik, I ran across an old note about creating “Attributes” in QVDs using Tags. Due to a bug in QVD generation I couldn’t implement the idea at the time. But the bug is long since fixed so here I present the general idea of “QVD Attributes” (my name, not an official Qlik thing).

We can assign Tags to Qlik Tables using the script TAG statement:

Tag TABLE TranData with 'MaxId=$(vMaxId)';
Tag TABLE TranData with 'Source=Salesforce';

If the table “TranData” is subsequently written to a QVD using the STORE statement, the tag will be included in the QVD file.

We can read tags directly from a QVD with script:

LOAD String%Table as Tag
FROM [lib://mylib/TranData.qvd]
(XmlSimple, table is [QvdTableHeader/TableTags/String]);

Note that we are reading only the XML header from the beginning of the file. This is very fast.

If we use a structured tag format like “key=value” we can create a generalized “Attribute” store in a QVD that can tell us something about the contents of that QVD. That could be useful when we want to use the QVD in a subsequent script.

How do we retrieve these tags in a useful and consistent way? I think loading them as variables would be a good approach. We can create a reusable subroutine to load all “key=value” tags from a QVD and generate variables named “Tag:key”. We then call the Sub, using the path to any QVD:

Call LoadTags('lib://QVDFiles/TranData.qvd');

We might utilize the new variables in something like an incremental Load.

SQL SELECT * From Trans Where Id > $(Tag:MaxId);

Or as a control to conditionally execute a script block.

If '$(Tag:Source)' = 'Salesforce' Then
// Use the Salesforce Load statement
EndIf

What is all this stuff? Subroutines? Preceding Load? Incremental Load? Tags? Join me at the Masters Summit for Qlik in Dublin or Orlando where we will teach you all about these topics and make you a scripting Master.

At the bottom of this post is a complete sample including the subroutine. You only need to change “QVDFiles” to your own lib to run this on your system.

-Rob

// Subroutine to Load Tagname=Value pairs from a QVD as "Tag:Tagname" Variables.
Sub LoadTags(_path)
    _TempTags:
    Load
        SubField(Tag, '=', 1) as TagName,
        SubField(Tag, '=', 2) as TagValue 
    ;    
    LOAD String%Table as Tag
    FROM [$(_path)] 
    (XmlSimple, table is [QvdTableHeader/TableTags/String])
    Where String%Table like '*=*';
    
    For _i = 0 to NoOfRows('_TempTags') - 1
        Let _vname = 'Tag:' & Peek('TagName', $(_i), '_TempTags');
        Let [$(_vname)] = Peek('TagValue', $(_i), '_TempTags');
    Next _i
    Set _vname=;
    Set _i=;
    Drop Table _TempTags;
End Sub
// END of Subroutine


// Generate some sample data
TranData:
LOAD  
	RecNo()*2 as Id,
	Rand() as Amount,
    'A' as Customer
AutoGenerate 6;

// Something we may want to record from the data is the max Id value, for later incremental load. 
TempId:
LOAD Max(Id) as MaxId Resident TranData;
Let vMaxId = Peek('MaxId');
Drop Table TempId;

// We can tag with $(variable) values or literal values. 
Tag TABLE TranData with 'MaxId=$(vMaxId)';
Tag TABLE TranData with 'Source=Salesforce';
// Some other tag that is not part of our name=value scheme.
Tag TABLE TranData with "ABC";
// STORE and DROP the QVD
Store TranData Into [lib://QVDFiles/TranData.qvd] (qvd);
Drop Table TranData;


// Call our subroutine to load tags as variables, from the disk QVD.
Call LoadTags('lib://QVDFiles/TranData.qvd');

// Demonstrate how we might use these tag values.
// In a SQL Where clause.
SET select = SQL SELECT * From Trans Where Id > $(Tag:MaxId);
Trace $(select);

// In a conditional to select a block of script.
If '$(Tag:Source)' = 'Salesforce' Then
	Trace Going to use the Salesforce load statement;
EndIf

Guest Speaker for Dublin Masters Summit

I’m pleased to announce that Brian Booden will join us at the October Masters Summit for Qlik in Dublin, Ireland. Brian is a seven (!) time Qlik Luminary / Partner Ambassador, co-host of The Data Mix, and a well known Qlik and Analytics expert.

Brian will share some inside tips on using Qlik Application Automation and I expect a few interesting tricks. I’m hoping he shows off the capabilities of his WhatsApp for Qlik integration!

Masters Summit for Qlik is the premier Qlik Sense & QlikView advanced education event. In three packed days of lectures and hands on exercises, you’ll learn from experts and take your Qlik Dev skills to the next level in data modeling, scripting, expressions and visualization. We’ll also cover performance tuning and Qlik API Integration for beginners. You’ll come away with a wealth of new knowledge, code samples and documentation.

Find out what over 1500 of your colleagues have learned already.


Creating Time Groups in Your Data Model

I frequently create Date groups in my Qlik apps to facilitate selections like “Yesterday” or “Last Week”. Some years ago I published a pattern for creating Date groups in the Recipes section of QlikCookbook.com and it’s been a very popular download. I’ve responded to a few requests on Qlik Community looking to do the same thing for Time groups, so I decided to publish a recipe for Time Grouping as well.

The recipe demonstrates selecting ranges such as “Day Shift”.

Selecting by Time Group

How do we create ranges that map to our data? In the script, create a Range table that defines range Names, Start and End times.

Ranges:
LOAD
Range as Range,
Time#(RangeStart, 'hh:mm:ss') as RangeStart,
Time#(RangeEnd, 'hh:mm:ss') as RangeEnd
INLINE [
Range, RangeStart, RangeEnd
Midnight Shift, 00:00:00, 05:00:00
Early Morning, 05:00:01, 09:00:00
Daylight, 06:00:00, 17:59:59
Day Shift, 09:00:01, 17:00:00
Early Evening, 17:00:01, 20:00:00
Evening, 20:00:01, 23:59:59
12am-6am, 00:00:00, 05:59:59
6am-12pm, 06:00:00, 11:59:59
12pm-6pm, 12:00:00, 17:59:59
6pm-12am, 18:00:00, 23:59:59
];

Then use IntervalMatch to link the time field in our data (EventTime in this example) with the Range field.

JOIN (Ranges) IntervalMatch (EventTime) LOAD RangeStart, RangeEnd RESIDENT Ranges;
// No longer need RangeStart, RangeEnd, but may keep them for documentation or debugging.
DROP FIELDS RangeStart, RangeEnd;

There are additional notes provided in the downloadable example, but that’s it for the basic pattern. You can download the example from here: Time Grouping.

You may have noticed I used Time#() to read the text time values. That makes it easy to write the range values as readable hh:mm:ss.

If you are generating the ranges using another technique like Interval#() or division, be mindful of rounding. A Qlik time value is represented numerically as a fraction of 1, a whole day: 0.5 is 12:00:00 PM, halfway through the day.

There are four methods I’m aware of to generate a Qlik time. For example, to generate the numeric time value corresponding to 12:00:05 AM (five seconds after midnight):

Time#('00:00:05', 'hh:mm:ss')
Interval#('00:00:05', 'hh:mm:ss')
MakeTime(0,0,5)
5/86400

All four will generate a floating point number to represent the time. Will they produce exactly the same result? Results that would pass an equality test?

Time#() = Interval#() = MakeTime() = x/y?

For some inputs they are equivalent. For a significant number of inputs the odd man out is Interval#(). Given the 86,400 second values in a day, Interval#() will not match the others 36% of the time. Try sharing that interesting fact with your spouse at the breakfast table.
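If you want to check this yourself, a minimal test script along these lines compares Interval#() against MakeTime() for every second of the day (the table and field names are arbitrary):

// Compare Interval#() and MakeTime() for all 86,400 seconds in a day
Compare:
LOAD
    *,
    If(Interval#(TimeText, 'hh:mm:ss') = MakeTime(Hr, Mi, Se), 1, 0) as IsMatch;
LOAD
    Div(RecNo() - 1, 3600)        as Hr,
    Mod(Div(RecNo() - 1, 60), 60) as Mi,
    Mod(RecNo() - 1, 60)          as Se,
    Text(Time((RecNo() - 1) / 86400, 'hh:mm:ss')) as TimeText
AutoGenerate 86400;
// Sum(IsMatch) / 86400, in a chart or a resident load, gives the share of values that compare equal.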

Join me at the Masters Summit for Qlik in Dublin or Orlando this fall to learn advanced scripting techniques for Qlik as well as inside tips. Our team of expert consultants will cover data modeling, set analysis, visualization techniques and Qlik integration as well as answer your “how do I” questions. Register now or get more information at the Masters Summit for Qlik website.

Happy Scripting
-Rob
