QTP Performance Issue

Ohmes, Matt Matt.Ohmes@COGNOS.com
Mon, 9 Jul 2001 19:49:08 -0400


Hi Ron,
Can I ask a few questions, Ron?  First, how long does it take to do just
the input phase of the run?  (Comment out the Output and Item Final
statements.)
I suspect that is pretty fast. That means your time is taken up in updating
(which wouldn't be surprising).
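That timing test is just your request with the update statements taken
out, something like this (using the placeholders from your pseudo-code):

	access <subfile> link <old-code> to <old-code> of <ksam-file> optional
	define t-new-code char*n = new-code of <ksam-file> &
		if record <ksam-file> exists else "999"

If that pass runs in minutes, the hours are going into the Output update
phase, not the linked read.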

I'm guessing "final <new-code> <t-new-code>" really means:
  Item <new-code> final t-new-code

I'm also guessing the subfile in the Access statement is the same one as in
the Output statement, right?

Is the subfile indexed?  How many subfile records don't find a linking
record off the KSAM file (i.e. default to '999')?

Is there any reason you don't just create a new subfile?
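If a new subfile would do, the request could be as simple as this (an
untested sketch, reusing your placeholders; please check the Subfile
statement options against the QTP reference before relying on it):

	access <subfile> link <old-code> to <old-code> of <ksam-file> optional
	define t-new-code char*n = new-code of <ksam-file> &
		if record <ksam-file> exists else "999"
	subfile <new-subfile> keep include <subfile>, t-new-code

Writing a fresh subfile sequentially avoids update-in-place (and any lock
statement) on the 240 MB file, which is usually far cheaper than rewriting
records in position.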

Matt

Matt.Ohmes@Cognos.Com
Cognos Corporation
909 E. Las Colinas Blvd.
Suite 1900
Irving, TX  75039
214-259-6200
"Matt doesn't really know anything.  He just likes to pontificate a lot.
We refuse to acknowledge that he works for Cognos or that we have ever
met him or anyone with whom he's ever been associated.  Don't lend him
money and don't let him talk to your sister!" ;-)



-----Original Message-----
From: Ron Burnett [mailto:ron@cryptic.rch.unimelb.edu.au]
Sent: Monday, July 09, 2001 5:25 PM
To: powerh-l@lists.swau.edu
Subject: QTP Performance Issue


Hello everyone,

I have a really serious performance issue with QTP running on MPE/iX
(an HP928 with 383 MB memory).

I have a subfile of 240 MB (containing just under 100,000 records),
which I link to a trivial-sized CM KSAM reference table to convert
a particular code value.  The pseudo-code is

	access <subfile> link <old-code> to <old-code> of <ksam-file> optional
	define t-new-code char * n = new-code of <ksam-file> &
		if record <ksam-file> exists else "999"
	output <subfile> update
	final <new-code> <t-new-code>
	set lock file update

Now, this process takes over 10 hours on an otherwise unoccupied system!
I suspect there is a great deal of memory thrashing going on because of
the size of the file and the total available memory on the machine.

I also suspect that the 'lock' statement may be inefficient, so I removed
it because I can have exclusive access to the system for this purpose,
and can guarantee that there will be no other access to my data structures
or dictionary when this process is executing.  But it's still taking an
unacceptably long time to run.

Any ideas on improving the run time?  I've got around 900,000 records to
process, and this is only one of nine steps to complete.  I can cut the
individual process 'chunk' to around 50,000 records.  Or would converting
the CM KSAM table to NM KSAM help?  Or is there a 'lock' statement that
would be significantly more efficient, given the exclusive-use
circumstances?

Where's my supercomputer?

Cheers,
Ron Burnett
ron@cryptic.rch.unimelb.edu.au




= = = = = = = = = = = = = = = = = = = = = = = = = = = =
Mailing list: powerh-l@lists.swau.edu
Subscribe: "subscribe" in message body to powerh-l-request@lists.swau.edu
Unsubscribe: "unsubscribe" in message body to powerh-l-request@lists.swau.edu
http://lists.swau.edu/mailman/listinfo/powerh-l
This list is closed, thus to post to the list you must be a subscriber.