QTP Performance Issue

Karman, Al AKarman@USFreightways.com
Mon, 9 Jul 2001 17:52:36 -0500


Ron,

Got Quiz?

fwiw, I use QTP only when necessary:
	When dataset updates need to occur.

Why not spin a new subfile off instead of updating the current one?
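Outside QTP, the "spin off a new subfile" idea amounts to streaming the source and writing a fresh output file sequentially, rather than rewriting records in place. A minimal Python sketch of that pattern (file layout, column name, and code pairs are all hypothetical, not from the original job):

```python
# Illustrative sketch in Python (not QTP): stream the source file and
# write a fresh output file instead of updating records in place.
# Sequential writes avoid per-record update and locking overhead.
import csv

def spin_off_new_subfile(src_path, dst_path, code_table):
    """Copy src to dst, translating the 'old_code' column via code_table."""
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            # Default to "999" when no match exists, as in the
            # pseudo-code quoted below.
            row["old_code"] = code_table.get(row["old_code"], "999")
            writer.writerow(row)
```

The point of the sketch is that the output is written once, front to back, so there is no in-place update cost on the big file.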

Headache-free in Illinois,

Al Karman
IT Consultant
US Freightways 
akarman@usfreightways.com
773.824.2284


-----Original Message-----
From: tknowles@csc.co.nz [mailto:tknowles@csc.co.nz]
Sent: Monday, July 09, 2001 5:38 PM
To: powerh-l@lists.swau.edu
Cc: Ron Burnett
Subject: Re: QTP Performance Issue


Ron
Sounds like the 'nobreakset' problem, see below ...

I recall a performance problem with QTP related to using KSAM and
VTSERVICES.  Seems that QTP was spending all its time allowing/disallowing
a break from the terminal (which apparently is very slow for a VT
connection).

Unfortunately, I think this may have been limited to CM KSAM files and
doesn't apply to your situation.

If you want to confirm that, you can try to bypass the problem with the
NOBREAKSET parameter (QTP INFO="NOBREAKSET"), by running with STDIN=$NULL,
by using LOCK FILE RUN, by running across a hardwired connection, or by
running the job in batch.

Good luck!

Glenn A. Mitchell

Tony Knowles



Ron Burnett <ron@cryptic.rch.unimelb.edu.au>@cube.swau.edu on 10/07/2001
10:24:51

Sent by:  powerh-l-admin@cube.swau.edu


To:   powerh-l@lists.swau.edu
cc:
Subject:  QTP Performance Issue


Hello everyone,

I have a really serious performance issue with QTP running on MPE/iX
(an HP928 with 383 MB memory).

I have a subfile of 240 MB (containing just under 100,000 records),
which I link to a trivial-sized CM KSAM reference table to convert
a particular code value.  The pseudo-code is

     access <subfile> link <old-code> to <old-code> of <ksam-file> optional
     define t-new-code char * n = new-code of <ksam-file> &
          if record <ksam-file> exists else "999"
     output <subfile> update
     final <new-code> <t-new-code>
     set lock file update
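Outside QTP, the DEFINE above is just a keyed lookup with a default: take new-code from the reference table when the linked record exists, else "999". Since the reference table is trivial-sized, the analogous optimisation in a general-purpose language is to load it into memory once and do constant-time lookups per record. A hedged Python sketch (the code pairs are hypothetical):

```python
# Python sketch (not QTP) of the DEFINE above: fetch new-code from the
# reference table when the linked record exists, else use "999".
# A trivial-sized table fits in memory as a dict, so each of the
# ~100,000 per-record lookups is O(1).
ksam_table = {"A1": "100", "B2": "200"}  # hypothetical code pairs

def t_new_code(old_code):
    # 'if record <ksam-file> exists else "999"'
    return ksam_table.get(old_code, "999")
```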

Now, this process takes over 10 hours on an otherwise unoccupied system!
I suspect there is a great deal of memory thrashing going on because of
the size of the file and the total available memory on the machine.

I also suspect that the 'lock' statement may be inefficient, so I removed
it because I can have exclusive access to the system for this purpose,
and can guarantee that there will be no other access to my data structures
or dictionary when this process is executing.  But it's still taking an
unacceptably long time to run.

Any ideas on improving the run time?  I've got around 900,000 records to
process and this is only one of nine steps to complete.  I can cut the
individual process 'chunk' to around 50,000 records.  Or would converting
the CM KSAM table to NM KSAM help?  Or is there a 'lock' statement that
would be significantly more efficient, given the exclusive-use
circumstances?
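The chunking option above can be sketched generically as splitting the record set into fixed-size batches so each pass touches a bounded working set. A minimal Python sketch (the 50,000 figure mirrors the message; the record list itself is hypothetical):

```python
# Sketch of processing in fixed-size chunks: each batch is small enough
# to fit comfortably in memory, trading one big pass for several
# bounded ones.
def chunks(records, size=50_000):
    """Yield successive slices of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]
```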

Where's my supercomputer?

Cheers,
Ron Burnett
ron@cryptic.rch.unimelb.edu.au




= = = = = = = = = = = = = = = = = = = = = = = = = = = =
Mailing list: powerh-l@lists.swau.edu
Subscribe: "subscribe" in message body to powerh-l-request@lists.swau.edu
Unsubscribe: "unsubscribe" in message body to
powerh-l-request@lists.swau.edu
http://lists.swau.edu/mailman/listinfo/powerh-l
This list is closed, thus to post to the list you must be a subscriber.
