QTP Performance Issue
Ken Paul
KenRPaul@Concentric.Net
Mon, 09 Jul 2001 17:32:11 -0600
Hi Ron,
At 08:24 AM 7/10/01 +1000, you wrote:
>Hello everyone,
>
>I have a really serious performance issue with QTP running on MPE/iX
>(an HP928 with 383 MB memory).
>
>I have a subfile of 240 MB (containing just under 100,000 records),
>which I link to a trivial-sized CM KSAM reference table to convert
>a particular code value. The pseudo-code is
>
> access <subfile> link <old-code> to <old-code> of <ksam-file> optional
> define t-new-code char * n = new-code of <ksam-file> &
> if record <ksam-file> exists else "999"
> output <subfile> update
> final <new-code> <t-new-code>
> set lock file update
>
>Now, this process takes over 10 hours on an otherwise unoccupied system!
>I suspect there is a great deal of memory thrashing going on because of
>the size of the file and the total available memory on the machine.
>
>I also suspect that the 'lock' statement may be inefficient, so I removed
>it because I can have exclusive access to the system for this purpose,
>and can guarantee that there will be no other access to my data structures
>or dictionary when this process is executing. But it's still taking an
>unacceptably long time to run.
>
>Any ideas on improving the run time? I've got around 900,000 records to
>process and this is only one of nine steps to complete. I can cut the
>individual process 'chunk' to around 50,000 records. Or would converting the CM KSAM
>table to NM KSAM help? Or is there a 'lock' statement that would be
>significantly more efficient, given the exclusive-use circumstances?
>
>Where's my supercomputer?
>
>Cheers,
>Ron Burnett
>ron@cryptic.rch.unimelb.edu.au
I have a couple of thoughts and a couple of questions.
Exactly how big is your "trivial-sized CM KSAM reference table"?
Was your 10 hours in batch or interactive mode?
Have you tried sorting the subfile by "old-code"? That way the system
doesn't have to keep looking up a different key value for every record,
which may be what is causing the memory pressure. You would still be
processing the subfile sequentially (which you already are, and which is
the fastest method), but all the records with the same "old-code" would
arrive together, so the matching KSAM record should still be in memory.
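If you want to try it, something like the sketch below should produce a
sorted copy of the subfile first. This is only a rough, untested sketch in
the same <placeholder> style as your pseudo-code: <sorted-subfile> and
<items-to-carry> are names I've made up, and the exact SORT ON / SUBFILE
wording should be checked against the QTP reference (a QUIZ run or the MPE
SORT utility could do the same pre-sort if that turns out to be easier).

 access <subfile>
 sort on <old-code>
 subfile <sorted-subfile> keep include <items-to-carry>

Then run your existing update request against <sorted-subfile> instead of
<subfile>; since a subfile is just a copy of the data, updating the sorted
copy gives the same result.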
I can't see why this should take 10 hours, but maybe it is the "nobreakset"
problem mentioned earlier, or you could always try QUIZ as Al mentions.
Hope this helps,
Ken
Ken Paul
Independent Consultant
KenRPaul@Concentric.Net
(303) 694-0920