QTP rollback & commit

Ohmes, Matt Matt.Ohmes@COGNOS.com
Fri, 18 May 2001 10:46:24 -0400


Hi Bill,
You're right, of course, when you say that creating a "transaction
subfile" first won't solve the problem of the "real" run failing.  The real
run can fail for a number of reasons (as you've discovered), which is why
you should definitely have backups of your tables, either using DB utilities
or subfiles. 

But that doesn't negate the value of a transaction subfile. I would very
often create a transaction subfile even when I didn't need a pre-update
review of the data. In addition to the previously stated reason, it also
served as the basis for a "what DID I process" report.  While QTP's
statistics are fine, they obviously don't list any details of exactly WHICH
rows were processed.
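To illustrate the pattern Matt describes, here's a minimal sketch in Python rather than QTP: every row the process intends to apply is first written to a sidecar "transaction subfile," which afterwards doubles as a "what DID I process" report. All names here (`apply_with_audit`, the row layout) are hypothetical illustrations, not QTP syntax.

```python
# Sketch of the "transaction subfile" pattern: log each row to an
# audit file, then apply it to the table. The audit file later serves
# as a detailed record of exactly which rows were processed.
import csv
import io

def apply_with_audit(rows, audit_file, apply_row):
    """Write each row to the audit file, then apply it to the table."""
    writer = csv.writer(audit_file)
    processed = 0
    for row in rows:
        writer.writerow(row)   # transaction subfile: one record per update
        apply_row(row)         # the "real" update against the table
        processed += 1
    return processed

# Demo: audit file kept in memory; the "table" is just a dict.
table = {}
audit = io.StringIO()
rows = [("1001", 250.0), ("1002", 99.5)]
count = apply_with_audit(rows, audit,
                         lambda r: table.update({r[0]: r[1]}))
# audit.getvalue() now lists exactly which rows were processed
```

In QTP terms the audit file would be a real subfile on disk, feeding a Quiz report after the run.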

I've always been a big fan of subfiles. :-)
Matt

-----Original Message-----
From: Bill D Michael [mailto:Bill.Michael@ipaper.com]
Sent: Friday, May 18, 2001 9:40 AM
To: powerh-l@lists.swau.edu
Subject: RE: QTP rollback & commit



This thread is of interest because we're hitting the same type of
problems... QTP 8.20D6, Oracle 8, on VMS. We may have a batch process with
two or three multi-request QTPs, followed by some Quizes; we'll get an
Oracle "attach failure" in the middle of one of the QTPs that terminates
the request or run. Anything running after this is getting garbage, or
nothing; we can work around that part easily enough (though it will require
code & DCL changes), but since the _first_ part of the QTP(s) was
successful, our data is now in an "in between" state. With 7.10 and RMS
files, failures "mid run" were exceedingly rare (a CPU crash or running out
of disk space) - now we're getting failures several times a week, and we're
spending all our time analyzing and fixing data.

One thought we've had - subfile off the tables that are _about_ to be
changed by the process. Check for overall success of the process at the end
(preferably programmatically). If there was a failure, truncate the table
and reload it from the subfile, then try the process again. Not ideal, by
far, but it's got to be better than the current "randomly works or not"
situation! Doing a "subfile first, then if it's good, the real data" won't
work, because the subfile might work fine and the real run could still
fail; we need to validate the 'real run' itself.
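The workaround above (snapshot the tables, run, check overall success, restore and retry on failure) can be sketched as control flow. This is a hedged illustration in Python, not QTP or DCL; `run_with_rollback`, `flaky`, and the exception type are all hypothetical stand-ins for the batch machinery.

```python
# Sketch of Bill's proposed workaround: "subfile off" the table before
# the run, check for success programmatically, and on failure restore
# the snapshot and try the process again.
import copy

def run_with_rollback(table, process, max_attempts=3):
    """Run `process` against `table`; restore the snapshot and retry on failure."""
    for attempt in range(1, max_attempts + 1):
        snapshot = copy.deepcopy(table)   # "subfile off" the table first
        try:
            process(table)                # the multi-request QTP run
            return attempt                # overall success: done
        except RuntimeError:
            table.clear()                 # "truncate the table..."
            table.update(snapshot)        # "...and reload it from the subfile"
    raise RuntimeError("process failed on every attempt")

# Demo: a flaky process that dies mid-run on the first attempt,
# mimicking an Oracle "attach failure" after partial updates.
state = {"tries": 0}

def flaky(table):
    state["tries"] += 1
    table["row1"] = "updated"             # first part succeeds...
    if state["tries"] == 1:
        raise RuntimeError("attach failure")  # ...then the middle fails
    table["row2"] = "updated"

data = {"row1": "orig", "row2": "orig"}
attempts = run_with_rollback(data, flaky)
# after the retry, data is fully updated rather than "in between"
```

The key point the sketch captures is that the rollback restores the pre-run state before retrying, so a mid-run failure never leaves the data half-processed.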

At least we've gotten past the "a failure locks up all PH users until
fixed" problem (subdict=search) with workarounds!

Bill