Any work arounds for this?
Peter Bateman
shediac92@hotmail.com
Tue, 26 Nov 2002 16:13:38 -0400
Harold:
Assuming the data processing is resumable, I would put a counter in my
code and change it so that Quick exits when the counter reaches 30,000,
or sets a status word and exits when all the processing is complete. I
would then keep starting a new instance of the modified Quick program
while the status word is not zero.
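
Something along these lines in DCL is roughly what I mean - only a
sketch, and the names are invented: RUN_ONE_PASS.COM stands for however
your site starts the Quick program, and I have used a flag file rather
than a literal status word because it is easy to write from Quick and
easy to test from DCL.

$!  Re-run the Quick batch job until it signals that it has finished.
$!  RUN_ONE_PASS.COM would start the Quick program, which processes up
$!  to 30,000 records per pass and creates BATCH_DONE.FLG only after
$!  the very last record has been handled.
$ IF F$SEARCH("BATCH_DONE.FLG") .NES. "" THEN DELETE BATCH_DONE.FLG;*
$ NEXT_PASS:
$   @RUN_ONE_PASS.COM              ! a brand-new Quick process each pass
$   IF F$SEARCH("BATCH_DONE.FLG") .EQS. "" THEN GOTO NEXT_PASS
$ EXIT

Because every pass runs in a fresh process, whatever Quick is using up
gets released when the image exits; only the flag file (or your status
word) carries the "finished" signal from one pass to the next.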
Regards,
Peter Bateman
shediac92@hotmail.com
>From: "Johnson, Harold A EDUC:EX" <Harold.A.Johnson@gems1.gov.bc.ca>
>To: "'merol.newman@ramesys.com'" <merol.newman@ramesys.com>, Curt
>Morgan <ccmorgan@austin.rr.com>
>CC: powerh-l@lists.swau.edu
>Subject: RE: Any work arounds for this?
>Date: Tue, 26 Nov 2002 09:08:37 -0800
>
>Thanks! Don't get me wrong, though, we LOVE using quick-in-batch! I
>think this is a known issue with Cognos (not a bug, because Quick was
>never designed to be run in batch mode) - just hoping someone else has
>run into it as well. We are currently running all processes on the
>Alpha with the settings in the qkgo file maxed out. The problem only
>surfaces when other Quick screens are called repeatedly. For a single
>Quick screen (even a VERY complicated one) the problem does not happen.
>
>-----Original Message-----
>From: Merol Newman [mailto:merol.newman@ramesys.com]
>Sent: Monday, November 25, 2002 2:45 PM
>To: Johnson, Harold A EDUC:EX; Curt Morgan
>Cc: powerh-l@lists.swau.edu
>Subject: RE: Any work arounds for this?
>
>
>I'd rather use Quiz and QTP if possible, but some processes are just
>more suitable for Quick. Quick in batch does seem to vary a lot, from
>very fast to weekends-only, with dramatic improvements sometimes
>possible by re-writing bits!
>
>Just a thought - you often need different qkgo values for batch
>processing from what you would use for terminal sessions, so do you
>call the Quick programs via their own qkgo? You can then set them up
>according to the type of processing they do. Looking at some of ours,
>those that have been adjusted for complex processing of fairly large
>amounts of data have maximum values set for Max Paged Memory and
>Secondary Blocks (on an Alpha running VMS). I vaguely remember getting
>advice from Cognos on this one, so maybe one of them would like to
>chip in with something less hit-and-miss!
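
On the qkgo point, the batch job's own .COM file is a convenient place
to pick the go file before Quick starts. This is only a guess at how it
might look - the logical name QKGO and the file specification are my
inventions, so check how your installation actually locates its go
file:

$!  Point this batch job at a go file tuned for batch work.  QKGO and
$!  DISK$APPS:[PH]BATCH_QKGO.GO are placeholder names - substitute
$!  whatever your PowerHouse installation really looks for.
$ DEFINE/JOB QKGO DISK$APPS:[PH]BATCH_QKGO.GO
$ @RUN_ONE_PASS.COM                ! the same invented wrapper as above
$ DEASSIGN/JOB QKGO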
>
>Good luck
>Merol
>
>merol.newman@ramesys.com
>Ramesys, Eldon Way, Crick, Northamptonshire, UK. NN6 7SL
>phone 01788 822133/823831, fax 01788 823601
>
>
>-----Original Message-----
>From: powerh-l-admin@cube.swau.edu
>[mailto:powerh-l-admin@cube.swau.edu]On Behalf Of Curt Morgan
>Sent: 25 November 2002 20:37
>To: Johnson, Harold A EDUC:EX; powerh-l@lists.swau.edu
>Subject: Re: Any work arounds for this?
>
>Harold
>Take heart from this: I once replaced a Quick-in-batch process that
>took 24 hours (...!) to run with simple Quiz and QTP code that took
>all of 3 hours. Broke my arm from patting myself on the back, too.
>Curt
>
>----- Original Message -----
>From: "Johnson, Harold A EDUC:EX" <Harold.A.Johnson@gems1.gov.bc.ca>
>To: <powerh-l@lists.swau.edu>
>Sent: Monday, November 25, 2002 2:23 PM
>Subject: Any work arounds for this?
>
>
> > Hi all. We have a fairly complicated process which has grown over
> > the years and includes several quick-in-batch and C-code external
> > calls. We have noticed that it seems to "run out of steam" (out of
> > memory or environment space?) after processing a number of records,
> > and it issues the following error:
> >
> > *Fatal Error* *635* Notify Cognos Customer Support
> >
> > The actual number of records that the process has been able to work
> > through has been steadily decreasing over the years, and we are now
> > looking at ripping it apart in order to allow it to finish properly
> > without crashing. Information that we have received in the past
> > always contains the warning that "quick was never intended to be run
> > in batch", even though technical articles have been produced on how
> > to run Quick in batch. We know that it really has nothing to do with
> > the C calls, as this crashing behaviour has been duplicated on other
> > processes with no C calls.
> >
> > We *have* noticed that the number of records our quick-in-batch
> > processes can go through depends on the type, complexity and amount
> > of work that is done, and is related to how many times a screen is
> > called from another. (The simpler, the better - no called screens
> > means no crashing at all, whereas a single called screen will crash
> > after about 30,000 - 32,000 calls.)
> >
> > We have MANY quick-in-batch routines, which greatly enhance our
> > systems' productivity and flexibility. Are there any known
> > workarounds for this problem? (Aside from actually changing the
> > code.)
> >
> > OpenVMS on an Alpha server.
> >
> > thnx
>
>
>= = = = = = = = = = = = = = = = = = = = = = = = = = = =
>Mailing list: powerh-l@lists.swau.edu
>Subscribe: "subscribe" in message body to powerh-l-request@lists.swau.edu
>Unsubscribe: "unsubscribe" in message body to
>powerh-l-request@lists.swau.edu
>http://lists.swau.edu/mailman/listinfo/powerh-l
>This list is closed, thus to post to the list you must be a subscriber.