1. Bjarne Anker
  2. PowerBuilder
  3. Tuesday, 18 June 2019 15:46 UTC

Hi.

We have a small application (PB2017 R3) which reads and interprets small text files, adds rows to 3-4 datastores and then commits to 4 tables in the database. Sometimes there can be as many as 1500 rows, but mostly it is between 700 and 1000.

There is a trigger on one of the tables, defined as "FOR EACH ROW", which writes some additional logging to another table (after update/insert/delete).

It looks like the trigger fires for each row inserted by the datastore, which is of course correct when it's defined like that.
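For illustration, the trigger is shaped roughly like this (a simplified sketch with made-up table and column names, shown in Oracle-style syntax):

CREATE OR REPLACE TRIGGER trg_main_log
AFTER INSERT OR UPDATE OR DELETE ON main_table
FOR EACH ROW
BEGIN
    -- one log row is written for every row touched in main_table
    INSERT INTO log_table (logged_at, log_action)
    VALUES (SYSTIMESTAMP, 'changed');
END;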

Would it perhaps be an idea to alter the trigger to "FOR EACH STATEMENT", so that all the rows in the main table are committed before the logging is done by the trigger?

Any pros and/or cons?

Regards,

Bjarne Anker

Accepted Answer
Roland Smith
  1. Tuesday, 18 June 2019 20:16 UTC
  2. PowerBuilder

I think you will find that each row is also a statement. When you call .Update(), a separate SQL statement is generated for each row that is updated or inserted.
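For example, an Update() on a DataStore holding three new rows sends something like this to the database (a sketch with made-up table and column names; the exact SQL depends on the DataWindow's update properties):

INSERT INTO main_table (id, amount) VALUES (1, 100);  -- row 1: trigger fires
INSERT INTO main_table (id, amount) VALUES (2, 200);  -- row 2: trigger fires
INSERT INTO main_table (id, amount) VALUES (3, 300);  -- row 3: trigger fires
-- a FOR EACH STATEMENT trigger would also fire three times here,
-- because each row arrives as its own statement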

  1. Bjarne Anker
  2. Wednesday, 19 June 2019 06:33 UTC
Thanks, that's the answer I was looking for.

I guess we'll try to refine the code a bit to avoid too many transactions against the production tables.



Thanks!
  1. Roland Smith
  2. Wednesday, 19 June 2019 13:06 UTC
I think you might be able to load the data to a temp table and then issue a single inline statement that would apply it to the permanent table.
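Sketched out (hypothetical staging table; temp-table syntax varies by DBMS):

-- the DataStore inserts into a staging table that has no trigger,
-- then one set-based statement moves the batch to the production table
INSERT INTO main_table (id, amount)
SELECT id, amount
FROM   staging_table;

-- a FOR EACH STATEMENT trigger on main_table fires exactly once here
DELETE FROM staging_table;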
Olan Knight
  1. Tuesday, 18 June 2019 17:33 UTC
  2. PowerBuilder

FWIW:

As a rule, I like to have detailed logging for errors, but not when everything works correctly.

One possibility (see the sketch below):
For log entries you WRITE into a table, keep the FOR EACH ROW setting enabled.
Check the status when the event ends (all rows finished).
If NO ERROR, then roll back (i.e. delete) the row-by-row entries in the log table, and insert an overview entry*


*Something like "Processed rows x - y, all rows processed correctly."
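The cleanup could look roughly like this (a sketch with hypothetical table, column, and batch-key names; the "rollback" is done as a DELETE of this batch's per-row entries):

-- on success: remove the per-row entries for this batch ...
DELETE FROM log_table
WHERE  batch_id = 4711;   -- 4711 = hypothetical batch key

-- ... and replace them with a single overview entry
INSERT INTO log_table (batch_id, log_message)
VALUES (4711, 'Processed rows 1 - 1000, all rows processed correctly.');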


Olan

Bjarne Anker
  1. Tuesday, 18 June 2019 18:32 UTC
  2. PowerBuilder

I should mention that this is not a traditional error log, but rather a notification to another system, which processes the data further and sends it on to the cloud.

We don't want to add any more columns to the table since it is a production table with lots of queries as is.

That's why we chose a trigger.

And that's why I would like all the records to be written before the trigger fires, and I hope that "FOR EACH STATEMENT" does the trick when using datastore.Update() and a sqlca.of_commit() after that?

Bjarne

Michael Kramer
  1. Wednesday, 19 June 2019 02:38 UTC
  2. PowerBuilder

Hey Bjarne,

A DW fires a separate SQL statement for each changed row in the DataWindow, so FOR EACH ROW and FOR EACH STATEMENT will have the same performance characteristics in this case.

However, say you have a statement like

UPDATE employee SET Salary = Salary * 1.03

and that the table contains 1000 rows

FOR EACH STATEMENT will fire once, with 1000 rows in its "OLD" pseudo-table and 1000 rows in its "NEW" pseudo-table. In a statement-level trigger, as I remember, you are NOT guaranteed that row number X in OLD is the before-image of row number X in NEW. So additional code may be required to compare OLD vs. NEW effectively.

FOR EACH ROW will fire 1000 times, each time with just one row in its "OLD" pseudo-table and one row in its "NEW" pseudo-table. In a row-level trigger you know that OLD and NEW correspond to the same row.
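As an illustration, in PostgreSQL 11+ syntax with transition tables (log_salary_changes() and log_salary_change() are hypothetical trigger functions; other databases use similar but not identical syntax):

-- statement level: fires once; the transition tables hold all 1000 rows.
-- To pair old and new values, join old_rows to new_rows on the key,
-- not by row position.
CREATE TRIGGER trg_salary_stmt
AFTER UPDATE ON employee
REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
FOR EACH STATEMENT
EXECUTE FUNCTION log_salary_changes();

-- row level: fires 1000 times; OLD and NEW are the same single row
CREATE TRIGGER trg_salary_row
AFTER UPDATE ON employee
FOR EACH ROW
EXECUTE FUNCTION log_salary_change();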

So the two forms can have quite different performance characteristics for set-based statements.

HTH /Michael
