January 14, 2011
My previous experience of compound triggers showed that, in the version I was testing on at least, they had a problem.
Regarding standard (non-compound) triggers, I said on the forums earlier today that I thought a FORALL statement doing an update based on a collection would fire the statement-level triggers multiple times, whereas an INSERT…SELECT statement would fire them just the once.
Turns out I was wrong: the statement-level triggers fire just once for both, as is easily demonstrated:
SQL> SET serveroutput ON
SQL>
SQL> CREATE TABLE test_bulk
  2  (test_id NUMBER
  3  , id_test_1 NUMBER
  4  , id_test_2 NUMBER
  5  );

Table created.

SQL>
SQL> CREATE OR REPLACE TRIGGER test_asi
  2  AFTER INSERT
  3  ON TEST_BULK
  4  REFERENCING NEW AS New OLD AS Old
  5  DECLARE
  6  begin
  7    DBMS_OUTPUT.PUT_LINE('expensive action');
  8  end;
  9  /

Trigger created.

SQL>
SQL> select count(*) from test_bulk;

  COUNT(*)
----------
         0

SQL>
SQL> DECLARE
  2    TYPE my_collection IS TABLE OF NUMBER;
  3    my_collection_test my_collection;
  4  BEGIN
  5    SELECT LEVEL BULK COLLECT
  6    INTO   my_collection_test
  7    FROM   dual
  8    CONNECT BY LEVEL < 15;
  9    FORALL i IN 1 .. my_collection_test.COUNT
 10      INSERT INTO test_bulk (id_test_1, id_test_2) VALUES (1, my_collection_test(i));
 11  END;
 12  /
expensive action          <--------- one message from the statement trigger

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from test_bulk;

  COUNT(*)
----------
        14

SQL>
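For completeness, the INSERT…SELECT route can be checked the same way; a sketch against the same test_bulk table and test_asi trigger as above:

```sql
-- Same statement-level trigger, this time driven by INSERT...SELECT
-- rather than FORALL:
INSERT INTO test_bulk (id_test_1, id_test_2)
SELECT 2, LEVEL
FROM   dual
CONNECT BY LEVEL < 15;
-- 'expensive action' is printed once, not fourteen times
```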
A pretty basic thing to get wrong.
Anyway, then revisiting the general trigger and bulk documentation, I came across a statement in the documentation about compound triggers that I just didn’t get.
It is when the triggering statement affects many rows that a compound trigger has a performance benefit. This is why it is important to use the BULK COLLECT clause with the FORALL statement. For example, without the BULK COLLECT clause, a FORALL statement that contains an INSERT statement simply performs a single-row insertion operation many times, and you get no benefit from using a compound trigger.
If the triggering statement of a compound trigger is an INSERT statement that includes a subquery, the compound trigger retains some of its performance benefit. For example, suppose that a compound trigger is triggered by the following statement:
INSERT INTO Target
  SELECT c1, c2, c3
  FROM Source
  WHERE Source.c1 > 0
For each row of Source whose column c1 is greater than zero, the BEFORE EACH ROW and AFTER EACH ROW sections of the compound trigger execute. However, the BEFORE STATEMENT and AFTER STATEMENT sections each execute only once (before and after the INSERT statement executes, respectively).
Ignore the first bit for now, but I can’t figure out why there would be any difference in benefit between the FORALL and the INSERT … SELECT.
I would assume (given that I've now changed my understanding of statement-level trigger firing for FORALL) that in both cases any statement-level triggers will fire once and the row-level triggers will fire for each row. So how does the INSERT…SELECT have only "some of the performance benefit"? Perhaps it's also talking about the benefits of a bulk statement versus a non-bulk statement, i.e. including the FORALL benefits.
And looking at the 11.2 documentation, this wording has changed so maybe it was just not quite right.
From the 11.2 PLSQL reference:
Performance Benefit of Compound DML Triggers
A compound DML trigger has a performance benefit when the triggering statement affects many rows.
For example, suppose that this statement triggers a compound DML trigger that has all four timing-point sections in Table 9-2:
INSERT INTO Target
  SELECT c1, c2, c3
  FROM Source
  WHERE Source.c1 > 0
Although the BEFORE EACH ROW and AFTER EACH ROW sections of the trigger run for each row of Source whose column c1 is greater than zero, the BEFORE STATEMENT section runs only before the INSERT statement runs and the AFTER STATEMENT section runs only after the INSERT statement runs.
A compound DML trigger has a greater performance benefit when it uses bulk SQL, described in “Bulk SQL and Bulk Binding”.
OK so, no mention there of any difference in performance benefits between FORALL and INSERT…SELECT.
But then talking about the benefits purely for FORALL:
Using Compound DML Triggers with Bulk Insertion
A compound DML trigger is useful for accumulating rows destined for a second table so that you can periodically bulk-insert them. To get the performance benefit from the compound trigger, you must specify BULK COLLECT INTO in the FORALL statement (otherwise, the FORALL statement does a single-row DML operation multiple times).
So I do wonder what exactly that is alluding to… I'll have to do some investigations (but not right now, as usual).
Anyway, it's interesting and relevant for me currently because I'm just finishing the refactoring of a process that uses expensive row-by-row DML followed by further expensive row-by-row trigger DML operations. Although I'm not doing this on 11g, so compound triggers are an irrelevance.
It’s a shame that I can’t remove the triggers concerned altogether but there are just too many other legacy processes involved.
But the obvious work is to change the row-by-row inserts and updates to work in bulk, and then use a combination of before/after statement triggers and before or after row triggers to populate a collection, with the row-by-row trigger DML then done in bulk on that collection.
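On a pre-11g database, with no compound triggers available, that combination can be sketched with a package-level collection shared between plain triggers; a minimal sketch, with hypothetical table and column names (target being the table the bulk DML hits, target_audit the table the trigger DML maintains):

```sql
-- Package state shared between the three triggers
CREATE OR REPLACE PACKAGE trg_state AS
  TYPE id_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  ids id_tab;
END trg_state;
/

-- BEFORE STATEMENT: reset the collection
CREATE OR REPLACE TRIGGER target_bs
BEFORE INSERT ON target
BEGIN
  trg_state.ids.DELETE;
END;
/

-- AFTER EACH ROW: accumulate, don't do any DML yet
CREATE OR REPLACE TRIGGER target_aer
AFTER INSERT ON target
FOR EACH ROW
BEGIN
  trg_state.ids(trg_state.ids.COUNT + 1) := :NEW.id;
END;
/

-- AFTER STATEMENT: flush the accumulated rows in one bulk operation
CREATE OR REPLACE TRIGGER target_as
AFTER INSERT ON target
BEGIN
  FORALL i IN 1 .. trg_state.ids.COUNT
    INSERT INTO target_audit (id) VALUES (trg_state.ids(i));
  trg_state.ids.DELETE;
END;
/
```

This is essentially what a compound trigger packages up into one object on 11g; spread across three triggers and a package, you also have to remember to clear the collection in the before-statement section so a failed statement doesn't leave stale rows behind.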
It will come as no surprise that I’m using INSERT…SELECT not FORALL given my previous misunderstanding of statement level firing timing for FORALL.
But I'd rather use a SQL type collection with INSERT…SELECT anyway, to avoid PLS-00436 (still waiting for an upgrade to a recent, decent version).
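The shape of that workaround, roughly, with a hypothetical target table t (col1, col2): declare the element and collection types at SQL level rather than in PL/SQL, then drive an INSERT…SELECT off TABLE() instead of referencing record fields inside a FORALL (which is what raises PLS-00436 on older versions):

```sql
CREATE TYPE t_row AS OBJECT (col1 NUMBER, col2 NUMBER);
/
CREATE TYPE t_tab AS TABLE OF t_row;
/

DECLARE
  v_rows t_tab := t_tab();
BEGIN
  -- populate the collection (just one hard-coded row here)
  v_rows.EXTEND;
  v_rows(1) := t_row(1, 2);

  -- INSERT...SELECT from the collection: no FORALL, so no PLS-00436
  INSERT INTO t (col1, col2)
  SELECT col1, col2
  FROM   TABLE(CAST(v_rows AS t_tab));
END;
/
```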
However, what is also interesting about the compound trigger documentation is the usage of a flush threshold in the examples.
CREATE OR REPLACE TRIGGER maintain_employee_salaries
  FOR UPDATE OF salary ON employees
    COMPOUND TRIGGER

  -- Declarative Part:
  -- Choose small threshhold value to show how example works:
  threshhold CONSTANT SIMPLE_INTEGER := 7;

  TYPE salaries_t IS TABLE OF employee_salaries%ROWTYPE INDEX BY SIMPLE_INTEGER;
  salaries  salaries_t;
  idx       SIMPLE_INTEGER := 0;

  PROCEDURE flush_array IS
    n CONSTANT SIMPLE_INTEGER := salaries.count();
  BEGIN
    FORALL j IN 1..n
      INSERT INTO employee_salaries VALUES salaries(j);
    salaries.delete();
    idx := 0;
    DBMS_OUTPUT.PUT_LINE('Flushed ' || n || ' rows');
  END flush_array;

  -- AFTER EACH ROW Section:
  AFTER EACH ROW IS
  BEGIN
    idx := idx + 1;
    salaries(idx).employee_id := :NEW.employee_id;
    salaries(idx).change_date := SYSDATE();
    salaries(idx).salary      := :NEW.salary;
    IF idx >= threshhold THEN
      flush_array();
    END IF;
  END AFTER EACH ROW;

  -- AFTER STATEMENT Section:
  AFTER STATEMENT IS
  BEGIN
    flush_array();
  END AFTER STATEMENT;
END maintain_employee_salaries;
The alternative is to control the number of rows (via the size of the collection) being inserted or updated at any one time in the INSERT or UPDATE statement.
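Capping the batch by capping the collection is typically done with the LIMIT clause on a bulk fetch; a sketch with hypothetical source_table and target_table:

```sql
DECLARE
  CURSOR c IS SELECT id FROM source_table;
  TYPE id_tab IS TABLE OF source_table.id%TYPE;
  v_ids id_tab;
BEGIN
  OPEN c;
  LOOP
    -- fetch at most 1000 rows per batch, so the collection
    -- (and each FORALL) stays a fixed, bounded size
    FETCH c BULK COLLECT INTO v_ids LIMIT 1000;
    EXIT WHEN v_ids.COUNT = 0;
    FORALL i IN 1 .. v_ids.COUNT
      INSERT INTO target_table (id) VALUES (v_ids(i));
  END LOOP;
  CLOSE c;
END;
/
```

The difference from the documentation example is where the throttle sits: here the caller bounds each batch up front, whereas the compound trigger accumulates rows and flushes whenever its threshold is reached.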
The example above, though, seems much better, and in hindsight I wish I'd taken that approach in the piece of work I've just done.