From: "D'Arcy J(dot)M(dot) Cain" <darcy(at)druid(dot)net>
To: "Mark Cave-Ayland" <m(dot)cave-ayland(at)webbased(dot)co(dot)uk>
Cc: jdavis-pgsql(at)empires(dot)org, alvherre(at)dcc(dot)uchile(dot)cl, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Much Ado About COUNT(*)
Date: 2005-01-20 11:55:00
Message-ID: 20050120065500.5344da8a.darcy@druid.net
Lists: pgsql-hackers
On Thu, 20 Jan 2005 10:12:17 -0000
"Mark Cave-Ayland" <m(dot)cave-ayland(at)webbased(dot)co(dot)uk> wrote:
> Thanks for the information. I seem to remember something like this
> being discussed in a similar thread last year. The only real issue I
> can see with this approach is that the trigger fires for every row,
> and the database I am planning is likely to see large inserts of
> several hundred thousand records. Normally the impact of these is
> minimised by inserting the entire set in one transaction. Is there
> any way that your trigger could be modified to fire once per
> transaction, with the number of modified rows as a parameter?
I don't believe that such a facility exists, but you should test the
per-row approach before dismissing it. I think you will find that disk
buffering (the system's as well as PostgreSQL's) effectively handles
this for you anyway.
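For reference, the per-row counting approach under discussion can be sketched roughly as below. The table and trigger names (`rowcount`, `items`, `count_rows`) are hypothetical, and the summary table must be seeded with the tracked table's current count before the trigger is installed; this is an illustrative sketch, not the trigger from the earlier thread.

```sql
-- Hypothetical summary table holding one maintained count per tracked table.
CREATE TABLE rowcount (
    table_name  text   PRIMARY KEY,
    total_rows  bigint NOT NULL
);

-- Seed with the current count once, inside the same transaction that
-- creates the trigger, so the figure stays consistent.
INSERT INTO rowcount VALUES ('items', (SELECT count(*) FROM items));

CREATE OR REPLACE FUNCTION count_rows() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE rowcount SET total_rows = total_rows + 1
         WHERE table_name = 'items';
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE rowcount SET total_rows = total_rows - 1
         WHERE table_name = 'items';
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Fires once per modified row, which is the overhead Mark is asking about.
CREATE TRIGGER items_count
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE count_rows();
```

A `SELECT total_rows FROM rowcount WHERE table_name = 'items'` then replaces the sequential scan that `SELECT count(*) FROM items` would otherwise perform; the trade-off is the per-row trigger cost (and contention on the single `rowcount` row) during bulk loads.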
--
D'Arcy J.M. Cain <darcy(at)druid(dot)net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
Next message: Neil Conway | 2005-01-20 12:17:13 | Re: ARC patent
Previous message: Mark Cave-Ayland | 2005-01-20 10:12:17 | Re: Much Ado About COUNT(*)