From: Adrian Klaver <aklaver(at)comcast(dot)net>
To: pgsql-general(at)postgresql(dot)org
Cc: sabrina miller <sabrina(dot)miller(at)gmail(dot)com>
Subject: Re: Triggers made with plpythonu performance issue
Date: 2009-12-19 20:44:09
Message-ID: 200912191244.09746.aklaver@comcast.net
Lists: pgsql-general
On Friday 18 December 2009 11:00:33 am sabrina miller wrote:
> Hi everybody,
> My requirements were:
> + Make a table charge to be partitioned by carrier and month
> + summarize by charges
> + summarize by users,
> + each summarization must be by month and several other columns.
>
> Doesn't that sound like too much? As I said, I'm new and I didn't find
> anything better. But an insert takes around 135 ms in the worst case (create
> tables and insert rows) and about 85 ms in the best case (only updates). Is
> there something better?
If I am following this, it means there is an average of 50 ms of extra overhead
to do an INSERT on charges.charges rather than an UPDATE, correct? If so, you
have to consider that the INSERT is actually doing quite a lot besides creating
a new row in charges.charges: there is a time cost to querying the database for
the existence of objects, making decisions based on the result, creating new
database objects, and then populating those objects. The "something better"
question then becomes where the best place is to incur that cost. If the 135 ms
worst case works and does not impede your process, it may already be the best
solution. Unfortunately there is not enough information to give a definitive
answer.
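To make the trade-off concrete, here is a minimal sketch of a partition-routing
trigger in PL/pgSQL. It is an illustration only, not your actual code: the
column name charged_at and the partition naming scheme are my assumptions. The
catalog lookup runs on every row (the ~85 ms best case), while the DDL cost is
paid only once per new partition (the ~135 ms worst case):

```sql
-- Hypothetical sketch: route INSERTs on charges.charges into a
-- monthly child table, creating the child on first use.
CREATE OR REPLACE FUNCTION charges.route_charge() RETURNS trigger AS $$
DECLARE
    part text := 'charges_' || to_char(NEW.charged_at, 'YYYYMM');
    lo   timestamp := date_trunc('month', NEW.charged_at);
BEGIN
    -- Per-row cost: one catalog query for partition existence.
    IF NOT EXISTS (
        SELECT 1
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'charges' AND c.relname = part
    ) THEN
        -- Once-per-month cost: create the partition with a CHECK
        -- constraint so constraint exclusion can prune it.
        EXECUTE 'CREATE TABLE charges.' || quote_ident(part)
             || ' (CHECK (charged_at >= ' || quote_literal(lo)
             || ' AND charged_at < '
             || quote_literal(lo + interval '1 month')
             || ')) INHERITS (charges.charges)';
    END IF;
    -- Route the row into the partition instead of the parent.
    EXECUTE 'INSERT INTO charges.' || quote_ident(part)
         || ' SELECT ($1).*' USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER charges_route
    BEFORE INSERT ON charges.charges
    FOR EACH ROW EXECUTE PROCEDURE charges.route_charge();
```

If the trigger is written in plpythonu instead, the per-session SD dictionary
can cache which partitions are already known to exist, so the per-row catalog
query is skipped on subsequent inserts for the same month. That is one way to
move the cost away from the common path.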
>
> Thanks in advance, Sabrina
--
Adrian Klaver
aklaver(at)comcast(dot)net