From: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres insert performance and storage requirement compared to Oracle
Date: 2010-10-26 21:27:23
Message-ID: AANLkTinntqfT1b9xaRfGGqVwwsGhyA7=JhLsuQ10Xzyy@mail.gmail.com
Lists: pgsql-hackers pgsql-performance
On Tue, Oct 26, 2010 at 4:02 PM, Mladen Gogala
<mladen(dot)gogala(at)vmsinfo(dot)com> wrote:
> On 10/26/2010 11:41 AM, Merlin Moncure wrote:
>>
>> yup, that's exactly what I mean -- this will give you more uniform
>> insert performance (your temp table doesn't even need indexes). Every
>> N records (say 10000) you send to permanent and truncate the temp
>> table. Obviously, this is a more fragile approach, so weigh the
>> pros/cons carefully.
>>
>> merlin
>
> Truncate the temporary table? What horrible advice! All you need is for the
> temporary table to delete rows on commit.
I believe Merlin was suggesting that, after doing 10000 inserts into
the temporary table, something like this might work better:
start loop:
    populate rows in temporary table
    insert from temporary table into permanent table
    truncate temporary table
loop
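As a minimal SQL sketch of that loop (the staging and target table names here are hypothetical; substitute your own schema):

```sql
-- Hypothetical names; the real tables come from your schema.
CREATE TEMPORARY TABLE staging (LIKE target INCLUDING DEFAULTS);

-- Repeat for each batch of ~10000 rows:
-- 1. populate the staging table
COPY staging FROM STDIN;                   -- or a series of INSERTs
-- 2. move the batch into the permanent table
INSERT INTO target SELECT * FROM staging;
-- 3. empty the staging table cheaply
TRUNCATE staging;
```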
I do something similar, where I COPY data to a temporary table, do
lots of manipulations, and then perform a series of INSERTs from the
temporary table into a permanent table.
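For comparison, the on-commit variant Mladen describes makes the cleanup implicit rather than an explicit TRUNCATE step; a rough sketch (the table name and columns are made up):

```sql
-- Rows vanish automatically when each transaction commits,
-- so no explicit TRUNCATE step is needed (names are hypothetical):
CREATE TEMPORARY TABLE staging (id integer, payload text)
    ON COMMIT DELETE ROWS;
```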
--
Jon