From: James Mansion <james(at)mansionfamily(dot)plus(dot)com>
To: david(at)lang(dot)hm
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-21 19:22:10
Message-ID: 49EE1CE2.8020902@mansionfamily.plus.com
Lists: pgsql-performance
david(at)lang(dot)hm wrote:
>>> 2. insert into table values (),(),(),()
>>
>> Using this structure would be more database agnostic, but won't perform
>> as well as the COPY options I don't believe. It might be interesting to
>> do a large "insert into table values (),(),()" as a prepared statement,
>> but then you'd have to have different sizes for each different number of
>> items you want inserted.
>
> on the other hand, when you have a full queue (lots of stuff to
> insert) is when you need the performance the most. if it's enough of a
> win on the database side, it could be worth more effort on the
> application side.
Are you sure preparing a simple insert is really worthwhile?
I'd check if I were you. It shouldn't take long to plan.
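One way to measure this is to build the multi-row VALUES statement once per batch size and reuse it, which is what the prepared-statement idea above amounts to. The sketch below uses Python's sqlite3 purely as a stand-in for a PostgreSQL DB-API driver such as psycopg2, and the "log" table and its columns are hypothetical:

```python
import sqlite3

# Sketch only: sqlite3 stands in for any DB-API driver (e.g. psycopg2);
# the table name "log" and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("create table log (ts text, msg text)")

rows = [("2009-04-21", "msg %d" % i) for i in range(10)]

# Build one multi-row INSERT sized to the batch. A server that caches
# plans can prepare this once and reuse it for every batch of the
# same size -- which is why you'd need one statement per batch size.
placeholders = ",".join(["(?,?)"] * len(rows))
sql = "insert into log values %s" % placeholders
flat = [v for row in rows for v in row]  # flatten params to match placeholders
conn.execute(sql, flat)

count = conn.execute("select count(*) from log").fetchone()[0]
print(count)  # 10
```

Timing this form against a loop of single-row inserts on your own data is the quickest way to see whether the prepare/plan cost matters at all.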
Note that this structure (above) is handy but not universal.
You might want to try:
insert into table
select (...)
union
select (...)
union
select (...)
...
as well, since it's more universal. It works on Sybase and SQL Server, for
example (and very quickly too - much faster than a TSQL batch with lots of
inserts or execs of stored procs).
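The INSERT ... SELECT ... UNION form above can be sketched as follows. Again sqlite3 is only a stand-in and the "log" table is hypothetical; note that UNION ALL is used here, since plain UNION adds a duplicate-elimination pass you don't want for a log insert:

```python
import sqlite3

# Sketch of the INSERT ... SELECT ... UNION form; sqlite3 stands in
# for the target database, and the table name "log" is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("create table log (ts text, msg text)")

conn.execute(
    "insert into log "
    "select 'a', 'one' "
    "union all select 'b', 'two' "   # UNION ALL: no duplicate-removal sort
    "union all select 'c', 'three'"
)

count = conn.execute("select count(*) from log").fetchone()[0]
print(count)  # 3
```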
James