From: | "Steven Flatt" <steven(dot)flatt(at)gmail(dot)com> |
---|---|
To: | "Andreas Kretschmer" <akretschmer(at)spamfence(dot)net> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Very long SQL strings |
Date: | 2007-06-21 20:09:31 |
Message-ID: | 357fa7590706211309w3bc072s321e546f4c5fd976@mail.gmail.com |
Lists: pgsql-performance
Thanks everyone for your responses. I don't think it's realistic to change
our application infrastructure to use COPY from a stream at this point.
It's good to know that multi-row-VALUES is good up into the thousands of
rows (depending on various things, of course). That's a good enough answer
for what I was looking for, and we can revisit this if performance starts
to hurt.
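For reference, a minimal sketch of the multi-row-VALUES form being discussed;
the table and column names here are made up purely for illustration:

    -- One INSERT statement carrying many rows, instead of one statement per row.
    -- PostgreSQL (8.2 and later) accepts this syntax, and the VALUES list can
    -- grow into the thousands of rows, as noted above.
    INSERT INTO sample_table (id, name)
    VALUES (1, 'first'),
           (2, 'second'),
           (3, 'third');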
On 6/21/07, Andreas Kretschmer <akretschmer(at)spamfence(dot)net> wrote:
>
> I guess you can obtain the same if you pack all INSERTs into one
> transaction.
Well, the 20% gain I referred to was when all individual INSERTs were within
one transaction. When each INSERT does its own commit, it's significantly
slower.
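For clarity, the two cases being compared above look roughly like this
(sample_table and the values are hypothetical):

    -- Case 1: each INSERT commits on its own (autocommit). This is the
    -- significantly slower case mentioned above.
    INSERT INTO sample_table (id, name) VALUES (1, 'first');
    INSERT INTO sample_table (id, name) VALUES (2, 'second');

    -- Case 2: the same individual INSERTs wrapped in a single transaction,
    -- which is the case the 20% figure was measured against.
    BEGIN;
    INSERT INTO sample_table (id, name) VALUES (1, 'first');
    INSERT INTO sample_table (id, name) VALUES (2, 'second');
    COMMIT;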
Steve