From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: david(at)lang(dot)hm
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-21 03:12:25
Message-ID: alpine.GSO.2.01.0904202302140.19983@westnet.com
Lists: pgsql-performance
On Mon, 20 Apr 2009, david(at)lang(dot)hm wrote:
> any idea what sort of difference binary mode would result in?
The win from switching from INSERT to COPY can be pretty big; further
optimizing to BINARY is something you'd really need to profile to justify.
In most cases I haven't found any significant difference from binary mode
compared to the overhead of the commit itself. The only thing I
consistently run into is that timestamps can bog things down considerably
in text mode, but your app has to be pretty efficient to do any better by
generating those in the PostgreSQL binary format yourself. If you had a
lot of difficult-to-parse data types like that, binary might be a plus,
but it doesn't sound like that will be the case for what you're doing.
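To make the timestamp point concrete, here is a minimal sketch (not from the original mail) of what generating a timestamp in PostgreSQL's binary COPY representation involves: a signed 64-bit big-endian count of microseconds since 2000-01-01 00:00:00, assuming a server built with integer datetimes. The function name is hypothetical.

```python
import struct
from datetime import datetime, timezone

# PostgreSQL's timestamp epoch is 2000-01-01 00:00:00, not the Unix epoch.
PG_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def pg_binary_timestamp(dt):
    """Encode a timestamp field for binary COPY: a signed 64-bit
    big-endian integer of microseconds since the PostgreSQL epoch
    (assumes a server compiled with integer datetimes)."""
    micros = round((dt - PG_EPOCH).total_seconds() * 1_000_000)
    return struct.pack('!q', micros)
```

Doing this yourself only pays off if it beats the cost of formatting and parsing the text representation, which is the comparison being suggested here.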
But you don't have to believe me; it's easy to generate a test case here
yourself. Copy some typical data into the database, then export it both
ways:

COPY t TO 'f';
COPY t TO 'f' WITH BINARY;

Then compare loading each of them back in with "\timing" enabled. That
should let you definitively answer whether it's really worth the trouble.
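Spelled out, the round-trip comparison might look like the following psql session; the table name t and file paths are placeholders:

```sql
-- Export the same table both ways (paths are illustrative).
COPY t TO '/tmp/t.txt';
COPY t TO '/tmp/t.bin' WITH BINARY;

-- Reload each file with timing enabled and compare.
\timing
TRUNCATE t;
COPY t FROM '/tmp/t.txt';
TRUNCATE t;
COPY t FROM '/tmp/t.bin' WITH BINARY;
```

The TRUNCATE between loads keeps the two timings comparable by starting each COPY against an empty table.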
--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD