From: Kelly Burkhart <kelly(at)kkcsm(dot)net>
To: Steve Eckmann <eckmann(at)computer(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: improving write performance for logging application
Date: 2006-01-04 15:40:31
Message-ID: fa1e4ce70601040740t7f20ac13w9fddc80c10921c6@mail.gmail.com
Lists: pgsql-performance
On 1/4/06, Steve Eckmann <eckmann(at)computer(dot)org> wrote:
>
> Thanks, Steinar. I don't think we would really run with fsync off, but I
> need to document the performance tradeoffs. You're right that my explanation
> was confusing; probably because I'm confused about how to use COPY! I could
> batch multiple INSERTS using COPY statements, I just don't see how to do it
> without adding another process to read from STDIN, since the application
> that is currently the database client is constructing rows on the fly. I
> would need to get those rows into some process's STDIN stream or into a
> server-side file before COPY could be used, right?
Steve,
You can use COPY without resorting to another process. See the section
"Functions Associated with the COPY Command" in the libpq documentation.
We do something like this:
char *mbuf;

/* allocate mbuf and fill it with tab-separated, newline-terminated
 * rows in COPY text format (error checking omitted for brevity;
 * real code should check each PQexec result and the PQputCopy* return values) */
PQexec(conn, "BEGIN");
PQexec(conn, "COPY mytable FROM STDIN");
PQputCopyData(conn, mbuf, strlen(mbuf));
PQputCopyEnd(conn, NULL);  /* NULL means success; pass a message instead to abort the COPY */
PQexec(conn, "COMMIT");
-K