From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Chris Ochs" <chris(at)paymentonline(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: speeding up inserts
Date: 2004-01-01 17:47:19
Message-ID: 21703.1072979239@sss.pgh.pa.us
Lists: pgsql-general
"Chris Ochs" <chris(at)paymentonline(dot)com> writes:
> Is this a crazy way to handle this?
Depends. Do you care if you lose that data (if the system crashes
before your daemon can insert it into the database)? I think the
majority of the win you are seeing comes from the fact that the data
doesn't actually have to get to disk --- your "write to file" never
gets further than kernel disk buffers in RAM.
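The point about kernel buffers can be sketched in Python (a minimal illustration, not from the original thread): a plain write() returns as soon as the data reaches the kernel's buffer cache, and only an explicit fsync() forces it to stable storage. That fsync is the expensive step a database commit normally pays on your behalf.

```python
import os
import tempfile

# Write some "queued" data to a file, as the daemon-spooling scheme would.
fd, path = tempfile.mkstemp()
os.write(fd, b"queued transaction data\n")

# At this point the bytes may exist only in kernel buffers in RAM;
# a system crash here could lose them.  Durability requires fsync:
os.fsync(fd)
os.close(fd)

# After fsync the data is on stable storage and survives a crash.
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)
```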
I would think that you could get essentially the same win by aggregating
your database transactions into bigger ones. From a reliability point
of view you're doing that anyway --- whatever work the daemon processes
at a time is the real transaction size.
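The aggregation idea can be sketched as follows (using Python's stdlib sqlite3 only so the example is self-contained; the same pattern applies unchanged with a PostgreSQL driver such as psycopg): instead of one commit per row, group many inserts under a single transaction so one commit, and thus one synchronous flush to disk, covers the whole batch.

```python
import sqlite3

# In-memory database stands in for PostgreSQL; the batching pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER, payload TEXT)")

rows = [(i, "payload-%d" % i) for i in range(1000)]

# One transaction for the whole batch: the context manager issues a single
# COMMIT after all 1000 inserts, rather than paying a commit (and a disk
# flush, on a real server) per row.
with conn:
    conn.executemany("INSERT INTO log (id, payload) VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM log").fetchone()[0]
print(count)
```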
regards, tom lane