From: "Chris Ochs" <chris(at)paymentonline(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: speeding up inserts
Date: 2004-01-01 19:06:20
Message-ID: 018901c3d09a$5784fc60$d9072804@chris2
Lists: pgsql-general
> "Chris Ochs" <chris(at)paymentonline(dot)com> writes:
> > Is this a crazy way to handle this?
>
> Depends. Do you care if you lose that data (if the system crashes
> before your daemon can insert it into the database)? I think the
> majority of the win you are seeing comes from the fact that the data
> doesn't actually have to get to disk --- your "write to file" never
> gets further than kernel disk buffers in RAM.
>
> I would think that you could get essentially the same win by aggregating
> your database transactions into bigger ones. From a reliability point
> of view you're doing that anyway --- whatever work the daemon processes
> at a time is the real transaction size.
>
> regards, tom lane
>
The transactions are already as big as they can be; all the data is committed at
once. I'm guessing that for any database to be as fast as I want, it just
needs bigger/better hardware, which isn't an option at the moment.
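To make that concrete, committing a whole batch under a single transaction
looks roughly like the sketch below. Python with psycopg2, the table name,
and the columns are all assumptions for illustration; the thread doesn't say
what our client is actually written in.

import psycopg2

# Hypothetical payment records accumulated since the last commit.
rows = [("txn-1", 19.95), ("txn-2", 5.00)]

conn = psycopg2.connect("dbname=payments")  # hypothetical DSN
try:
    cur = conn.cursor()
    # All the inserts go through one cursor and one commit, so the whole
    # batch costs a single fsync-backed transaction instead of one per row.
    cur.executemany(
        "INSERT INTO transactions (ref, amount) VALUES (%s, %s)",
        rows,
    )
    conn.commit()
finally:
    conn.close()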
I was also thinking about data loss with the disk queue. Right now it's a
small risk, but as we do more transactions it grows. So right now, yes, it's
an acceptable risk given the chance of it happening and what a worst-case
scenario would look like, but at some point it won't be.
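For reference, the disk-queue approach is roughly this shape (a sketch only:
the spool directory, file format, and table name are made up for
illustration, and a real daemon would rotate the spool file before reading
it to avoid racing with writers):

import csv, glob, os
import psycopg2

QUEUE_DIR = "/var/spool/txnqueue"  # hypothetical spool directory

def enqueue(row):
    # Fast path: append one record to a spool file.  Without an explicit
    # fsync this never gets past kernel buffers, which is where the speed
    # (and the crash-loss window) comes from.
    with open(os.path.join(QUEUE_DIR, "pending.csv"), "a") as f:
        csv.writer(f).writerow(row)
        # os.fsync(f.fileno())  # uncomment to trade speed for durability

def drain():
    # Daemon side: load whatever has accumulated in one big transaction.
    conn = psycopg2.connect("dbname=payments")  # hypothetical DSN
    try:
        cur = conn.cursor()
        paths = glob.glob(os.path.join(QUEUE_DIR, "*.csv"))
        for path in paths:
            with open(path) as f:
                # Assumes simple, unquoted comma-separated values.
                cur.copy_from(f, "transactions", sep=",")
        conn.commit()  # whatever the daemon processed is the real transaction
        for path in paths:
            os.unlink(path)  # remove spool files only after the commit succeeds
    finally:
        conn.close()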
Chris