From: Marko Kreen <markokr(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-cluster-hackers(at)postgresql(dot)org
Subject: Re: GDQ iimplementation
Date: 2010-05-17 22:52:26
Message-ID: 1274136746.1495.9.camel@Nokia-N900-42-11
Lists: pgsql-cluster-hackers
----- Original message -----
> Jan, Marko, Simon,
>
> I'm concerned that doing anything about the write overhead issue was
> discarded almost immediately in this discussion. This is not a trivial
> issue for performance; it means that each row which is being tracked by
> the GDQ needs to be written to disk a minimum of 4 times (once to WAL,
> once to table, once to WAL for queue, once to queue). That's at least
> one time too many, and effectively doubles the load on the master server.
>
> This is particularly unacceptable overhead for systems where users are
> not that interested in retaining the queue after an unexpected shutdown.
>
> Surely there's some way around this? Some kind of special
> fsync-on-write table, for example? The access pattern to a queue is
> quite specialized.
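The four writes counted above can be tallied in a quick sketch. This is a hypothetical accounting only, assuming a PgQ-style trigger copies each change into a regular queue table, so the queue insert gets its own WAL record and heap write on top of the base table's:

```python
# Hypothetical accounting of disk writes per tracked row change,
# assuming trigger-based capture into an ordinary queue table.
def writes_per_change(queue_enabled: bool) -> list[str]:
    writes = ["WAL (base table)", "heap (base table)"]
    if queue_enabled:
        # The trigger's INSERT into the queue table is itself
        # WAL-logged and written to the queue table's heap.
        writes += ["WAL (queue table)", "heap (queue table)"]
    return writes

base = writes_per_change(queue_enabled=False)
queued = writes_per_change(queue_enabled=True)
print(len(base), len(queued), len(queued) / len(base))  # 2 4 2.0
```

Counting writes this way reproduces Josh's figures: 4 writes per tracked row, a 2x amplification over the 2 writes the base table needs anyway.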
Uh, this seems like purely theoretical speculation, which
also ignores the "generic queue" aspect.
In practice, on databases where there are more reads than
writes, the additional queue write seems insignificant.
So I guess it's up to you to bring hard proof that the
additional writes are a problem.
If we are speculating anyway, I'd guess that writing to the
WAL and an INSERT-only queue table involves a lot less
seeking than writing to the actual table.
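The seek argument can be illustrated with a toy model (illustrative numbers only, not a benchmark): the WAL and an INSERT-only queue table grow append-only, so successive writes land on adjacent pages, while updates to the base table hit pages scattered across the heap.

```python
import random

# Toy model: total distance (in pages) jumped between successive writes.
# Append-only files (WAL, INSERT-only queue) advance one page at a time;
# scattered heap updates are modeled as uniformly random page numbers.
def total_seek(pages: list[int]) -> int:
    return sum(abs(b - a) for a, b in zip(pages, pages[1:]))

random.seed(42)
n = 10_000
append_only = list(range(n))                    # sequential page numbers
heap = [random.randrange(n) for _ in range(n)]  # random page numbers

assert total_seek(append_only) == n - 1
# The random pattern jumps orders of magnitude farther in total.
print(total_seek(heap) > 100 * total_seek(append_only))
```

The model ignores caching and write scheduling, but it captures why, on spinning disks, the extra sequential queue write can cost far less than the random base-table write it accompanies.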
But feel free to edit the "Goals" section, unless you are
talking about non-transactional queueing, which seems
off-topic here.
--
marko
Next Message: Hannu Krosing, 2010-05-17 23:53:32, Re: GDQ iimplementation
Previous Message: Josh Berkus, 2010-05-17 21:46:13, Re: GDQ iimplementation