From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Devrim GÜNDÜZ <devrim(at)gunduz(dot)org>
Cc: PostgreSQL Hackers ML <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Keeping separate WAL segments for each database
Date: 2010-06-30 21:05:51
Message-ID: 1277931682-sup-9058@alvh.no-ip.org
Lists: pgsql-hackers
Excerpts from Devrim GÜNDÜZ's message of Wed Jun 30 14:54:06 -0400 2010:
> One of the things that interested me was the parallel recovery feature. They
> said that they are keeping separate xlogs for each database, which
> speeds up recovery in case of a crash. It would also increase
> performance, since we could write xlogs to separate disks.
I'm not sure about this. You'd need to have one extra WAL stream for
shared catalogs; and what would you do with a transaction that touches
both shared catalogs and local objects (say, one that creates a role,
which lives in the shared catalog pg_authid, and also a table in the
current database)? You'd have to split its WAL entries across those two
WAL streams.
I think you could try to solve this by having yet another WAL stream for
transaction commit, and have the database-specific streams reference
that one. Operations touching shared catalogs would act as barriers:
all other databases' WAL streams would have to be synchronized to that
one. This would still allow you to have some concurrency because,
presumably, operations on shared catalogs are rare.
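
To make the barrier idea a bit more concrete, here is a toy, standalone C
sketch of how recovery might work under that scheme. This is not PostgreSQL
code and every name in it (WalRecord, barrier_id, and so on) is invented for
illustration: each database replays its own stream independently, and a
record marking a shared-catalog operation acts as a barrier that no stream
may pass until all of them have reached it.

/*
 * Toy model, not PostgreSQL code: all names are made up.  Each database
 * has its own WAL stream; a record with barrier_id >= 0 marks the point
 * where a shared-catalog operation was logged, and recovery must not
 * replay past it until every stream has reached it.
 */
#include <stdbool.h>
#include <stdio.h>

#define NDATABASES 2

typedef struct WalRecord
{
    const char *payload;    /* what the record does */
    int         barrier_id; /* -1 = ordinary record, else shared-catalog op */
} WalRecord;

static WalRecord stream_db1[] = {
    {"db1: insert into t1", -1},
    {"(barrier for shared-catalog op #0)", 0},
    {"db1: insert into t2", -1},
};
static WalRecord stream_db2[] = {
    {"db2: update t3", -1},
    {"(barrier for shared-catalog op #0)", 0},
    {"db2: delete from t4", -1},
};

static WalRecord *streams[NDATABASES] = {stream_db1, stream_db2};
static int stream_len[NDATABASES] = {3, 3};

int
main(void)
{
    int pos[NDATABASES] = {0, 0};
    int next_barrier = 0;

    for (;;)
    {
        bool done = true;

        /* Each stream replays independently until it hits a barrier. */
        for (int db = 0; db < NDATABASES; db++)
        {
            while (pos[db] < stream_len[db] &&
                   streams[db][pos[db]].barrier_id == -1)
                printf("replay %s\n", streams[db][pos[db]++].payload);
            if (pos[db] < stream_len[db])
                done = false;
        }
        if (done)
            break;

        /*
         * Every unfinished stream now waits at the same barrier (shared-
         * catalog operations are totally ordered), so the shared-catalog
         * record is applied once and all streams step past it together.
         */
        printf("replay shared-catalog record #%d\n", next_barrier);
        for (int db = 0; db < NDATABASES; db++)
            if (pos[db] < stream_len[db] &&
                streams[db][pos[db]].barrier_id == next_barrier)
                pos[db]++;
        next_barrier++;
    }
    return 0;
}

Running it replays the two databases' ordinary records independently and
applies the shared-catalog record exactly once in between, which is the
synchronization behavior described above.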