From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Sergey Konoplev <gray(dot)ru(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?
Date: 2011-11-30 23:44:45
Message-ID: 4ED6BFED.6090302@ringerc.id.au
Lists: pgsql-general
On 11/30/2011 10:32 PM, Sergey Konoplev wrote:
> Would it be more compact from the point of view of streaming
> replication if we make the application accumulate changes and do one
> COPY instead of lots of INSERTS say once a minute? And if it will be
> so how to estimate the effect approximately?
Streaming replication works at a much lower level than that. It records
information about transaction starts, rollbacks and commits, and it
records disk block changes. It does not record SQL statements. It isn't
using INSERT, so you can't switch to COPY. Streaming replication
basically just copies the WAL data, and WAL data is not all that compact.
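As for estimating the effect approximately: you can measure how much WAL your workload actually generates by sampling `SELECT pg_current_xlog_location();` before and after a representative window and diffing the two locations. A rough sketch follows; the 'X/Y' to bytes conversion assumes the simple mapping bytes = high * 2^32 + low (which is what pg_xlog_location_diff does in newer releases), so treat the result as an approximation, and the sample locations are made up for illustration:

```python
def xlog_location_to_bytes(loc):
    """Convert an 'X/Y' xlog location string to an approximate byte offset.

    Assumes bytes = high * 2**32 + low; older releases skip one segment
    per logical xlog file, so this slightly overestimates there.
    """
    high, low = loc.split('/')
    return int(high, 16) * 2**32 + int(low, 16)

def wal_bytes_between(start, end):
    """Approximate WAL bytes generated between two sampled locations."""
    return xlog_location_to_bytes(end) - xlog_location_to_bytes(start)

# Example: locations sampled one minute apart (hypothetical values).
generated = wal_bytes_between('16/B374D848', '16/C0000000')
print(generated / (1024 * 1024), "MiB of WAL per minute")
```

Run that over the two workload variants (many INSERTs vs. one COPY per minute) and compare the WAL volume each one produces.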
Try running streaming replication over a compressed channel. PostgreSQL
might gain the ability to do this natively - if someone cares enough to
implement it, and if it doesn't already do it without my noticing - but
in the meantime you can use a compressed SSH tunnel, a compressed VPN, etc.
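For example, a compressed SSH tunnel might look like this (a sketch only; the hostname, user, and forwarded port are assumptions you'd adapt to your setup):

```shell
# On the standby: open a compressed SSH tunnel to the primary.
# -C enables compression, -N runs no remote command, and
# -L forwards local port 5433 to the primary's PostgreSQL port 5432.
ssh -C -N -L 5433:localhost:5432 postgres@primary.example.com &

# Then point the standby's recovery.conf at the local end of the tunnel:
#   primary_conninfo = 'host=localhost port=5433 user=replication'
```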
Alternatively, investigate third-party replication options like Slony and
Bucardo, which might be better suited to your use case.
--
Craig Ringer