From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Richard Huxton" <dev(at)archonet(dot)com>, "Anibal David Acosta" <aa(at)devshock(dot)com>, "Sergey Konoplev" <gray(dot)ru(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, "Stephen Frost" <sfrost(at)snowman(dot)net>
Subject: Re: unlogged tables
Date: 2011-11-14 17:50:25
Message-ID: 4EC100810200002500042E7C@gw.wicourts.gov
Lists: pgsql-performance
"Anibal David Acosta" <aa(at)devshock(dot)com> wrote:
> I am doing asynchronous commit, but sometimes I think there are
> too many "things" in an insert/update transaction for a table
> whose information is not that important.
>
> My table is a statistics counters table, so I can live with
> partial data loss, but not with full data loss, because many
> counters are weekly and monthly.
>
> An unlogged table can increase speed; this table gets about 1.6
> million updates per hour. But an unlogged table, with the chance
> of losing all information on a crash, is not a good idea for this.
pg_dump -t 'tablename' from a cron job? (Make sure to rotate dump
file names, maybe with day of week or some such.)
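As a rough sketch, the suggestion could look like the crontab entry below; the table name (counters), database (statsdb), schedule, and dump path are assumptions for illustration, not from the thread:

```
# Hypothetical crontab entry: nightly dump of one table, file name
# rotated by day of week (note: % must be escaped as \% in crontab)
30 2 * * * pg_dump -t counters statsdb > /var/backups/counters_$(date +\%a).sql
```

The `%a` day-of-week suffix keeps seven rotating files (counters_Mon.sql through counters_Sun.sql), so a dump taken just before a crash is only overwritten a week later.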
-Kevin