From: Jay Manni <JManni(at)FireEye(dot)com>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>, Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: High Frequency Inserts to Postgres Database vs Writing to a File
Date: 2009-11-05 08:01:36
Message-ID: 60B0F2124D07B942988329B5B7CA393D01E5B94187@mail2.FireEye.com
Lists: pgsql-performance
Thanks to all for the responses. Based on all the recommendations, I am going to try a batched-commit approach, along with data-purging policies so that storage does not grow beyond certain thresholds.
- J
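A minimal sketch of that batched-commit approach with psycopg2, assuming a hypothetical "events" table and connection string (neither is from this thread):

    import psycopg2
    from psycopg2.extras import execute_batch

    # Hypothetical DSN and schema, for illustration only.
    conn = psycopg2.connect("dbname=metrics user=app")
    conn.autocommit = False  # group many inserts under one commit

    def insert_batch(rows):
        # One commit per batch means one WAL flush (fsync) per batch
        # instead of one per row, which is what makes 1000+ rows/sec
        # sustainable on ordinary disks.
        with conn.cursor() as cur:
            execute_batch(
                cur,
                "INSERT INTO events (ts, payload) VALUES (%s, %s)",
                rows,
                page_size=500,  # rows sent per network round trip
            )
        conn.commit()

Batch size is a latency/durability trade-off: larger batches amortize the commit cost further, but delay visibility of the data and lose more unflushed rows if the client dies mid-batch. The purging half can be as simple as a periodic DELETE ... WHERE ts < now() - interval '30 days', or dropping time-based partitions, run off-peak.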
-----Original Message-----
From: Craig Ringer [mailto:craig(at)postnewspapers(dot)com(dot)au]
Sent: Wednesday, November 04, 2009 5:12 PM
To: Merlin Moncure
Cc: Jay Manni; pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] High Frequency Inserts to Postgres Database vs Writing to a File
Merlin Moncure wrote:
> Postgres can handle multiple 1000 insert/sec but your hardware most
> likely can't handle multiple 1000 transaction/sec if fsync is on.
commit_delay or async commit should help a lot there.
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
http://www.postgresql.org/docs/8.3/static/runtime-config-wal.html
Please do *not* turn fsync off unless you want to lose your data.
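A sketch of the per-session version of that advice, again with psycopg2 and the same hypothetical table as above; unlike fsync=off, a crash here can only lose the last few commits, never corrupt the database:

    import psycopg2

    conn = psycopg2.connect("dbname=metrics user=app")  # hypothetical DSN
    with conn.cursor() as cur:
        # Applies to this session only. On a server crash you can lose
        # roughly the last wal_writer_delay worth of commits, but the
        # database itself stays consistent.
        cur.execute("SET synchronous_commit TO off")
        cur.execute(
            "INSERT INTO events (ts, payload) VALUES (now(), %s)",
            ("sample",),
        )
    conn.commit()

commit_delay, the other knob in the pages linked above, is normally set in postgresql.conf rather than per session.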
If you are bulk inserting 1000+ records/sec all day long, make sure you
have provisioned enough storage for this (that's 86M records/day), plus
any index storage, room for dead tuples if you ever issue UPDATEs, etc.
--
Craig Ringer
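To put rough numbers on that provisioning point (the 100-byte average row size below is an assumption, not a figure from the thread):

    # 1000 inserts/sec sustained for a day:
    rows_per_day = 1000 * 86400          # 86,400,000 rows/day, the "86M" above
    bytes_per_row = 100                  # assumed: ~24-byte tuple header + payload
    heap_gb_per_day = rows_per_day * bytes_per_row / 1e9
    print(f"~{heap_gb_per_day:.1f} GB/day")  # ~8.6 GB/day of heap, before indexes and bloat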