From: Odd Man <valodzka(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Using postgresql in situation with high write/read ratio
Date: 2010-08-12 20:09:36
Message-ID: AANLkTi=FUR27=DqsC+JKUA6OWcsCzpmmN7YTv2FRdaDF@mail.gmail.com
Lists: pgsql-general
Hi,
In my current project we have unusual (at least for me) conditions for a
relational DB, namely:
* a high write/read ratio (writes come from bulk data updates/inserts,
every couple of minutes or so)
* losing some recent part of the data (the last hour, for example) is OK,
since it can easily be restored
The first version of the app used plain UPDATEs and took too long. It was
replaced by a second version that uses partitions, TRUNCATE, COPY, and a
once-daily cleanup of old data. This works reasonably fast with the current
amount of data, but that amount will grow, so I'm looking for possible
optimisations.
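For context, the partition + TRUNCATE + COPY approach looks roughly like the
sketch below (table names, the date range, and the file path are made up for
illustration; this uses the inheritance-based partitioning available in
current Postgres releases):

```sql
-- Parent table; each child partition holds one day of data.
CREATE TABLE metrics (
    ts    timestamptz NOT NULL,
    key   text        NOT NULL,
    value numeric
);

CREATE TABLE metrics_2010_08_12 (
    CHECK (ts >= '2010-08-12' AND ts < '2010-08-13')
) INHERITS (metrics);

-- Bulk refresh of the active partition:
BEGIN;
TRUNCATE metrics_2010_08_12;  -- cheap: no per-row WAL traffic like DELETE
COPY metrics_2010_08_12 FROM '/tmp/metrics.csv' WITH CSV;
COMMIT;

-- Daily cleanup: dropping an old partition is effectively instant.
DROP TABLE metrics_2010_08_05;
```

The win over plain UPDATEs is that TRUNCATE and DROP TABLE are metadata
operations, and COPY is the fastest bulk-load path Postgres offers.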
The main idea (except for moving to some non-relational DB) is to tell
Postgres to do more of its work in memory and rely less on fsync and similar
operations. For example, I'm considering setting up an in-memory partition
with a corresponding tablespace for that data. The main problem is that the
amount of data is big and only part of it is updated really frequently.
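Since losing the last hour of data is acceptable here, one set of knobs that
trades durability for write speed is sketched below (a postgresql.conf
fragment; the values are illustrative only and depend on hardware):

```
# Asynchronous commit: commits return before the WAL is flushed to disk.
# A crash can lose recent transactions but cannot corrupt the database
# (unlike fsync = off, which can).
synchronous_commit = off
wal_writer_delay = 1000ms        # flush WAL at most about once per second

# Make checkpoints less frequent and spread their I/O out:
checkpoint_segments = 64
checkpoint_completion_target = 0.9

# Keep more of the working set in memory:
shared_buffers = 2GB             # size to a fraction of available RAM
```

With `synchronous_commit = off` the window of possible loss is bounded by
`wal_writer_delay`, which fits the "losing the last hour is OK" constraint
with a lot of margin.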
Are there any ideas or best practices for conditions like these?
Next Message: Thom Brown | 2010-08-12 20:17:00 | Re: Using postgresql in situation with high write/read ratio
Previous Message: Carlo Stonebanks | 2010-08-12 19:09:56 | Very bad plan when using VIEW and IN (SELECT...*)