From: Marinos Yannikos <mjy(at)geizhals(dot)at>
To: Lists <lists(at)on-track(dot)ca>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Best replication solution?
Date: 2009-04-08 11:45:04
Message-ID: 49DC8E40.2010503@geizhals.at
Lists: pgsql-performance

Heikki Linnakangas wrote:
> Lists wrote:
>> Server is a dual core xeon 3GB ram and 2 mirrors of 15k SAS drives (1
>> for most data, 1 for wal and a few tables and indexes)
>>
>> In total all databases on the server are about 10G on disk (about 2GB
>> in pgdump format).
>
> I'd suggest buying as much RAM as you can fit into the server. RAM is
> cheap, and with a database of that size more cache could have a dramatic
> effect.
I'll second this. It doesn't really answer the original question, but
keep in mind that for read-intensive workloads, caching gives you by far
the biggest benefit, orders of magnitude more than any replication
solution, unless you want to spend a lot of $ on hardware (which I take
it you don't, if you are reluctant to add new disks). Keeping the
interesting parts of the DB completely in RAM makes a big difference.
Common older (P4-based) Xeon boards can usually be upgraded to 12-16GB
RAM, newer ones to anywhere between 16 and 192GB ...
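
As a quick sanity check of how much of your workload is already served
from RAM, something like the following against the standard statistics
views gives a rough buffer-cache hit ratio (note this only counts
PostgreSQL's own shared buffers, not the OS page cache):

```sql
-- Rough shared-buffer hit ratio across all databases,
-- from the cumulative counters in pg_stat_database.
SELECT sum(blks_hit)::float
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;
```

If that ratio is well below ~0.99 on a 10GB database, more RAM will
almost certainly buy you more than any replication setup.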
As for replication solutions: I wouldn't recommend Slony (tried it for
workloads with large writes; bad idea), but PgQ looks very solid. You
could either use Londiste, or build your own very fast non-RDBMS slaves
on top of PgQ by keeping the data in a format optimized for your queries
(e.g. if you don't need joins, use TokyoCabinet/Berkeley DB).
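
To make the "roll your own slave" idea concrete, here is a minimal
sketch (in Python, purely illustrative) of the apply side: consume a
batch of replication events and fold them into a local key-value store
standing in for TokyoCabinet/Berkeley DB. The event shape
(ev_type, key, value) is my simplifying assumption; real PgQ events
carry ev_type/ev_data fields whose encoding you define yourself when
you enqueue them with triggers.

```python
# Sketch of a non-RDBMS replica: apply a batch of change events
# to a dict acting as the local key-value store. In a real setup
# the batch would come from pgq.next_batch()/pgq.get_batch_events()
# and the store would be TokyoCabinet or Berkeley DB.

def apply_batch(store, events):
    """Apply insert/update/delete events to a dict-like store.

    Each event is (ev_type, key, value), where ev_type is
    'I' (insert), 'U' (update) or 'D' (delete).
    """
    for ev_type, key, value in events:
        if ev_type in ("I", "U"):   # insert or update: upsert the row
            store[key] = value
        elif ev_type == "D":        # delete: drop the row if present
            store.pop(key, None)
        else:
            raise ValueError("unknown event type: %r" % ev_type)
    return store

if __name__ == "__main__":
    store = {}
    batch = [("I", "prod:1", {"price": 100}),
             ("U", "prod:1", {"price": 90}),
             ("D", "prod:2", None)]
    apply_batch(store, batch)
    print(store)
```

Since the slave only ever replays an ordered event stream, a plain
key-value layout like this stays consistent without any of the locking
or join machinery a full RDBMS replica drags along.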
Regards,
Marinos