From: Stefan Keller <sfkeller(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Summaries on SSD usage?
Date: 2011-09-02 22:04:30
Message-ID: CAFcOn2_P6TQS2exBgsFe9xY296WPe4tHs2_tYaaJWbLg0tu3-A@mail.gmail.com
Lists: pgsql-performance
2011/9/2 Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:
> On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller <sfkeller(at)gmail(dot)com> wrote:
> How big is your DB?
> What kind of reads are most common, random access or sequential?
> How big of a dataset do you pull out at once with a query?
>
> SSDs are usually not a big winner for read only databases.
> If the dataset is small (dozen or so gigs) get more RAM to fit it in
> If it's big and sequentially accessed, then build a giant RAID-10 or RAID-6
> If it's big and randomly accessed then buy a bunch of SSDs and RAID them
My dataset is a mirror of OpenStreetMap, updated daily. For Switzerland
it's about 10 GB of total disk space (half for tables, half for
indexes), built from 2 GB of raw XML input. Europe would be about 70
times larger (130 GB), and the world is about 250 GB of raw input.
It's accessed both randomly (= index scans?) and sequentially (= seq
scans?), with queries like:

  SELECT * FROM osm_point
  WHERE tags @> hstore('tourism','zoo')
    AND name ILIKE 'Zoo%';

You can try it yourself online, e.g.
http://labs.geometa.info/postgisterminal/?xapi=node[tourism=zoo]
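[Editor's note, not part of the original mail: for a query of that shape, two indexes are commonly suggested — a GIN index on the hstore column to support the @> containment test, and a pg_trgm GIN index on name to support the ILIKE prefix match (pg_trgm gained LIKE/ILIKE index support in PostgreSQL 9.1). A sketch, reusing the table and column names from the query above; the index names are illustrative:]

  -- GIN index on the hstore column accelerates tags @> hstore(...) lookups.
  CREATE INDEX osm_point_tags_idx
    ON osm_point USING gin (tags);

  -- Trigram index supports ILIKE 'Zoo%' (and non-anchored patterns too);
  -- requires the pg_trgm extension (CREATE EXTENSION is 9.1+).
  CREATE EXTENSION IF NOT EXISTS pg_trgm;
  CREATE INDEX osm_point_name_trgm_idx
    ON osm_point USING gin (name gin_trgm_ops);

[Whether the planner uses these depends on selectivity; EXPLAIN ANALYZE on the query above would show which scan it picks.]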
So I'm still unsure which is better: SSD, NVRAM (PCI card), or plain
RAM. And I'm eager to understand whether unlogged tables could help at
all.
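[Editor's note, not part of the original mail: an unlogged table (PostgreSQL 9.1+) skips WAL writes, which can speed up the daily bulk reimport considerably — but its contents are truncated after a crash, so it only fits data that can be rebuilt from the raw OSM input. A minimal sketch; the staging-table name is hypothetical:]

  -- Illustrative only: no WAL overhead during the bulk load,
  -- but the table is emptied on crash recovery.
  CREATE UNLOGGED TABLE osm_point_staging
    (LIKE osm_point INCLUDING ALL);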
Yours, Stefan