From: AJ Weber <aweber(at)comcast(dot)net>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: how to improve perf of 131MM row table?
Date: 2014-06-25 20:10:25
Message-ID: 53AB2CB1.8080201@comcast.net
Lists: pgsql-performance
Sorry for the semi-newbie question...
I have a relatively sizable PostgreSQL 9.0.2 DB with a few large tables
(keep in mind "large" is relative; I'm sure there are plenty larger out
there).
One of my queries that seems to be bogging down performance is a join
between two tables on each of their BIGINT PKs (so they have the default
unique constraint/PK indexes on them). One table is a detail table for
the other. The "master" has about 6MM rows. The detail table has about
131MM rows (table size = 17GB, index size = 16GB).
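For anyone wanting to reproduce the symptom, a diagnostic along these lines would show where the time goes (table and column names here are placeholders, not my actual schema; BUFFERS is available as of 9.0):

```sql
-- Hypothetical names: "master" and "detail" stand in for my real tables.
EXPLAIN (ANALYZE, BUFFERS)
SELECT m.id, d.*
FROM master m
JOIN detail d ON d.master_id = m.id   -- FK side of the detail table
WHERE m.id = 12345;
```

The plan output would say whether it's doing an index scan on the detail side or falling back to a sequential scan / hash join over all 131MM rows.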
I unfortunately have limited disks, so I can't actually spread the load
across multiple spindles, but I wonder if there is anything I can do
(should I partition the data, etc.) to improve performance? Maybe some
further tuning of my .conf, but I do think it's using as much memory as
I can spare right now (happy to send it along if it would help).
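If partitioning is the way to go: as I understand it, on 9.0 that means inheritance plus CHECK constraints rather than anything declarative. A minimal sketch, with made-up names and ranges just to illustrate:

```sql
-- Illustrative only: "detail"/"master_id" are placeholder names,
-- and the range boundaries are arbitrary.
CREATE TABLE detail_p0 (
    CHECK (master_id >= 0 AND master_id < 1000000)
) INHERITS (detail);

CREATE INDEX detail_p0_master_id_idx ON detail_p0 (master_id);

-- In postgresql.conf, so the planner can skip non-matching children:
--   constraint_exclusion = partition
```

Each child would need its own indexes and a trigger (or application logic) to route inserts, so it's only worth it if queries really do cluster by that key.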
DB is vacuumed nightly with stats updates enabled. I can send the
statistics info listed in pgAdmin tab if that would help.
Any suggestions, tips, tricks, links, etc. are welcomed!
Thanks in advance,
AJ