From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Joshua Marsh <icub3d(at)gmail(dot)com> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Large Database Performance suggestions |
Date: | 2004-10-22 03:37:00 |
Message-ID: | 2568.1098416220@sss.pgh.pa.us |
Lists: | pgsql-performance |
Joshua Marsh <icub3d(at)gmail(dot)com> writes:
> ... We did some initial testing with a server with 8GB of RAM and
> found we can do operations on data files of up to 50 million rows fairly well,
> but performance drops dramatically after that.
What you have to ask is *why* does it drop dramatically? There aren't
any inherent limits in Postgres that are going to kick in at that level.
I'm suspicious that you could improve the situation by adjusting
sort_mem and/or other configuration parameters; but there's not enough
info here to make specific recommendations. I would suggest posting
EXPLAIN ANALYZE results for your most important queries both in the size
range where you are getting good results, and the range where you are not.
Then we'd have something to chew on.
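[Editor's note: a minimal sketch of the kind of output being asked for, not part of the original mail. It assumes PostgreSQL 7.4-era settings, where sort_mem is specified in kB and can be raised per session; the 262144 value and the table/column names are placeholders, not recommendations from this thread.]

```sql
-- Raise sort_mem for this session only (7.4-era parameter, value in kB),
-- then capture the plan and per-node timings for one of the slow queries.
SET sort_mem = 262144;

-- Hypothetical query/table; substitute one of your own large queries and
-- run it once in the size range that performs well and once in the range
-- that does not, so the two plans can be compared.
EXPLAIN ANALYZE
SELECT customer_id, count(*)
FROM   transactions
GROUP  BY customer_id;
```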
regards, tom lane