From: Gavin Sherry <swm(at)alcove(dot)com(dot)au>
To: Joshua Marsh <icub3d(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large Database Performance suggestions
Date: 2004-10-22 03:29:57
Message-ID: Pine.LNX.4.58.0410221326570.31567@linuxworld.com.au
Lists: pgsql-performance
On Thu, 21 Oct 2004, Joshua Marsh wrote:
> Recently, we have found customers who are wanting to use our service
> with data files between 100 million and 300 million records. At that
> size, each of the three major tables will hold between 150 million and
> 700 million records. At this size, I can't expect it to run queries
> in 10-15 seconds (what we can do with 10 million records), but would
> prefer to keep them all under a minute.
To provide any useful information, we'd need to look at your table schemas
and sample queries.
The values for sort_mem and shared_buffers will also be useful.
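If you don't have the config file handy, psql will tell you directly (these are the 7.x parameter names; 8.0 renames sort_mem to work_mem):

    SHOW shared_buffers;
    SHOW sort_mem;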
Are you VACUUMing and ANALYZEing? (Or is the data read-only?)
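If the data does change, a regular pass like the following (say, from cron via psql) keeps the planner statistics current; the table name is just an example:

    VACUUM ANALYZE;                          -- whole database
    VACUUM VERBOSE ANALYZE customer_data;    -- one table, with progress output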
gavin