From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Nick <nboutelier(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: At what point does a big table start becoming too big?
Date: 2012-08-23 07:04:02
Message-ID: 5035D5E2.2040605@archidevsys.co.nz
Lists: pgsql-general
On 23/08/12 11:06, Nick wrote:
> I have a table with 40 million rows and haven't had any performance issues yet.
>
> Are there any rules of thumb as to when a table starts getting too big?
>
> For example, maybe if the index size is 6x the amount of ram, if the table is 10% of total disk space, etc?
>
>
I think it would be good to specify the context.
For example:
A database supporting a ship-based anti-missile system would have far
more stringent timing requirements than a database used to retrieve
scientific images based on complicated criteria.
The size of the records, how often they are updated/deleted, the types
of queries, ... would also be useful to know.
Unfortunately it might simply be a case of "It depends..."!
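That said, if you want concrete numbers to weigh against your RAM and
disk, something along these lines reports the on-disk sizes (just a
rough sketch -- 'mytable' is a placeholder for your own table name):

    -- heap size, total size of its indexes, and total including
    -- indexes and TOAST, for a hypothetical table "mytable"
    SELECT pg_size_pretty(pg_relation_size('mytable'))       AS table_size,
           pg_size_pretty(pg_indexes_size('mytable'))        AS index_size,
           pg_size_pretty(pg_total_relation_size('mytable')) AS total_size;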
Cheers,
Gavin