From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Joshua Tolley <eggyknap(at)gmail(dot)com>
Cc: Data Growth Pty Ltd <datagrowth(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Partitioning into thousands of tables?
Date: 2010-08-06 14:22:52
Message-ID: 18594.1281104572@sss.pgh.pa.us
Lists: pgsql-general
Joshua Tolley <eggyknap(at)gmail(dot)com> writes:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>> Is there any significant performance problem associated with partitioning
>> a table into 2500 sub-tables? I realise a table scan would be horrendous,
>> but what if all accesses specified the partitioning criteria "sid". Such
>> a scheme would be the simplest to maintain (I think) with the best
>> localisation of writes.
> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.
It's in the fine manual: see last para of
http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS
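For context, the scheme under discussion is the 8.4-era approach of table inheritance plus CHECK constraints. A minimal sketch of partitioning by "sid" (table and column names here are illustrative, not taken from the original thread):

```sql
-- Parent table; one child table is created per partition key value.
CREATE TABLE measurements (
    sid    integer NOT NULL,
    ts     timestamptz NOT NULL,
    value  numeric
);

-- One child per sid. With 2500 of these, the planner must examine
-- every child's CHECK constraint at plan time, which is why planning
-- cost grows with the partition count even when only one child
-- ultimately matches.
CREATE TABLE measurements_sid_1 (
    CHECK (sid = 1)
) INHERITS (measurements);

-- Constraint exclusion lets a query that specifies sid skip the
-- irrelevant children ('partition' is the 8.4 default setting).
SET constraint_exclusion = partition;
SELECT * FROM measurements WHERE sid = 1;
```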
regards, tom lane