Re: Performance on Bulk Insert to Partitioned Table

From: Charles Gomes <charlesrg(at)outlook(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Ondrej Ivanič <ondrej(dot)ivanic(at)gmail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance on Bulk Insert to Partitioned Table
Date: 2012-12-21 14:14:46
Message-ID: BLU002-W49D4893B3D847BB4CD7B4FAB360@phx.gbl
Lists: pgsql-performance

Tom, I may have to rethink it. I'm going to have about 100 million rows per day (5 days a week), roughly 2 billion per month. My point in partitioning was to be able to store 6 months of data on a single machine: about 132 daily partitions, for a total of roughly 13 billion rows.
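For reference, the usual way to avoid a 365-branch IF chain with daily partitions is a single routing trigger that builds the child table name dynamically. A minimal sketch, assuming a parent table `events` with children named `events_YYYYMMDD` (all table and column names here are hypothetical, not from this thread):

```sql
-- Hypothetical schema: parent table "events" partitioned by day into
-- children named events_YYYYMMDD (e.g. events_20121221).
CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
BEGIN
    -- Derive the child table name from the row's timestamp and insert
    -- dynamically, so one trigger body covers every daily partition.
    EXECUTE format('INSERT INTO %I SELECT ($1).*',
                   'events_' || to_char(NEW.event_time, 'YYYYMMDD'))
    USING NEW;
    RETURN NULL;  -- the row is stored in the child; suppress the parent insert
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_route
BEFORE INSERT ON events
FOR EACH ROW EXECUTE PROCEDURE events_insert_router();
```

Note that the dynamic EXECUTE costs extra work per row compared to a static IF chain, so for bulk loads at this volume, inserting (or COPYing) directly into the correct child table is typically faster than routing through the parent.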

----------------------------------------
> From: tgl(at)sss(dot)pgh(dot)pa(dot)us
> To: charlesrg(at)outlook(dot)com
> CC: ondrej(dot)ivanic(at)gmail(dot)com; pgsql-performance(at)postgresql(dot)org
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> Date: Thu, 20 Dec 2012 18:39:07 -0500
>
> Charles Gomes <charlesrg(at)outlook(dot)com> writes:
> > Using rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.
>
> You should probably rethink that plan anyway. The existing support for
> partitioning is not meant to support hundreds of partitions; you're
> going to be bleeding performance in a lot of places if you insist on
> doing that.
>
> regards, tom lane
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
