Re: Table performance with millions of rows (partitioning)

From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Robert Blayzor <rblayzor(dot)bulk(at)inoc(dot)net>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Table performance with millions of rows (partitioning)
Date: 2017-12-28 01:20:09
Message-ID: 20171228012009.GI4172@telsasoft.com
Lists: pgsql-performance

On Wed, Dec 27, 2017 at 07:54:23PM -0500, Robert Blayzor wrote:
> Question on large tables…
>
> When should one consider table partitioning vs. just stuffing 10 million rows into one table?

IMO, whenever the benefits (constraint exclusion, DROP instead of DELETE, or seq
scans limited to individual children) justify the minor administrative overhead
of partitioning. Note that partitioning may be implemented as direct insertion
into child tables, or may involve triggers or rules.
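To make that concrete, here is a minimal sketch of time-based partitioning
using the declarative syntax added in PostgreSQL 10 (the table and column
names are illustrative assumptions, not from this thread):

```sql
-- Illustrative only: monthly range partitioning of a CDR table
-- using PostgreSQL 10 declarative partitioning.
CREATE TABLE cdr (
    call_id    bigint,
    started_at timestamptz NOT NULL,
    duration   interval
) PARTITION BY RANGE (started_at);

-- One child per month; rows route to the matching child on insert.
CREATE TABLE cdr_2017_12 PARTITION OF cdr
    FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');
```

Before PostgreSQL 10, the same layout would be built with inherited child
tables carrying CHECK constraints, with inserts routed either by a trigger
on the parent or by the application writing directly to the correct child.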

> I currently have CDR’s that are injected into a table at the rate of over
> 100,000 a day, which is large.
>
> At some point I’ll want to prune these records out, so being able to just
> drop or truncate the table in one shot makes child table partitions
> attractive.

That's one of the major use cases for partitioning (DROP rather than DELETE,
thus avoiding the dead rows and the follow-up vacuum+analyze that a bulk
DELETE would leave behind).
https://www.postgresql.org/docs/10/static/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW
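As a sketch (reusing the illustrative `cdr` schema above, so the names are
assumptions), pruning a month of CDRs becomes a fast catalog operation
instead of a row-by-row delete:

```sql
-- Dropping a child partition removes the data in one shot, with no
-- dead tuples for vacuum to reclaim afterward.
DROP TABLE cdr_2017_12;

-- The equivalent on one big unpartitioned table: scans, deletes, and
-- bloats the table until vacuum catches up.
-- DELETE FROM cdr WHERE started_at < '2018-01-01';
```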

Justin
