Table performance with millions of rows

From: Robert Blayzor <rblayzor(dot)bulk(at)inoc(dot)net>
To: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Table performance with millions of rows
Date: 2017-12-28 00:54:23
Message-ID: 7DF18AB9-C4A4-4C28-957D-12C00FCB5F71@inoc.net
Lists: pgsql-performance

Question on large tables…

When should one consider table partitioning vs. just stuffing 10 million rows into one table?

I currently have CDRs being inserted into a table at a rate of over 100,000 per day, so the table grows quickly.

At some point I’ll want to prune these records out, so being able to just drop or truncate the table in one shot makes child table partitions attractive.
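Something like this is what I'm picturing with PostgreSQL 10's declarative range partitioning (table and column names here are just placeholders, not my actual schema):

CREATE TABLE cdr (
    call_id     bigint      NOT NULL,
    call_start  timestamptz NOT NULL,
    duration    integer,
    details     text
) PARTITION BY RANGE (call_start);

-- one child partition per month, created ahead of time (e.g. from cron)
CREATE TABLE cdr_2017_12 PARTITION OF cdr
    FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');
CREATE TABLE cdr_2018_01 PARTITION OF cdr
    FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');

-- pruning old data becomes a cheap metadata operation instead of a bulk DELETE
DROP TABLE cdr_2017_12;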

From a pure data warehousing standpoint, what are the do's and don'ts of keeping such large tables?

Other notes…
- This table is never updated, only appended to (CDRs)
- Right now a daily SQL job is called to delete records older than X days (costly, purging ~100k records at a time; roughly as sketched below)
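For reference, the daily purge is roughly equivalent to this (actual names and retention window differ):

-- each run has to locate and remove ~100k rows and leaves dead tuples for vacuum
DELETE FROM cdr WHERE call_start < now() - interval '90 days';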

--
inoc.net!rblayzor
XMPP: rblayzor.AT.inoc.net
PGP: https://inoc.net/~rblayzor/
