Re: performance expectations for table(s) with 2B recs

From: Marc Millas <marc(dot)millas(at)mokadb(dot)com>
To: David Gauthier <davegauthierpg(at)gmail(dot)com>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: performance expectations for table(s) with 2B recs
Date: 2021-12-09 11:38:18
Message-ID: CADX_1aa2pYr3eaqgF1KZu9eWW7t90eeCmuecGdjUrj-xeFZEow@mail.gmail.com
Lists: pgsql-general

Partitioning?
If you have some idea of how this data will be accessed, splitting those
big tables into partitions may help:
- vacuum is done at the partition level,
- indexes are partitioned too, so they are much easier to manage (REINDEX is
local to a given partition, so it's quite easy to reindex the whole thing one
partition at a time).

It's great for performance, too, IF the partition key is in the WHERE clause.
If not, Postgres will need to gather, aggregate, and sort the results from
each partition. Even then it can still be efficient if it allows parallel
execution.
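As a sketch of what that could look like (table and column names here are
purely illustrative, not from the original post), declarative range
partitioning with a partitioned index might be set up like this on PG 11:

```sql
-- Hypothetical schema: range-partition by a date column assumed to
-- appear in most WHERE clauses.
CREATE TABLE events (
    event_date date NOT NULL,
    device_id  integer,
    payload    varchar(16)
) PARTITION BY RANGE (event_date);

CREATE TABLE events_2021 PARTITION OF events
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');
CREATE TABLE events_2022 PARTITION OF events
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');

-- An index on the parent is created on each partition (PG 11+):
CREATE INDEX ON events (device_id);

-- Maintenance can then target one partition at a time:
VACUUM ANALYZE events_2021;
REINDEX TABLE events_2021;

-- A filter on the partition key lets the planner prune the
-- partitions that cannot match:
EXPLAIN SELECT count(*) FROM events WHERE event_date >= '2022-03-01';
```

With the partition key in the predicate, the EXPLAIN output should show only
the matching partitions being scanned.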

my 2 cents

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com

On Wed, Dec 8, 2021 at 8:45 PM David Gauthier <davegauthierpg(at)gmail(dot)com>
wrote:

> 11.5 on linux
> server = VM provided by our IT dept (IOW, can be grown if needed)
> DB itself is on NFS
>
> So far, the tables I have in my DB have relatively low numbers of records
> (most are < 10K, all are < 10M). Things have been running great in terms
> of performance. But a project is being brainstormed which may require some
> tables to contain a couple billion records.
>
> I'm familiar with the need to properly index the columns in these tables
> that will be used for joins and query performance (candidates are columns
> used in query predicates, etc...). Let's assume that's done right.
> And let's assume that the biggest table will have 20 varchars (<= 16 chars
> per) and 20 ints. No blobs or jsonb or anything like that.
>
> What else should I be worried about ?
>
> I suspect that part of why things are running really well so far is that
> the relatively small amounts of data in these tables end up in the DB
> cache, so disk I/O is kept to a minimum. Will that no longer be the case
> once queries start running on these big tables ?
>
> What about DBA stuff... vacuum and fragmentation and index maintenance,
> etc... ?
>
> I don't want to step into this completely blind. Any warnings/insights
> would be appreciated.
>
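For the row-width question quoted above, a rough back-of-the-envelope
estimate can be sanity-checked in SQL (the table name below is hypothetical,
and the byte figures are approximate assumptions, not measurements):

```sql
-- Rough estimate for a row of 20 ints + 20 short varchars:
--   20 integers x 4 bytes                       =  ~80 B
--   20 varchars (<= 16 chars, ~17 B each
--   including the short-string length byte)     = ~340 B
--   tuple header, item pointer, alignment       =  ~30 B
--   total                                       = ~450 B/row
-- so 2e9 rows is on the order of 0.9 TB of heap, before indexes.

-- On a real sample, per-row size can be measured directly:
SELECT avg(row_bytes) AS avg_row_bytes
FROM (
    SELECT pg_column_size(t.*) AS row_bytes
    FROM big_table t
    LIMIT 10000
) s;

-- And total on-disk size (heap + indexes + TOAST) for the table:
SELECT pg_size_pretty(pg_total_relation_size('big_table'));
```

Measuring a sample with pg_column_size is more reliable than arithmetic,
since actual varchar lengths and padding vary.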
