Re: How to do faster DML

From: Greg Sabino Mullane <htamfids(at)gmail(dot)com>
To: Lok P <loknath(dot)73(at)gmail(dot)com>
Cc: dwhite(at)seawardmoon(dot)com, Marcos Pegoraro <marcos(at)f10(dot)com(dot)br>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>, Francisco Olarte <folarte(at)peoplecall(dot)com>, haramrae(at)gmail(dot)com, hjp-pgsql(at)hjp(dot)at
Subject: Re: How to do faster DML
Date: 2024-02-04 15:47:45
Message-ID: CAKAnmm+=nS4d44Bn-sspXFA6HxQiUfnW2uCwHVr-fd9dCsbNUw@mail.gmail.com
Lists: pgsql-general

Partitioning will depend a lot on how you access the data. Is it always
using that transaction_date column in the WHERE clause? Can you share the
table definition via \d? We've talked about this table quite a bit, but we
have not actually seen what it looks like, so we are flying a little blind.
You mentioned your queries are slow: can you share an EXPLAIN ANALYZE of one
of those slow queries?
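
For example, something along these lines (substituting your real schema,
table, and one of your actual slow queries for the placeholder names here):

  \d myschema.mytable

  EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM mytable
    WHERE transaction_date >= '2024-01-01' AND transaction_date < '2024-01-02';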

45 billion rows is quite manageable. How many rows are there per day? You may
want a larger bucket than 24 hours per partition.
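
If, say, monthly partitions turn out to be a better fit, the setup would look
something like this (the table and column definitions here are only
placeholders):

  CREATE TABLE mytable (
      id bigint NOT NULL,
      transaction_date timestamptz NOT NULL,
      PRIMARY KEY (id, transaction_date)  -- the partition key must be part of the PK
  ) PARTITION BY RANGE (transaction_date);

  -- one partition per month instead of one per day:
  CREATE TABLE mytable_2024_02 PARTITION OF mytable
      FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');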

> 1) Even after partitioning the target table, to speed up the data load on
> this table, is there an option to disable the primary and foreign keys and
> re-enable them after the data load finishes? Will that be a good idea?
>

No.

> 3) As the size of the table, or of each partition, is going to be very large
> and this will be a read-intensive application, will compressing the
> historical partitions help us save storage space and also help read query
> performance?

I am not sure what this means. If you are not going to need the data
anymore, dump the data to deep storage and drop the partition.
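
Roughly, retiring an old partition looks like this (the partition name here
is just an example):

  -- detach the partition once queries no longer need it
  ALTER TABLE mytable DETACH PARTITION mytable_2023_01;

  -- archive it somewhere cheap, e.g.:
  --   pg_dump -Fc -t mytable_2023_01 mydb > mytable_2023_01.dump
  -- and then remove it:
  DROP TABLE mytable_2023_01;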

Cheers,
Greg

