Re: Big table and partition

From: Rajesh Kumar <rajeshkumar(dot)dba09(at)gmail(dot)com>
To: Ron <ronljohnsonjr(at)gmail(dot)com>
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Big table and partition
Date: 2023-10-16 01:53:36
Message-ID: CAJk5AtYPRfyhPfvkUVCq6=mPKe6i-wanrS6c4u1FNSB3WFPnmQ@mail.gmail.com
Lists: pgsql-admin

The timestamp is inside the jsonb column. The reason I am hesitant to
partition is that only two queries ever run against this table: 1) an INSERT
of the new block (no WHERE condition used), and 2) a SELECT that checks
whether that block was inserted, i.e. the most recently inserted row (max
blocknumber)...
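
Roughly, the workload is only this (the table and column names here are my
guesses, not the real definitions):

    INSERT INTO blocks (blocknumber, jsondata) VALUES ($1, $2);
    SELECT max(blocknumber) FROM blocks;  -- a btree index on blocknumber already serves this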

This is automatic.

Can I show them some stats from pg_stat_statements to demonstrate that the
table size is not already hurting performance?
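
For example, something like this could show it (a rough sketch; it assumes
the pg_stat_statements extension is enabled and the table is called "blocks"):

    SELECT query,
           calls,
           round(mean_exec_time::numeric, 2)  AS mean_ms,
           round(total_exec_time::numeric, 2) AS total_ms,
           rows
    FROM pg_stat_statements
    WHERE query ILIKE '%blocks%'         -- table name assumed
    ORDER BY total_exec_time DESC
    LIMIT 10;

If mean_ms stays flat while the table keeps growing, that is the argument.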

On Mon, 16 Oct, 2023, 2:37 AM Ron, <ronljohnsonjr(at)gmail(dot)com> wrote:

> On 10/15/23 15:46, Rajesh Kumar wrote:
>
> The columns are id, blocknumber and jsondata.
>
>
> No date column?
>
> How much json data?
>
>
> I have not been told anything about data retention; I will ask them.
>
> The problem I am trying to resolve is that 30GB is big
>
>
> Big is relative. 30GB in 2023 is pretty darned small.
>
> and it will keep growing forever. By doing this, I am making scans easier
> (despite having indexes).
>
> Growth will be about 10GB of data per year.
>
>
> That's less than a gigabyte per month.
>
>
> In case I do partition it, please provide a script that does the
> partitioning automatically. We are using PostgreSQL version 15.
>
>
> *Keep It Simple, Stanley* and just DELETE WHERE id < *nnnnnnn*. Do that
> every month.
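>
> In full, that monthly job might look something like this (the table name
> and the cutoff are placeholders, not the real ones):
>
>     DELETE FROM blocks             -- table name assumed
>     WHERE id < :cutoff_id;         -- psql variable for the retention boundary
>     VACUUM blocks;                 -- optional; lets the freed space be reused sooner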
>
>
> On Sun, 15 Oct, 2023, 8:07 PM Christophe Courtois, <
> christophe(dot)courtois(at)dalibo(dot)com> wrote:
>
>> Hi,
>>
>> On 14/10/2023 at 22:46, Rajesh Kumar wrote:
>> > In my environment, I have a 30GB table where data is inserted directly
>> > whenever a new block is created in the blockchain...
>> > I have been asked to partition it. I told my manager that since we are
>> > not using the past data at all, partitioning is not required. Instead
>> > we can stop the sync (downtime not required), rename the table, create
>> > a new table with the same structure under the original name, and keep
>> > only this month's data. Then resume the blockchain sync so that new
>> > data goes to the newly created table, and the old data simply stays as
>> > storage, since we never run any queries against the old data...
>>
>> In fact, you want to do manual partitioning.
>> Why reinvent the wheel?
>>
>> > Note: on a longer run, data will keep on coming in whenever new block
>> is
>> > created.
>> > Is that right ? Any suggestions ? Better options ?
>>
>> What problem do you have to solve? 30 GB is not that big, but how long
>> have you been storing data? Will you ever purge the table? What is the
>> main access key of your queries (critical for the partitioning key)?
>>
>> Without more information, I'd say that if you never read the past months
>> again, partition by month, or perhaps by year.
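>>
>> A minimal sketch of monthly range partitioning on PostgreSQL 15 (the
>> column names here are assumptions, and the timestamp would need its own
>> column, since today it only lives inside the jsonb):
>>
>>     CREATE TABLE blocks_part (
>>         id          bigint,
>>         blocknumber bigint NOT NULL,
>>         jsondata    jsonb,
>>         created_at  timestamptz NOT NULL DEFAULT now()
>>     ) PARTITION BY RANGE (created_at);
>>
>>     CREATE TABLE blocks_2023_10 PARTITION OF blocks_part
>>         FOR VALUES FROM ('2023-10-01') TO ('2023-11-01');
>>     CREATE TABLE blocks_2023_11 PARTITION OF blocks_part
>>         FOR VALUES FROM ('2023-11-01') TO ('2023-12-01');
>>
>> New partitions must be created ahead of time; an extension such as
>> pg_partman can automate that.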
>>
>> --
>> Christophe Courtois
>> Consultant DALIBO
>>
>
> --
> Born in Arizona, moved to Babylonia.
>
