Re: Big table and partition

From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Big table and partition
Date: 2023-10-14 21:06:05
Message-ID: 7a373e01-e93a-4059-93cb-cdcd484aa972@gmail.com
Lists: pgsql-admin

On 10/14/23 15:46, Rajesh Kumar wrote:
> Hi
>
> In my environment, I have a 30 GB table

That's not necessarily huge.

> where data is inserted directly whenever a new block is created in the
> blockchain...

How many records for that 30GB?

How many days (or months)?

Will the rate of insertion increase?

What retention is required?

> Sometimes a query runs to check whether the most recent block has been
> created.
>
> I have been asked to do partitioning. I told my manager that since we
> are not using the past data at all, partitioning is not required.
> Instead we can stop the sync (no database downtime required), rename
> the table, create a new table with the same structure under the
> original name, and keep only this month's data in it. Then resume the
> blockchain sync so that new data goes into the newly created table,
> while the old data simply sits there as storage, since we never query
> it...
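
For reference, a rough sketch of that rename-and-recreate idea (the
table name "blocks" and the column "created_at" are made up, so adjust
to your schema):

    -- Stop the blockchain sync first, then:
    BEGIN;
    ALTER TABLE blocks RENAME TO blocks_archive;
    -- Copy columns, indexes, defaults, and constraints from the old table.
    CREATE TABLE blocks (LIKE blocks_archive INCLUDING ALL);
    -- Seed the new table with only the current month's rows.
    INSERT INTO blocks
    SELECT * FROM blocks_archive
    WHERE  created_at >= date_trunc('month', now());
    COMMIT;
    -- Resume the sync; new rows now land in the fresh "blocks" table.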

I've been burned by the inefficient queries that resulted from having to
add a date column to the primary key (PostgreSQL requires the partition
key to be part of any primary key or unique constraint).  We
"departitioned" every table except the two that had large bytea columns
and stored up to 200 GB per month.
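
A hypothetical example of that rule (table and column names are
invented):

    CREATE TABLE events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    jsonb,
        PRIMARY KEY (id, created_at)  -- the partition key must be included
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2023_10 PARTITION OF events
        FOR VALUES FROM ('2023-10-01') TO ('2023-11-01');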

>
> Note: on a longer run, data will keep on coming in whenever new block is
> created.
>
> Is that right ? Any suggestions ? Better options ?

In my experience, PostgreSQL deletes based on an indexed date column are
pretty efficient.  Sometimes I delete a month of data at once, and
sometimes one day of data at a time.
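
A sketch of that kind of retention delete, again with invented names and
assuming an index on the date column (e.g. CREATE INDEX ON blocks
(created_at)):

    -- Drop a whole month of old rows in one statement:
    DELETE FROM blocks
    WHERE  created_at >= '2023-09-01'
    AND    created_at <  '2023-10-01';

    -- Or trim one day at a time to keep each transaction small:
    DELETE FROM blocks
    WHERE  created_at >= '2023-09-01'
    AND    created_at <  '2023-09-02';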

--
Born in Arizona, moved to Babylonia.
