From: | veem v <veema0000(at)gmail(dot)com> |
---|---|
To: | Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, Greg Sabino Mullane <htamfids(at)gmail(dot)com> |
Cc: | yudhi s <learnerdatabase99(at)gmail(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org> |
Subject: | Re: Moving delta data faster |
Date: | 2024-04-06 14:02:43 |
Message-ID: | CAB+=1TW0weW5XPkSdSjeY3nvmta-fxVEdwcMD1ySEhYz_fKs9Q@mail.gmail.com |
Lists: | pgsql-general |
On Fri, 5 Apr 2024 at 06:10, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
wrote:
>
> > S3 is not a database. You will need to be more specific about '...
> then
> > from the S3 it will be picked and gets merged to the target postgres
> > database.'
> >
> >
> > The data from S3 will be dumped into the stage table and then the
> > upsert/merge from that table to the actual table.
>
> The S3 --> staging table would be helped by having the data as CSV and
> then using COPY. The staging --> final table step could be done as
> either ON CONFLICT or MERGE, you would need to test in your situation to
> verify which works better.
>
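The staging → final step described above could be sketched as follows. This is a minimal illustration with a hypothetical `orders` table and columns; adjust names, the CSV source, and the conflict key to the actual schema. `ON CONFLICT` needs a unique constraint on the key column, and `MERGE` requires PostgreSQL 15 or later.

```sql
-- 1. S3 extract (as CSV) --> staging table, bulk-loaded with COPY:
COPY stage_orders (order_id, status, amount)
FROM '/path/to/delta.csv' WITH (FORMAT csv, HEADER true);

-- 2a. Staging --> final via upsert
--     (requires a unique constraint/index on orders.order_id):
INSERT INTO orders (order_id, status, amount)
SELECT order_id, status, amount FROM stage_orders
ON CONFLICT (order_id)
DO UPDATE SET status = EXCLUDED.status,
              amount = EXCLUDED.amount;

-- 2b. Staging --> final via MERGE (PostgreSQL 15+):
MERGE INTO orders t
USING stage_orders s ON t.order_id = s.order_id
WHEN MATCHED THEN
  UPDATE SET status = s.status, amount = s.amount
WHEN NOT MATCHED THEN
  INSERT (order_id, status, amount)
  VALUES (s.order_id, s.status, s.amount);
```

As Adrian notes, which of 2a/2b performs better depends on the workload, so both variants are worth benchmarking against the real data volumes.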
Just a thought: in case the delta changes are really large (say >30-40% of
the total number of rows in the table), could the OP also evaluate a
"truncate target table + reload target table" strategy here? Since
DDL/TRUNCATE is transactional in Postgres, it can be done online without
impacting ongoing read queries, and performance-wise it should be faster
than the traditional update/insert/upsert/merge approach.
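The truncate + reload strategy suggested above could look like the sketch below (hypothetical table names, same assumed schema as before). Because TRUNCATE is transactional, the whole swap commits atomically:

```sql
-- Hypothetical sketch: replace the full table contents atomically.
BEGIN;
TRUNCATE TABLE orders;
INSERT INTO orders (order_id, status, amount)
SELECT order_id, status, amount FROM stage_orders;
COMMIT;
```

One caveat worth testing: TRUNCATE takes an ACCESS EXCLUSIVE lock, so while transactions already reading the table are unaffected until they requery, new queries against `orders` will block until the COMMIT, which matters if the reload is slow.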