From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Lian Jiang <jiangok2006(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: speed up full table scan using psql
Date: 2023-06-01 15:03:53
Message-ID: 8bee33fc-9b8d-94d9-6fe3-256db822f75f@aklaver.com
Lists: pgsql-general
On 5/31/23 22:51, Lian Jiang wrote:
> The whole command is:
>
> (psql %(pg_uri)s -c %(sql)s | %(sed)s | %(pv)s | %(split)s) 2>&1 | %(tr)s
>
> where:
> sql is "copy (select row_to_json(x_tmp_uniq) from public.mytable
> x_tmp_uniq) to stdout"
> sed, pv, split, tr together format and split the stdout into jsonl files.
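Filled in, that presumably expands to something along these lines (the
connection URI, the sed/tr expressions, and the split size below are
illustrative guesses, not the actual values):

    # Illustrative expansion of the pipeline above:
    #   sed   - unescape the backslashes COPY doubles in text output (guess)
    #   pv    - progress/throughput meter on the stream
    #   split - cut the stream into ~1M-line JSONL chunks
    #   tr    - final character cleanup (guess)
    (psql "postgresql://user@host/mydb" \
         -c "copy (select row_to_json(x_tmp_uniq) from public.mytable x_tmp_uniq) to stdout" \
      | sed 's/\\\\/\\/g' | pv | split -l 1000000 - mytable_) 2>&1 | tr -d '\r'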
Well, that is quite the pipeline. At this point I think you need to do
some testing on your end. First, create a table that is a subset of the
original data to make testing a little quicker. Then break the process
down into smaller actions: start with a COPY directly to CSV and one
with row_to_json, to see if that makes a difference. Then COPY directly
to a file before applying the above pipeline. There are more ways you
can slice this depending on what the preceding shows you.
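Something along these lines to start (the sample size, file paths, and
names below are only placeholders):

    -- Subset table to make test runs quicker; the row count is arbitrary:
    create table mytable_sample as
        select * from public.mytable limit 100000;

Then, in psql with timing enabled, compare the two COPY variants:

    \timing on
    -- Plain CSV COPY:
    \copy (select * from mytable_sample) to '/tmp/sample.csv' with (format csv)
    -- Same rows through row_to_json:
    \copy (select row_to_json(t) from mytable_sample t) to '/tmp/sample.json'

If the row_to_json COPY is itself fast, feed the resulting file through
the sed/pv/split/tr stages one at a time to find the slow step.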
>
> Hope this helps.
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com