From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Lian Jiang <jiangok2006(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: speed up full table scan using psql
Date: 2023-05-31 21:50:10
Message-ID: 79615d08-0bbb-066b-6092-7f59703969bb@aklaver.com
Lists: pgsql-general
On 5/31/23 13:57, Lian Jiang wrote:
> The command is: psql $db_url -c "copy (select row_to_json(x_tmp_uniq)
> from public.mytable x_tmp_uniq) to stdout"
> postgres version: 14.7
> Does this mean COPY and the Java CopyManager may not help, since my
> psql command already uses COPY?
>
> Regarding pg_dump, it does not support a JSON output format, which
> means extra work is needed to convert one of the supported formats to
> jsonl (or parquet) so that the data can be imported into Snowflake.
> Still exploring, but I want to call this out early. Maybe the
> 'custom' format can be parquet?
Oops, I read this:
'...Using spark to read the postgres table...'
and missed that you are trying to load into Snowflake.
It seems Snowflake supports CSV as well:
https://docs.snowflake.com/en/user-guide/data-load-prepare
So the previous advice should still hold.
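In case it helps, a minimal sketch of that CSV route, reusing the
$db_url and public.mytable from your command above (the options are
the PostgreSQL 14 COPY options; adjust them to match whatever
Snowflake's CSV loader expects):

  # server-side COPY streamed to a local file as CSV with a header row
  psql $db_url -c "copy (select * from public.mytable) to stdout with (format csv, header)" > mytable.csv

The resulting file can then be staged and loaded on the Snowflake
side with PUT and COPY INTO.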
>
>
> Thanks
> Lian
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com