From: Scott Ribe <scott_ribe(at)elevated-dev(dot)com>
To: Chris Travers <chris(dot)travers(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: export to parquet
Date: 2020-08-26 19:29:48
Message-ID: 789D5942-1FBE-4727-8C7E-2EBEC8B0B08D@elevated-dev.com
Lists: pgsql-general
> On Aug 26, 2020, at 1:11 PM, Chris Travers <chris(dot)travers(at)gmail(dot)com> wrote:
>
> For simple exporting, the simplest thing is a single-node instance of Spark.
Thanks.
> You can read parquet files in Postgres using https://github.com/adjust/parquet_fdw if you so desire but it does not support writing as parquet files are basically immutable.
Yep, that's the next step. Well, really it is what I am interested in testing, but first I need my data in parquet format (and confirmation that it gets decently compressed).
| | From | Date | Subject |
|---|---|---|---|
| Next Message | George Woodring | 2020-08-26 19:39:10 | Re: export to parquet |
| Previous Message | Chris Travers | 2020-08-26 19:11:13 | Re: export to parquet |