From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: David Gauthier <davegauthierpg(at)gmail(dot)com>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: storing zipped SQLite inside PG ?
Date: 2021-12-22 05:23:55
Message-ID: CAKFQuwbyR6hGYHJuqCBhy1ZgxyJoTZj0svuMfYFXOxs9qQMeww@mail.gmail.com
Lists: pgsql-general
On Tue, Dec 21, 2021 at 10:06 PM David Gauthier <davegauthierpg(at)gmail(dot)com>
wrote:
> I'll have to read more about sqlite_fdw. Thanks for that, Steve!
>
> Each SQLite DB isn't that big (not billions of records); more like 30K
> records or so. But there are lots and lots of these SQLite DBs, which add
> up over time to perhaps billions of records.
>
> This is for a big corp with an IT dept. Maybe I can get them to upgrade
> the DB itself.
> Thank you too, David!
>
So, this is more similar to the image storage question than I first thought,
but still large enough that the specific usage patterns and needs end up
being the deciding factor. (Keep in mind you can pick multiple solutions, so
really old data, ideally in a partition, can be removed from the DB while
still remaining accessible, just more slowly or laboriously.)
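For the partition angle, a minimal sketch, assuming PostgreSQL 10+
declarative partitioning; the table and column names here are invented for
illustration:

    -- Hypothetical results table, range-partitioned by run date so old
    -- data lives in its own partition
    CREATE TABLE sim_results (
        run_date  date NOT NULL,
        test_name text NOT NULL,
        status    text
    ) PARTITION BY RANGE (run_date);

    CREATE TABLE sim_results_2021q4 PARTITION OF sim_results
        FOR VALUES FROM ('2021-10-01') TO ('2022-01-01');

    -- Later: pull the old quarter out of the live table in one cheap
    -- step; the detached table can be dumped, archived, or dropped
    ALTER TABLE sim_results DETACH PARTITION sim_results_2021q4;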
One possibility to consider: ditch the SQLite dependency and just store CSV
(but maybe with a funky delimiter sequence). You can then use
"string_to_table(...)" on that delimiter to materialize a table out of the
data right in a query.
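To make that concrete, a rough sketch, assuming PostgreSQL 14+ (which is
when string_to_table() appeared); the result_blobs table, the delimiters,
and the column names are all made up for illustration:

    -- One row per former SQLite DB; "payload" holds the delimited records
    CREATE TABLE result_blobs (
        id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        payload text NOT NULL
    );

    -- Split the payload on the record delimiter, then pull the fields
    -- apart on the (funky) field delimiter
    SELECT b.id,
           split_part(rec, '|~|', 1) AS test_name,
           split_part(rec, '|~|', 2) AS status
    FROM result_blobs AS b,
         string_to_table(b.payload, E'\x1e') AS rec;  -- ASCII record sep

The funky delimiters matter because plain commas and newlines can show up
inside the data itself; a control character like \x1e is unlikely to
collide.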
David J.