From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Nico Williams <nico(at)cryptonector(dot)com>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: WIP Incremental JSON Parser
Date: 2024-01-04 15:06:18
Message-ID: CA+TgmoYhd=Tg07oMimZBa94+w4fOAZyXu6L+f5GsBbFRMtbrGg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Jan 3, 2024 at 6:36 PM Nico Williams <nico(at)cryptonector(dot)com> wrote:
> On Tue, Jan 02, 2024 at 10:14:16AM -0500, Robert Haas wrote:
> > It seems like a pretty significant savings no matter what. Suppose the
> > backup_manifest file is 2GB, and instead of creating a 2GB buffer, you
> > create an 1MB buffer and feed the data to the parser in 1MB chunks.
> > Well, that saves 2GB less 1MB, full stop. Now if we address the issue
> > you raise here in some way, we can potentially save even more memory,
> > which is great, but even if we don't, we still saved a bunch of memory
> > that could not have been saved in any other way.
>
> You could also build a streaming incremental parser. That is, one that
> outputs a path and a leaf value (where leaf values are scalar values,
> `null`, `true`, `false`, numbers, and strings). Then if the caller is
> doing something JSONPath-like then the caller can probably immediately
> free almost all allocations and even terminate the parse early.
I think our current parser is event-based rather than streaming in that
sense, but it seems like this could easily be built on top of it, if
someone wanted to.
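
For illustration, here is a minimal sketch of what such a layer might look
like. The event names and the Python setting are invented for this example
(they are not PostgreSQL's actual jsonapi callback API): a generator
consumes the events an event-based parser would emit and yields
(path, leaf-value) pairs, so a JSONPath-like caller holds no allocations
beyond the current path and can stop iterating to terminate the parse early.

```python
def iter_leaves(events):
    """Turn event-based parser callbacks into a stream of (path, leaf) pairs.

    `events` is an iterable of tuples using hypothetical event names:
      ('start_object',), ('end_object',),
      ('start_array',), ('end_array',),
      ('key', name), ('scalar', value)
    Yields (path, value) where path is a tuple of object keys / array indices.
    """
    # One frame per open container: ['obj', current_key] or ['arr', next_index]
    frames = []

    def value_done():
        # A value just completed: bump the array index, or clear the object key.
        if frames:
            if frames[-1][0] == 'arr':
                frames[-1][1] += 1
            else:
                frames[-1][1] = None

    for ev in events:
        tag = ev[0]
        if tag == 'key':
            frames[-1][1] = ev[1]
        elif tag == 'scalar':
            yield tuple(f[1] for f in frames), ev[1]
            value_done()
        elif tag == 'start_object':
            frames.append(['obj', None])
        elif tag == 'start_array':
            frames.append(['arr', 0])
        else:  # 'end_object' / 'end_array'
            frames.pop()
            value_done()


def events_from(obj):
    """Demo helper: emit the event stream a parser would produce for `obj`."""
    if isinstance(obj, dict):
        yield ('start_object',)
        for k, v in obj.items():
            yield ('key', k)
            yield from events_from(v)
        yield ('end_object',)
    elif isinstance(obj, list):
        yield ('start_array',)
        for v in obj:
            yield from events_from(v)
        yield ('end_array',)
    else:
        yield ('scalar', obj)


leaves = list(iter_leaves(events_from({"a": {"b": 1}, "xs": [10, 20]})))
# leaves == [(('a', 'b'), 1), (('xs', 0), 10), (('xs', 1), 20)]
```

Because iter_leaves is a generator, a caller that has found what it wants
can simply stop consuming it, which gives the early-termination property
described above without any change to the underlying event-based parser.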
--
Robert Haas
EDB: http://www.enterprisedb.com