From: David Steele <david(at)pgmasters(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: backup manifests
Date: 2019-09-20 15:09:34
Message-ID: 146910fc-3df7-eeb4-ac5b-2f57a01f646f@pgmasters.net
Lists: pgsql-hackers
On 9/20/19 9:46 AM, Robert Haas wrote:
> On Thu, Sep 19, 2019 at 11:06 PM David Steele <david(at)pgmasters(dot)net> wrote:
>
>> My experience is that JSON is simple to implement and has already dealt
>> with escaping and data structure considerations. A home-grown solution
>> will be at least as complex but have the disadvantage of being non-standard.
>
> I think that's fair and just spent a little while investigating how
> difficult it would be to disentangle the JSON parser from the backend.
> It has dependencies on the following bits of backend-only
> functionality:
> - elog() / ereport(). Kind of a pain. We could just kill the program
> if an error occurs, but that seems a bit ham-fisted. Refactoring the
> code so that the error is returned rather than thrown might be the way
> to go, but it's not simple, because you're not just passing a string.
Seems to me we are overdue for elog()/ereport()-compatible error handling
in the frontend, plus memory contexts. It sucks to make that a prereq for
this project, but the longer we kick that can down the road...
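Just to sketch the shape of what I mean (made-up FE_TRY()/fe_error()
names, a single error frame, nothing like a final design):

#include <setjmp.h>
#include <stdio.h>

/* Hypothetical frontend error state -- a single frame, no nesting */
static jmp_buf fe_error_jmp;
static char fe_error_msg[256];

#define FE_TRY() (setjmp(fe_error_jmp) == 0)

/* Record an error message and jump back to the nearest FE_TRY() */
static void
fe_error(const char *msg)
{
    snprintf(fe_error_msg, sizeof(fe_error_msg), "%s", msg);
    longjmp(fe_error_jmp, 1);
}

int
main(void)
{
    if (FE_TRY())
        fe_error("manifest parse failed");  /* stand-in for real work */
    else
        fprintf(stderr, "error: %s\n", fe_error_msg);
    return 0;
}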
> https://www.pgcon.org/2013/schedule/events/595.en.html
This talk was good fun. The largest number of tables we've seen is a
few hundred thousand, but that still adds up to more than a million
files to back up.
> This gets at another problem that I just started to think about. If
> the file is just a series of lines, you can parse it one line at a
> time and do something with that line, then move on. If it's a JSON
> blob, you have to parse the whole file and get a potentially giant
> data structure back, and then operate on that data structure. At
> least, I think you do.
JSON can definitely be parsed incrementally, but for practical reasons
certain structures work better than others.
> There's probably some way to create a callback
> structure that lets you presuppose that the toplevel data structure is
> an array (or object) and get back each element of that array (or
> key/value pair) as it's parsed, but that sounds pretty annoying to get
> working.
And that's how we do it. It's annoying and, yeah, it's complicated, but
it is very fast and memory-efficient.
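The interface boils down to something like this -- a simplified sketch
with made-up names, not the actual pgBackRest code:

#include <stdio.h>

/* Simplified sketch of a SAX-style callback interface */
typedef struct JsonCallbacks
{
    void (*objectBegin)(void *state);
    void (*objectEnd)(void *state);
    void (*key)(void *state, const char *name);
    void (*scalar)(void *state, const char *value);
} JsonCallbacks;

/*
 * Stand-in driver: a real parser reads the input in chunks and fires a
 * callback as each token completes, so only the current token and the
 * caller's state are ever held in memory -- never the whole document.
 */
static void
jsonParseStream(const JsonCallbacks *cb, void *state)
{
    cb->objectBegin(state);
    cb->key(state, "base/1/1259");
    cb->scalar(state, "16384");
    cb->objectEnd(state);
}

/* Example consumer: count keys instead of building a whole DOM */
static void onBegin(void *state) { (void) state; }
static void onEnd(void *state) { (void) state; }
static void onKey(void *state, const char *name) { (void) name; ++*(int *) state; }
static void onScalar(void *state, const char *value) { (void) state; (void) value; }

int
main(void)
{
    JsonCallbacks cb = {onBegin, onEnd, onKey, onScalar};
    int keys = 0;

    jsonParseStream(&cb, &keys);
    printf("keys seen: %d\n", keys);
    return 0;
}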
> Or we could just decide that you have to have enough memory
> to hold the parsed version of the entire manifest file in memory all
> at once, and if you don't, maybe you should drop some tables or buy
> more RAM.
I assume you meant "un-parsed" here?
> That still leaves you with bypassing the 1GB size limit on
> StringInfo, maybe by having a "huge" option, or perhaps by
> memory-mapping the file and then making the StringInfo point directly
> into the mapped region. Perhaps I'm overthinking this and maybe you
> have a simpler idea in mind about how it can be made to work, but I
> find all this complexity pretty unappealing.
Our String object has the same 1GB limit, partly because it works and
saves a bit of memory per object, but also because if we find ourselves
exceeding that limit we know we've probably made a design error.
Parsing in a stream means that you only need to store the final in-memory
representation of the manifest, which can be much more compact. Yeah,
it's complicated, but the memory and time savings are worth it.
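To give a sense of scale, the per-file record only needs a handful of
fields. A simplified sketch (not our actual struct):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Simplified sketch of a compact per-file manifest entry */
typedef struct ManifestFile
{
    const char   *name;          /* file name, stored once in a string pool */
    uint64_t      size;          /* file size in bytes */
    time_t        timestamp;     /* last modification time */
    unsigned char checksum[20];  /* binary SHA-1, not the hex string */
} ManifestFile;

int
main(void)
{
    /* ~48 bytes per entry on a typical 64-bit build, plus the names */
    printf("%zu bytes per entry\n", sizeof(ManifestFile));
    return 0;
}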
Note that our Perl implementation took the naive approach and has worked
pretty well for six years, but it can choke on really large manifests with
out-of-memory errors. Overall, I'd say getting the format right is more
important than having a perfect initial implementation.
> Here's a competing proposal: let's decide that lines consist of
> tab-separated fields. If a field contains a \t, \r, or \n, put a " at
> the beginning, a " at the end, and double any " that appears in the
> middle. This is easy to generate and easy to parse. It lets us
> completely ignore encoding considerations. Incremental parsing is
> straightforward. Quoting will rarely be needed because there's very
> little reason to create a file inside a PostgreSQL data directory that
> contains a tab or a newline, but if you do it'll still work. The lack
> of quoting is nice for humans reading the manifest, and nice in terms
> of keeping the manifest succinct; in contrast, note that using JSON
> doubles every backslash.
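The escaping rule itself is simple enough -- here's a rough sketch of the
writer side, assuming a hypothetical field_write() helper:

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper: write one field using the quoting rule described
 * above -- quote only when the field contains \t, \r, or \n, doubling
 * any embedded quote characters.
 */
static void
field_write(FILE *out, const char *field)
{
    if (strpbrk(field, "\t\r\n") == NULL)
    {
        fputs(field, out);
        return;
    }

    fputc('"', out);
    for (const char *p = field; *p; p++)
    {
        if (*p == '"')
            fputc('"', out);    /* double embedded quotes */
        fputc(*p, out);
    }
    fputc('"', out);
}

int
main(void)
{
    field_write(stdout, "base/1/1259");
    fputc('\t', stdout);
    field_write(stdout, "odd\tname\"here");
    fputc('\n', stdout);
    return 0;
}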
But there's other information you'll want to store that is not strictly
file info, so you need a way to denote that. It gets complicated quickly.
> I hear you saying that this is going to end up being just as complex
> in the end, but I don't think I believe it. It sounds to me like the
> difference between spending a couple of hours figuring this out and
> spending a couple of months trying to figure it out and maybe not
> actually getting anywhere.
Maybe the initial implementation will be easier, but I am confident we'll
pay for it down the road. Also, don't we want users to be able to read
this file? Do we really want them to need to cook up a custom parser in
Perl, Go, Python, etc.?
--
-David
david(at)pgmasters(dot)net