From: Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>
To: stepan rutz <stepan(dot)rutz(at)gmx(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Detoasting optionally to make Explain-Analyze less misleading
Date: 2023-09-12 12:25:40
Message-ID: CAEze2WhXYCJ8=ZBTbRTp9rGbvju9LK6UO7hodECKWgU-_a==aw@mail.gmail.com
Lists: pgsql-hackers
On Tue, 12 Sept 2023 at 12:56, stepan rutz <stepan(dot)rutz(at)gmx(dot)de> wrote:
>
> Hi,
>
> I have fallen into this trap and others have too. If you run
> EXPLAIN (ANALYZE), no detoasting happens. This makes query runtimes
> differ a lot. The bigger point is that the average user expects more
> from EXPLAIN (ANALYZE) than what it provides. This can be surprising.
> You can force detoasting during EXPLAIN with explicit calls to
> length(), but that is tedious. Those of us who are forced to work
> with Java stacks and ORMs, and still store mostly documents, fall
> into this trap sooner or later. I have already received some good
> feedback on this one, so this is an issue that bothers quite a few
> people out there.
Yes, the inability to see the impact of detoasting (among other costs)
in EXPLAIN (ANALYZE) can hide performance issues.
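For illustration, the trap and the length() workaround mentioned above
might look like this (the table and column names here are made up for
the example):

```sql
-- Hypothetical table with a large, toastable column; values over
-- roughly 2 kB end up in the TOAST table.
CREATE TABLE docs (id int, body text);

-- Reported timing may exclude the cost of detoasting "body":
EXPLAIN (ANALYZE) SELECT body FROM docs;

-- Workaround: length() must detoast the value to count characters,
-- so this forces the detoasting cost into the measurement.
EXPLAIN (ANALYZE) SELECT length(body) FROM docs;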
> It would be great to get some feedback on the subject and how to address
> this, maybe in totally different ways.
Hmm, maybe we should measure the overhead of serializing the tuples instead.
The difference between your patch and "serializing the tuples, but not
sending them" is that serializing not only does the detoasting, but
also includes any time spent in the serialization functions of the
types involved. So an option "SERIALIZE", which measures all the time
the server spent on the query (except the final step of sending the
bytes to the client), would likely be more useful than "just"
detoasting.
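A sketch of how such an option might be invoked (hypothetical syntax
at the time of this discussion, not a committed feature):

```sql
-- Measure detoasting plus the per-type output/serialization work,
-- without actually transferring the result rows:
EXPLAIN (ANALYZE, SERIALIZE) SELECT body FROM docs;
```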
> 0001_explain_analyze_and_detoast.patch
I notice that this patch creates and destroys a memory context for
every tuple the DestReceiver receives. That is quite wasteful: you
should be able to create a single memory context up front and reset it
before (or after) each processed tuple. That would also reduce the
difference in measurements between EXPLAIN and normal query processing
of the tuples; after all, we don't create a new memory context for
every tuple in the normal DestRemote receiver either, right?
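A minimal sketch of that create-once/reset-per-tuple pattern, using
PostgreSQL's MemoryContext API (the receiver struct and callback names
here are illustrative, not the patch's actual code; it assumes the
server headers):

```c
#include "postgres.h"
#include "utils/memutils.h"

typedef struct MyReceiverState
{
    MemoryContext tmpcxt;   /* created once, reused for every tuple */
} MyReceiverState;

/* Startup callback: create the context a single time. */
static void
my_startup(MyReceiverState *state)
{
    state->tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                                          "tuple processing",
                                          ALLOCSET_DEFAULT_SIZES);
}

/* Per-tuple callback: reset the context instead of create/destroy. */
static void
my_receive(MyReceiverState *state)
{
    MemoryContextReset(state->tmpcxt);
    MemoryContext oldcxt = MemoryContextSwitchTo(state->tmpcxt);

    /* ... detoast / process the tuple's attributes here ... */

    MemoryContextSwitchTo(oldcxt);
}

/* Shutdown callback: destroy the context once at the end. */
static void
my_shutdown(MyReceiverState *state)
{
    MemoryContextDelete(state->tmpcxt);
}
```

Resetting an AllocSet context is much cheaper than creating and
deleting one, which is why the per-tuple reset keeps the EXPLAIN
measurement closer to normal executor behavior.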
Kind regards,
Matthias van de Meent