From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Sami Imseih <samimseih(at)gmail(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, Bykov Ivan <i(dot)bykov(at)modernsys(dot)ru>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Query ID Calculation Fix for DISTINCT / ORDER BY and LIMIT / OFFSET
Date: 2025-03-12 00:51:06
Message-ID: Z9Daem3nZEyUnqTx@paquier.xyz
Lists: pgsql-hackers
On Tue, Mar 11, 2025 at 05:35:10PM -0500, Sami Imseih wrote:
> I have not benchmarked the overhead, so maybe there is not much to
> be concerned about. However, it just seems to me that tracking the extra
> data for all cases just to only deal with corner cases does not seem like the
> correct approach. This is what makes variant A the most attractive
> approach.
I suspect that the overhead will be minimal for all the approaches I'm
seeing on this thread, but it would not hurt to double-check that.
Since the cost of a single query jumbling is negligible compared to
the overall query processing, the fastest method I've used in this
area is a micro-benchmark with a hardcoded loop in JumbleQuery(),
with some rusage calls to gather a few more metrics.  This exaggerates
the cost of query jumbling, but it is good enough to see a difference
once you take the average time per loop iteration.
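For reference, here is a minimal, self-contained sketch of that
micro-benchmark pattern: run the computation of interest in a
hardcoded loop and use getrusage() to average the per-iteration cost.
fake_jumble() is a hypothetical stand-in for the real work done by
JumbleQuery(); in an actual test the loop and rusage calls would be
hacked directly into the backend around the jumbling code instead.

/*
 * Sketch of a rusage-based micro-benchmark: loop over the work being
 * measured and report the averaged per-iteration user time.
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/resource.h>

#define NLOOPS 1000000

/* Hypothetical stand-in for the work done by JumbleQuery(). */
static uint64_t
fake_jumble(uint64_t seed)
{
	/* Cheap hash-like mixing, just to have something to measure. */
	for (int i = 0; i < 64; i++)
		seed = seed * 1099511628211ULL + 0x9E3779B97F4A7C15ULL;
	return seed;
}

/* Convert the user-time part of a rusage sample to seconds. */
static double
user_seconds(const struct rusage *ru)
{
	return ru->ru_utime.tv_sec + ru->ru_utime.tv_usec / 1e6;
}

int
main(void)
{
	struct rusage before, after;
	uint64_t	sink = 0;

	getrusage(RUSAGE_SELF, &before);
	for (long i = 0; i < NLOOPS; i++)
		sink ^= fake_jumble((uint64_t) i);
	getrusage(RUSAGE_SELF, &after);

	/* "sink" is printed so the compiler cannot optimize the loop away. */
	printf("sink=%llu, avg per loop: %.3f ns\n",
		   (unsigned long long) sink,
		   (user_seconds(&after) - user_seconds(&before)) * 1e9 / NLOOPS);
	return 0;
}

Comparing the averaged per-loop time with and without a patch applied
is what shows whether a jumbling change adds measurable overhead.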
--
Michael