From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Simon Riggs <simon(at)2ndquadrant(dot)com>, Noah Misch <noah(at)leadboat(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: ALTER TABLE lock strength reduction patch is unsafe |
Date: | 2012-01-03 18:13:46 |
Message-ID: | CA+TgmoZf5kktkvo8bGf8rEyNzL_OU6iHy8D0Q=tq3mgrWsqv-g@mail.gmail.com |
Lists: | pgsql-hackers |
On Tue, Jan 3, 2012 at 12:55 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
>> That was acceptable to *me*, so I didn't try measuring using just SnapshotNow.
>
>> We can do a lot of tests, but at the end it's a human judgement. Are
>> 100% correct results from catalog accesses worth having even when the
>> real-world speed of it is not especially good? (Whether it's x1000000
>> times slower or not is not relevant if it is still fast enough.)
>
> That argument is just nonsense AFAICT. Yes, 2.5 s to drop 10000
> functions is probably fine, but that is an artificial test case whose
> only real interest is to benchmark what a change in SnapshotNow scans
> might cost us. In the real world it's hard to guess what fraction of a
> real workload might consist of such scans, but I suspect there are some
> where a significant increase in that cost might hurt. So in my mind the
> point of the exercise is to find out how much the cost increased, and
> I'm just baffled as to why you won't use a benchmark case to, um,
> benchmark.
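For reference, the artificial test case Tom mentions (timing the drop of 10000 functions) can be sketched roughly as follows. This only generates the workload files; the database name "bench" and the exact function bodies are assumptions, not taken from the thread:

```shell
# Sketch of the artificial benchmark under discussion: create 10000
# trivial SQL functions, then time dropping them, a workload dominated
# by SnapshotNow catalog scans. Generation only; running it against a
# cluster (database "bench" is an assumption) is commented out below.
for i in $(seq 1 10000); do
  echo "CREATE FUNCTION f$i() RETURNS int LANGUAGE sql AS 'SELECT $i';"
done > create_funcs.sql
for i in $(seq 1 10000); do
  echo "DROP FUNCTION f$i();"
done > drop_funcs.sql
# psql -d bench -q -f create_funcs.sql
# time psql -d bench -q -f drop_funcs.sql   # compare before/after the patch
```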
Ditto.
> Another point that requires some thought is that switching SnapshotNow
> to be MVCC-based will presumably result in a noticeable increase in each
> backend's rate of wanting to acquire snapshots. Hence, more contention
> in GetSnapshotData can be expected. A single-threaded test case doesn't
> prove anything at all about what that might cost under load.
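A rough way to expose that kind of contention, as a sketch only (the database name "bench" and the client counts are assumptions): run the same trivial query at increasing client counts with pgbench and see whether per-client throughput degrades as snapshot acquisition becomes contended.

```shell
# Sketch: a single-threaded run hides GetSnapshotData contention, so
# compare throughput as the client count grows. The "bench" database
# and the specific client counts are assumptions.
echo 'SELECT 1;' > tiny.sql
# With a running cluster, uncomment:
# for c in 1 8 32 64; do
#   pgbench -d bench -n -f tiny.sql -c "$c" -j 4 -T 30
# done
```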
This is obviously true at some level, but I'm not sure that it really
matters. It's not that difficult to construct a test case where we
have lots of people concurrently reading a table, or reading many
tables, or writing a table, or writing many tables, but what kind of
realistic test case involves enough DDL for any of this to matter? If
you're creating or dropping tables, for example, the filesystem costs
are likely to be a much bigger problem than GetSnapshotData(), to the
point where you probably can't get enough concurrency for
GetSnapshotData() to matter. Maybe you could find a problem case
involving creating or dropping lots and lots of functions
concurrently, or something like that, but who does that? You'd have
to be performing operations on hundreds of non-table SQL objects per
second, and it is hard for me to imagine why anyone would be doing
that. Maybe I'm not imaginative enough?
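If someone did want to construct the concurrent-function-churn case described above, one hedged sketch would be a pgbench custom script in which each client repeatedly creates and drops its own function (the database name "bench" and the client settings are assumptions; `:client_id` is pgbench's built-in per-client variable):

```shell
# Hypothetical stress test for concurrent non-table DDL: each pgbench
# client creates and drops a uniquely named function in a loop.
# "bench" and the -c/-j/-T settings are assumptions.
cat > ddl_churn.sql <<'EOF'
CREATE FUNCTION churn_:client_id() RETURNS int LANGUAGE sql AS 'SELECT 1';
DROP FUNCTION churn_:client_id();
EOF
# With a running cluster, uncomment:
# pgbench -d bench -n -f ddl_churn.sql -c 32 -j 8 -T 60
```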
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company