From: Hannu Krosing <hkrosing(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Josh Berkus <josh(at)agliodbs(dot)com>
Cc: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Why we lost Uber as a user
Date: 2016-08-01 12:31:09
Message-ID: 9dfc4e1f-05dd-8067-5306-3a1273b31590@2ndQuadrant.com
Lists: pgsql-hackers
On 07/27/2016 12:07 AM, Tom Lane wrote:
>
>> 4. Now, update that small table 500 times per second.
>> That's a recipe for runaway table bloat; VACUUM can't do much because
>> there's always some minutes-old transaction hanging around (and SNAPSHOT
>> TOO OLD doesn't really help, we're talking about minutes here), and
>> because of all of the indexes HOT isn't effective.
> Hm, I'm not following why this is a disaster. OK, you have circa 100%
> turnover of the table in the lifespan of the slower transactions, but I'd
> still expect vacuuming to be able to hold the bloat to some small integer
> multiple of the minimum possible table size. (And if the table is small,
> that's still small.) I suppose really long transactions (pg_dump?) could
> be pretty disastrous, but there are ways around that, like doing pg_dump
> on a slave.
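For concreteness, a minimal sketch of the kind of workload being described (table and column names are made up for illustration, not taken from Uber's setup): a small table with secondary indexes on the columns being updated, so HOT cannot apply, driven at a high update rate.

-- Hypothetical small, heavily indexed table updated hundreds of times per second.
CREATE TABLE hot_counters (
    id         int          PRIMARY KEY,
    value      int          NOT NULL,
    updated_at timestamptz  NOT NULL
);
-- Indexes on the updated columns defeat HOT, so every UPDATE creates
-- new index entries as well as a new heap tuple version.
CREATE INDEX ON hot_counters (value);
CREATE INDEX ON hot_counters (updated_at);

-- Issued ~500 times per second by the application:
UPDATE hot_counters
   SET value = value + 1,
       updated_at = now()
 WHERE id = 42;

-- Dead-tuple accumulation can be watched with:
SELECT n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'hot_counters';

As long as some minutes-old snapshot exists, n_dead_tup keeps climbing even though autovacuum runs regularly, because VACUUM cannot remove versions that snapshot might still need.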
Is there any theoretical obstacle that would make it impossible to
teach VACUUM not to hold back the whole vacuum horizon, but instead
to preserve only the tuple versions still visible to a single
long-running REPEATABLE READ transaction?
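To make the behaviour I mean concrete, a sketch using the hypothetical table from above (two sessions):

-- Session 1: a long-running REPEATABLE READ transaction.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM hot_counters;   -- first query takes the snapshot it keeps
-- ... transaction stays open for several minutes ...

-- Session 2: that backend's xmin now caps the oldest xmin VACUUM may use,
-- for every table, not just the ones the transaction has touched.
SELECT pid, state, backend_xmin
  FROM pg_stat_activity
 WHERE backend_xmin IS NOT NULL
 ORDER BY age(backend_xmin) DESC;

VACUUM VERBOSE hot_counters;
-- reports "N dead row versions cannot be removed yet": those are the
-- versions kept around solely for that one old snapshot.

The question is whether VACUUM could remove everything except the versions that one snapshot can actually see, rather than keeping the entire chain of versions newer than its xmin.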
--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic Ltd