From: "Shridhar Daithankar" <shridhar_daithankar(at)persistent(dot)co(dot)in>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: possible vacuum improvement?
Date: 2002-09-03 07:07:15
Message-ID: 3D74ACFB.31774.4BD40BC3@localhost
Lists: pgsql-hackers
On 3 Sep 2002 at 8:55, Mario Weilguni wrote:
> I know everyone is busy with the 7.3 beta, but maybe this is something to think
> of before releasing it. Currently VACUUM will vacuum every table, but sometimes
> it's desirable to leave tables untouched because they're mostly static or
> protocol tables. In my case this would be pg_largeobject, which is around 4GB of
> data, while the other tables are ~40MB. Vacuuming the data is important, but the
> large object table rarely changes. The same goes for a protocol table which is
> around 1GB and is never changed besides INSERTs, so it just grows but never
> needs vacuum. VACUUM on the 4GB table takes a long, long time with no
> improvement; it just hurts performance and fills OS buffers.
>
> If pg_class had a field for storing misc flags (e.g. a bitfield), this would
> allow setting a flag like NO_AUTO_VACUUM and modifying the vacuum code to leave
> those tables untouched unless vacuumed by hand. Maybe there are other uses for
> such a bitfield too, and it would help avoid an initdb for simple improvements.
>
> Any comments?
I suggest vacuuming only the tables that change. Further, I believe
updates/deletes should be watched for performance, as they cause dead tuples. Of
course inserts impact statistics and should be monitored, but something like a
log table does not need vacuuming that often.
In short, knowing the application load can help a lot in tuning the DB.
I was running a banking simulation for benchmarking. I know that the accounts
table gets updated on each transaction, but the log table only gets inserts. So
rather than vacuuming the entire DB, just doing 'vacuum analyze accounts' gave me
almost the same results as a full vacuum.
Performance was far better with vacuuming: without any vacuum I got something
like 50 tps over 80K transactions; with 'vacuum analyze accounts' every 5K
transactions I got 200 tps.
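The bookkeeping behind this is trivial; a minimal sketch in Python of the idea
(table names and thresholds are illustrative, and the statements would of course
be issued over a real database connection rather than returned):

```python
# Sketch: vacuum only the "hot" tables once their update/delete count
# passes a per-table threshold. Tables without a threshold (e.g. an
# insert-only log table) are never vacuumed automatically.

def due_for_vacuum(txn_counts, thresholds):
    """Return VACUUM ANALYZE statements for tables whose update/delete
    count has reached their per-table threshold."""
    stmts = []
    for table, count in txn_counts.items():
        if count >= thresholds.get(table, float("inf")):
            stmts.append("VACUUM ANALYZE %s;" % table)
    return stmts

# accounts is updated on every transaction; log is insert-only, so it
# has no threshold and is skipped.
print(due_for_vacuum({"accounts": 5000, "log": 80000},
                     {"accounts": 5000}))
# -> ['VACUUM ANALYZE accounts;']
```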
Personally I would prefer to have a trigger on a metadata table where I could
trigger vacuuming a particular table every n transactions (oh, it would be great
if that vacuum ran in the background without blocking the metadata table... just
a wishlist...). Can anybody tell me on which table I could write such a trigger?
I went through pg_* for some time but didn't find what I was looking for.
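As far as I know there is no system catalog you can attach a trigger to, and
VACUUM cannot run inside a transaction block anyway, so the trigger can only
bump a counter in an ordinary user table; a separate process then polls that
table and issues the VACUUM. A rough sketch of the split, with all names
(vacuum_counter, bump_counter, accounts) purely illustrative:

```python
# Sketch only: the trigger side is plain SQL/plpgsql kept here as a
# string for reference; the polling side is the decision logic an
# external maintenance process would run against the counter table.

COUNTER_DDL = """
CREATE TABLE vacuum_counter (tablename name PRIMARY KEY, n int DEFAULT 0);

CREATE FUNCTION bump_counter() RETURNS trigger AS '
    UPDATE vacuum_counter SET n = n + 1 WHERE tablename = ''accounts'';
    RETURN NEW;
' LANGUAGE 'plpgsql';

CREATE TRIGGER accounts_count AFTER UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE bump_counter();
"""

def tables_to_vacuum(counter_rows, threshold=5000):
    """counter_rows: list of (tablename, n) as read from vacuum_counter.
    Returns the tables the external process should vacuum (and whose
    counters it should then reset)."""
    return [t for t, n in counter_rows if n >= threshold]
```

The external process would sleep, SELECT from vacuum_counter, vacuum whatever
this returns, and reset the counters, so the VACUUM never blocks the trigger's
own transaction.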
Bye
Shridhar
--
Reisner's Rule of Conceptual Inertia: If you think big enough, you'll never
have to do it.