From: Henrik <henke(at)mac(dot)se>
To: Bill Moran <wmoran(at)collaborativefusion(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Database growing. Need autovacuum help.
Date: 2008-06-03 13:41:21
Message-ID: 9881295B-C31D-48BE-BD2D-84FEC6FEC9F0@mac.se
Lists: pgsql-general
3 jun 2008 kl. 15.23 skrev Bill Moran:
> In response to Henrik <henke(at)mac(dot)se>:
>>
>> We are running a couple of 8.3.1 servers and they are growing a lot.
>>
>> I have the standard autovacuum settings from the 8.3.1 installation
>> and we are inserting about 2-3 million rows every night and cleaning
>> out just as many every day.
>
> Is this a batch job? If so, autovac might not be your best friend
> here. There _are_ still some cases where autovac isn't the best
> choice. If you're doing a big batch job that deletes or updates a
> bunch of rows, you'll probably be better off making a manual vacuum
> the last step of that batch job. Remember that you can vacuum
> individual tables.
Well, sort of. We have different jobs that usually run at night,
filling the database with document information. After that is done we
have maintenance jobs that clean out old versions of those documents.
Maybe autovacuum is not for us, at least on this table. I know that it
is one specific table that has most of the bloat.
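Following your suggestion, a minimal sketch of what we could tack onto the end of the nightly cleanup job (assuming tbl_file_folder is the table that bloats, which is my guess from the numbers below):

```sql
-- Last step of the nightly cleanup job: vacuum just the table the
-- batch churns, instead of relying on autovacuum to catch it.
-- A plain VACUUM reclaims dead rows for reuse; ANALYZE refreshes
-- the planner statistics after the big delete.
VACUUM ANALYZE tbl_file_folder;
```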
>
>
>> The database size rose to 80GB but after a dump/restore it's only 16GB
>> which shows that there was nearly 65GB of bloat in the database.
>
> Does it keep growing beyond 80G? While 65G may seem like a lot of
> bloat, it may be what your workload needs as working space. I mean,
> you _are_ talking about shifting around 2-3 million rows/day.
>
> Crank up the logging. I believe the autovac on 8.3 can be configured
> to log exactly what tables it operates on ... which should help you
> determine if it's not configured aggressively enough.
I will do that. But I already know which table is the bad boy in this
case. :)
>
> If it's just a single table that's bloating, a VACUUM FULL or CLUSTER
> of that table alone on a regular schedule might take care of things.
> If your data is of a FIFO nature, you could benefit from the old trick
> of having two tables and switching between them on a schedule in order
> to truncate the one with stale data in it.
It is somewhat FIFO but I can't guarantee it...
I will look at CLUSTER and see.
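If CLUSTER turns out to fit, a sketch of what a scheduled job might run on 8.3 (the index name here is hypothetical, just for illustration):

```sql
-- CLUSTER rewrites the table in the order of the given index and
-- reclaims all dead space, like VACUUM FULL, but it takes an
-- exclusive lock on the table for the duration, so it has to run
-- in a maintenance window.
CLUSTER tbl_file_folder USING idx_file_folder_file;

-- Refresh planner statistics after the rewrite.
ANALYZE tbl_file_folder;
```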
Maybe the design is flawed :) To put it simply, we have a document
storing system and the 3 major tables are tbl_folder, tbl_file and the
many-to-many table tbl_file_folder.
In tbl_file we only have unique documents.
But a file can be stored in many folders and a folder can have many
files, so we have tbl_file_folder with fk_file_id and fk_folder_id.
To be able to handle versions we always insert new folders even though
nothing has changed, but it seemed like the best way to do it.
E.g.
First run:
tbl_file: 500k new files.
tbl_folder: 50k new rows.
tbl_file_folder: 550k new rows.
Second run with no new files:
tbl_file: unchanged.
tbl_folder: 50k new rows.
tbl_file_folder: 550k new rows.
The beauty with this is that it is very efficient to retrieve the
exact file/folder structure at a given point in time, but the drawback
is that there is a lot of overhead in the database.
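For reference, the three tables as described above, as a sketch; the column names beyond the ones I mentioned (keys and a run timestamp) are just illustrative assumptions:

```sql
-- Unique documents only.
CREATE TABLE tbl_file (
    file_id   bigserial PRIMARY KEY,
    file_name text NOT NULL
);

-- A fresh set of folder rows is inserted on every run, even when
-- nothing changed, so each run's folder tree is a snapshot.
CREATE TABLE tbl_folder (
    folder_id   bigserial PRIMARY KEY,
    run_time    timestamptz NOT NULL,  -- which nightly run created it
    folder_path text NOT NULL
);

-- Many-to-many link: a file can live in many folders and vice versa.
-- This is the table that churns ~550k rows per run.
CREATE TABLE tbl_file_folder (
    fk_file_id   bigint NOT NULL REFERENCES tbl_file,
    fk_folder_id bigint NOT NULL REFERENCES tbl_folder,
    PRIMARY KEY (fk_file_id, fk_folder_id)
);
```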
Maybe someone has some cool new idea about this. :)
Thanks Bill!
Cheers,
henke
>
>
> Hope some of these ideas help.
>
> --
> Bill Moran
> Collaborative Fusion Inc.
> http://people.collaborativefusion.com/~wmoran/
>
> wmoran(at)collaborativefusion(dot)com
> Phone: 412-422-3463x4023
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general