From: "Matt Clark" <matt(at)ymogen(dot)net>
To: "'Neil Cooper'" <Neil(dot)Cooper(at)scigames(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Disabling transaction/outdated-tuple behaviour
Date: 2004-08-26 18:18:52
Message-ID: 015f01c48b99$24739bf0$8300a8c0@solent
Lists: pgsql-performance
> Immediately after performing a vacuum, updates take up to 50
> milliseconds to occur; however, the update performance
> degrades over time, such that after a few hours of continuous
> updates, each update takes about half a second. Regular
> vacuuming improves the performance temporarily, but during
> the vacuum operation (which takes up to 2 minutes),
> performance of concurrent updates falls below an acceptable
> level (sometimes > 2 seconds per update).
You must be doing an enormous number of updates! You can vacuum as often as
you like, and you should usually do so at least as often as it takes for
'all' of the tuples to be updated once. So, in your case, that would mean
vacuuming every 10 updates. OK, that seems excessive, so how about every
100 updates?
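For example (just a sketch, and the table name 'counters' is made up since I
don't know what yours is called), a plain VACUUM of that one hot table is
cheap and does not take the exclusive lock that VACUUM FULL needs, so it can
run alongside your updates:

    -- Hypothetical hot table; run this from cron or your application every N updates.
    -- A plain VACUUM marks dead tuple versions as reusable without locking out writers.
    VACUUM counters;

    -- VACUUM ANALYZE also refreshes the planner statistics at the same time.
    VACUUM ANALYZE counters;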
> According to the documentation, PostgreSQL keeps the old
> versions of the tuples in case of use by other transactions
> (i.e. each update is actually extending the table). I believe
> this behaviour is what is causing my performance problem.
Yes, it probably is.
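You can actually watch it happen if you like (again just a rough sketch, with
the same hypothetical table name): relpages in pg_class keeps growing as each
UPDATE writes a new tuple version, and VACUUM VERBOSE reports how many dead
versions it found and removed:

    -- Reports how many dead tuple versions were removed (or could not yet be removed).
    VACUUM VERBOSE counters;

    -- Table size in 8KB pages and estimated row count, as of the last VACUUM/ANALYZE.
    SELECT relpages, reltuples
    FROM pg_class
    WHERE relname = 'counters';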
> Is there a way to disable this behaviour such that an update
> operation would overwrite the current record and does not
> generate an outdated tuple each time? (My application does
> not need transactional support).
No, I don't believe there is. If you really don't need transactional support,
then you might want to reconsider whether Postgres is really the right tool.
M