From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hannu Krosing <hannu(at)krosing(dot)net>
Cc: Hannu Krosing <hannu(at)tm(dot)ee>, Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>, pgman(at)candle(dot)pha(dot)pa(dot)us, pgsql-hackers(at)postgresql(dot)org, jwbaker(at)acm(dot)org
Subject: Re: LWLock contention: I think I understand the problem
Date: 2002-01-07 01:37:05
Message-ID: 29890.1010367425@sss.pgh.pa.us
Lists: pgsql-hackers pgsql-odbc
Hannu Krosing <hannu(at)krosing(dot)net> writes:
> Should this not be 'vacuum full' ?
>>
>> Don't see why I should expend the extra time to do a vacuum full.
>> The point here is just to ensure a comparable starting state for all
>> the runs.
> Ok. I thought that you would also want to compare performance for different
> concurrency levels where the number of dead tuples matters more as shown by
> the attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,
> 1-25 concurrent backends and 10000 trx per run
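
For concreteness, a run along the lines described above might look roughly
like this (a sketch only; the database name, client counts, and per-client
transaction split are placeholders, not taken from the mail):

    # one series with plain VACUUM between runs, another with VACUUM FULL
    for c in 1 5 10 15 20 25; do
        psql -c "VACUUM" bench                    # or: psql -c "VACUUM FULL" bench
        pgbench -c $c -t $((10000 / c)) bench     # ~10000 transactions per run
    done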
VACUUM and VACUUM FULL will provide the same starting state as far as
number of dead tuples goes: none. So that doesn't explain the
difference you see. My guess is that VACUUM FULL looks better because
all the new tuples will get added at the end of their tables; possibly
that improves I/O locality to some extent. After a plain VACUUM the
system will tend to allow each backend to drop new tuples into a
different page of a relation, at least until the partially-empty pages
all fill up.
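
One quick way to watch this from psql (a rough sketch; "accounts" is the
pgbench table, and the exact page numbers will of course vary):

    -- Both vacuum forms leave zero dead tuples; what differs is where new
    -- tuples land afterwards.  ctid is (block, offset), so checking it after
    -- an update shows the placement.
    VACUUM accounts;
    SELECT relpages FROM pg_class WHERE relname = 'accounts';
    UPDATE accounts SET abalance = abalance + 1 WHERE aid = 1;
    SELECT ctid FROM accounts WHERE aid = 1;   -- typically reuses a partially-empty page
    VACUUM FULL accounts;
    SELECT relpages FROM pg_class WHERE relname = 'accounts';   -- typically fewer pages
    UPDATE accounts SET abalance = abalance + 1 WHERE aid = 1;
    SELECT ctid FROM accounts WHERE aid = 1;   -- typically lands near the end of the table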
What -B setting were you using?
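Something like the following is what I have in mind (the numbers are only
placeholders; -B is the shared disk buffer count, -N the backend limit):

    postmaster -B 2048 -N 64 -D /usr/local/pgsql/data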
regards, tom lane