From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: jao(at)geophile(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Insert rate drops as table grows
Date: 2006-01-31 01:50:02
Message-ID: 27652.1138672202@sss.pgh.pa.us
Lists: pgsql-general
jao(at)geophile(dot)com writes:
> I have this table and index:
> create table t(id int, hash int);
> create index idx_t on t(hash);
> The value of the hash column, which is indexed, is a pseudo-random
> number. I load the table and measure the time per insert.
> What I've observed is that inserts slow down as the table grows to
> 1,000,000 records. Observing the pg_stat* tables, I see that the data
> page reads per unit time stay steady, but that index page reads grow
> quickly (shared_buffers was set to 2000).
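As a side note, those counters can be watched per table; a sketch (my
illustration, not something from your message), using the
pg_statio_user_tables view and the table from your DDL:

    -- Per-table I/O counters: heap (data) block reads vs. index block reads.
    -- Sampled periodically during the load, these show which component
    -- grows as the table fills.
    select heap_blks_read, heap_blks_hit,
           idx_blks_read, idx_blks_hit
    from pg_statio_user_tables
    where relname = 't';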
Define "quickly" ... the expected behavior is that cost to insert into
a btree index grows roughly as log(N). Are you seeing anything worse
than that?
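To put a number on it, a quick sketch (assuming psql's \timing and
generate_series; not something you posted): load in equal-sized batches
and compare the per-batch times. With btree fanout in the hundreds, a
million-row index is only about three levels deep, so later batches
should be only modestly slower than the first if the log(N) expectation
holds.

    -- Hypothetical benchmark: run this batch repeatedly (ten times
    -- reaches 1,000,000 rows) and compare the reported elapsed times.
    \timing on
    insert into t
    select g, (random() * 2000000000)::int    -- pseudo-random hash values
    from generate_series(1, 100000) g;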
shared_buffers of 2000 is generally considered too small for high-volume
databases. Numbers like 10000-50000 are considered reasonable on modern
hardware. It's possible that you could go larger without too much
penalty if using the 8.1 buffer manager code, but I don't know if anyone
has benchmarked that systematically.
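For concreteness (my arithmetic, not benchmarks): shared_buffers is
counted in 8kB pages, so 2000 pages is only about 16MB, while
10000-50000 is roughly 80-400MB. The current setting is easy to check:

    -- Reports the active value; raising it means editing shared_buffers
    -- in postgresql.conf and restarting the postmaster.
    show shared_buffers;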
regards, tom lane