From: Leonardo F <m_lists(at)yahoo(dot)it>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: performance problems inserting random vals in index
Date: 2010-04-19 16:30:05
Message-ID: 790487.49845.qm@web29017.mail.ird.yahoo.com
Lists: pgsql-general
Hi,
I have a simple table that has indexes on 2 integer columns.
Data is inserted very often (no updates, no deletes, just inserts):
at least 4000-5000 rows per second.
The values going into the 2 indexed columns are essentially random.
Everything is "fine" for the first 10-20M rows; after that, performance
gets worse and worse, and by 50M rows I can't insert more than
1500 rows per second.
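To give an idea, the schema is basically something like this (table and
column names are made up, and I'm leaving out how rows get routed to the
partitions):

    CREATE TABLE events (
        a       integer NOT NULL,  -- first indexed column, essentially random values
        b       integer NOT NULL,  -- second indexed column, also random
        payload text               -- plus a few non-indexed columns
    );
    -- one of the child partitions (30 of them at the moment)
    CREATE TABLE events_p01 () INHERITS (events);
    CREATE INDEX events_p01_a_idx ON events_p01 (a);
    CREATE INDEX events_p01_b_idx ON events_p01 (b);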
How can I improve it?
1) the table is already partitioned; at the moment it's split into
30 partitions. What kind of "select" performance impact would I
see going to 3000 partitions? I know that with 3000 partitions there
will be fewer rows per partition, but index seek times are O(log N),
right? So, summed over all the partitions a select has to probe,
3000*log(N/3000) is much bigger than 30*log(N/30) (see the rough
numbers at the end of this mail)
2) How much improvement can I expect going to RAID1+0 on
15k rpm disks instead of RAID5 on 7200 rpm disks? I know, I know, I
shouldn't even be testing such a huge workload with such a
setup, but I can't try anything else at the moment. The disk array is
pretty fast anyway: 190MB/s sequential output, 320MB/s sequential
input, 1600 random seeks/s (bonnie++)
3) Is there anything else I can try to "help" postgres update those
indexes faster?
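Just to put rough numbers on the partition-count concern in point 1
(assuming N = 50M rows and base-2 logs): with 30 partitions, a select
that has to probe every partition does roughly 30 * log2(50M/30), i.e.
about 30 * 21 = 620 comparisons in total, while with 3000 partitions
it's roughly 3000 * log2(50M/3000), i.e. about 3000 * 14 = 42000.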