| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> |
| Cc: | "Craig Ringer" <craig(at)postnewspapers(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit |
| Date: | 2008-03-10 14:33:58 |
| Message-ID: | 23588.1205159638@sss.pgh.pa.us |
| Lists: | pgsql-patches pgsql-performance |
"Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:
> For 8.4, it would be nice to improve that. I tested that on my laptop
> with a similarly-sized table, inserting each row in a pl/pgsql function
> with an exception handler, and I got very similar run times. According
> to oprofile, all the time is spent in TransactionIdIsInProgress. I think
> it would be pretty straightforward to store the committed subtransaction
> ids in a sorted array, instead of a linked list, and binary search.
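As a rough illustration of the sorted-array idea, a minimal sketch of the lookup might look like the following. The names and types here are simplified stand-ins, not the actual PostgreSQL code for the transaction state's committed-subxid list, and real XID comparisons would go through TransactionIdPrecedes() (modulo-2^32 ordering) rather than plain `<`:

```c
/*
 * Hypothetical sketch: binary search over a sorted array of committed
 * subtransaction XIDs, standing in for the linked-list scan described
 * above.  Names and types are simplified; real code would use
 * TransactionId and TransactionIdPrecedes() for the comparisons.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t Xid;

/* Return true if 'xid' is present in the sorted array 'subxids'. */
static bool
subxid_is_committed(const Xid *subxids, size_t nsubxids, Xid xid)
{
    size_t low = 0;
    size_t high = nsubxids;          /* search the half-open range [low, high) */

    while (low < high)
    {
        size_t mid = low + (high - low) / 2;

        if (subxids[mid] == xid)
            return true;
        else if (subxids[mid] < xid) /* plain compare; see caveat above */
            low = mid + 1;
        else
            high = mid;
    }
    return false;                    /* not found: fall through to the slow path */
}
```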
I think the OP is not complaining about the time to run the transaction
that has all the subtransactions; he's complaining about the time to
scan the table that it emitted. Presumably, each row in the table has a
different (sub)transaction ID and so we are thrashing the clog lookup
mechanism. That slowdown only happens once, because after the first scan
the XMIN_COMMITTED hint bits are set.
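To make that mechanism concrete, here is a heavily simplified sketch of the visibility fast path being described. The names are illustrative stand-ins for HeapTupleSatisfiesMVCC(), the pg_clog lookup, and the HEAP_XMIN_COMMITTED hint bit, not the real tqual.c code:

```c
/*
 * Illustrative sketch only: why the first scan is slow and later scans
 * are fast.  Helpers declared extern stand in for the proc-array,
 * pg_subtrans, and pg_clog machinery and are assumed, not real APIs.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Xid;

typedef struct Tuple
{
    Xid      xmin;             /* inserting (sub)transaction */
    uint16_t infomask;         /* hint bits live here */
} Tuple;

#define XMIN_COMMITTED_HINT 0x0100   /* stand-in for HEAP_XMIN_COMMITTED */

/* Assumed helpers standing in for the expensive lookups. */
extern bool xid_in_progress(Xid xid); /* proc array + subtrans walk */
extern bool xid_did_commit(Xid xid);  /* pg_clog page lookup */

static bool
tuple_inserted_by_committed_xact(Tuple *tup)
{
    /* Fast path: an earlier scan already set the hint bit. */
    if (tup->infomask & XMIN_COMMITTED_HINT)
        return true;

    /*
     * Slow path: with a distinct (sub)transaction ID on every row, the
     * first scan pays for these lookups per tuple, thrashing the clog
     * buffers.
     */
    if (xid_in_progress(tup->xmin))
        return false;
    if (!xid_did_commit(tup->xmin))
        return false;

    /* Cache the verdict so the next scan takes the fast path. */
    tup->infomask |= XMIN_COMMITTED_HINT;
    return true;
}
```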
This probably ties into the recent discussions about eliminating the
fixed-size allocations for SLRU buffers --- I suspect it would've run
better if it could have scaled up the number of pg_clog pages held in
memory.
regards, tom lane