From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: andrew klassen <aptklassen(at)yahoo(dot)com>
Cc: James Mansion <james(at)mansionfamily(dot)plus(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: insert/update tps slow with indices on table > 1M rows
Date: 2008-06-04 22:52:19
Message-ID: 6656.1212619939@sss.pgh.pa.us
Lists: pgsql-performance
andrew klassen <aptklassen(at)yahoo(dot)com> writes:
> I am using the c-library interface and for these particular transactions
> I preload PREPARE statements. Then as I get requests, I issue a BEGIN,
> followed by at most 300 EXECUTES and then a COMMIT. That is the
> general scenario. What value beyond 300 should I try?
Well, you could try numbers in the low thousands, but you'll probably
get only incremental improvement.
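[For concreteness, a minimal libpq sketch of the batch pattern being
described: prepare once, then wrap a run of EXECUTEs in one transaction.
The table t(a, b), the statement name "ins", the connection string, and
the dummy values are illustrative, not from this thread.]

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void die(PGconn *conn, const char *msg)
    {
        fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
        PQfinish(conn);
        exit(1);
    }

    int main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=test");
        PGresult   *res;
        const char *vals[2] = { "foo", "bar" };
        int         i;

        if (PQstatus(conn) != CONNECTION_OK)
            die(conn, "connection failed");

        /* Prepare once, up front */
        res = PQprepare(conn, "ins",
                        "INSERT INTO t(a, b) VALUES ($1, $2)", 2, NULL);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            die(conn, "PREPARE failed");
        PQclear(res);

        /* One transaction around the whole batch of EXECUTEs */
        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < 300; i++)
        {
            res = PQexecPrepared(conn, "ins", 2, vals, NULL, NULL, 0);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                die(conn, "EXECUTE failed");
            PQclear(res);
        }
        PQclear(PQexec(conn, "COMMIT"));

        PQfinish(conn);
        return 0;
    }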
> Also, how might COPY (which involves file I/O) improve the
> above scenario?
COPY needn't involve file I/O. If you are using libpq you can push
anything you want into PQputCopyData. This would involve formatting
the data according to COPY's escaping rules, which are rather different
from straight SQL, but I doubt it'd be a huge amount of code. Seems
worth trying.
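[For reference, a minimal sketch of what pushing in-memory rows through
PQputCopyData could look like; the table t(a, b) and the sample row are
illustrative, not from this thread. Per COPY's text-format rules, fields
are tab-separated and rows newline-terminated, and any backslash, tab,
or newline occurring in the data itself must be backslash-escaped.]

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    static int copy_rows(PGconn *conn)
    {
        const char *row = "foo\tbar\n";    /* one pre-formatted row */
        PGresult   *res;

        /* Enter COPY IN mode; no file is involved on either side */
        res = PQexec(conn, "COPY t(a, b) FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);

        /* Call once per row, or per block of rows, straight from memory */
        if (PQputCopyData(conn, row, (int) strlen(row)) != 1)
            return -1;

        /* A NULL message ends the COPY normally */
        if (PQputCopyEnd(conn, NULL) != 1)
            return -1;

        /* Collect the command's final result */
        res = PQgetResult(conn);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "COPY did not finish: %s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);
        return 0;
    }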
regards, tom lane