From: Ofer Israeli <oferi(at)checkpoint(dot)com>
To: Samuel Gendler <sgendler(at)ideasculptor(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Claudio Freire <klaussfreire(at)gmail(dot)com>, Andy <angelflow(at)yahoo(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: TCP Overhead on Local Loopback
Date: 2012-04-03 15:24:07
Message-ID: 217DDBC2BB1E394CA9E7446337CBDEF20102DEDB4DCF@il-ex01.ad.checkpoint.com
Lists: pgsql-performance
On Sun, Apr 2, 2012 at 11:25 AM, Samuel Gendler <sgendler(at)ideasculptor(dot)com> wrote:
> But suggesting moving away from TCP/IP with no actual evidence that it is network overhead that is the problem is a little premature, regardless.
Agreed; that's exactly why I'd like to understand what tools and methodologies are available for testing whether TCP is the issue.
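For instance, one way to isolate the TCP cost would be to run the same workload over loopback TCP and over a Unix-domain socket and compare. A sketch, assuming a typical Linux install with the socket directory at /var/run/postgresql and a test database named "bench" (both assumptions, adjust to taste):

    # Initialize a small test database once:
    #   pgbench -i -s 10 bench

    # 1. Select-only workload over TCP on the loopback interface:
    pgbench -h 127.0.0.1 -p 5432 -S -c 4 -T 30 bench

    # 2. The same workload over the local Unix-domain socket
    #    (libpq treats a host starting with "/" as a socket directory):
    pgbench -h /var/run/postgresql -S -c 4 -T 30 bench

    # Since everything else is identical, the difference in reported tps
    # and average latency approximates the TCP loopback overhead per query.

If the two runs come out close, TCP is unlikely to be the bottleneck.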
> What, exactly, is the set of operations that each update performs, and is there any way to batch them into fewer statements
> within the transaction? For example, could you insert all 60,000 records into a temporary table via COPY, then run just a couple of queries to do
> bulk inserts and bulk updates into the destination table via joins to the temp table?
I don't see how COPY would be faster here, since I would still need to run the COPY into the temp table and then UPDATE all the columns in the real table.
Or are you referring to the saving from having all the UPDATEs performed strictly on the database side, as in a stored procedure, without the networking back and forth?
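Just so I'm sure I understand the suggestion, it would be something like the following, right? (A sketch only; "events", "event_id", and the column names are invented, since our real schema isn't in the thread.)

    -- Stage the incoming batch with a single COPY instead of 60,000 statements:
    CREATE TEMP TABLE events_staging (LIKE events INCLUDING DEFAULTS);
    COPY events_staging FROM STDIN;  -- driven by the client, e.g. psql's \copy

    -- One set-based UPDATE joining the real table to the staged rows,
    -- replacing 60,000 individual UPDATE statements:
    UPDATE events e
       SET col1 = s.col1,
           col2 = s.col2   -- ...and so on for the remaining columns
      FROM events_staging s
     WHERE e.event_id = s.event_id;

    -- Rows that didn't match can then be bulk-inserted:
    INSERT INTO events
    SELECT s.*
      FROM events_staging s
      LEFT JOIN events e ON e.event_id = s.event_id
     WHERE e.event_id IS NULL;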
> 60,000 rows of 25 columns (1 indexed) updated in 3ms is not exactly slow. That's a not insignificant quantity of data which must be transferred from client to server,
> parsed, and then written to disk, regardless of TCP overhead. And it is happening via at least 60,000 individual SQL statements that are not even prepared statements. I don't
> imagine that TCP overhead is really the problem here. Regardless, you can reduce both statement parse time and TCP overhead by doing bulk inserts
> (COPY) followed by multi-row selects/updates into the final table. I don't know how much below 3ms you are going to get, but that's going to be as fast
> as you can possibly do it on your hardware, assuming the rest of your configuration is as efficient as possible.
The 3ms is per event processed, not for the whole 60K batch. Processing a single event includes:
5 SELECTs
1 DELETE
2 UPDATEs
where each query is sent over the TCP connection as a separate round trip, that is, the queries are not grouped into a stored procedure or the like (see the sketch below).
Does 3ms sound like a reasonable time for all of these queries? If so, do you have an estimate of how large the network portion would be?
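For reference, grouping the per-event queries into a single server-side call would look roughly like this. Purely illustrative: process_event, event_queue, events, and the statements inside are placeholders, since I haven't posted our actual queries.

    -- Collapse the 8 per-event round trips into one:
    CREATE OR REPLACE FUNCTION process_event(p_event_id bigint)
    RETURNS void AS $$
    BEGIN
        -- the 5 SELECTs, 1 DELETE, and 2 UPDATEs would go here; for example:
        DELETE FROM event_queue WHERE event_id = p_event_id;
        UPDATE events SET processed = true WHERE event_id = p_event_id;
    END;
    $$ LANGUAGE plpgsql;

    -- The client then issues a single statement per event:
    -- SELECT process_event(42);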
Thanks,
Ofer