>>> "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> I'm going to get the latest snapshot to see if the issue has changed
> for 8.4devel
In testing under today's snapshot, it seemed to take 150,000 writes to
create and drop 1,000 temporary tables within a database transaction.
The numbers for the various versions might be within the sampling
noise, since the testing involved manual steps and required saturating
the queues in PostgreSQL, the OS, and the RAID controller before the
measurements became meaningful.
It seems that the complaints of slowness result
primarily from these writes saturating write bandwidth when a query
generates a temporary table in a loop, with the increased impact in
later releases coming from those releases getting through the loop faster.
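For concreteness, this is roughly the shape of workload I'm describing;
the function, table, and column names here are made up for illustration,
not taken from our actual queries:

    CREATE OR REPLACE FUNCTION temp_table_loop(iterations int)
    RETURNS void AS $$
    BEGIN
        FOR i IN 1 .. iterations LOOP
            -- each iteration creates, fills, and drops a temp table,
            -- which is where the writes come from
            CREATE TEMPORARY TABLE scratch (id int, val text);
            INSERT INTO scratch
                SELECT g, 'x' FROM generate_series(1, 100) g;
            -- ... work against scratch would go here ...
            DROP TABLE scratch;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    SELECT temp_table_loop(1000);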
I've started a thread on the hackers' list to discuss a possible
PostgreSQL enhancement to help such workloads. In the meantime, I
think I know which knobs to try turning to mitigate the issue, and
I'll suggest rewrites to some of these queries to avoid the temporary
tables; a sketch of the kind of rewrite I have in mind is below.
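Again with made-up names: instead of materializing an intermediate
result in a temp table and joining against it, the intermediate step
can often be folded into the query as a derived table, which keeps
everything in a single statement and avoids the catalog and file
writes for the temp table:

    -- current pattern: temp table as an intermediate step
    CREATE TEMPORARY TABLE recent_cases AS
        SELECT case_id FROM cases WHERE filed_date > current_date - 30;
    SELECT c.case_id, p.party_name
      FROM recent_cases c
      JOIN parties p ON p.case_id = c.case_id;
    DROP TABLE recent_cases;

    -- rewritten: same result in one statement, no temp table
    SELECT c.case_id, p.party_name
      FROM (SELECT case_id
              FROM cases
             WHERE filed_date > current_date - 30) c
      JOIN parties p ON p.case_id = c.case_id;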
If I find that a particular tweak to the background writer or some
such setting is especially beneficial, I'll post again.
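The sort of knobs I mean are the usual background writer and
checkpoint settings; I'm not claiming yet that any particular values
help, but their current values can be listed with something like:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('bgwriter_delay',
                    'bgwriter_lru_maxpages',
                    'bgwriter_lru_multiplier',
                    'checkpoint_segments',
                    'checkpoint_completion_target');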
-Kevin