From: | Jeff Janes <jeff(dot)janes(at)gmail(dot)com> |
---|---|
To: | Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: LWLock Queue Jumping |
Date: | 2009-08-30 20:46:59 |
Message-ID: | f67928030908301346i1c4cfae4l511cbb623927dfbd@mail.gmail.com |
Lists: | pgsql-hackers |
On Sun, Aug 30, 2009 at 11:01 AM, Stefan Kaltenbrunner
<stefan(at)kaltenbrunner(dot)cc> wrote:
> Jeff Janes wrote:
>
>> ---------- Forwarded message ----------
>> From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
>> To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
>> Date: Sun, 30 Aug 2009 11:48:47 +0200
>> Subject: Re: LWLock Queue Jumping
>> Heikki Linnakangas wrote:
>>
>>
>> I don't have any pointers right now, but WALInsertLock does
>> often show
>> up as a bottleneck in write-intensive benchmarks.
>>
>>
>> yeah I recently ran across that issue while testing concurrent COPY
>> performance:
>>
>>
>> http://www.kaltenbrunner.cc/blog/index.php?/archives/27-Benchmarking-8.4-Chapter-2bulk-loading.html
>> discussed here:
>>
>> http://archives.postgresql.org/pgsql-hackers/2009-06/msg01019.php
>>
>>
>> It looks like this is the bulk loading of data into unindexed tables. How
>> good is that as a target for optimization? I can see several (quite
>> difficult to code and maintain) ways to make bulk loading into unindexed
>> tables faster, but they would not speed up the more general cases.
>>
>
> well, bulk loading into unindexed tables is quite a common workload - apart
> from dump/restore cycles (which we can now do in parallel), a lot of analytic
> workloads are that way.
> Import tons of data from various sources every night/week/month, index,
> analyze & aggregate, drop again.
In those cases where you end by dropping the tables, we should be willing to
bypass WAL altogether, right? Is the problem that we can bypass WAL (by doing
the COPY in the same transaction that created or truncated the table), or we
can COPY in parallel, but we can't do both at the same time?
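
For reference, the WAL-skip path I mean looks roughly like the sketch below,
assuming WAL archiving is disabled; the table name and file path are just
placeholders:

    BEGIN;
    -- table must be created or truncated in this same transaction
    TRUNCATE staging_data;
    COPY staging_data FROM '/path/to/nightly_load.csv' WITH CSV;
    COMMIT;  -- COPY can skip WAL here; the heap is synced to disk at commit instead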
Jeff