From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Florian Pflug <fgp(at)phlo(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: spinlock contention
Date: 2011-07-07 16:09:44
Message-ID: CA+TgmoYthwiV32XjUsDHMkVRsccp5eV54sj0cGBJymU0r-5oPg@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug <fgp(at)phlo(dot)org> wrote:
> In effect, the resulting thing is an LWLock with a partitioned shared
> counter. The partition one backend operates on for shared locks is
> determined by its backend id.
>
> I've added the implementation to the lock benchmarking tool at
> https://github.com/fgp/lockbench
> and also pushed a patched version of postgres to
> https://github.com/fgp/postgres/tree/lwlock_part
>
> The number of shared counter partitions is currently 4, but can easily
> be adjusted in lwlock.h. The code uses GCC's atomic fetch and add
> intrinsic if available, otherwise it falls back to using a per-partition
> spin lock.
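
Just so we're talking about the same shape of thing, here's roughly how
I picture the partitioned shared counter (a sketch of my own with
made-up names, not code lifted from lockbench or the lwlock_part
branch):

/*
 * Hypothetical sketch of a partitioned shared counter; names and layout
 * are invented here, not taken from the actual patch.
 */
#include <stdint.h>

#define NUM_SHARED_COUNT_PARTITIONS 4

typedef struct PartitionedSharedCount
{
    struct
    {
        volatile uint32_t count;
        /* pad to a cache line so partitions don't false-share */
        char    pad[64 - sizeof(uint32_t)];
    } part[NUM_SHARED_COUNT_PARTITIONS];
} PartitionedSharedCount;

/* Each backend always touches the partition picked by its backend id. */
static inline int
shared_count_partition(int backend_id)
{
    return backend_id % NUM_SHARED_COUNT_PARTITIONS;
}

/* Shared acquire: one atomic add on this backend's partition only. */
static inline void
shared_count_acquire(PartitionedSharedCount *c, int backend_id)
{
    __sync_fetch_and_add(&c->part[shared_count_partition(backend_id)].count, 1);
}

/* Shared release: matching decrement on the same partition. */
static inline void
shared_count_release(PartitionedSharedCount *c, int backend_id)
{
    __sync_fetch_and_sub(&c->part[shared_count_partition(backend_id)].count, 1);
}
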
I think this is probably a good trade-off for locks that are most
frequently taken in shared mode (like SInvalReadLock), but it seems
like it could be a very bad trade-off for locks that are frequently
taken in both shared and exclusive mode (e.g. ProcArrayLock,
BufMappingLocks).
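
To spell that worry out (again a sketch reusing the hypothetical struct
above, not your actual code): a shared acquisition still costs one
atomic op on one partition, but an exclusive acquirer can no longer
test a single share count; it has to watch every partition drain to
zero, pulling all of those cache lines into its core:

/*
 * Sketch only: the check an exclusive acquirer would have to make
 * before it can proceed.
 */
static inline int
shared_count_all_zero(PartitionedSharedCount *c)
{
    int     i;

    for (i = 0; i < NUM_SHARED_COUNT_PARTITIONS; i++)
    {
        if (c->part[i].count != 0)
            return 0;           /* some backend still holds it shared */
    }
    return 1;                   /* all partitions drained */
}

So the more partitions you add to spread out the shared traffic, the
more cache lines every exclusive waiter has to poll.
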
I don't want to fiddle with your git repo, but if you attach a patch
that applies to the master branch I'll give it a spin if I have time.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company