From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jignesh Shah <jkshah(at)gmail(dot)com>
Cc: Ivan Voras <ivoras(at)freebsd(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance under contention
Date: 2010-12-07 17:37:31
Message-ID: AANLkTimRPXTs0DXNLQT1nubeNT+tH2JHVXg1-KidvBcH@mail.gmail.com
Lists: pgsql-performance
On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah <jkshah(at)gmail(dot)com> wrote:
> That's exactly what I concluded when I was doing the sysbench simple
> read-only test. I had also tried with different lock partitions and it
> did not help, since they all go after the same table. I think one way
> to kind of avoid the problem on the same table is to do more granular
> locking (maybe at page level instead of table level). But then I don't
> really understand how to even create a prototype related to this
> one. If you can help create a prototype then I can test it out with my
> setup and see if it helps us catch up with the other guys out there.
We're trying to lock the table against a concurrent DROP or schema
change, so locking only part of it doesn't really work. I don't
really see any way to avoid needing some kind of a lock here; the
trick is how to take it quickly. The main obstacle to making this
faster is that the deadlock detector needs to be able to obtain enough
information to break cycles, which means we've got to record in shared
memory not only the locks that are granted but who has them. However,
I wonder if it would be possible to have a very short critical section
where we grab the partition lock, acquire the heavyweight lock, and
release the partition lock; and then only as a second step record (in
the form of a PROCLOCK) the fact that we got it. During this second
step, we'd hold a lock associated with the PROC, not the LOCK. If the
deadlock checker runs after we've acquired the lock and before we've
recorded that we have it, it'll see more granted locks than recorded lock holders, but
that should be OK, since the process which hasn't yet recorded its
lock acquisition is clearly not part of any deadlock.
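To make that a bit more concrete, here's a rough standalone sketch of the two-step idea. This is a toy model using pthreads, with made-up names (toy_lock, toy_proc, toy_lock_acquire); it is not actual lock.c code, just an illustration of where the critical sections would fall:

/*
 * Toy model of the two-step acquisition, NOT actual PostgreSQL code.
 * Step 1: hold the partition lock only long enough to grant the lock.
 * Step 2: record "who has it" under a per-process lock instead.
 */
#include <pthread.h>
#include <stdio.h>

#define MAX_HELD 64

typedef struct toy_lock
{
	int			nGranted;		/* how many grants exist (like LOCK->nGranted) */
} toy_lock;

typedef struct toy_proc
{
	pthread_mutex_t procLock;	/* protects this process's held-lock list */
	toy_lock   *held[MAX_HELD]; /* stand-in for the PROCLOCK entries */
	int			nHeld;
} toy_proc;

static pthread_mutex_t partition_lock = PTHREAD_MUTEX_INITIALIZER;

static void
toy_lock_acquire(toy_proc *proc, toy_lock *lock)
{
	/* Step 1: very short critical section on the partition lock. */
	pthread_mutex_lock(&partition_lock);
	lock->nGranted++;			/* the grant itself */
	pthread_mutex_unlock(&partition_lock);

	/*
	 * Step 2: record the acquisition under a lock associated with the
	 * PROC, not the LOCK.  A deadlock checker running between steps 1
	 * and 2 sees more grants than recorded holders, which is harmless:
	 * the unrecorded holder isn't waiting, so it can't be in a cycle.
	 */
	pthread_mutex_lock(&proc->procLock);
	proc->held[proc->nHeld++] = lock;
	pthread_mutex_unlock(&proc->procLock);
}

int
main(void)
{
	toy_proc	proc = {.procLock = PTHREAD_MUTEX_INITIALIZER};
	toy_lock	lock = {0};

	toy_lock_acquire(&proc, &lock);
	printf("granted=%d, recorded holders=%d\n", lock.nGranted, proc.nHeld);
	return 0;
}

The point is just that the partition lock is held only for the grant itself; the bookkeeping moves under a lock that nothing else should be contending for.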
Currently, PROCLOCKs are included in both a list of locks held by that
PROC, and a list of lockers of that LOCK. The latter list would be
hard to maintain in this scheme, but maybe that's OK too. We really
only need that information for the deadlock checker, and the deadlock
checker could potentially still get the information by grovelling
through all the PROCs. That might be a bit slow, but maybe it'd be
OK, or maybe we could think of a clever way to speed it up.
Just thinking out loud here...
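Continuing the same toy model, the deadlock checker could reconstruct the holders of a given lock by grovelling through every process's recorded list rather than following a per-LOCK list. Again, just an illustrative sketch with made-up names; a real version would need whatever interlocking the deadlock checker already relies on:

/*
 * Toy version of "who holds this lock?" computed by scanning every
 * process's held-lock list, instead of maintaining a per-LOCK list
 * of lockers.  The count may lag nGranted, per the note above.
 */
static int
toy_count_holders(toy_proc *procs, int nprocs, toy_lock *lock)
{
	int			holders = 0;

	for (int p = 0; p < nprocs; p++)
	{
		pthread_mutex_lock(&procs[p].procLock);
		for (int i = 0; i < procs[p].nHeld; i++)
		{
			if (procs[p].held[i] == lock)
				holders++;
		}
		pthread_mutex_unlock(&procs[p].procLock);
	}
	return holders;
}
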
> Also, on the subject of whether this is a real workload: in fact it seems
> all social networks use this pattern frequently with their user tables, and
> this test actually came from my talks with Mark Callaghan, who says it
> is very common in their environment, where thousands of users pull
> up their user-profile data from the same table. Which is why I got
> interested in trying it more.
Yeah.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company