From: Hrishikesh Mehendale <hashinclude(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Partitions and max_locks_per_transaction
Date: 2009-11-20 16:42:11
Message-ID: 200911200842.13089.hashinclude@gmail.com
Lists: pgsql-performance
It was Thursday 19 November 2009 11:08:10 pm that the wise Tom Lane thus
wrote:
> <hashinclude(at)gmail(dot)com> writes:
> > To make the retrieval faster, I'm using a
> > partitioning scheme as follows:
> >
> > stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2
> > (where t2 - t1 = 2 hrs), i.e. 12 tables in one day
> > stats_3600: data gathered / calculated over 1 hour, child tables
> > similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days
> > (i.e. 15 tables a month)
> > stats_86400: data gathered / calculated over 1 day, stored as
> > stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).
>
> So you've got, um, something less than a hundred rows in any one child
> table? This is carrying partitioning to an insane degree, and your
> performance is NOT going to be improved by it.
Sorry, I forgot to mention - in the "normal" case, each of those tables will
have a few hundred thousand records; in the worst case (the tables store
info on up to 2000 endpoints) it can be around 5 million.
Also, the partitioning is not final yet (we might move it to 6 hours / 12
hours per partition) - which is why I need to run the load test :)
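For concreteness, here's roughly what one 5-minute child looks like - a
minimal sketch with illustrative column names and types (plain INHERITS
partitioning with CHECK constraints, so constraint_exclusion can prune):

    -- Parent table for the 5-minute stats (columns are illustrative)
    CREATE TABLE stats_300 (
        endpoint_id integer NOT NULL,
        ts          bigint  NOT NULL,  -- unix epoch of the sample
        value       bigint
    );

    -- One child per 2-hour window; the t1/t2 epoch bounds are baked into
    -- the name, and the CHECK constraint lets the planner skip the other
    -- children when constraint_exclusion is on.
    CREATE TABLE stats_300_1258675200_1258682400 (
        CHECK (ts >= 1258675200 AND ts < 1258682400)
    ) INHERITS (stats_300);

    CREATE INDEX stats_300_1258675200_1258682400_idx
        ON stats_300_1258675200_1258682400 (endpoint_id, ts);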
> I'd suggest partitioning on boundaries that will give you order of a
> million rows per child. That could be argued an order of magnitude or
> two either way, but what you've got is well outside the useful range.
>
> > I'm running into the error "ERROR: out of shared memory HINT: You
> > might need to increase max_locks_per_transaction."
>
> No surprise given the number of tables and indexes you're forcing
> the system to deal with ...
How many locks per table/index does PG require? Even in my current state
(< 50 tables, < 250 tables + indexes combined), is it reasonable for a
limit of 2000 locks to run out?
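(For context, here's roughly how I'm counting the locks - a sketch
assuming mostly default settings:)

    -- Session 1: touch the partitioned parent in an open transaction.
    -- Planning/executing this takes AccessShareLock on the parent, every
    -- child table, and every index on them.
    BEGIN;
    SELECT count(*) FROM stats_300;

    -- Session 2: count the relation locks held by other backends.
    SELECT count(*) FROM pg_locks
    WHERE locktype = 'relation' AND pid <> pg_backend_pid();

    -- The shared lock table holds about
    --   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    -- entries total, shared across all sessions.  Raising the limit needs
    -- a server restart; in postgresql.conf:
    --   max_locks_per_transaction = 256   # default is 64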
Thanks,
Hrishi