Postgresql partitioning limit - possible bottlenecks

From: David Susa <dsg122(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Postgresql partitioning limit - possible bottlenecks
Date: 2015-01-27 19:51:06
Message-ID: CAOoE0Pm8MLshfuT048+Ec9ZgGLe+p9-XC3368cxRET8THZP01A@mail.gmail.com
Lists: pgsql-admin

Hi,

We have a new customer with a JEE application running on PostgreSQL
8.4. The database designer originally decided that the best way to
handle multitenancy was to create a partition for each new "client" in
two critical tables (table_a and table_b).

So let's say 500 clients have been created; that means 1000 partitions
(table_a_1 ... table_a_500, table_b_1 ... table_b_500).
Each of those partitions holds around 8 million records.
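
For reference, each partition is set up roughly like this (a simplified
sketch -- the real tables have more columns, and the client_id and
payload column names here are only illustrative):

    -- parent table
    CREATE TABLE table_a (
        client_id integer NOT NULL,
        payload   text
    );

    -- one child table per client, with a CHECK constraint on client_id
    -- so the planner's constraint exclusion can prune it
    CREATE TABLE table_a_1 (
        CHECK (client_id = 1)
    ) INHERITS (table_a);

table_b follows the same pattern.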

Reading the PostgreSQL documentation, I found that the suggested limit
for partitioning is around 100 partitions, so I guess this must be
affecting performance somehow (see
<http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html>).

The good thing is that every query that uses these two tables can
actually be directed to the exact partition by client ID. So I guess
that helps with query planning time?
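
To make that concrete, I mean queries like these (using the
hypothetical client_id column from the sketch above), where the client
ID is always known up front:

    -- against the parent table: the planner has to examine the CHECK
    -- constraint of every child before it can prune, which is the part
    -- that gets expensive with hundreds of partitions
    SELECT * FROM table_a WHERE client_id = 42;

    -- against the child table directly: no constraint exclusion work,
    -- the planner only ever sees one table
    SELECT * FROM table_a_42;

If routing queries straight to the child tables avoids most of the
planning overhead, that would already help us a lot.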

The question, then, is how we can mitigate performance issues in the
meantime. Are there configuration parameters that we can tune? Will
more computational power help?
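
For example, one parameter I have been looking at is
constraint_exclusion -- if I understand the docs correctly it should
stay at 'partition' (the 8.4 default) rather than 'on', so the extra
constraint checking only happens for inheritance and UNION ALL queries:

    -- in postgresql.conf, or per session; 'partition' is the 8.4 default
    SET constraint_exclusion = partition;

Is that the right setting here, or are there others we should look at?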

Thank you.
