From: Grzegorz Tańczyk <goliatus(at)polzone(dot)pl>
To: pgsql-general(at)postgresql(dot)org
Subject: Many thousands of partitions
Date: 2013-10-08 15:23:51
Message-ID: 8426950.601381245582255.JavaMail.root@Polzone
Lists: pgsql-general
Hello,
I have a question regarding one of the caveats from the docs:
http://www.postgresql.org/docs/8.3/static/ddl-partitioning.html
"Partitioning using these techniques will work well with up to perhaps a
hundred partitions; don't try to use many thousands of partitions."
What's the alternative? Could nested partitioning do the trick? I have
millions of rows (numbers, timestamps and text (<4 KB)) which are frequently
updated, and there are also frequent inserts. Partitioning was my first
thought as a solution to this problem. I want to avoid long-lasting
locks, index rebuild problems and never-ending vacuum.
Write performance may be low, as long as at the same time I have no problem
selecting single rows using the primary key (bigint). Partitioning seems to be
the solution, but I'm sure I will end up with several thousand
automatically generated partitions.
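For reference, the 8.3-era partitioning the cited docs describe is built from table inheritance, CHECK constraints and a routing trigger; a minimal sketch of that setup (table and column names here are illustrative, not from my schema):

```sql
-- Parent table; children inherit its columns.
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
);

-- One child per range; the CHECK constraint is what lets
-- constraint_exclusion prune partitions at plan time.
CREATE TABLE events_2013_10 (
    CHECK (created_at >= '2013-10-01' AND created_at < '2013-11-01')
) INHERITS (events);

CREATE INDEX events_2013_10_id_idx ON events_2013_10 (id);

-- Route inserts on the parent to the matching child.
CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.created_at >= '2013-10-01' AND NEW.created_at < '2013-11-01' THEN
        INSERT INTO events_2013_10 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for %', NEW.created_at;
    END IF;
    RETURN NULL;  -- the row is already stored in the child
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert_trg
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_router();

-- Pruning requires: SET constraint_exclusion = on;
```

As I understand it, the "don't use many thousands of partitions" caveat comes from the planner having to examine every child's CHECK constraint on each query against the parent, so planning cost grows with partition count regardless of how the partitions are nested.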
Thanks
--
Regards,
Grzegorz